# FET drain voltage spike in push-pull converter
I have returned to my 150W inverter simulation and have a new problem I can't decipher. This is the design:
It runs, it allegedly has good efficiency, and it produces a stable 180 V output voltage. I've included a leakage inductance of 198 nH on each primary winding to simulate energy stored in the transformer windings. I'm getting ringing on the drains, which I expect, and I've added an RC snubber to reduce it. However, at MOSFET turn-off there is an enormous drain voltage spike reaching 100 V. Since the modeled Vds breakdown is 100 V, I suspect the real spike is even higher and the models are simply clamping it there.
I have experimented with diodes across drain-source and with freewheeling diodes from drain to Vin, but the former does nothing and the latter severely distorts the switching waveform. When I probe the transformer's primary input waveform after the leakage inductance, it does not show these spikes:
I'm not sure how else to dissipate this stored leakage energy without interfering with operation. Since the spike happens at turn-off, I wonder if there is a better drive scheme I could use. The whole point of simulation was to identify issues that could destroy components, but I'm not sure how to reduce this spike. Any help is appreciated.
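For a rough sense of scale, the spike amplitude can be estimated from the energy stored in the leakage inductance at turn-off. A back-of-envelope sketch (the peak primary current and the effective drain-node capacitance are assumed values for illustration, not taken from the schematic):

```python
# Back-of-envelope estimate of the turn-off spike from leakage energy.
# Only L_leak comes from the post; I_peak and C_node are assumptions.
import math

L_leak = 198e-9   # leakage inductance per primary half [H] (from the post)
I_peak = 6.0      # assumed peak primary current at turn-off [A]
C_node = 1e-9     # assumed effective drain-node capacitance [F]

# Energy trapped in the leakage inductance at turn-off: E = 0.5 * L * I^2
E_leak = 0.5 * L_leak * I_peak**2

# If it all rings into C_node, the ring amplitude above the flat-top is
# roughly dV = I * sqrt(L/C) (characteristic impedance times current).
dV = I_peak * math.sqrt(L_leak / C_node)

V_flat_top = 2 * 12.0  # push-pull drains sit near 2x Vin; 12 V input assumed
print(f"leakage energy per switch event: {E_leak*1e6:.2f} uJ")
print(f"estimated ring amplitude: {dV:.1f} V -> peak Vds ~ {V_flat_top + dV:.1f} V")
```

With these assumed numbers the estimate already lands near the 100 V clamp, which is consistent with the spike being real leakage-energy ringing rather than a simulation artifact.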
• The Rds(on) is 23 mΩ, and the R/L = τ time constants matter! 1 µΩ? Dead-time is important. Mar 19 at 1:38
• What is the pulse voltage across the snubber resistor, R1? Is it constant with the magnetizing current at turn-off? There would also be some self-capacitance of the transformer that would reduce the pulse. The avalanche energy rating for the MOSFET is probably large enough that no damage would result in real life. There is a diode, D1, not connected to anything. Unconnected components sometimes mess up SPICE. Mar 19 at 1:45
• Why do you have 2 trafos? What coupling do they have? Why the 1u resistors, they can't be realistic. And why waste a full bridge when you could have had only two diodes? Also, do yourself, and others, a favour and use the already existing symbols for diode and MOSFET; it'll make for a much clearer schematic. As you probably know, if they're subcircuits instead of models, you need to change the prefix to X; otherwise it works just like a .model. Mar 19 at 7:34
### Articles
Research Article
5. Para-Quaternionic Structures on the 3-Jet Bundle
Research Article
9. The Application of Kolmogorov’s theorem in the one-default model
Research Article
13. On Polynomials and Their Polar Derivative
Research Article
15. On The Darboux Vector Belonging To Involute Curve: A Different View
## Dynamical Systems with Elastic Reflections
Series:
Dynamical Systems Working Seminar
Friday, October 27, 2017 - 15:00
1 hour (actually 50 minutes)
Location: Skiles 154, Georgia Tech
This presentation is about results from a 1970 paper by Y. Sinai. I will talk about dynamical systems resulting from the motion of a material point in domains with strictly convex boundary, that is, such that the operator of the second quadratic form is negative-definite at each point of the boundary, where the boundary is equipped with the field of inward normals. It was proved that such systems are ergodic and are K-systems. The basic method of investigation is the construction of transversal foliations for such systems and the study of their properties.
Tuesday, March 28, 2017
Use the differential, i.e., linear approximation, to approximate (8.4)^(1/3) as follows:
Let f(x) = x^(1/3). The linear approximation to f(x) at x = 8 can be written in the form y = mx + b, where m is: and where b is:
Using this, we find our approximation for (8.4)^(1/3) is
• calculus -
y = x^(1/3)
dy/dx = (1/3) x^(-2/3)
at x = 8:
y(8) = 2
dy/dx = (1/3)(1/4) = 1/12
y(x+h) ≈ y(x) + h dy/dx
y(8.4) ≈ 2 + 0.4 (1/12) ≈ 2.0333
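The tangent-line arithmetic above can be checked numerically; a minimal sketch comparing the linearization y = mx + b at x = 8 against the true cube root:

```python
# Check the linear approximation of 8.4**(1/3) against the true value.
def f(x):
    return x ** (1.0 / 3.0)

x0 = 8.0
m = (1.0 / 3.0) * x0 ** (-2.0 / 3.0)   # f'(8) = 1/12
b = f(x0) - m * x0                      # line passes through (8, 2), so b = 4/3
approx = m * 8.4 + b                    # linearization evaluated at x = 8.4

print(f"m = {m:.6f}, b = {b:.6f}")
print(f"approx = {approx:.6f}, true = {f(8.4):.6f}")
```

The approximation 2.0333 agrees with the true value 2.0328 to three decimal places, as expected for a point this close to x = 8.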
Development and analytical validation of a next-generation sequencing based microsatellite instability (MSI) assay
Oncotarget. 2019; 10:5181-5193. https://doi.org/10.18632/oncotarget.27142
Sarabjot Pabla1,*, Jonathan Andreas1,*, Felicia L. Lenzo1, Blake Burgher1, Jacob Hagen1, Vincent Giamo1, Mary K. Nesline1, Yirong Wang1, Mark Gardner1, Jeffrey M. Conroy1,2, Antonios Papanicolau-Sengos1, Carl Morrison1,2 and Sean T. Glenn1,2,3
1 OmniSeq Inc., Buffalo, NY 14203, USA
2 Center for Personalized Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, NY 14263, USA
3 Department of Molecular and Cellular Biology, Roswell Park Comprehensive Cancer Center, Buffalo, NY 14263, USA
* These authors contributed equally to this work
Correspondence to:
Sean T. Glenn, email: sean.glenn@omniseq.com
Keywords: next-generation sequencing; NGS; microsatellite instability; MSI
Abbreviations: NGS: next-generation sequencing
Received: June 10, 2019; Accepted: July 29, 2019; Published: August 27, 2019
ABSTRACT
Background
We have developed and analytically validated a next-generation sequencing (NGS) assay to classify microsatellite instability (MSI) in formalin-fixed paraffin-embedded (FFPE) tumor specimens.
Methodology
The assay relies on DNA-seq evaluation of insertion/deletion (indel) variability at 29 highly informative genomic loci to estimate MSI status without the requirement for matched normal tissue. The assay has a clinically relevant five-day turnaround time and can be conducted on as little as 20 ng of genomic DNA, with a batch size of up to forty samples in a single run.
Results
The MSI detection method was developed on a training set (n = 94) consisting of 22 MSI-H, 24 MSS, and 47 matched normal samples and tested on an independent test set of 24 MSI-H and 24 MSS specimens. Assay performance with respect to accuracy, reproducibility, precision as well as control sample performance was estimated across a wide range of FFPE samples of multiple histologies to address pre-analytical variability (percent tumor nuclei), and analytical variability (batch size, run, day, operator). Analytical precision studies demonstrated that the assay is highly reproducible and accurate as compared to established gold standard PCR methodology and has been validated through NYS CLEP.
Significance
This assay provides clinicians with robust and reproducible NGS-based MSI testing without the need of matched normal tissue to inform clinical decision making for patients with solid tumors.
Introduction
Microsatellite instability (MSI) is a well-described phenomenon characterized by the altered length of short repetitive regions of DNA referred to as microsatellites. The usual setting of microsatellite instability is deactivation of a mismatch repair system protein [1]. Typically, to determine MSI, five microsatellites are tested: usually two mononucleotide repeat markers (BAT-25, BAT-26) and three dinucleotide repeat markers (D2S123, D5S346, and D17S250). After amplification, fragment analysis chromatograms for each microsatellite are manually reviewed to assess differences between tumor and normal samples from the same patient in order to identify length differences, or so-called instability, in each microsatellite. A case with instability in at least 2 of 5 microsatellites is defined as microsatellite unstable "high" or "MSI-H" [2].
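The five-marker decision rule described above is simple enough to state as code. A minimal sketch, assuming the conventional three-way call (MSI-H for ≥ 2 unstable markers, MSI-L for exactly 1, MSS for 0); the input set of unstable markers is a hypothetical result of fragment-analysis review:

```python
# Sketch of the Bethesda-panel classification rule described in the text.
BETHESDA_MARKERS = ["BAT-25", "BAT-26", "D2S123", "D5S346", "D17S250"]

def classify_msi(unstable_markers):
    """Return MSI-H (>= 2 unstable markers), MSI-L (1), or MSS (0)."""
    n = sum(1 for m in BETHESDA_MARKERS if m in unstable_markers)
    if n >= 2:
        return "MSI-H"
    return "MSI-L" if n == 1 else "MSS"

print(classify_msi({"BAT-25", "BAT-26"}))  # two unstable markers -> MSI-H
```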
Approximately 20% of colorectal adenocarcinomas (CRA) and 30% of endometrial endometrioid adenocarcinomas (EEM) are microsatellite unstable, with the majority being sporadic in nature. A minority of cases are associated with Lynch syndrome, which is characterized by early-onset CRA with right-sided predominance that can be synchronous or metachronous, and an increased incidence of extracolonic neoplasms, including EEM. Although CRA and EEM account for the majority of microsatellite unstable tumors, microsatellite instability has a low but substantial incidence in various other tumors [3–7].
Beyond its function as a screening test to identify patients with Lynch syndrome, MSI status is a critical biomarker of response to checkpoint inhibitors [8, 9]. MSI testing has been FDA-approved as a companion diagnostic for nivolumab monotherapy and nivolumab/ipilimumab combination therapy in CRA. In addition, MSI testing is an FDA-approved companion diagnostic for pembrolizumab immunotherapy across all solid tumors [10–12].
An important weakness of the traditional fragment analysis approach for the detection of MSI status is its inherent need to test a tumor sample in parallel with matched normal tissue, which is often not clinically available. Consequently, the requirement for normal DNA can limit the number of patients who can be tested: normal tissue is often difficult to obtain, leading to suboptimal turnaround time or an inability to complete microsatellite testing. Although there are additional published studies that use NGS testing to evaluate MSI status [13–15], and others that use conventional fragment analysis without matched normal [16, 17], we have developed the first agile NGS platform that is Clinical Laboratory Improvement Amendments (CLIA) certified and New York State Clinical Laboratory Evaluation Program (NYS-CLEP) approved for clinical MSI testing using next-generation sequencing (NGS), which can be utilized across all solid tumors without the need for matched normal tissue [18].
Assay development
A broad pool of microsatellite instability markers previously identified by NGS in > 300 solid tumors of various histologies was considered as potential targets for inclusion in the MSI NGS assay [14]. To confirm the use of these microsatellite repeat regions as viable MSI NGS markers, we examined NGS data from 28 cases assayed by WES, representing 7 MSI-H, 7 microsatellite instability low (MSI-L), and 14 microsatellite stable (MSS). Variant calling of the WES BAM files resulted in 233,269 indel loci detected across the 28 samples. For each indel with a specific reference allele, alternate allele, and homopolymer repeat number, a Fisher's exact test was performed to test for a difference in proportion between MSI-positive (MSI-H) cases and MSI-negative cases (MSI-L and MSS). Stringent filtering was then applied: unique homopolymer indels (alt allele length range 5–7 bp) with a highly significant Fisher's exact test P value < 0.0001 that were present in ≥ 80% (at least 6 out of 7) of MSI-H cases but absent from MSS cases were retained. The resultant set of 40 loci, representing 21 chromosomes, was included in the MSI NGS panel design (Supplementary Table 1).
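The per-locus filter described above combines a significance test on a 2×2 table (indel present/absent × MSI-H/non-MSI-H) with a prevalence cutoff. A minimal sketch, using a one-sided Fisher's exact test implemented from the hypergeometric distribution; the counts are illustrative, not from the paper:

```python
# Sketch of the locus-selection filter: a one-sided Fisher's exact test
# followed by the >= 80% MSI-H prevalence and zero-MSS requirements.
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(>= a indel-positive cases in the MSI-H group) under the
    hypergeometric null, for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

def keep_locus(msi_h_with_indel, msi_h_total, mss_with_indel, mss_total):
    p = fisher_one_sided(msi_h_with_indel, msi_h_total - msi_h_with_indel,
                         mss_with_indel, mss_total - mss_with_indel)
    # Stringent filter: p < 1e-4, present in >= 80% of MSI-H, absent in MSS.
    return (p < 1e-4
            and msi_h_with_indel / msi_h_total >= 0.8
            and mss_with_indel == 0)

print(keep_locus(7, 7, 0, 14))  # indel in all 7 MSI-H, in no MSS -> True
```

Note the paper's exact test variant (one- vs two-sided) is not stated; the one-sided form is used here for simplicity.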
Development of MSI NGS caller
The MSI NGS caller was developed using a training dataset of 94 FFPE samples, which included 22 MSI-H and 25 MSS samples plus matched normals, as previously determined by gold standard MSI-PCR [19, 20]. Indels were called from mapped BAM files using TNScope v201711.02 (Sentieon Inc., Mountain View, CA, USA). For each homopolymer locus, the number of peaks (count of distinct indel lengths at the same locus) and the average indel length (mean of indel lengths at each locus) were calculated (Supplementary Table 2). Out of 522 loci, 29 consistently generated peaks data for > 80% of the cases in the training set (Supplementary Table 3), including the BAT-25 and BAT-26 PCR markers. These 29 highly prevalent loci were therefore chosen for further validation and MSI analyses: for each locus, the number of peaks and average indel lengths of the 94 training cases were used as input for principal component analysis (Supplementary Table 2, subset). PCA was used to visualize a clear separation of MSI-H cases from MSS as well as matched normal samples (Figure 1A). Next, unsupervised clustering using the k-means algorithm was performed with k = 3 (3 centroids); k was set at 3 to capture the wide spread of the MSI-H group in two separate clusters, with only one cluster expected for the MSS and matched normal cases (Supplementary Table 4). The k-means algorithm works iteratively to assign each data point to one of k groups based on similarity across the 58 features; each cluster's centroid is the collection of feature values that defines the resulting cluster. Clusters 1 and 2 were assigned as "MSI-H" and cluster 3 was assigned as "MSS". For classifying the test data set as well as other study samples, this trained k-means cluster model was used: each test sample's 58 features were assigned the class label of the closest centroid, based on its Euclidean distance from all three centroids.
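The final assignment step described above is nearest-centroid classification. A minimal sketch; the centroids and sample below are toy 2-D values standing in for the trained 58-dimensional model, which is not published here:

```python
# Sketch of the nearest-centroid assignment step: each test sample's
# feature vector (peak counts + mean indel lengths across 29 loci in the
# real assay) is labeled with the class of the closest training centroid.
import math

def nearest_centroid(features, centroids, labels):
    """Assign the label of the centroid at minimum Euclidean distance."""
    def dist(c):
        return math.sqrt(sum((f - x) ** 2 for f, x in zip(features, c)))
    best = min(range(len(centroids)), key=lambda i: dist(centroids[i]))
    return labels[best]

# Toy 2-D illustration (the real model uses 58 features and k = 3):
centroids = [(5.0, 9.0), (4.0, 7.0), (1.0, 2.0)]
labels = ["MSI-H", "MSI-H", "MSS"]   # clusters 1 and 2 -> MSI-H, 3 -> MSS
print(nearest_centroid((1.2, 2.5), centroids, labels))  # -> MSS
```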
Figure 1: Development and assessment of the MSI NGS caller. (A) Principal component analysis was used to visualize separation of 94 MSS, MSI-H, and normal training cases that were run by MSI-PCR and MSI NGS. (B) 11 out of 24 samples with MSI-PCR data fell within a ±3 centroid distance between clusters 1 and 3. Four had discordant MSI-PCR and MSI NGS calls (black). The inconclusive range was set at > –3 to < 3 (dashed lines). (C and D) The average number of peaks identified in MSI-H, MSS, and normal samples by MSI NGS and MSI-PCR for the two shared Bethesda markers.
Defining an inconclusive range
As with all assays that require a set threshold to determine the reported outcome, clinical samples that reside close to the decision boundary can be inaccurately called. Specific to the development of this MSI NGS assay, an inconclusive range is necessary to protect against both false positive reporting of MSS cases as MSI-H (subjecting patients to unnecessary treatment) and false negative reporting (leading to missed therapy). As the centroid distances for cluster 1 (MSI-H) and cluster 3 (MSS) get closer together, the ability of the assay to resolve MSS from MSI-H decreases. Comparing MSI-PCR and MSI NGS calls and reviewing the proximity of the cluster 1 (MSI-H) and cluster 3 (MSS) centroid distances to each other allowed for the identification of discordant reporting relative to the MSI-PCR gold standard. Of the 24 samples with MSI-PCR data, 11 resided within a ±3 centroid distance between clusters 1 and 3. Of these 11 samples, 4 had discordant calls when comparing MSI-PCR to MSI NGS, an approximate discordance of 37% within the ±3 centroid range (Figure 1B). A critical observation on reviewing the disparate samples is that two of the MSI NGS samples were reported as MSI-H but were reported by MSI-PCR as MSS (false positive reporting), which has critical implications for patient treatment and further emphasizes the need for an inconclusive range in testing. Conversely, the 13 samples with a centroid cluster difference > 3 between centroids 1 and 3 had 100% concordance. Therefore, the boundary for the inconclusive range of the MSI NGS assay was set at > –3.0 to < 3.0.
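The three-way reporting rule above can be sketched as a small function. This is a simplified reading of the paper's rule: the signed difference between a sample's two centroid distances must clear the ±3.0 band, otherwise the sample is reported inconclusive (the exact sign convention of the published "centroid distance" is an assumption here):

```python
# Sketch of the reporting rule: report MSI-H or MSS only when the
# centroid-distance difference clears the +/-3.0 inconclusive band.
def report_msi(dist_to_msih, dist_to_mss, band=3.0):
    delta = dist_to_mss - dist_to_msih   # positive when closer to MSI-H
    if delta >= band:
        return "MSI-H"
    if delta <= -band:
        return "MSS"
    return "Inconclusive"

print(report_msi(2.0, 9.0))   # well inside MSI-H territory -> MSI-H
print(report_msi(5.5, 4.8))   # near the decision boundary -> Inconclusive
```

This mirrors the design intent stated in the text: a sample near the boundary is never forced into a clinical call.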
Assessment of MSI NGS caller
To evaluate the performance of the MSI NGS caller, it was first applied to the training set of 94 samples (Supplementary Table 5). Within this cohort, eight samples fell into the pre-defined inconclusive category, accounting for ~8.5% of all samples tested (Supplementary Table 5, highlighted samples). Of these eight inconclusive samples, two (25%) were MSI-H by MSI-PCR (Supplementary Table 5, bolded samples). For the remaining 86 samples, the MSI caller performed with an accuracy of 100% with no false reporting (Table 1). Performance was then assessed using a separate validation set of 47 cases (23 MSI-H and 24 MSS) previously tested using the same clinically approved MSI-PCR assay (Supplementary Table 6). Within the validation set, six samples fell into the inconclusive category (~12.8%) (Supplementary Table 6, highlighted samples). The MSI caller performed with an accuracy of 100% on this separate validation cohort with no false positives or false negatives reported (Table 1).
Table 1: Performance of MSI method on training and validation cohorts
| Cohort | TP | FP | TN | FN | Inconclusive | Total | Sensitivity | Specificity | PPV | NPV | Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Training (94 cases) | 20 | 0 | 66 | 0 | 8 | 94 | 100% | 100% | 100% | 100% | 100% |
| Validation (47 cases) | 17 | 0 | 24 | 0 | 6 | 47 | 100% | 100% | 100% | 100% | 100% |
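The performance figures in Table 1 follow directly from the confusion-matrix counts once inconclusive samples are excluded; a quick sketch recomputing them:

```python
# Recompute Table 1's metrics from the TP/FP/TN/FN counts
# (inconclusive samples are excluded from the denominators).
def metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

print(metrics(20, 0, 66, 0))  # training cohort: all 100%
print(metrics(17, 0, 24, 0))  # validation cohort: all 100%
```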
To further corroborate concordance with the MSI-PCR assay, we used the two shared "Bethesda" markers, BAT-25 and BAT-26, included in the NGS assay for samples with matched normal available. The number of peaks by NGS and PCR was calculated for tumor and matched normal cases determined as unstable for BAT-25 by the PCR assay (Supplementary Table 7). This analysis showed 17 out of 18 (94%) BAT-25-unstable cases with a difference in the number of peaks (i.e., unique indels present at each locus) for both the PCR and NGS assays, demonstrating very high concordance between the two. Similarly, 19 out of 20 (95%) BAT-26-unstable cases showed a difference in the number of peaks for both assays (Supplementary Table 8). The numerical shift in the difference in number of peaks between the two methods can be attributed to the greater sensitivity of the NGS assay coupled with a calling algorithm designed to accurately call indels in repeat regions. The average number of peaks by NGS and PCR for both BAT-25 and BAT-26 in the MSI-H, MSS, and normal groups supports a potentially increased sensitivity offered by the NGS assay (Figure 1C and 1D).
Analytical validation
As part of the analytical validation, the robustness of the MSI NGS assay, i.e., its ability to remain unaffected by small variations in procedural parameters, was evaluated. To determine assay robustness, the MSI NGS limit of detection (LOD) was evaluated using serial dilutions of MSI-H control cell line DNA with MSS control cell line DNA, as well as varying levels of tumor DNA mixed with matched normal DNA to assess the proportion of malignant tissue required for testing. Additionally, studies evaluating potential interferents, including variable nucleic acid input and batch size, were performed (Table 2). These studies included multiple solid tumor specimens representing both microsatellite stable (MSS) and MSI-H phenotypes, in addition to a no-template control (NTC), MSI-H control (MSI-CTL), and MSS control (MSS-CTL) sample.
Table 2: Summary of assay robustness studies
| Study section | Design summary | Demonstration |
|---|---|---|
| Serial dilutions (LOD) | MSI-CTRL DNA mixed with MSS-CTRL DNA | Evaluate effect of synthetic percent tumor nuclei (range: 100% MSI-H to 100% MSS) on MSI calling; determine LOD |
| Tumor content (LOD) | 4 MSI-H tumor DNA samples mixed with matched normal DNA | Evaluate effect of percent tumor nuclei (100, 75, 50, 40, 30, 20, 10%) on MSI calling; determine LOD |
| Variability in DNA input quantity | 5 MSI-H samples serially diluted for DNA input | Evaluate effect of DNA input (50, 20, 10, 5 ng) on MSI calling |
| Batch size | 40 libraries (20 MSI-H and 20 MSS) tested as 5, 10, 20, 40 batch sizes | Evaluate effect of batch size on MSI calling as a result of the number of samples sequenced per run |
Level of detection (LOD)
To determine the level of detection of the MSI NGS assay, serial dilutions of MSI-H positive DNA extracted from commercially available cell lines were prepared with normal (i.e., non-MSI) DNA. The MSI-H DNA contribution ranged from 0.0098% to 100%, and the MSI NGS assay was able to identify and call MSI-H status down to 2.5% input levels (Figure 2A). As expected, the shift from MSI-H to MSS calling aligned with the centroid distance (difference between clusters 1 and 3) falling below the 3.0 threshold that defines our inconclusive range, further confirming that the assay lacks resolution in this range.
Figure 2: The effects of small variations in procedural parameters on the robustness of MSI NGS calls. NGS calls are plotted as a relative distance to the boundaries of the inconclusive cluster difference (dashed red lines). (A) MSI NGS call at decreasing amounts of MSI-H positive DNA mixed with normal DNA. (B–D) MSI NGS calls across decreasing tumor content (B), varying amounts of DNA input (C) and sequencing batch sizes (D). RD-# are unique, deidentified clinical patient samples used for testing.
As the clinical LOD for MSI-H is based on the tumor content of the sample, we further determined the LOD of the MSI NGS assay with regard to tumor content using four samples selected from the 50 MSI-H gold-standard samples with sections containing areas of both tumor (70–90% tumor nuclei content) and adjacent normal tissue. To perform this evaluation, MSI-H tumor samples with abundant adjacent normal tissue were selected, and both elements were independently processed for DNA isolation. Tumor content was defined through pathological assessment of the tumor fraction, representing a range of neoplastic cells. The non-tumor DNA was mixed with the tumor DNA to simulate decreasing tumor content for two MSI-H samples. A series of seven different percent tumor nuclei dilutions was carried out on the four samples (Supplementary Table 9). For each sample, at the varying percent tumor nuclei amounts, QC data were collected and the number of peaks and mean indel length per locus were calculated to determine the MSI status (Supplementary Table 9). Correlation values were high between the sample-specific indel peak numbers (mean r = 0.932) and mean indel lengths (mean r = 0.924) across the different percent tumor nuclei values, with the MSI NGS calls for three of the four samples within the dilution series maintaining 100% concordance down to 7–9% tumor content (Figure 2B). As the RD-5365 18% dilution sample failed QC (Supplementary Table 9, highlighted) and was used for the subsequent serial dilution to 9%, these data points were excluded from interpretation. The three samples that passed QC maintained accurate calling of MSI-H status down to 7–9% tumor content; therefore, the LOD for clinical testing with the MSI NGS assay has been set at 10% tumor nuclei.
Variability in DNA input
Inconsistency in DNA input amounts can be expected in normal practice due to variability at the lab bench. To evaluate the potential impact of such variability, the effect of DNA inputs of 50, 10, and 5 ng, compared to the standard input of 20 ng, was assessed on five samples (three MSI-H and two MSS; Supplementary Table 10). Correlation values were calculated for both the number of peaks and mean indel length against the 20 ng input for each sample (Supplementary Table 10). When comparing the MSI NGS call for each sample across the four DNA input amounts, there was 100% concordance (Figure 2C). Although concordance was high across all DNA input amounts, the standard input of the assay remains 20 ng.
Batch size
In routine clinical testing, variability in batch size can be expected. To demonstrate that the MSI NGS assay results are unaffected by the number of samples included in a batch (sequencing run), the concordance of MSI NGS calling with varying numbers of samples per run (5, 10, 20, and 40) was characterized. Forty samples representing both MSS and MSI-H status were run on a single flow cell, followed by subsets of 20, 10, and 5 samples, with each subset run on a single flow cell. As the twenty-sample run size is considered optimal for the laboratory workflow, correlation values were calculated, and were very high for both the number of peaks and mean indel length when comparing the 20-sample batch size to the 5-, 10-, and 40-sample batch sizes for each sample (Supplementary Table 11). When comparing MSI calling for each sample across the different batch sizes, there was very high concordance (97.5%) across all runs (Figure 2D). Sample RD-5289 was the only sample to show disparity in final MSI NGS calling, as the 10- and 20-sample batch sizes were assigned inconclusive status based on centroid distance (Figure 2D, Supplementary Table 11); in both cases the value was very close to the threshold. The consistent MSI NGS results demonstrate that the assay can be performed in batch sizes up to a maximum of 40 per run, while allowing as few as 5 samples per run depending on laboratory volume.
MSI NGS precision (reproducibility studies)
Precision of the MSI NGS assay was determined through a series of reproducibility experiments to confirm the test's ability to make concordant calls across variables typically encountered in routine clinical testing (intra-run, inter-run, inter-operator, inter-day, and inter-barcode variance). Furthermore, this study defined the utility of, and thresholds associated with, a common set of control samples (NTC, MSI-CTL, and MSS-CTL), which are included in every run (Table 3). To measure intra-assay precision, six DNA samples were run in triplicate within a run by a single operator. Inter-assay precision was determined by running the same DNA samples without replication on a different day by the same operator, with different barcodes on a different day for the same operator, and on a different day with different barcodes by a different operator (Table 3). This study was designed to independently measure the precision of the analytic steps (reproducibility from DNA), with the result (MSS, MSI-H, or Inconclusive) derived from the MSI NGS caller.
Table 3: Summary of reproducibility validation studies
| Study section | Design summary | Demonstration |
|---|---|---|
| Intra-run variance | 6 libraries (3 MSI-H + 3 MSS) + controls tested in triplicate in a single run | Evaluate change in MSI calling as a result of sequencing and bioinformatics pipeline variability in a single run |
| Inter-run variance (tech 1, day 1) | 20 libraries (10 MSI-H + 10 MSS) + controls tested 1× across multiple runs | Evaluate change in MSI calling as a result of sequencing and bioinformatics pipeline variability across multiple runs |
| Inter-run variance (tech 2, day 2) | 20 libraries (10 MSI-H + 10 MSS) + controls tested 1× across two different operators | Evaluate change in MSI calling as a result of sequencing and bioinformatics pipeline variability across 2 operators |
| Inter-run variance (tech 1, day 3) | 20 libraries (10 MSI-H + 10 MSS) + controls tested 1× across two days | Evaluate change in MSI calling as a result of sequencing and bioinformatics pipeline variability across 2 days |
| Between barcodes | 20 libraries (10 MSI-H + 10 MSS) + controls tested with multiple barcodes | Evaluate change in MSI calling as a result of sequencing and bioinformatics pipeline variability across multiple barcodes |
| MSI run controls | NTC (library QC only), MSI-H and MSS run controls used as template for multiple reproducibility evaluations | Evaluate change in MSI calling as a result of sequencing and bioinformatics pipeline variability across multiple replicates |
Intra-run variance
The intra-run variance reproducibility study was completed by operator 1 on day 1 using DNA from 6 samples (3 MSI-H and 3 MSS) processed in triplicate with rotating barcodes (Table 3). QC metrics were calculated for all samples run as part of the reproducibility studies (Supplementary Table 12).
For each sample in the intra-run reproducibility study, the number of peaks and mean indel length per locus were calculated to determine the MSI status, and correlations for these parameters were calculated across the replicate sets (Supplementary Table 13). Although inconclusive calls were made for three individual replicates within RD-5312 and RD-5345, due to centroid cluster distances residing very close to the cut-off (Figure 3A, Supplementary Table 13), the intra-run reproducibility study demonstrated 100% concordance of MSI NGS cluster designations across the three replicates of the six samples evaluated (Supplementary Table 13). As the inconclusive range is set to protect against miscalling, reproducibility was defined as the assay's ability not to call an MSI-H case MSS or vice versa; an inconclusive call is deemed acceptable, especially where the centroid distance resides close to the decision boundary.
Figure 3: The effects of variables found in routine clinical testing on the precision of MSI NGS calls. NGS calls are plotted as a relative distance to the boundaries of the inconclusive cluster difference (dashed red lines). High concordance of MSI NGS calls is observed with sample replicates from intra-run (A), inter-run and barcode (B), inter-technologist (C), and inter-day (D) reproducibility studies. RD-# are unique, deidentified clinical patient samples used for testing.
Inter-run and barcode variance
Between-run reproducibility was performed on 20 samples (10 MSI-H and 10 MSS) across 2 operators, 3 different days, and 3 different runs, with each replicate using a different barcode. Briefly, on day 2, operator 1 completed libraries for the 20 samples, sequenced without replication. On day 3, operator 2 completed a second library for the 20 samples, again sequenced without replication. Also on day 3, operator 1 completed a third library, sequenced without replication, for a total of three inter-run DNA replicates for the 20 samples.
For each sample in the inter-run and barcode reproducibility study, the number of peaks and mean indel length per locus were calculated to determine the MSI status, including correlations across the replicate sets (Supplementary Table 13). The inter-run and barcode reproducibility study showed 100% concordance when comparing the MSI NGS cluster designations (Supplementary Table 13); however, four of the twenty samples had disparity in final calling due to inconclusive status related to the cluster boundary (Figure 3B). As with all clinical testing that requires thresholds, a transition from a clinical call to an inconclusive status should not be counted as discordant, since certain samples will always reside close to the decision boundary that defines the inconclusive range. More importantly, when defining reproducibility, an assay should not switch from MSI-H to MSS or vice versa, as this would indicate a lack of precision.
Between operator variance
Between operator reproducibility was performed on 20 samples across 2 operators, and 2 different runs. Briefly, operator 1 completed libraries for the 20 samples that were sequenced without replication. Operator 2 completed libraries for the same 20 samples that were again sequenced without replication.
For each sample of the inter-operator reproducibility study, the number of peaks and mean indel length per locus were calculated to determine the MSI status (Supplementary Table 13). For the MSI NGS assay, the inter-operator reproducibility demonstrated 100% concordance when comparing the MSI NGS cluster designations between the two replicates of twenty samples; however, two samples had individual inconclusive calls due to centroid distances residing extremely close to the decision boundary (Figure 3C). As previously described, these disparities are regarded as maintaining precision within this assay.
Between day variance
Between-day reproducibility was evaluated on 20 samples with a single operator across 2 different days. Briefly, operator 1 completed libraries for the 20 samples that were sequenced without replication on day 2. Operator 1 then completed libraries for the same 20 samples that were again sequenced without replication on day 3.
For each sample of the inter-day reproducibility study, the number of peaks and mean indel length per locus were calculated to determine the MSI status (Supplementary Table 13). For the MSI NGS assay, the inter-day reproducibility demonstrated 100% concordance when comparing the MSI NGS cluster designations between the two replicates of twenty samples sequenced on different days (Figure 3D). Three of twenty samples had individual inconclusive calls due to centroid cluster distances close to the decision boundary.
Run level controls: NTC, MSI-CTL, and MSS-CTL
The NTC (water) library preparation was included in each run to monitor assay contamination. NTC libraries are not sequenced because they severely affect cluster performance; instead, library quantitation in nM (qPCR) and library length (TapeStation, Agilent Technologies, Santa Clara, CA, USA) are assessed and used as contamination threshold metrics. During the validation, the NTC library was generated 14 times as individual replicates, producing an average concentration of 4.0 nM (range: 1.6–8.6 nM) and an average library length of 161 bp (range: 147–218 bp) (Supplementary Table 14). As the expected library size of this assay is ~300–350 bp (171 bp average insert plus 130 bp of adapter and oligo sequence), and the average library lengths for the MSI-CTL and MSS-CTL were measured at 321 bp and 327 bp respectively, it is evident that the NTC sample is not incorporating DNA into the library preparation. Therefore, the NTC library thresholds were set at 8.5 nM (quantitation via qPCR; mean + 2 SD) and 206 bp (length via TapeStation; mean + 2 SD) to monitor the assay for gDNA contamination.
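The mean + 2 SD threshold derivation above is straightforward to sketch. The replicate concentrations below are illustrative assumptions (the paper reports only n = 14, a mean of 4.0 nM, and the resulting 8.5 nM threshold, not the individual values):

```python
# Sketch of the NTC contamination-threshold derivation: flag any future
# NTC exceeding mean + 2 sample standard deviations of the validation
# replicates. Replicate values are assumed for illustration.
import statistics

def contamination_threshold(replicates):
    """Mean + 2 SD over the validation replicates."""
    return statistics.mean(replicates) + 2 * statistics.stdev(replicates)

ntc_conc_nM = [1.6, 2.8, 3.1, 3.5, 3.9, 4.2, 4.8, 5.3, 8.6]  # assumed values
print(f"qPCR contamination threshold: {contamination_threshold(ntc_conc_nM):.1f} nM")
```

The same calculation applied to the library-length replicates would yield the 206 bp TapeStation threshold.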
The MSI-CTL and MSS-CTL were included in each MSI NGS run as positive and negative run controls. During the validation, the MSI-CTL and MSS-CTL were each run a total of 14 times. The correlation of each MSI-CTL and MSS-CTL replicate was calculated by comparing each subsequent run to the first MSI-CTL or MSS-CTL result, with average correlations of 0.9467 (MSI-CTL) and 0.9265 (MSS-CTL) for number of peaks, and 0.9833 (MSI-CTL) and 0.9230 (MSS-CTL) for average indel length, demonstrating high correlation and reproducibility for these metrics (Supplementary Table 15). The accurate MSI-H and MSS calls for the MSI-CTL and MSS-CTL, along with other sample-level metrics, including percent mapped reads and percent singletons (single-end, non-paired reads), as well as run-level parameters such as Cluster PF%, Total Reads PF, and % ≥Q30, which have been previously defined by Illumina, Inc. [21], are utilized to determine run-level QC (Supplementary Tables 16 and 17).
MSI NGS accuracy
MSI status by NGS and by PCR were compared to assess the accuracy of the DNA-seq assay using fifty MSI-H and fifty MSS samples of multiple histologies (Supplementary Table 18). Twenty samples were included per run (10 MSI-H and 10 MSS); for each sample, the number of peaks, mean indel length per locus, and centroid cluster distances were calculated to determine the MSI status (Supplementary Table 19; QC metrics: Supplementary Table 17). Although 15 of 100 samples (15%; 8 EEM, 6 CRA, 1 female genital) fell into the inconclusive range predefined during the development of the NGS assay, the overall concordance between MSI NGS and MSI-PCR for the 100 samples was 98%, due to two MSI-PCR MSI-H samples being called MSS by MSI NGS (1 EEM and 1 CRA; Supplementary Table 19, highlighted samples). The two false negative cases can be attributed to the fact that these cases lie on the edge of the predefined decision boundary of 3.0 between cluster centroid 1 (MSI-H) and cluster centroid 3 (MSS), an expected observation when decision boundaries are defined in clinical testing.
The high concordance and the absence of any false positive calls confirm the high accuracy of this sequencing workflow and pipeline. From these MSI NGS results, assay sensitivity, specificity, PPV, NPV, and several sequencing-level metrics for use as future sample-level quality control thresholds were calculated. The sensitivity and specificity of the accuracy study are 96% and 100% respectively, with a PPV of 100%, an NPV of 96%, and an inconclusive rate of 15% (Table 4).
Table 4: Sensitivity and specificity of accuracy study
TP   FP   TN   FN   Inconclusive   Total   Sensitivity   Specificity   PPV    NPV
37   0    46   2    15             100     96%           100%          100%   96%
DISCUSSION
Traditionally, the clinical significance of microsatellite instability status rested on the observations that a subset of MSI-H cancers is associated with Lynch syndrome [22], necessitating further testing, and that MSI-H colorectal cancers have an improved prognosis and do not respond to fluorouracil-based adjuvant chemotherapy [23]. Recently, the discovery that MSI-H tumors respond to PD-1 checkpoint inhibitor therapy [24–26] has resulted in the first pan-cancer drug indication based on molecular status [27]. The ability to determine MSI status across multiple tumor types is paramount to help identify patients who are likely to respond to CPI therapy while avoiding unnecessary toxicity in patients who are unlikely to respond. We have developed a robust MSI NGS assay that shares comparable specificity and sensitivity with existing gold-standard PCR-based methodologies, and which has been CLIA certified and NYS CLEP approved for clinical testing.
The ability to accurately identify MSI status without the need for matched normal tissue, a major hurdle in clinical testing, relies on our MSI caller, which utilizes 29 loci targeting homopolymer tandem repeat regions within the genome, integrating both mean indel length and number of unique indel peaks at each locus to define the Euclidean distance to cluster centroids and thereby the MSI status. Utilizing a gold-standard sample set with MSI status defined by PCR-based clinical testing as the training and test sets, we developed an MSI caller that showed 100% sensitivity and 100% specificity in both groups, with an inconclusive rate of approximately 10%. As described, the inconclusive range is determined by a defined Euclidean distance between MSI-H cluster 1 and MSS cluster 3. The integration of an inconclusive status helps to protect against false positive and false negative reporting, which may lead to detrimental pharmacotherapy. To this end, a sample that resides in the inconclusive range will be reported clinically as inconclusive, indicating that therapy should not be pursued in this small subset of patients.
Utilizing the MSI caller algorithm and a defined set of gold-standard clinical samples, where MSI status had previously been reported using a PCR-based clinical assay, we have carried out an analytical validation of an MSI NGS assay that can be utilized to determine MSI status in solid tumors for stratification of treatment to CPI therapy. As part of the analytical validation, assay robustness, precision, and accuracy were determined. The MSI NGS assay's robustness, or the measure of the assay's capacity to remain unaffected by small variations in procedural parameters, defined a tolerance to a minimum DNA input of 5 ng (although the standard input is 20 ng), the sensitivity to detect genomic instability in tumors with 10%–100% neoplastic nuclei content, and the ability to run batch sizes from 5 to 40 samples per run, all of which are critical to help mitigate the vast sample variance identified within the clinical laboratory workflow. Precision of the MSI NGS assay was determined through a series of reproducibility experiments to ensure the test's ability to make concordant calls across intra-run, inter-run, inter-operator, inter-day, and inter-barcode variance. Furthermore, during the reproducibility studies, the utility of and thresholds for a common set of control samples (NTC, MSI-CTL, and MSS-CTL), which are to be included in every clinical run, were defined. To measure accuracy of the MSI NGS assay, 100 gold-standard solid tumor samples previously tested using our NYS CLEP-approved MSI-PCR assay (Project ID:709) were utilized. The reported sensitivity and specificity of the accuracy study are 96% and 100% respectively, with a PPV of 100%, an NPV of 96%, and an inconclusive rate of 15%.
While the PCR-based Bethesda markers were designed for MSI profiling in CRC, the large number of target regions included in the MSI NGS assay allows for increased confidence in pan-cancer testing without the requirement for matched normal tissue, a key advantage over PCR. Prior to MSI NGS testing, with our laboratory performing MSI in a pan-cancer setting, up to 40% of clinical orders lacked matched normal tissue and required subsequent requests for alternate blocks or blood to perform MSI-PCR. Of the cases where alternate material was requested, slightly less than one-half yielded matched normal material for testing, with an excessive wait time of typically 10 to 12 days, leading to delayed reporting and an overall inability to complete 20% of all clinical requests for MSI testing. Since the activation of the MSI NGS assay within the clinical laboratory, the need to request and wait for matched normal tissue has been alleviated, and all cases that meet the requirements set during the analytical validation can be tested, greatly reducing turnaround time and failure rates within the lab.
Overall, we have developed the first NYS CLEP-approved, analytically validated MSI assay that requires minimal input of tumor DNA, does not need matched normal tissue, and is histology agnostic. Although it has a cost and turnaround time equivalent to MSI-PCR, it is much more scalable in a high-volume laboratory. Although this assay is NGS-based, the workflow is performed similarly to a single-analyte test and can be efficiently and inexpensively integrated into any molecular diagnostic laboratory.
Specimens
Samples were procured with informed patient consent under an institutional banking policy (IRB Protocol I115707), and the study was approved by the institutional review board at Roswell Park Comprehensive Cancer Center (IRB Protocol # BRD 073116). For assay validation, 100 formalin-fixed paraffin-embedded (FFPE) human clinical specimens collected from 2009–2017, and a subset of matched normal tissues, from colorectal cancer (73 cases), endometrioid carcinoma (18), uterine (4), small intestine (2), prostate (1), stomach adenocarcinoma (1), and female genital (1) cancer patients, stored at the OmniSeq, Inc. (Buffalo, NY, USA) and Roswell Park Comprehensive Cancer Center (Buffalo, NY, USA) remnant tissue biobanks, were used to evaluate the performance of the MSI NGS assay. Two human cell lines, non-small cell lung cancer (NSCLC) HCC-78 cells (DSMZ, Braunschweig, Germany) and colorectal cancer (CRC) HCT-116 cells (ATCC, Manassas, VA, USA), processed as FFPE blocks, were also used for development and as internal run controls.
Tissue QC and extraction
Hematoxylin and eosin (H&E)-stained tumor and normal tissue sections were reviewed by a board-certified anatomical pathologist to establish tissue QC parameters. Criteria for neoplastic testing were ≥ 2 mm2 of tumor surface area per slide, with tumor cellularity ≥ 10% and necrosis ≤ 50%. For level-of-detection studies, non-malignant tissue was also identified for isolation and reviewed to exclude any neoplastic tissue. Genomic DNA was extracted from the areas identified by the pathologist using 3–5 unstained slides with the truXTRAC FFPE extraction kit (Covaris, Inc., Woburn, MA, USA), as described previously. DNA was eluted in 100 µL water, and yield was determined by the Qubit DNA HS Assay (Thermo Fisher Scientific, Waltham, MA, USA), per the manufacturer's recommendation. To ensure adequate library preparation, a predefined input of 20 ng DNA was used.
Run controls
To establish thresholds and daily QC parameters, run controls were identified and included in each library preparation batch and NGS run. They included both MSI positive (HCT-116) and MSS (HCC-78) controls, as well as a no template control (NTC, water). Positive controls provide templates for all targets for MSI-H interpretations, while negative controls monitor assay specificity. The NTC is used to monitor assay contamination.
Library preparation and NGS
MSI NGS libraries were prepared from 20 ng DNA using the TruSeq Custom Amplicon Low Input Kit (Illumina, San Diego, CA, USA). The panel content is detailed in the section "Results: Assay development". Following hybridization, oligos were extended and ligated, and unique indexes were added. Libraries were amplified, purified, quantitated, and normalized to 4 nM. Up to 40 equimolar libraries were pooled, denatured, and further diluted to 7 pM. Pooled libraries were sequenced on a MiSeq (Illumina) sequencer using a 300-cycle paired-end sequencing kit to obtain a 500X mean depth per sample.
Accuracy studies
MSI NGS accuracy was evaluated by comparison with a PCR-based, NYS CLEP-approved assay for all samples. For gold-standard PCR analysis, 4 sets of fluorescently labeled primers were used for amplification of five markers (BAT-25, BAT-26, D2S123, D5S346, and D17S250). Internal lane size standards added to the PCR products assured accurate sizing of alleles and adjusted for run-to-run variation. PCR products were separated by capillary electrophoresis using an ABI PRISM 3500xl Genetic Analyzer, and output data were analyzed with GeneMapper Software 5 (both Applied Biosystems, Foster City, CA, USA).
NGS analysis pipeline and QC
QC metrics were established and defined at the run and sample level to ensure high-quality results and to monitor any run-to-run variance or long-term drift (Supplementary Table 20). Sequencing data from the Illumina platform were first processed using a custom bioinformatics pipeline for reference mapping and indel calling, during which validation-defined quality control (QC) specifications for depth of coverage were used as acceptance criteria. To ensure high-quality results, a QC system was developed based on NGS data generated at validation. QC criteria were established for several metrics at the run, sample, and run-control levels, with defined thresholds to accept or reject one or more aspects of sequencing. Likewise, specific QC metrics were monitored over time to detect any potential long-term assay drift. Quality filters were applied at the amplicon level to remove counts below the threshold for detection, and at the base-pair level to remove low-quality indel calls.
A custom MSI NGS caller and pipeline was developed to predict MSI status from NGS data utilizing a training set (n = 94; MSI-H = 22, MSS = 24) and test set (n = 48; MSI-H = 22, MSS = 24) of gold-standard MSI-PCR samples collected from the clinical laboratory archive (see Results: Development of MSI NGS Caller). In general, the pipeline was designed to read the ".fastq" files of all samples from a sequencing run and conduct sequence alignment, variant calling, indel extraction, calculation of indel length and number of peaks, and MSI prediction (Supplementary Figure 1). Specifically, in the first part of the algorithm, the fastq file of each sample within a run was aligned to the human genome (hg19) using bwa, resulting in a SAM alignment file, which was sorted, converted to BAM format, and indexed for faster processing. All aligned reads within a sample-specific indexed BAM file were then used for custom variant calling (TNscope v201711.02, Sentieon Inc., Mountain View, CA, USA) to generate VCF (variant call format) files. Indel calls were then extracted from the VCF files and used to calculate the number of peaks (number of unique indels at each locus) and the average indel length for the 29 loci identified in the MSI caller development, as follows. For each locus $X$:

$$\text{Average indel length}(X)=\frac{\sum_{i=1}^{n}\text{allelic\_length}_{i}}{\text{Total number of alleles at locus }X}$$

$$\text{nPeaks}(X)=\text{number of unique indel alleles at locus }X$$

Finally, MSI classification was performed using these 58 features: the Euclidean distance was calculated between each new sample and the centroid of each of the original training k-means clusters, and the closest cluster by Euclidean distance was used to assign the MSI NGS prediction for the sample (Supplementary Figure 2).
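The per-locus feature calculation and nearest-centroid assignment can be sketched in a few lines of Python; the function names and centroid values below are illustrative only (the production pipeline uses bwa alignment and TNscope variant calling as stated):

```python
import math

def locus_features(indel_lengths):
    """(number of peaks, average indel length) for one locus.

    `indel_lengths` lists the indel allele lengths observed at the locus;
    the number of peaks counts the unique indel lengths.
    """
    n_peaks = len(set(indel_lengths))
    avg_length = sum(indel_lengths) / len(indel_lengths)
    return n_peaks, avg_length

def classify(feature_vector, centroids):
    """Assign a sample to the nearest k-means centroid by Euclidean distance."""
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    distances = {label: euclidean(feature_vector, c)
                 for label, c in centroids.items()}
    return min(distances, key=distances.get), distances
```

In the real caller, the feature vector concatenates both features over all 29 loci (58 values), and the distances to the decision boundary between clusters also drive the inconclusive call.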
Statistical analysis
Principal component analysis (FactoMineR v1.41 in R v3.4.2) was performed on the 58 features to visualize the separation between MSI-H and MSS/Normal samples. To develop a predictive model, k-means clustering (kmeans, stats package in R v3.4.2) was performed to identify three centroids (k = 3) representing two MSI-H clusters and a combined MSS/Normal cluster. This led to an intuitive and simple method of assigning future samples to the original clusters using the Euclidean distance measure. The correlation measure used throughout the study is Pearson's correlation coefficient, denoted "r". Standard performance metrics are defined as sensitivity $\left(\frac{TP}{TP+FN}\right)$, specificity $\left(\frac{TN}{TN+FP}\right)$, PPV $\left(\frac{TP}{TP+FP}\right)$, NPV $\left(\frac{TN}{TN+FN}\right)$, and accuracy $\left(\frac{TP+TN}{TP+FP+TN+FN}\right)$.
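These performance definitions translate directly into code; the confusion-matrix counts in the example are illustrative only, not the study's:

```python
def performance_metrics(tp, fp, tn, fn):
    """Standard diagnostic performance metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts (not the study's Table 4):
metrics = performance_metrics(tp=90, fp=5, tn=80, fn=10)
```

Note that inconclusive calls are excluded from these denominators and reported separately as an inconclusive rate.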
Author contributions
SP, JA, BB, JH, VG, JMC, CM, and STG contributed to the experimental design of this analysis. SP, JA, FLL, BB, JH, VG, MKN, YW, MG, JMC, APS, CM and STG prepared and analyzed data and were major contributors to writing and revising the manuscript. All authors read and approved the final manuscript.
ACKNOWLEDGMENTS
We thank Elizabeth Brese and Monica Murphy from the Roswell Park Comprehensive Cancer Center shared resources (funded by NCI P30CA016056) for their technical expertise in procuring the FFPE samples used in this study. This work was supported by National Cancer Institute (NCI) grant P30CA016056, involving the use of Roswell Park Comprehensive Cancer Center’s Genomic Shared Resource.
Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available due to a non-provisional patent filing covering the methods used to analyze such datasets but are available from the corresponding author upon reasonable request.
CONFLICTS OF INTEREST
SP, JA, FLL, BB, JH, VG, MKN, YW, MG, JMC, APS, CM, and STG are employees of OmniSeq, Inc. (Buffalo, NY, USA) and hold restricted stock in OmniSeq, Inc. JMC, CM, and STG are employees of Roswell Park Comprehensive Cancer Center (Buffalo, NY, USA), which is the majority shareholder of OmniSeq, Inc.
FUNDING
All studies were carried out and supported by OmniSeq, Inc. (Buffalo, NY, USA).
References
1. Bellizzi AM, Frankel WL. Colorectal Cancer Due to Deficiency in DNA Mismatch Repair Function. Adv Anat Pathol. 2009; 16:405–17. https://doi.org/10.1097/PAP.0b013e3181bb6bdc. [PubMed].
2. Umar A, Boland CR, Terdiman JP, Syngal S, de la Chapelle A, Rüschoff J, Fishel R, Lindor NM, Burgart LJ, Hamelin R, Hamilton SR, Hiatt RA, Jass J, et al. Revised Bethesda Guidelines for hereditary nonpolyposis colorectal cancer (Lynch syndrome) and microsatellite instability. J Natl Cancer Inst. 2004; 96:261–8. https://doi.org/10.1093/jnci/djh034. [PubMed].
3. Bonneville R, Krook MA, Kautto EA, Miya J, Wing MR, Chen HZ, Reeser JW, Yu L, Roychowdhury S. Landscape of Microsatellite Instability Across 39 Cancer Types. JCO Precis Oncol. 2017; 2017:1–15. https://doi.org/10.1200/PO.17.00073. [PubMed].
4. Maruvka YE, Mouw KW, Karlic R, Parasuraman P, Kamburov A, Polak P, Haradhvala NJ, Hess JM, Rheinbay E, Brody Y, Koren A, Braunstein LZ, D’Andrea A, et al. Analysis of somatic microsatellite indels identifies driver events in human tumors. Nat Biotechnol. 2017; 35:951–9. https://doi.org/10.1038/nbt.3966. [PubMed].
5. Hause RJ, Pritchard CC, Shendure J, Salipante SJ. Classification and characterization of microsatellite instability across 18 cancer types. Nat Med. 2016; 22:1342–50. https://doi.org/10.1038/nm.4191. [PubMed].
6. Cortes-Ciriano I, Lee S, Park WY, Kim TM, Park PJ. A molecular portrait of microsatellite instability across multiple cancers. Nat Commun. 2017; 8:15180. https://doi.org/10.1038/ncomms15180. [PubMed].
7. Kawakami H, Zaanan A, Sinicrope FA. Microsatellite Instability Testing and Its Role in the Management of Colorectal Cancer. Curr Treat Options Oncol. 2015; 16:30. https://doi.org/10.1007/s11864-015-0348-2. [PubMed].
8. Diaz LA, Le DT. PD-1 Blockade in Tumors with Mismatch-Repair Deficiency. N Engl J Med. 2015; 373:1979–1979. https://doi.org/10.1056/NEJMc1510353. [PubMed].
9. Overman MJ, McDermott R, Leach JL, Lonardi S, Lenz HJ, Morse MA, Desai J, Hill A, Axelson M, Moss RA, Goldberg MV, Cao ZA, Ledeine JM, et al. Nivolumab in patients with metastatic DNA mismatch repair-deficient or microsatellite instability-high colorectal cancer (CheckMate 142): an open-label, multicentre, phase 2 study. Lancet Oncol. 2017; 18:1182–91. https://doi.org/10.1016/S1470-2045(17)30422-9. [PubMed].
10. U.S. Food and Drug Administration. FDA grants accelerated approval to pembrolizumab for first tissue/site agnostic indication. 2017. Available from: https://www.fda.gov/Drugs/InformationOnDrugs/ApprovedDrugs/ucm560040.htm.
11. U.S. Food and Drug Administration. FDA grants nivolumab accelerated approval for MSI-H or dMMR colorectal cancer. 2017. Available from: https://www.fda.gov/Drugs/InformationOnDrugs/ApprovedDrugs/ucm569366.htm.
12. U.S. Food and Drug Administration. FDA grants accelerated approval to ipilimumab for MSI-H or dMMR metastatic colorectal cancer. 2018. Available from: https://www.fda.gov/Drugs/InformationOnDrugs/ApprovedDrugs/ucm613227.htm.
13. Waalkes A, Smith N, Penewit K, Hempelmann J, Konnick EQ, Hause RJ, Pritchard CC, Salipante SJ. Accurate Pan-Cancer Molecular Diagnosis of Microsatellite Instability by Single-Molecule Molecular Inversion Probe Capture and High-Throughput Sequencing. Clin Chem. 2018; 64:950–8. https://doi.org/10.1373/clinchem.2017.285981. [PubMed].
14. Salipante SJ, Scroggins SM, Hampel HL, Turner EH, Pritchard CC. Microsatellite Instability Detection by Next Generation Sequencing. Clin Chem. 2014; 60:1192–9. https://doi.org/10.1373/clinchem.2014.223677. [PubMed].
15. Hempelmann JA, Scroggins SM, Pritchard CC, Salipante SJ. MSIplus for Integrated Colorectal Cancer Molecular Testing by Next-Generation Sequencing. J Mol Diagn. 2015; 17:705–14. https://doi.org/10.1016/j.jmoldx.2015.05.008. [PubMed].
16. Bacher JW, Sievers CK, Albrecht DM, Grimes IC, Weiss JM, Matkowskyj KA, Agni RM, Vyazunova I, Clipson L, Storts DR, Thliveris AT, Halberg RB. Improved Detection of Microsatellite Instability in Early Colorectal Lesions. Grosso M, editor. PLoS One. 2015; 10:e0132727. https://doi.org/10.1371/journal.pone.0132727. [PubMed].
17. Campanella NC, Berardinelli GN, Scapulatempo-Neto C, Viana D, Palmero EI, Pereira R, Reis RM. Optimization of a pentaplex panel for MSI analysis without control DNA in a Brazilian population: correlation with ancestry markers. Eur J Hum Genet. 2014; 22:875–80. https://doi.org/10.1038/ejhg.2013.256. [PubMed].
18. New York State Department of Health. Clinical Laboratory Evaluation Program. Albany, New York: New York State Department of Health. 2019. Available from: https://www.wadsworth.org/regulatory/clep.
19. Boland CR, Thibodeau SN, Hamilton SR, Sidransky D, Eshleman JR, Burt RW, Meltzer SJ, Rodriguez-Bigas MA, Fodde R, Ranzani GN, Srivastava S. A National Cancer Institute workshop on microsatellite instability for cancer detection and familial predisposition: Development of international criteria for the determination of microsatellite instability in colorectal cancer. Cancer Res. 1998; 58:5248–57. [PubMed].
20. De La Chapelle A, Hampel H. Clinical relevance of microsatellite instability in colorectal cancer. J Clin Oncol. 2010; 28:3380–7. https://doi.org/10.1200/JCO.2009.27.0652. [PubMed].
21. Illumina. Sequencing Analysis Viewer Software: User Guide. San Diego, California, USA: Illumina; 2014. Available from: https://support.illumina.com/content/dam/illumina-support/documents/documentation/software_documentation/sav/sequencing-analysis-viewer-user-guide-15020619-f.pdf.
22. Cohen SA, Pritchard CC, Jarvik GP. Lynch Syndrome: From Screening to Diagnosis to Treatment in the Era of Modern Molecular Oncology. Annu Rev Genomics Hum Genet. 2019; 20:083118-015406. https://doi.org/10.1146/annurev-genom-083118-015406. [PubMed].
23. Ribic CM, Sargent DJ, Moore MJ, Thibodeau SN, French AJ, Goldberg RM, Hamilton SR, Laurent-Puig P, Gryfe R, Shepherd LE, Tu D, Redston M, Gallinger S. Tumor Microsatellite-Instability Status as a Predictor of Benefit from Fluorouracil-Based Adjuvant Chemotherapy for Colon Cancer. N Engl J Med. 2003; 349:247–57. https://doi.org/10.1056/NEJMoa022289. [PubMed].
24. Zehir A, Benayed R, Shah RH, Syed A, Middha S, Kim HR, Srinivasan P, Gao J, Chakravarty D, Devlin SM, Hellmann MD, Barron DA, Schram AM, et al. Mutational landscape of metastatic cancer revealed from prospective clinical sequencing of 10,000 patients. Nat Med. 2017; 23:703–13. https://doi.org/10.1038/nm.4333. [PubMed].
25. Le DT, Uram JN, Wang H, Bartlett BR, Kemberling H, Eyring AD, Skora AD, Luber BS, Azad NS, Laheru D, Biedrzycki B, Donehower RC, Zaheer A, et al. PD-1 Blockade in Tumors with Mismatch-Repair Deficiency. N Engl J Med. 2015; 372:2509–20. https://doi.org/10.1056/NEJMoa1500596. [PubMed].
26. Le DT, Durham JN, Smith KN, Wang H, Bartlett BR, Aulakh LK, Lu S, Kemberling H, Wilt C, Luber BS, Wong F, Azad NS, Rucki AA, et al. Mismatch repair deficiency predicts response of solid tumors to PD-1 blockade. Science. 2017; 357:409–413. https://doi.org/10.1126/science.aan6733. [PubMed].
27. Merck & Co., Inc. Keytruda (pembrolizumab) [package insert]. Whitehouse Station, NJ, USA; 2019. Available from: https://www.accessdata.fda.gov/drugsatfda_docs/label/2019/125514Orig1s054lbl.pdf.
# proper notation for “greater than” in a figure legend: “>5” or “5<”
In a publication-quality figure, I'm making a legend for points with values 1,2,3,4 and greater than or equal to 5. Should I denote the last value as
$\ge5$
or
$5 \le$
I don't think there are any guidelines.
I have never seen anything other than "$\geq 5$", and I find it much easier to parse than "$5\leq$". This is probably because we read from left to right. Note that you already write in your question
greater than or equal to 5
and not some other construction.
• I agree with your recommendation, but I think you can say this positively and directly: There is a simple guideline. Which is easier to read and understand? – Nick Cox Jan 9 '18 at 19:46
• I agree as well. But perhaps informally, "5+" would be even easier to understand. – user3433489 Jan 9 '18 at 20:31
While > is a binary operator, and can therefore be treated as a function with two parameters, the terminology "greater than" implies that this is a predicate on the first number. That is, "A > B" means "being greater than B is a property that A has". This can be analyzed as a curried function: numbers_greater_than(B).includes(A). This matches how people use comparators: it's more natural to ask for a bucket larger than five gallons than to ask for a bucket that five gallons is smaller than.
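The curried reading above can be made concrete in a short Python sketch (the function name is hypothetical, echoing the pseudocode in the answer):

```python
def numbers_greater_than(b):
    """Curried '>': returns the predicate 'is greater than b'."""
    return lambda a: a > b

# The legend's last bucket, read as a predicate on the plotted value:
at_least_5 = lambda a: a >= 5
```

Writing the legend entry as ">=5" mirrors this: the bound comes second, just as it does in the predicate.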
## Top new questions this week:
### What's the point of Hamiltonian mechanics?
I've just finished a Classical Mechanics course, and looking back on it some things are not quite clear. In the first half we covered the Lagrangian formalism, which I thought was pretty cool. I …
classical-mechanics lagrangian-formalism hamiltonian-formalism
### Why treat complex scalar field and its complex conjugate as two different fields?
I am new to QFT, so I may have some of the terminology incorrect. Many QFT books provide an example of deriving equations of motion for various free theories. One example is for a complex scalar …
quantum-field-theory complex-numbers
### Winter solstice, sunrise and sunset time
We all know the Winter Solstice comes on December the 20th or 21st, which is (by definition) the shortest day of the year. The Winter Solstice day is not the day of the year the Sun rises later (that …
astronomy solstice
### Does the weight of an hourglass change when sands are falling inside?
An hourglass H weighs h. When it's placed on a scale with all the sand rested in the lower portion, the scale reads weight x where x = h. Now, if you turn the hourglass upside down to let the sand …
mass free-fall weight
### Causes of hexagonal shape of Saturn's jet stream
NASA has just shown a more detailed picture of the hexagonal vortex/storm on Saturn: http://www.ibtimes.com/nasa-releases-images-saturns-hexagon-mega-storm-may-have-been-swirling-centuries-1496218 …
astrophysics planets aerodynamics atmospheric-science solitons
### Is the total energy of earth changing with time?
Many years ago, Earth was hot; over time, it has lost energy and grown colder. Is it now in equilibrium, or is its total energy changing?
thermodynamics energy earth geophysics
### How does one measure space-like geodesics? Or: What is the physical interpretation of space-like geodesics?
In general relativity, time-like geodesics are the trajectories of free-falling test particles, parametrized by proper time. Thus, they are easy to interpret in physical terms and are easy to measure …
general-relativity geodesics
## Greatest hits from previous weeks:
### Is aluminium magnetic?
From high school, I remember that Aluminium has 13 electrons and thus has an unpaired electron in the 3p shell. This should make Aluminium magnetic. However, the wiki page of Aluminium says its …
electromagnetism
### Buoyancy: helium vs hydrogen balloons
Given I have two identical balloons on earth, how will the buoyancy compare between the one filled with helium and another filled with hydrogen? How can I calculate the ratio of buoyancy given two …
buoyancy
### Causality for the Dirac Field
In Peskin & Schroeder page 54, they are trying to show how far they can take the idea of a commutator for the Dirac field instead of anti-commutator. To this end they are examining causality, …
homework quantum-field-theory
### Understanding the virtual states referenced in multiphoton absorption studies
The Heisenberg energy-time uncertainty tells us that we can have so-called virtual states between eigenstates as long as the lifetime of these states is at most: $\tau = (\frac{h}{4 \pi E_v})$ Where …
photons uncertainty-principle non-linear-optics absorption
# What is a “Fourier transform limited pulse”?
I have some doubts about the definition of a Fourier transform limited pulse. For example, consider a generic pulse: $$E(z,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}A(\omega)e^{i(-\beta(\omega)z+\omega t)}d\omega$$ Defining:
• $S(\omega,z)=|\mathscr{F}[E(z,t)]_{\omega,z}|^{2}$
• $\Delta \omega(z)$ as the full width at half maximum of $S(\omega,z)$ for fixed $z$ (the pulse bandwidth).
• $\tau_{p}(z)$ (the duration of the pulse) as the full width at half maximum of $|E(z,t)|^2$ at fixed $z$. I'm considering a pulse inside a dispersive medium, so its duration depends on the $z$ you are at.
Given these definitions, is it right to say that a pulse is Fourier limited if $\Delta \omega(z)\tau_{p}(z)=\alpha$ for each $z$, where $\alpha$ is a number that changes according to the type of pulse (Gaussian, etc.)?
• I could guess what "Fourier transform limited pulse" means, but I've never heard that phrase before, so it would be nice to have a link to an example use case. – DanielSank Jan 17 '19 at 17:39
• @DanielSank This paper is one representative example. – Emilio Pisanty Jan 17 '19 at 17:41
• @EmilioPisanty Ugh, paywall. Does "transform limited" mean "short enough that the spectral width is larger than I want"? – DanielSank Jan 17 '19 at 17:47
• @DanielSank c'mon ;-). See my answer for the details. – Emilio Pisanty Jan 17 '19 at 17:48
I'm considering a pulse inside a dispersive medium so its duration depends on the $z$ you are at.
... then the concept of transform-limited pulse does not hold globally for your setup. Transform-limited pulses are a 1D (generally time-domain) phenomenon, so in your configuration the question "is the pulse transform-limited" would be asked and answered locally and independently at each different point. And, in the presence of dispersion, if the pulse is transform-limited at a given point $$z_0$$, then it will not be transform-limited at any other point in general.
Generically, given a locally-defined electric field $$E(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}A(\omega)e^{+i\omega t}d\omega,$$ with spectral amplitude $$A(\omega) = |A(\omega)| e^{i\phi(\omega)}$$, the pulse is said to be transform-limited if its duration is minimal over the set of pulses that have an identical power spectrum $$|A(\omega)|^2$$. The reason we use this definition is that the power spectrum $$|A(\omega)|^2$$ is both
• fixed by the gain profile of the laser gain medium and the details of the cavity, and
• easily measurable by using a conventional spectrometer,
whereas the time duration is
• extremely hard to measure,
• not determined by the laser source, since the introduction of any dispersive optics will affect the pulse duration without affecting the power spectrum, and
• accessible (given enough money, time, and dedication) to experimental modification via a number of pulse-shaping schemes.
For a given laser source, the power spectrum is basically fixed, and therefore so is the bandwidth $$\Delta\omega$$, and this puts a limit, via the Fourier bandwidth theorem, on the minimal pulse duration that's achievable with your laser source. However, unless you've done a lot of work, the pulse that comes out of your source will not be that short - instead, it will contain chirp and other types of dispersive features which make it longer than that minimal pulse duration. That problem can be fixed by using pulse shapers to introduce additional spectral phases (i.e. additional terms $$e^{i\phi_\mathrm{shaper}(\omega)}$$ multiplying the spectral amplitude) which cancel out the chirp and other dispersive behaviours to minimize the pulse duration.
The transform-limited pulse duration is the minimal pulse duration that's achievable using this procedure.
If you want to get truly technical, then this also depends on the choice of measure for the duration of the pulse (i.e. choosing the FWHM, as you've done with your $$\tau$$, or some other measure which e.g. takes into account some pre-defined sensitivity to pre- or post-pulses), but if you're arguing about that then you're well and truly into the weeds by that point.
The concept of a transform-limited pulse is of extreme relevance in on-the-ground experimental situations, where the spectrum of your pulse is some jagged beast instead of some nice smooth spectrum (say, take fig. 2(b) of this paper). To evaluate the transform-limited duration, you basically take a set of reasonably-smooth spectral phases $$\phi(\omega)$$ that's as expansive as you can, and you select the one that gives you the smallest pulse duration. (And yes, by duration you use the FWHM by default, but really you should use whatever is the best descriptor of the temporal resolution limits in your experiment, which will depend on the process you're using.)
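The definition above can be made concrete numerically: fix a power spectrum $$|A(\omega)|^2$$, apply different spectral phases, and compare the resulting FWHM durations. Here is a minimal sketch of that check (the Gaussian spectrum, the grid, and the chirp coefficient are arbitrary illustrative choices, not taken from the answer):

```python
import numpy as np

# Fixed Gaussian power spectrum; compare the pulse duration for a flat
# spectral phase versus a chirped (quadratic) one. Arbitrary units.
w = np.linspace(-40.0, 40.0, 4096)            # angular-frequency grid
amp = np.exp(-w**2 / (2 * 2.0**2))            # |A(w)|, spectral width sigma = 2

def duration_fwhm(phase):
    """FWHM of |E(t)|^2 for the spectral amplitude amp * exp(i * phase)."""
    E = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(amp * np.exp(1j * phase))))
    I = np.abs(E)**2
    # Time axis conjugate to w: fftfreq in cycles, with d = dw / (2*pi).
    t = np.fft.fftshift(np.fft.fftfreq(w.size, d=(w[1] - w[0]) / (2 * np.pi)))
    above = t[I >= I.max() / 2]
    return above.max() - above.min()

flat = duration_fwhm(np.zeros_like(w))        # candidate transform-limited pulse
chirped = duration_fwhm(0.5 * w**2)           # same spectrum, quadratic phase

print(flat < chirped)  # True: residual chirp lengthens the pulse
```

For a real (flat-phase) spectral amplitude, no choice of spectral phase gives a shorter pulse, which is exactly the minimization a pulse shaper tries to perform in practice.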
• Ok, so this is what I've understood: given a pulse $E=E(z,t)$ first of all I fix $z=0$ and I compute $\Delta \nu \Delta \tau_{p}$ (using FWHM). This value will depend on the type of pulse (=0.44 for Gaussian) and I define Fourier limited each pulse of the same shape for which it holds. Then if I evaluate $\Delta \nu \Delta \tau_{p}$ at a generic $z$ inside a dispersive medium, we can say that the value will be different since $\Delta \tau_{p}$ increases, while I have no idea of how $\Delta \nu$ will change. So the pulse won't be T.F. anymore. Do you confirm what I have written? – Landau Jan 20 '19 at 13:17
• Not particularly. You're fixing the concept to a fixed time-domain shape, and that's wrong. The concept says that the duration is minimal for a fixed spectral shape. – Emilio Pisanty Jan 20 '19 at 14:15
• See edited answer; hopefully that'll make the concept clearer. – Emilio Pisanty Jan 20 '19 at 14:27
• Ok, I've read it carefully. However, there is still one thing that I don't understand. With $A(w)$ fixed (and so $\Delta w$), then $E[0,t]$ (and so $\Delta \tau_{0}$) should also be fixed, simply by putting $A(w)$ inside the definition of the pulse. And if I compute $\Delta \tau _{0} \Delta w$ I get a number, let's say $\alpha$. I don't understand why, as you say, it should hold for a laser that $\Delta \tau_{0} \Delta w>\alpha$. I mean, it can't be possible if we base it on the previous definition of the pulse... – Landau Jan 20 '19 at 18:32
• Well, you can define it, but depending on the set of spectral phase functions you (implicitly or explicitly) set, the infimum may or may not be achieved (and it may or may not be achieved with a physical pulse). In practice, yes, you want phi to be restricted to some set of reasonable, well-behaved functions. But seriously, you're over-stressing. It's a fuzzy term. The details depend on the set of spectral phases and the measure of pulse length that you choose, but if your choices are reasonable then the result is robust, and the shape and duration of the minimal pulse will be roughly similar. – Emilio Pisanty Jun 2 '19 at 18:15
Given a generic pulse: $$E[z,t]=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} A(w)e^{i[\beta (w)z -wt]}dw$$ You have: $$A(w)=\mathscr{F}(E(0,t))_{w}$$ $$S(w)=|A(w)|^2$$ $$S(w)_{FWHM}=\Delta w$$ $$I(z,t)=|E(z,t)|^2$$ $$I(z,t)_{FWHM}=\tau_{p}(z)$$ Fixed $$S(w)$$ (and consequently $$\Delta w$$), which is automatically fixed once you choose a particular laser, you have: $$\tau_{p}(0)=I(0,t)_{FWHM}=|E(0,t)|^2_{FWHM}$$ $$=\vert \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} \sqrt{S(w)} e^{-iwt+i\phi}dw \vert ^2_{FWHM}$$ So $$\tau_{p}(0)$$ depends on $$\phi$$. If our laser produces a pulse with that particular $$\phi$$ for which $$\tau_{p}(0)$$ is minimum, then we call it a transform-limited pulse. For example, if $$S(w)$$ is fixed as a Gaussian function then this pulse will be T.L. only if $$\tau_{p}(0)\Delta w = 4\ln{2} \approx 2.77$$.
Finally, for $$z>0$$, since we are considering a dispersive medium: $$\tau_{p}(z)>\tau_{p}(0)$$ So we are sure that the pulse won't be T.L. anymore.
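This last step can also be seen numerically: take a pulse whose spectral amplitude is real, so it is transform-limited at $$z=0$$, and propagate it with a purely quadratic dispersive phase $$\beta(w)z$$. A sketch in arbitrary units (the Gaussian spectrum and the $$\beta_2$$ value are illustrative assumptions, not from the answer):

```python
import numpy as np

# Transform-limited Gaussian pulse at z = 0, propagated through a medium
# with group-velocity dispersion beta(w) = 0.5 * beta2 * w**2.
w = np.linspace(-40.0, 40.0, 4096)
A = np.exp(-w**2 / (2 * 2.0**2))   # real A(w): flat phase, TL at z = 0
beta2 = 0.5

def tau_p(z):
    """FWHM duration of I(z, t) = |E(z, t)|^2."""
    E = np.fft.fftshift(np.fft.ifft(
        np.fft.ifftshift(A * np.exp(1j * 0.5 * beta2 * w**2 * z))))
    I = np.abs(E)**2
    t = np.fft.fftshift(np.fft.fftfreq(w.size, d=(w[1] - w[0]) / (2 * np.pi)))
    above = t[I >= I.max() / 2]
    return above.max() - above.min()

print(tau_p(0.0) < tau_p(1.0) < tau_p(2.0))  # True: tau_p grows with z
```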
• If nothing else: $w$ ($w$) is not the same thing as $\omega$ ($\omega$). You keep using the former when you should be using the latter. That's kind of equivalent to someone writing Panda instead of Landau and claiming that they're the same thing. Language matters. Use it correctly. – Emilio Pisanty Jan 20 '19 at 20:12
• Other than that, yeah, this is mostly correct (though pretty muddled in its presentation - I imagine it makes sense to you, but for a general reader it's a pretty confused exposition). Note that in your $z>0$ final paragraph, that's only true if the pulse is transform-limited at $z=0$ and the medium is dispersive. It's obviously perfectly possible for the pulse to be transform-limited at $z=z_0 \neq 0$, in which case it won't be transform-limited at $z=0$. – Emilio Pisanty Jan 20 '19 at 20:15
# Let A be a 6 X 9 matrix. If Nullity(A^T) = 2 then Nullity(A) = 2
Question
Matrix transformations
Let A be a 6 X 9 matrix. If Nullity $$\displaystyle{\left({A}^{{T}}\right)}$$ = 2 then Nullity(A) = 2
2021-02-12
It is given that A is a 6×9 matrix and Nullity $$\displaystyle{\left({A}^{{T}}\right)}={2}$$. Let rank(A)=k. Then we know that rank $$\displaystyle{\left({A}^{{T}}\right)}={k}$$. By the Rank-Nullity theorem we have
$$\displaystyle\text{rank}{\left({A}\right)}+\text{Nullity}{\left({A}\right)}={9}\quad\text{and}\quad\text{rank}{\left({A}^{{T}}\right)}+\text{Nullity}{\left({A}^{{T}}\right)}={6}$$
It follows that
rank $$\displaystyle{\left({A}^{{T}}\right)}={6}−{2}={4} ⟹ {k}={4}$$
Therefore we get
Nullity(A)=9−k=9−4=5
Hence the given statement is False.
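The bookkeeping above can be sanity-checked numerically, e.g. with NumPy. The random rank-4 matrix below is an illustrative construction chosen so that Nullity(A^T) = 6 − 4 = 2, matching the hypothesis:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 6x9 matrix of rank 4, built as a product of 6x4 and 4x9 factors.
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 9))

k = np.linalg.matrix_rank(A)
nullity_A = 9 - k                            # rank-nullity over the 9 columns of A
nullity_AT = 6 - np.linalg.matrix_rank(A.T)  # rank(A^T) = rank(A)

print(k, nullity_A, nullity_AT)  # 4 5 2
```

So Nullity(A) comes out as 5, not 2, confirming that the statement is false.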
# Thread: Posting of project, transcendental numbers
1. ## Posting of project, transcendental numbers
Hi,
Am I allowed to post a pdf version of a paper on transcendental numbers (a major university project I have) on this forum, to be checked for mistakes?
2. ## Re: Posting of project, transcendental numbers
Originally Posted by Goku
Hi,
Am I allowed to post a pdf version of a paper on transcendental numbers(major university project I have), on this forum, to be checked for mistakes.
well the mods say that reviewing of papers is not allowed on the forum. and what if someone steals from it??
3. ## Re: Posting of project, transcendental numbers
It does not contain anything new in mathematics, it is just an Undergraduate project.
4. ## Re: Posting of project, transcendental numbers
Originally Posted by Goku
It does not contain anything new in mathematics, it is just an Undergraduate project.
If it's long then no one will review it. If you have a doubt somewhere, try to formulate a reasonably small problem (along with your solution) out of it and post it.
# A better proposition
Ok. What with the whole controversial "Illogicopedia V.2." thing, I think some things need to be sorted out. First I'm gonna explain the situation of the site as it currently stands:
**Positives**
• Our forum is the best it's ever been. It's pretty damn active, everyone uses it, and the chat's on a wide variety of topics.
• We have a high level of community participation, plus the fact that there isn't a single user on here who is negative or has a negative effect on the community. That's actually pretty amazing.
• Our blog is simply very cool. It took a little bit for me to adjust to the casual usage of the blog for "thoughts of the week" but I've warmed to it and see it as a great thing now.
• I think the whole site has a big sense of family going on. Even though we've gotten far larger, the community still feels tightly-knit.

**Negatives**
• The biggest problem with the site is that its overall image is that of something juvenile. That's partially to be expected with a site based on nonsense and surrealism, but it's gotten out of hand now. While we have over 5000 articles now it feels like we're swimming in a sea of crap. I know it's subjective and that, but you gotta remember that with articles on sites like this, higher effort = cooler output. We've got way more output than we have effort. What we should concentrate on is quality over quantity right now.
• I think the main page could probably be reshuffled and jazzed up to not only look cooler but also reflect the site of nowadays, 'cos there's been lots of change.
• IRC is deserted! C'mon guys, we have, and I quote, a really high amount of active users for a wikia site. Get on! :P
## What we (MMF + Testicles) feel the site needs
• An organisation of VFF. A big problem with VFF is that there're too many unconsidered self-noms, and there's a big problem with the whole situation of "mob voting", where a specific clique of users all vote together for something they want featured. First, I think voting should be made completely equal between all users: admins, bureaucrats and phantoms should get as many votes as users, i.e. 1. Second, and I hate to say this, but we need to be tougher on our policy on self-noms. What should probably be done about that is it should be a requirement to submit the article for review and get a good review before nomming it. If the article's that good, maybe the reviewer will nom it themselves :P
• That's a point, the ?review. It seriously needs improvements. All reviews in ?review at the moment are very short sentences. I (MMF, as an experienced reviewer in Uncyclopedia) want to take over the review section and revolutionise it so that reviews are well-thought-out and helpful.
• Like was said, the main page should probably be changed to reflect the modern state of the site. Plus it should be prettified :P
• A big-time concentration on revamping the articles we've already got before starting writing new articles. Trash, mexicans and stub templates will come in useful here. Those templates need to be a mark that say "improve this" as opposed to "this is bad/short". All stubby mexican articles could either be deleted (not cool) or, a better idea would be to make a section on the site that's like Uncyclopedia's UnDictionary. Y'know, the one that's a repository of merged stub articles that are still cool. (Damnit, why does Uncyc have to make some right choices some times?) A great idea would be a regular improvement drive, where users are urged to go and find trashy pages and work their magic!
• We could bring in some more experienced Uncyc users to help with site reorganisation, if they're willing to be cool and help. I (MMF) know for certain that Codeine thinks the site appears too juvenile, so maybe I can persuade him to put his money where his mouth is and help work at the problems he mentioned :P
• We definitely need to appoint a new admin, particularly in the US timezone.
Ermm, that's all I can remember at the moment. Duncan, if I've missed anything stick it in here :P MMF!talk←/→admin(:D) 11:30, 17 Octodest 2008 (UTC)
Warning: Looooooooooooooooong comment follows. And I mean long.
OK, I have calmed down slightly and am currently on IRC to talk about this some more. Yes, Illogicopedia might benefit from some changes and indeed, if we all work together we can make things better for all of us!
First of all, I have always been against deletion of articles, even if they suck there's at least one sentence in there worth keeping. I know I've been slacking on this front recently but I used to go through newpages (bear in mind this was when there were like, three new articles a day) and sift through them all adding links, categories and generally prettifying them. These days, the admin jobs are more distributed - there's updating features, refreshing VFF, welcoming new users, giving out stars... the list goes on. Some days I just want to write funny stuff, you know? Here it might be good to get some people on board who care about site maintenance. Guys who have the best interests of the wiki at heart; those who are willing to get their hands dirty and do the donkey work for no praise, just because they care about Illogicopedia. A tough one, eh?
I like your suggestion about the Uncyclopedians. Get some of those guys on board and start up the long awaited Article Improvement Drive again (I've made a page which has been lying here for a while). Really go to town with it and put features etc. on hold temporarily whilst we concentrate on improving what we already have. Let's take a leaf out of Fluffalizer's book and merge similar articles, creating redirects from the old ones. I already did this with the United Kingdom article and I think it's all the better for it.
?Review: My personal opinion on this is that I prefer giving one sentence reviews. Just a personal thing, but I'm sure other people are willing to give more time to this. Really pump this project up, and it'll go hand in hand with the AID.
Front page redesign: focus on community, community, community. Maybe lose the visual connection with Uncyclopedia/Wikipedia (is it getting old?) and modernise, baby. Stick links to recently updated forum topics; make the moon thing less prominent; cater more for Internet Exploder users; have a more conversational tone; more space for IOTM blurb; more ties with the blog - latest blog posts on the front page; keep the DYKs, news and Vandalpedia.
Overall, even though us regulars know that more structured nonsense is, on the whole, better, yes - for some reason we do come across as juvenile. I get that feeling from Uncyc anyways. When Illogicopedia started up, I (or was it Seppy?) made the suggestion that Illogicopedia be Family Guy to Uncyclopedia's Simpsons: a slightly more random affair whilst keeping some of the wit.
C'mon people, if we all focus we can do it. -- Hindleyak Converse?blog 11:48, 17 Octodest 2008 (UTC)
## Idea
Maybe whats you should do is lock new page creation and force peoples to improve the existancing articles. --Unsigned comment posted by 194.80.240.66, 12:37, 17 Octodest 2008
No new articles ever would be really, REALLY bad though. Also, are the listed requirements for nominating an article for feature only for self-nom, or are they for any nomination at all? Some WHAT!? (number two) (talk) (contribs) (edit count) 20:03, 17 Octodest 2008 (UTC)
General improvement of existing articles is what is being asked of us, its not too hard, just skim through our creations and make em better. And try and make stubs better too. Though I do own some pretty naff ones myself... Darkgenome 11:11, 22 Octodest 2008 (UTC)
## Like some...
The ideas I like are the new VFF and the fact that we need to make the main page look better. And that's about it. Readmesoon (Talk | contribs) (14:33, 17 Octodest 2008 (UTC))
## Uh...
I fonchezzz have come to the conclusion that I shall become devoted again if duty calls. I will become devoted to the plans that you have come up with. Well, I have come to the conclusion that it is our best bet to redo the feature vote thing. At least we can say it is better than it was when we started. Remember that poll on each nomination? Ya. that didn't work. And I think, unlike hindley, that we should delete the crap that makes up most of our articles. The last attempt at that did not work well, so why should it this time. Maybe the best thing to do is have a reward for adding on to articles, so people will be more likely to do it. I would be up for it. I am willing to help out for nothing. Also, the main page... that could use some new stuff. I like that idea. I am bored of it. Let's see... review... I forgot about that. I haven't helped it lately. I personally it is up to the reviewer to decide how long the review is. And as for self noms, if they suck... they wont be voted for, so why bother banning self noms. Just as long as it doesn't get out of hand, which I don't think it is. And I think self noms for images should be welcome. We don't have many. Oh which reminds me, we shouldn't have all the pictures changing every time you enter the main page the way we have it. Some images appear from when we first came up with featured images. Maybe we should have a new batch every few weeks. And for new admins I say we need two. One Australian one American, or two Americans, or whatever. And... that concludes my rant for now. I hope this ends up with results unlike some things we have come up with...-- 22:05, 17 Octodest 2008 (UTC)
Yes, the Uncyc/wikipedia looking alike thing is getting a bit old, maybe we could redesign the page slightly to make it better but have subtle hints of wikipedia remain, similar shape of columns. flavours of it, rather than potato cut print.
Featured articles lately have been following the "family guy" "simpsons" analogy, which is nice, but i notice its a rare few in the range of articles. i dont mind articles where they do something else just as interesting, but some articles, like pew are just "they dont exist". people are sadly too defending of their articles, id imagine a wiki where edits can only be a maximum of some arbitrary number say 500 bytes, so all articles would be build through everyone making tiny contributions to an article rather than wacking out massive ideas. Had i been doing ?pedia now, i would have suggested that :D , anyway, i meander. the point is, im happy to plaster over the cracks if we can come out of the other end with a real wiki, not like our half arsed efforts to inject energy :P --Silent Penguin 22:20, 17 Octodest 2008 (UTC)
That 500 byte thing intrigues me. If you were to write an epic, you'd have to start with a bit of text, save the page, edit and add more, save again and so on till it's done. Could be annoying, don't you think? -- Dxpenguinman, the Penguinman ...He's e-vile! Talk Got An Idea? GAMESHOW! 00:00, 18 Octodest 2008 (UTC)
No, because you wouldn't be able to do that, 500 per page, not per edit everything would be a collaboration.--Silent Penguin 12:48, 18 Octodest 2008 (UTC)
Nothing would ever get done. This is a wiki. We should allow people to contribute as much as they feel is necessary. In addition, please describe how this sytem would work (Which namespaces? Would there be exceptions on request for certain articles (group pages, political parties)? How would the coding be implemented as long as we're tied to Wikia?). --Aaaaannooooo!!! 13:04, 18 Octodest 2008 (UTC)
## Comments from a retired user
On the Illogiblog, I have been having a conversation with BenedictBlade about why he left and all that. This is what he had to say when I asked exactly why he left Illogic:
It's the community, which has become one which has split in ways that show that not too many people care about the wiki's health and look, they just care about themselves and their articles. That was not the same back in my day. The articles aren't the random prose they were, there more of a poorly created uncyclopedia articles and humorous attempts at a story. You may argue that our size has bonded us together, though just take a good look at the talk pages. Most are from one user, and when others are on it is usually a moan or flamewar. This isn't everybody, i'm just highliting it now before it turns into more of a wobble than it already is
Completely agree with him on everything he says, I suppose it just took a user to retire for me to realise it. There's a few loose cannons who only care about their articles and never edit anybody else's. This might be fine, but when they start hating everybody's articles but their own, we have a problem.
Having stewed this over, I propose the following (in lieu of all previous comments):
• Main page redesign. We can draw up some roughs in Paint or something and, with the help of some coders, put it into action. I'm gonna give this one some thought.
• Article overhaul. Relaunch the Article Improvement Drive, get people to hit 'random page' and improve whatever comes up. I'd love to see recentchanges filled with these!
• MMF's super Pear Reviews. If you're up to it, MMF, we could designate you the head of the review committee and you can concentrate on that for the duration of this improvement drive. If you want to do it.
• Review of voting laws. This was going to happen anyway, but the archaic "two votes for admins" laws should be done away with. On vote per user, unlimited nominations. In addition, prohibit self nomination unless there has been an entry on ?review.
• New admin. Somebody who cares about the site's well being and is prepared to get their hands dirty, mucking in with trash improvement, fighting vandals, updating features, IllogiNews etc etc. All the existing admins get together on IRC to discuss who it ought to be.
Thoughts? -- Hindleyak Converse?blog 10:58, 18 Octodest 2008 (UTC)
Some good ideas there. I didn't even know admins got two votes. I must say I find Benedict Blade's decision to retire rather sudden (I talked to him on IRC, he reviewed my article and said he'd vote for it!). I also agree an article should be reviewed before being self-nommed for VFH. --Aaaaannooooo!!! 11:38, 18 Octodest 2008 (UTC)
Well, I'm all for this (obviously) - so basically - let's get cracking. What first? -- 14:43, 18 Octodest 2008 (UTC)
Let's start with VFF rules. Here is a plan Anotherpongo proposed on IRC for VFF: Admins and users both get one vote, IPs get 0. I'm thinking we can discuss other details on IRC today; feel free to join in. Some WHAT!? (number two) (talk) (contribs) (edit count) 14:50, 18 Octodest 2008 (UTC)
MMF says we need an admin in the US timezone. I am in the US timezone. I am an admin on Flapjack wiki and Billy and Mandy wiki and Hitchikers. --Ragglefraggleking 15:31, 18 Octodest 2008 (UTC)
Flapjack wiki? Cool, I want a flapjack now... --- Hindleyak Converse?blog 16:28, 18 Octodest 2008 (UTC)
Yeah, but are you willing to do your share of the work? Some WHAT!? (number two) (talk) (contribs) (edit count) 17:21, 18 Octodest 2008 (UTC)
I agree that the admins should only get one vote like everybody else. Elassint, 10 23 2008 talk
## Cool!
I'm finding it really cool that loads are coming here to help and chip in ^^. I'd be happy to super-ify the reviews, but I definitely wouldn't want to just do that. After a little bit of discussion with Dunc, we surmised that all the previous "illogilutions" were too unfocused. Basically I think what we're doing here is adjusting to becoming a genuinely big wiki, and what we need to do is focus our tasks: blitzkrieg one idea at a time so we get all done rather than being unfocused and not doing that good a job. So we work on the article sort-out drive first, then the main page (cos content will be sorted out in the site) and then the other systems like VFF and ?review, all in that order. Maybe a good analogy is that ?pedia is like a car, and the engine's gotten a bit rusty and there's a few other minor problems, so we need to stop the car for a little bit and fix the problems, so that the car can get back up and running better than ever ^^. We'd have to be really focused though, cos I don't think writers are qualified enough to repair cars :P MMF!talk←/→admin(:D) 21:01, 18 Octodest 2008 (UTC)
I have a spanner. It is, however, made of plastic.
Ready to get the show on the road RE. Article Improvement Drive? Here's an idea I had ages ago but never quite got round to it (Pickle judging was a priority then)... you get a reward for cleaning up articles. I was thinking gold stars, maybe, but that would be a bit much per article. So we create a subpage and invite people to tell us the articles they have improved. At the end of, say, a month, we tally up all the articles and redeem them for Nectar points, er, I mean gold stars. Say, something like 6 articles=1 star, or whatever. -- Hindleyak Converse?blog 10:52, 19 Octodest 2008 (UTC)
Maybe not gold stars, we need a second currency methinks. Feel free to shoot me down -- 11:13, 19 Octodest 2008 (UTC)
Points you can use to buy stuff from IllogiShop, with the special IllogiClub Card? -- Hindleyak Converse?blog 11:22, 19 Octodest 2008 (UTC)
Hindleyite, I categorically morbidly clinicly love you. :p Could be an idea actually, but it would need some thinking through. If you want to go ahead with it I'm happy to chip in, also if there's any monkey work you need help with y'know where I am 11:25, 19 Octodest 2008 (UTC)
'Twas partly a joke, but it could work... each user has a cool template with their points written on it. Dunno what you'd be able to buy from the shop though, maybe some marbles or something. :) -- Hindleyak Converse?blog 11:33, 19 Octodest 2008 (UTC)
Marbles are notoriously easily lost. 11:35, 19 Octodest 2008 (UTC)
Who'd want to buy marbles??? .....Oh good i get them all to myself! -- Dxpenguinman, the Penguinman ...He's e-vile! Talk Got An Idea? GAMESHOW! 12:34, 19 Octodest 2008 (UTC)
It sounds too much like capitalism to me, I'm afraid. I know of few wikis which have thrived on a currency-based system. It sounds like just another complication. You would probably need to double your number of admins to ensure proper distribution of currency and punishment of forgers. --Aaaaannooooo!!! 10:17, 20 Octodest 2008 (UTC)
You're probably right. Oh well, t'was a nice idea, alas.... -- 13:23, 20 Octodest 2008 (UTC)
Silver stars. Or maybe golden mops (cleanup). Some WHAT!? (number two) (talk) (contribs) (edit count) 18:57, 20 Octodest 2008 (UTC)
What about titles to go into a user's signature? Like, for a limited period only they're available in increasing rank depending on how much effort / edits are put in, or something... MMF!talk←/→admin(:D) 19:06, 20 Octodest 2008 (UTC)
This is the best idea yet, it seems perfect. Uncyc has got ranks of the British Army, can anyone think of any suitably illogical titles we could bestow upon people? -- Hindleyak Converse?blog 12:37, 21 Octodest 2008 (UTC)
Sub-Sub-Sub-Sub Apprentice Janitor? Sub-Sub Apprentice Janitor? --Sir Asema Politics Complaint Inbox or Outbox
ASEMA! :D MMF!talk←/→admin(:D) 18:36, 21 Octodest 2008 (UTC)
Hey I got an issue sorta related to this. you know how we are redoing the aids stuff? can hamburg articles be an exception? They are different. They arent crap cause they are short. After all they are already redone articles.-- 01:01, 22 Octodest 2008 (UTC)
Yeah guys, please take note that "George hamburg" articles are how they are for a reason, so please don't go tagging them :P Otherwise, go nuts! MMF!talk←/→admin(:D) 10:45, 22 Octodest 2008 (UTC)
Yeah, George Hamburg is Illogicopedia's resident poet. -- Hindleyak Converse?blog 11:15, 22 Octodest 2008 (UTC)
GASP... IT'S ASEMA! The man is back, and he's badder than ever! Welcome back aboard, Assman. -- Hindleyak Converse?blog 11:15, 22 Octodest 2008 (UTC)
Perhaps the title, Sub-Mariner? -- Dxpenguinman, the Penguinman ...He's e-vile! Talk Got An Idea? GAMESHOW! 11:43, 22 Octodest 2008 (UTC)
## Umm
I'd have thought the mass article improvement thing would have been more official than a simple increase in aid tags... -- 16:37, 22 Octodest 2008 (UTC)
I'm assuming it will be. I'm guessing this is the mass-tagging period before the actual sorting out :P. I'll sort out the whole page decreeing the stuff and stuff or whatever. MMF!talk←/→admin(:D) 16:41, 22 Octodest 2008 (UTC)
I'm gonna base it on classic things from Illogicopedia's back history. Hope that's ok! MMF!talk←/→admin(:D) 17:17, 22 Octodest 2008 (UTC)
Don't forget this, which I spent absolutely ages on. -- Hindleyak Converse?blog 10:54, 23 Octodest 2008 (UTC)
There should be a template to state that an article is lacking in nonsense. An article may be quite large, but not have any illogical content whatsoever. Some WHAT!? (number two) (talk) (contribs) (edit count) 19:29, 22 Octodest 2008 (UTC)
## I doth Wonder...
Could we somehow split the site into 'Old Illogicopedia' and 'New Illogicopedia'?
I actually agree with this idea, too. It gives newer members a chance to contribute larger. Migraine 19:54, 22 Octodest 2008 (UTC)
Can I be the first to say about that first point, no. :P MMF!talk←/→admin(:D) 20:12, 22 Octodest 2008 (UTC)
That idea is too radical and sounds silly. Elassint, 10 23 2008 talk
## How do you improve nonsense?
If it's nonsense, how do you differentiate between "high-quality" nonsense and "low-quality" nonsense. To judge quality you must understand it. If you can understand it, it is not nonsense. If it's not nonsense, it's Uncyclopedia. Either Illogicopedia is Uncyclopedia, or it is full of meaningless nonsense. What is, must be.
$p \leftrightarrow \Box p$
Like Oedipus--what man can force the hand of heaven? You can try to escape the prophecy, but your actions only result in its fulfillment. An encyclopedia of nonsense entails an encyclopedia of nonsense. It's tautological. Now we complain that it is nonsense. The proposition is illogical. --ModusTollens 07:13, 27 Octodest 2008 (UTC)
You misunderstand. We are not an encyclopedia of nonsense, "The wiki is mostly dedicated to non-, semi- or entirely humorous surrealism (which might variously be considered clever, dumb, silly, or just plain nonsensical) and some satire. Humour-wise Illogicopedia is more about anything funny that didn't necessarily take that much effort to make, or the kind of random self-referential humour you find removed from Uncyclopedia; the sort of thing people'll get a cheap laugh out of. Un-humour-wise, if such a word exists, Illogicopedia will accept virtually anything with some form of redeeming value. This allows a fairly lax content policy that helps make the wiki more accessible to everyone." I don't think that is nonsense. Sorry if the name misled you. -- 09:45, 27 Octodest 2008 (UTC)
There is a fair amount of nonsense here though. The vast majority of my articles are nonsense. To answer your question though, Modustollens, low quality is a page full of "fo4oiwjfjjdfkkkvkkv," or a one-line article. Higher quality stuff tends to be funny or interesting in some way, though most of the time it can still be classified as some form of nonsense. --THE 11:11, 27 Octodest 2008 (UTC)
:D cheers TEH -- 11:21, 27 Octodest 2008 (UTC)
Yes, you put it quite well there, THE. We need to make that clearer on the policy/about pages, perhaps? -- Hindleyak Converse?blog 12:59, 27 Octodest 2008 (UTC)
I don't think anyone actually read that page ever. --Silent Penguin 13:22, 27 Octodest 2008 (UTC)
Another thing I recommend that editors avoid when attempting to write illogically is taking something sensible and adding little scraps of nonsense here and there. Just because you use the word "flugnoflarbex" doesn't make it a better or more suitable article; there should be nonsense in what the writing comes down to. (This is just advice; it is not a proposed guideline at all.) Some WHAT!? (number two) (talk) (contribs) (edit count) 18:53, 28 Octodest 2008 (UTC)
Having said what I just said, I think people should be cautious when tagging stuff. Quality on illogicopedia is a very subjective thing. Don't run around tagging everything you don't like, or risk losing illogicopedia's sense of accepting the bizarre, pointless and strange. --THE 17:51, 27 Octodest 2008 (UTC)
# Lattice QCD calculations
Due to asymptotic freedom, the coupling constant $\alpha_S$ of QCD is a decreasing function of the energy scale, according to the following equation, already stated (*): $\alpha_S(|q^2|)=\frac{12\pi}{\left(11n-2f\right)\ln\left(\frac{|q^2|}{\Lambda^2_{QCD}}\right)}$, where $n$ is the number of colours and $f$ the number of quark flavours.
Therefore the high-energy, or equivalently the short-distance, behaviour can be described by a perturbative expansion, but the perturbative approach to QCD fails at large distances, where $\alpha_S$ diverges as the energy scale decreases.
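The running of the coupling stated above can be made concrete with a short numerical sketch. This is an illustrative one-loop evaluation only: the value $\Lambda_{QCD}=0.2\ GeV$ and the choice of five active flavours are assumptions for the example, not determinations taken from this text.

```python
import math

# One-loop running of the strong coupling, as in the formula above:
#   alpha_s(|q^2|) = 12*pi / ((11*n - 2*f) * ln(|q^2| / Lambda_QCD^2))
# with n colours and f active quark flavours.
def alpha_s(q2_gev2, n_colours=3, n_flavours=5, lambda_qcd_gev=0.2):
    """One-loop strong coupling at squared momentum transfer q2 (GeV^2)."""
    log_term = math.log(q2_gev2 / lambda_qcd_gev**2)
    return 12.0 * math.pi / ((11.0 * n_colours - 2.0 * n_flavours) * log_term)

# Asymptotic freedom: the coupling shrinks as the scale grows.
print(alpha_s(10.0**2))   # coupling at |q| = 10 GeV
print(alpha_s(100.0**2))  # smaller coupling at |q| = 100 GeV
```

Evaluating the function at increasing scales shows the monotonic decrease that makes the perturbative expansion reliable at short distances, while the logarithm's blow-up as $|q^2|\rightarrow\Lambda^2_{QCD}$ is the divergence mentioned above.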
A different approach allows one to better characterise the transition to the deconfined state of hadronic matter and the physical mechanisms at the origin of colour confinement.
A suitable non-perturbative approach is the numerical study of QCD on a lattice (L-QCD).
The basic idea is to formulate QCD interactions on a space-time grid, with quarks placed on the nodes and gluonic fields on the links.
In the physical limit the size of the grid is taken to be infinitely large, while the sites become infinitesimally close to each other.
Much progress has been made in algorithms and computing performance, and nowadays L-QCD computation represents a notably reliable method for testing QCD in the non-perturbative domain.
The computational complexity of such calculations is so high that, for example, the Italian National Institute of Nuclear Physics (INFN) began building dedicated SIMD supercomputers (the APE project) to perform these simulations as early as 1984.
Single Instruction Multiple Data (SIMD) computers are vector machines where a single control unit can drive several functional units which are able to execute simple operations. SIMD computers can be considered precursors of modern GPUs.
Lattice calculations present intrinsic systematic errors due to the use of a finite lattice cutoff and of unphysically large quark masses.
To lessen the computational load, so-called quenched calculations were introduced.
In this approximation the quark fields are treated as non-dynamical, “frozen” variables.
While this was the ordinary way to perform calculations in early L-QCD computing, “dynamical” fermions are now standard.
In addition, numerical methods struggle to evaluate integrals of highly oscillatory functions of a large number of variables.
This is the fermion sign problem, which emerges, for example, when quark chemical potentials are included, i.e. in calculations at non-zero net baryon density, or when wave functions change sign under the antisymmetry required by the Pauli principle.
Relatively simple models (e.g. the MIT bag model) already furnish a reasonable estimate for the critical temperature $T_C \sim 170\ MeV$ and the critical energy density $\epsilon_C \sim 1\ GeV/fm^3$.
Lattice QCD calculations have shown that for massless quarks at baryonic potential $\mu_B=0$ the transition to the QGP occurs as a first-order transition if $n_f\geq 3$ (three massless quarks) and as a second-order transition for $n_f=2$ (two massless quarks).
The critical temperature should amount to $(173\pm15)\ MeV$, and the critical energy density to $\epsilon=(0.7\pm0.3)\ GeV/fm^3$, where the uncertainties are mainly due to the method used for their determination.
The Figure below shows a more realistic calculation that includes the mass for the $s$ quark (case “$2+1$ flavours”) and indicates that at zero chemical potential the transition appears most likely as a crossover.
If the transition were of first order, $\epsilon$ would have a discontinuity at the critical temperature $T_C$.
Since the crossover takes place over a small range of temperatures, the transition still shows a rapid variation of the observables: in the Figure the energy density $\epsilon$ rises abruptly within a temperature interval of just $20\ MeV$.
The same Figure also shows that the saturated values of the energy density at high temperature remain below the Stefan-Boltzmann (SB) limit; this indicates residual interactions among the quarks and gluons in the QGP phase.
The $p/T^4$ ratio also saturates below the SB limit for temperatures $\sim 2T_c$, suggesting non-ideal behaviour of the gas considered in lattice QCD calculations.
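The SB limit referred to above can be sketched numerically using the standard free-gas counting of degrees of freedom for massless gluons and quarks. The counting below is textbook material, not taken from this text, and is meant only to show the scale against which the lattice results saturate.

```python
import math

# Stefan-Boltzmann limit for a free, massless quark-gluon gas:
#   epsilon / T^4 = (pi^2 / 30) * g_eff
# with effective degrees of freedom
#   g_eff = 2*(Nc^2 - 1)            (gluons: 2 polarisations per colour state)
#         + (7/8)*2*2*Nc*nf         (quarks+antiquarks, 2 spins, fermion factor 7/8)
def sb_energy_density_over_T4(n_colours=3, n_flavours=3):
    g_gluons = 2 * (n_colours**2 - 1)
    g_quarks = (7.0 / 8.0) * 2 * 2 * n_colours * n_flavours
    return (math.pi**2 / 30.0) * (g_gluons + g_quarks)

eps_over_T4 = sb_energy_density_over_T4()
print(eps_over_T4)        # epsilon/T^4 in the SB limit for Nc=3, nf=3
print(eps_over_T4 / 3.0)  # ideal-gas pressure, p/T^4 = (epsilon/T^4)/3
```

For $N_c=3$ and $n_f=3$ this gives $\epsilon/T^4\approx15.6$ and $p/T^4\approx5.2$; the lattice curves saturating noticeably below these values is what signals the residual interactions in the QGP phase.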
The inclusion of lighter quark masses in the calculations results in a significant decrease of the transition temperature, but early predictions showed significant discrepancies between results.
Although the critical temperature depends on the number of quark flavours involved in the restoration of chiral symmetry, these differences have strongly diminished in current calculations.
A reliable extrapolation of the transition temperature to the chiral limit gave
$T_C = (173 \pm 8)\ MeV,\ n_f=2$,
$T_C = (154 \pm 8)\ MeV,\ n_f=3$.
Calculations based on the chiral order parameter show a crossover transition at $T_{\chi}=155\ MeV$.
In addition, even though QCD seems to give only one transition from the low temperature hadronic regime to the high temperature plasma phase, it has been speculated that two distinct phase transitions leading to deconfinement at $T_d$ and chiral symmetry restoration at $T_\chi$ could occur in QCD, with $T_d\leq T_\chi$ according to general arguments about energy scales.
Another important outcome of lattice QCD is the prediction that chiral symmetry is restored at the deconfinement transition.
It is expected, in fact, that the value of the chiral condensate goes to zero after the deconfinement transition, allowing the restoration of chiral symmetry.
The Figure below shows a comparison between two-flavour L-QCD predictions for the chiral condensate $\langle\bar\psi\psi\rangle$, which is the order parameter for chiral-symmetry breaking in the chiral limit ($m_q\rightarrow0$), and for the Polyakov loop, which is the order parameter for deconfinement in the pure gauge limit ($m_q\rightarrow\infty$).
One can see that as the temperature increases through the crossover, the value of the chiral condensate $\langle\bar\psi\psi\rangle$ drops while the Polyakov loop rises.
These variations occur at the same temperature, suggesting that deconfinement and the restoration of chiral symmetry happen together.
The corresponding susceptibilities $\chi_L\propto \left(\langle L^2\rangle - \langle L \rangle^2\right)$ and $\chi_m=\partial\langle\bar\psi\psi\rangle/\partial m$ are also shown.
Their peaks occur at the same value of the coupling.
In addition, the calculation of the potential energy between two heavy quarks as a function of temperature confirms the deconfinement picture.
The last Figure shows the predicted behaviour of the potential energy between a quark and an antiquark in the three-flavour QCD scenario.
On the left side it can be seen that as the separation increases the potential energy flattens, becoming constant at long distances, which validates the hypothesis of deconfinement.
On the right side, instead, it is shown that the separation between a quark and an antiquark decreases as the temperature rises.
Finally, the results of lattice calculations suggest regarding the QGP as a weakly coupled medium characterised by the coupling constant
$\alpha_S(T)\propto \frac{1}{\log\left(\frac{2\pi T}{\Lambda_{QCD}}\right)}$,
confirming the evidence of deconfinement found at SPS and the perfect-fluid behaviour highlighted by RHIC data, discussed later.
In July 2008 I wrote an editorial in the New Zealand Medical Journal (NZMJ), at the request of its editor.
The title was Dr Who? deception by chiropractors. It was not very flattering and it resulted in a letter from lawyers representing the New Zealand Chiropractic Association. Luckily the editor of the NZMJ, Frank Frizelle, is a man of principle, and the legal action was averted. It also resulted in some interesting discussions with disillusioned chiropractors that confirmed one’s worst fears. Not to mention revealing the internecine warfare between one chiropractor and another.
This all occurred before the British Chiropractic Association sued Simon Singh for defamation. The strength of the reaction to that foolhardy action now has chiropractors wondering if they can survive at all. The baselessness of most of their claims has been exposed as never before. No wonder they are running scared. The whole basis of their business is imploding.
Needless to say chiropractors were very cross indeed. Then in February 2009 I had a polite email from a New Zealand chiropractor, David Owen, asking for help to find one of the references in the editorial. I’d quoted Preston Long as saying
"Long (2004)7 said “the public should be informed that chiropractic manipulation is the number one reason for people suffering stroke under the age of 45.
And I’d given the reference as
7. Long PH. Stroke and spinal manipulation. J Quality Health Care. 2004;3:8–10
I’d found the quotation, and the reference, in Ernst’s 2005 article, The value of Chiropractic, but at the time I couldn’t find the Journal of Quality Healthcare. I did find the same article on the web. At least the article had the same title, the same author and the same quotation. But after finding, and reading, the article, I neglected to change the reference from J Quality Health Care to http://skepticreport.com/sr/?p=88. I should have done so and for that I apologise.
When I asked Ernst about the Journal of Quality Healthcare, he couldn’t find his copy of the Journal either, but he and his secretary embarked on a hunt for it, and eventually it was found.
It turns out that the Journal of Quality Healthcare shut down in 2004, without leaving a trace on the web, or even in the British Library. It was replaced by a different journal, Patient Safety and Quality Healthcare (PSQH). A reprint was obtained from them. It is indeed the same as the web version that I’d read, and it highlights the quotation in question. The reprint of the original article, which proved so hard to find, can be downloaded here.
The full quotation is this
"Sixty-two clinical neurologists from across Canada, all certified members of the Royal College of Physicians and Surgeons, issued a warning to the Canadian public, which was reported by Brad Stewart, MD. The warning was entitled Canadian Neurologists Warn Against Neck Manipulation. The final conclusion was that endless non-scientific claims are being made as to the uses of neck manipulation(Stewart, 2003). They need to be stopped. The public should be informed that chiropractic manipulation is the number one reason for people suffering stroke under the age of 45."
I have often condemned the practice of citing papers without reading them (it is, of course, distressingly common), so I feel bad about this, though I had in fact read the paper in question in its web version. I’m writing about it because I feel one should be open about mistakes, even small ones.
I’m also writing about it because one small section of the magic medicine community seems to think they have nailed me because of it. David Owen, the New Zealand chiropractor, wrote to the editor of the NZMJ, thus.
The quote [in question] is the public should be informed that chiropractic manipulation is the number one reason for people suffering stroke under the age of 45. Long PH. Stroke and Manipulation. J Quality Health Care. 2004:3:8-10 This quote actually comes from the following blog article http://www.skepticreport.com/medicalquackery/strokespinal.htm [DC the URL is now http://skepticreport.com/sr/?p=88] I have attached all my personal communications with Colquhoun. They demonstrate this is not a citation error. Prof Colquhoun believes the origin of the quote doesn’t matter because Long was quoting from a Canadian Neurologists’ report (this is also incorrect). As you can see he fails to provide any evidence at all to support the existance [sic] of the “J Quality Health Care.” This would not be an issue at all if he had admitted it came from a blog site— but I guess the link would have eroded the credibility of the quote. Colquhoun ‘s belief that my forwarding this complaint is me “resorting to threats” is the final nail in the coffin. If he had any leg to stand on where is the threat? This may seem pedantic but it surely reflects a serious ethical breach. Is it acceptable to make up a reference to try and slip any unsupported statement into a “scientific” argument and thereby give it some degree of credibility? Incidentally, at the end of the article, conflicts of interest are listed as none. As Colquhoun is a Professor of Pharmacology and much of his research funding no doubt comes from the pharmaceutical industry how can he have no conflict of interest with therapies that do not advocate the use of drugs and compete directly against the billions spent on pain medications each year? If I may quote Colquhoun himself in his defence of his article (Journal of the New Zealand Medical Association, 05-September-2008, Vol 121 No 1281) I’ll admit, though, that perhaps ‘intellect’ is not what’s deficient in this case, but rather honesty. David Owen
### Financial interests
Well, here is a threat: I’m exposed as a shill of Big Pharma. ". . . much of his funding no doubt comes from the pharmaceutical industry". I can’t count how many times this accusation has been thrown at me by advocates of magic medicine. Oddly enough, none of them has actually taken the trouble to find out where my research funding has come from. None of them even knows enough about the business to realise the extreme improbability that the pharmaceutical industry would be interested in funding basic work on the stochastic properties of single molecules. They fund only clinicians who can help to improve their profits.
The matter of funding is already on record, but I’ll repeat it now. The media ‘nutritional therapist’, Patrick Holford, said, in the British Medical Journal
“I notice that Professor David Colquhoun has so far not felt it relevant to mention his own competing interests and financial involvements with the pharmaceutical industry “
“Oh dear, Patrick Holford really should check before saying things like “I notice that Professor David Colquhoun has so far not felt it relevant to mention his own competing interests and financial involvements with the pharmaceutical industry”. Unlike Holford, when I said “no competing interests”, I meant it. My research has never been funded by the drug industry, but always by the Medical Research Council or by the Wellcome Trust. Neither have I accepted hospitality or travel to conferences from them. That is because I would never want to run the risk of judgements being clouded by money. The only time I have ever taken money from industry is in the form of modest fees that I got for giving a series of lectures on the basic mathematical principles of drug-receptor interaction, a few years ago.”
I spend a lot of my spare time, and a bit of my own money, in an attempt to bring some sense into the arguments. The alternative medicine gurus make their livings (in some cases large fortunes) out of their wares.
So who has the vested interest?
### Does chiropractic actually cause stroke?
As in the case of drugs and diet, it is remarkably difficult to be sure about causality. A patient suffers a vertebral artery dissection shortly after visiting a chiropractor, but did the neck manipulation cause the stroke? Or did it precipitate the stroke in somebody predisposed to one? Or is the timing just coincidence, and the stroke would have happened anyway? There has been a lot of discussion about this, and a forthcoming analysis will tackle the problem of causality head-on.
My assessment at the moment, for what it’s worth, is that there are some pretty good reasons to suspect that neck manipulation can be dangerous, but it seems that serious damage is rare.
In a sense, it really doesn’t matter much anyway, because it is now apparent that chiropractic is pretty well discredited without having to resort to arguments about rare (though serious) effects. There is real doubt about whether it is even any good for back pain (see Cochrane review), and good reason to think that the very common claims of chiropractors to be able to cure infant colic, asthma and so on are entirely, ahem, bogus. (See also Steven Novella, ebm-first, and innumerable other recent analyses.)
Chiropractic is entirely discredited, whether or not it may occasionally kill people.
### Complaint sent to UCL
I had an enquiry about this problem also from my old friend George Lewith. I told him what had happened. Soon after this, a complaint was sent to Tim Perry and Jason Clarke, UCL’s Director and Deputy Director of Academic Services. The letter came not from Lewith or Owen, but from Lionel Milgom. Milgrom is well known in the magic medicine community for writing papers about how homeopathy can be “explained” by quantum entanglement. Unfortunately for him, his papers have been read by some real physicists and they are no more than rather pretentious metaphors. See, for example, Danny Chrastina’s analysis, and shpalman, here. Not to mention Lewis, AP Gaylard and Orac.
Dear Mr Perry and Mr Clark, I would like to bring to your attention an editorial (below) that appeared in the most recent issue of the New Zealand Medical Journal. In it, one of your Emeritus Professors, David Colquhoun, is accused of a serious ethical breach, and I quote – “Is it acceptable to make up a reference to try and slip any unsupported statement into a “scientific” argument and thereby give it some degree of credibility?” Professor Colquhoun is well-known for writing extensively and publicly excoriating many forms of complementary and alternative medicine, particularly with regard to the alleged unscientific nature and unethical behaviour of its practitioners. Professor Colquhoun is also a voluble champion for keeping the libel laws out of science. While such activities are doubtlessly in accord with the venerable Benthamite liberal traditions of UCL, I am quite certain hypocrisy is not. And though Professor Colquhoun has owned up to his error, as the NZMJ’s editor implies, it leaves a question mark over his credibility. As custodians of the college’s academic quality therefore, you might care to consider the possible damage to UCL’s reputation of perceived professorial cant; emeritus or otherwise. Yours Sincerely Dr Lionel R Milgrom
So, as we have seen, the quotation was correct, the reference was correct, and I’d read the article from which it came. My mistake was citing the original paper rather than the web version of the same paper.
I leave it to the reader to judge whether this constitutes a "serious ethical breach", whether I’d slipped in an "unsupported statement", and whether it constitutes "hypocrisy".
### Follow-up
It so happens that no sooner was this posted than there appeared Part 2 of the devastating refutation of Lionel Milgrom’s attempt to defend homeopathy, written by AP Gaylard. Thanks to Mojo (comment #2) for pointing this out.
I’m perfectly happy to think of alternative medicine as being a voluntary, self-imposed tax on the gullible (to paraphrase Goldacre again). But only as long as its practitioners do no harm and only as long as they obey the law of the land. Only too often, though, they do neither.
When I talk about law, I don’t mean lawsuits for defamation. Defamation suits are what homeopaths and chiropractors like to use to silence critics. Heaven knows, I’ve become accustomed to being defamed by people who are, in my view, fraudsters, but lawsuits are not the way to deal with it.
I’m talking about the Trading Standards laws. Everyone has to obey them, and in May 2008 the law changed in a way that puts the whole health fraud industry in jeopardy.
The gist of the matter is that it is now illegal to claim that a product will benefit your health if you can’t produce evidence to justify the claim.
I’m not a lawyer, but with the help of two lawyers and a trading standards officer I’ve attempted a summary. The machinery for enforcing the law does not yet work well, but when it does, there should be some very interesting cases.
The obvious targets are homeopaths who claim to cure malaria and AIDS, and traditional Chinese Medicine people who claim to cure cancer.
But there are some less obvious targets for prosecution too. Here is a selection of possibilities to savour.
• Universities such as Westminster, Central Lancashire and the rest, which promote the spreading of false health claims
• Hospitals, like the Royal London Homeopathic Hospital, that treat patients with mistletoe and marigold paste. Can they produce any real evidence that they work?
• Edexcel, which sets examinations in alternative medicine (and charges for them)
• Ofsted and the QCA which validate these exams
• Skills for Health and a whole maze of other unelected and unaccountable quangos which offer “national occupational standards” in everything from distant healing to hot stone therapy, thereby giving official sanction to all manner of treatments for which no plausible evidence can be offered.
• The Prince of Wales Foundation for Integrated Health, which notoriously offers health advice for which it cannot produce good evidence
• Perhaps even the Department of Health itself, which notoriously referred to “psychic surgery” as a profession, and which has consistently refused to refer dubious therapies to NICE for assessment.
The law, insofar as I’ve understood it, is probably such that only the first three or four of these have sufficient commercial elements for there to be any chance of a successful prosecution. That is something that will eventually have to be argued in court.
But lecanardnoir points out in his comment below that The Prince of Wales is intending to sell herbal concoctions, so perhaps he could end up in court too.
### The laws
We are talking about The Consumer Protection from Unfair Trading Regulations 2008. The regulations came into force on 26 May 2008. The full regulations can be seen here, or download pdf file. They can be seen also on the UK Statute Law Database.
The Office of Fair Trading, and Department for Business, Enterprise & Regulatory Reform (BERR) published Guidance on the Consumer Protection from Unfair Trading Regulations 2008 (pdf file),
Statement of consumer protection enforcement principles (pdf file), and
The Consumer Protection from Unfair Trading Regulations: a basic guide for business (pdf file).
Has The UK Quietly Outlawed “Alternative” Medicine?
On 26 September 2008, Mondaq Business Briefing published this article by a Glasgow lawyer, Douglas McLachlan. (Oddly enough, this article was reproduced on the National Center for Homeopathy web site.)
“Proponents of the myriad of forms of alternative medicine argue that it is in some way “outside science” or that “science doesn’t understand why it works”. Critical thinking scientists disagree. The best available scientific data shows that alternative medicine simply doesn’t work, they say: studies repeatedly show that the effect of some of these alternative medical therapies is indistinguishable from the well documented, but very strange “placebo effect” ”
“Enter The Consumer Protection from Unfair Trading Regulations 2008(the “Regulations”). The Regulations came into force on 26 May 2008 to surprisingly little fanfare, despite the fact they represent the most extensive modernisation and simplification of the consumer protection framework for 20 years.”
The Regulations prohibit unfair commercial practices between traders and consumers through five prohibitions:-
• General Prohibition on Unfair Commercial
Practices (Regulation 3)
• Prohibition on Misleading Actions (Regulations 5)
• Prohibition on Misleading Omissions (Regulation 6)
• Prohibition on Aggressive Commercial Practices (Regulation 7)
• Prohibition on 31 Specific Commercial Practices that are in all Circumstances Unfair (Schedule 1). One of the 31 commercial practices which are in all circumstances considered unfair is “falsely claiming that a product is able to cure illnesses, dysfunction or malformations”. The definition of “product” in the Regulations includes services, so it does appear that all forms medical products and treatments will be covered.
Just look at that!
One of the 31 commercial practices which are in all circumstances considered unfair is “falsely claiming that a product is able to cure illnesses, dysfunction or malformations”
Regulation 5 is equally powerful, and also does not contain the contentious word “cure” (see note below).
5.—(1) A commercial practice is a misleading action if it satisfies the conditions in either paragraph (2) or paragraph (3).
(2) A commercial practice satisfies the conditions of this paragraph—
(a) if it contains false information and is therefore untruthful in relation to any of the matters in paragraph (4) or if it or its overall presentation in any way deceives or is likely to deceive the average consumer in relation to any of the matters in that paragraph, even if the information is factually correct; and
(b) it causes or is likely to cause the average consumer to take a transactional decision he would not have taken otherwise.
These laws are very powerful in principle, but there are two complications in practice.
One complication concerns the extent to which the onus has been moved on to the seller to prove the claims are true, rather than the accuser having to prove they are false. That is a lot more favourable to the accuser than before, but it’s complicated.
The other complication concerns enforcement of the new laws, and at the moment that is bad.
### Who has to prove what?
That is still not entirely clear. McLachlan says
“If we accept that mainstream evidence based medicine is in some way accepted by mainstream science, and alternative medicine bears the “alternative” qualifier simply because it is not supported by mainstream science, then where does that leave a trader who seeks to refute any allegation that his claim is false?
Of course it is always open to the trader to show that his alternative therapy actually works, but the weight of scientific evidence is likely to be against him.”
On the other hand, I’m advised by a Trading Standards Officer that “He doesn’t have to refute anything! The prosecution have to prove the claims are false”. This has been confirmed by another Trading Standards Officer who said
“It is not clear (though it seems to be) what difference is implied between “cure” and “treat”, or what evidence is required to demonstrate that such a cure is false “beyond reasonable doubt” in court. The regulations do not provide that the maker of claims must show that the claims are true, or set a standard indicating how such a proof may be shown.”
The main defence against prosecution seems to be the “Due diligence defence”, in paragraph 17.
Due diligence defence
17. —(1) In any proceedings against a person for an offence under regulation 9, 10, 11 or 12 it is a defence for that person to prove—
(a) that the commission of the offence was due to—
(i) a mistake;
(ii) reliance on information supplied to him by another person;
(iii) the act or default of another person;
(iv) an accident; or
(v) another cause beyond his control; and
(b) that he took all reasonable precautions and exercised all due diligence to avoid the commission of such an offence by himself or any person under his control.
If “taking all reasonable precautions” includes being aware of the lack of any good evidence that what you are selling is effective, then this defence should not be much use for most quacks.
Douglas McLachlan has clarified, below, this difficult question
### False claims for health benefits of foods
A separate bit of legislation, European regulation on nutrition and health claims made on food, ref 1924/2006, in Article 6, seems clearer in specifying that the seller has to prove any claims they make.
Article 6
Scientific substantiation for claims
1. Nutrition and health claims shall be based on and substantiated by generally accepted scientific evidence.
2. A food business operator making a nutrition or health claim shall justify the use of the claim.
3. The competent authorities of the Member States may request a food business operator or a person placing a product on the market to produce all relevant elements and data establishing compliance with this Regulation.
That clearly places the onus on the seller to provide evidence for claims that are made, rather than the complainant having to ‘prove’ that the claims are false.
On the problem of “health foods” the two bits of legislation seem to overlap. Both have been discussed in “Trading regulations and health foods“, an editorial in the BMJ by M. E. J. Lean (Professor of Human Nutrition in Glasgow).
“It is already illegal under food labelling regulations (1996) to claim that food products can treat or prevent disease. However, huge numbers of such claims are still made, particularly for obesity ”
“The new regulations provide good legislation to protect vulnerable consumers from misleading “health food” claims. They now need to be enforced proactively to help direct doctors and consumers towards safe, cost effective, and evidence based management of diseases.”
In fact the European Food Safety Authority (EFSA) seems to be doing a rather good job of imposing the rules. This, predictably, provoked howls of anguish from the food industry. There is a synopsis here.
“Of eight assessed claims, EFSA’s Panel on Dietetic Products, Nutrition and Allergies (NDA) rejected seven for failing to demonstrate causality between consumption of specific nutrients or foods and intended health benefits. EFSA has subsequently issued opinions on about 30 claims with seven drawing positive opinions.”
“. . . EFSA in disgust threw out 120 dossiers supposedly in support of nutrients seeking addition to the FSD’s positive list.
If EFSA was bewildered by the lack of data in the dossiers, it needn’t have been, as industry freely admitted it had in many cases submitted such hollow documents to temporarily keep nutrients on-market.”
Or, on another industry site, “EFSA’s harsh health claim regime”:
“By setting an unworkably high standard for claims substantiation, EFSA is threatening R&D, not to mention health claims that have long been officially approved in many jurisdictions.”
Here, of course, “unworkably high standard” just means real, genuine evidence. How dare they ask for that!
### Enforcement of the law
19. —(1) It shall be the duty of every enforcement authority to enforce these Regulations.
(2) Where the enforcement authority is a local weights and measures authority the duty referred to in paragraph (1) shall apply to the enforcement of these Regulations within the authority’s area.
Nevertheless, enforcement is undoubtedly a weak point at the moment. The UK is obliged to enforce these laws, but at the moment it is not doing so effectively.
A letter in the BMJ from Rose & Garrow describes two complaints under the legislation in which it appears that a Trading Standards office failed to enforce the law. They comment
“. . . member states are obliged not only to enact it as national legislation but to enforce it. The evidence that the government has provided adequate resources for enforcement, in the form of staff and their proper training, is not convincing. The media, and especially the internet, are replete with false claims about health care, and sick people need protection. All EU citizens have the right to complain to the EU Commission if their government fails to provide that protection.”
This is not a good start. A lawyer has pointed out to me
“that it can sometimes be very difficult to get Trading Standards or the OFT to take an interest in something that they don’t fully understand. I think that if it doesn’t immediately leap out at them as being false (e.g “these pills cure all forms of cancer”) then it’s going to be extremely difficult. To be fair, neither Trading Standards nor the OFT were ever intended to be medical regulators and they have limited resources available to them. The new Regulations are a useful new weapon in the fight against quackery, but they are no substitute for proper regulation.”
Trading Standards originated in Weights and Measures. It was their job to check that your pint of beer was really a pint. Now they are being expected to judge medical controversies. Either they will need more people and more training, or responsibility for enforcement of the law should be transferred to some more appropriate agency (though one hesitates to suggest the MHRA after their recent pathetic performance in this area).
### Who can be prosecuted?
Any “trader”, a person or a company. There is no need to have actually bought anything, and no need to have suffered actual harm. In fact there is no need for there to be a complainant at all. Trading standards officers can act on their own. But there must be a commercial element. It’s unlikely that simply preaching nonsense would be sufficient to get you prosecuted, so the Prince of Wales is, sadly, probably safe.
Universities who teach that “Amethysts emit high Yin energy” make an interesting case. They charge fees and in return they are “falsely claiming that a product is able to cure illnesses”.
In my view they are behaving illegally, but we shan’t know until a university is taken to court. Watch this space.
The fact remains that the UK is obliged to enforce the law and presumably it will do so eventually. When it does, alternative medicine will have to change very radically. If it were prevented from making false claims, there would be very little of it left apart from tea and sympathy.
### Follow-up
New Zealand must have similar laws.
Just as I was about to post this I found that in New Zealand a
“couple who sold homeopathic remedies claiming to cure bird flu, herpes and Sars (severe acute respiratory syndrome) have been convicted of breaching the Fair Trading Act.”
### Is the solution government regulation?
In New Zealand the law about misleading the public into believing you are a medical practitioner already exists. The immediate problem would be solved if that law were taken seriously, but it seems that it is not.
It is common in both the UK and in New Zealand to suggest that some sort of official government regulation is the answer. That solution is proposed in this issue of NZMJ by Evans et al2. A similar thing has been proposed recently in the UK by a committee headed by Michael Pittilo, vice-chancellor of Robert Gordon’s University, Aberdeen.
I have written about the latter under the heading A very bad report. The Pittilo report recommends both government regulation and more degrees in alternative medicine. Given that we now know that most alternative medicine doesn’t work, the idea of giving degrees in such subjects must be quite ludicrous to any thinking person.
The magazine Nature7 recently investigated the 16 UK universities that run such degrees. In the UK, first-year students at the University of Westminster are taught that “amethysts emit high yin energy”. Their vice-chancellor, Professor Geoffrey Petts, describes himself as a geomorphologist, but he cannot be tempted to express an opinion about the curative power of amethysts.
There has been a tendency to a form of grade inflation in universities—higher degrees for less work gets bums on seats. For most of us, getting a doctorate involves at least 3 years of hard experimental research in a university. But in the USA and Canada you can get a ‘doctor of chiropractic’ degree and most chiropractic (mis)education is not even in a university but in separate colleges.
Florida State University famously turned down a large donation to start a chiropractic school because they saw, quite rightly, that to do so would damage their intellectual reputation. This map, now widely distributed on the Internet, was produced by one of their chemistry professors, and it did the trick.
Other universities have been less principled. The New Zealand College of Chiropractic [whose President styles himself “Dr Brian Kelly”, though his only qualification is B.App.Sci (chiro)] is accredited by the New Zealand Qualifications Authority (NZQA). Presumably they, like their UK equivalent (the QAA), are not allowed to take into account whether what is being taught is nonsense or not. Nonsense courses are accredited by experts in nonsense. That is why much accreditation is not worth the paper it’s written on.
Of course the public needs some protection from dangerous or fraudulent practices, but that can be done better (and more cheaply) by simply enforcing existing legislation on unfair trade practices and on false advertising. Recent changes in the law on unfair trading in the UK have made it easier to take legal action against people who make health claims that cannot be justified by evidence, and that seems the best way to regulate medical charlatans.
### Conclusion
For most forms of alternative medicine—including chiropractic and acupuncture—the evidence is now in. There is now better reason than ever before to believe that they are mostly elaborate placebos and, at best, no better than conventional treatments. It is about time that universities and governments recognised the evidence and stopped talking about regulation and accreditation.
Indeed, “falsely claiming that a product is able to cure illnesses, dysfunction, or malformations” is illegal in Europe10.
Making unjustified health claims is a particularly cruel form of unfair trading practice. It calls for prosecutions, not accreditation.
Competing interests: None.
NZMJ 25 July 2008, Vol 121 No 1278; ISSN 1175 8716
Author information: David Colquhoun, Research Fellow, Dept of Pharmacology, University College London, United Kingdom (http://www.ucl.ac.uk/Pharmacology/dc.html)
Correspondence: Professor D Colquhoun, Dept of Pharmacology, University College London, Gower Street, London WC1E 6BT, United Kingdom. Fax: +44(0)20 76797298; email: d.colquhoun@ucl.ac.uk
References:
1. Gilbey A. Use of inappropriate titles by New Zealand practitioners of acupuncture, chiropractic, and osteopathy. N Z Med J. 2008;121(1278). [pdf]
2. Evans A, Duncan B, McHugh P, et al. Inpatients’ use, understanding, and attitudes towards traditional, complementary and alternative therapies at a provincial New Zealand hospital. N Z Med J. 2008;121(1278).
3. Shapiro R. Suckers: How Alternative Medicine Makes Fools of Us All. London: Random House; 2008. (reviewed here)
4. Singh S, Ernst E. Trick or Treatment. Bantam Press; 2008 (reviewed here)
5. Bausell RB. Snake Oil Science: The Truth about Complementary and Alternative Medicine. Oxford University Press; 2007. (reviewed here)
6. Colquhoun D. Science degrees without the Science, Nature 2007;446:373–4. See also here.
7. Long PH. Stroke and spinal manipulation. J Quality Health Care. 2004;3:8–10.
8. Libin K. Chiropractors called to court. Canadian National Post; June21, 2008.
9. Goldacre B. A menace to science. London: Guardian; February 12, 2007.
10. Department for Business Enterprise & Regulatory Reform (BERR). Consumer Protection from Unfair Trading Regulations 2008. UK: Office of Fair Trading.
|
{}
|
Online Calculator Resource
# Mixed Number to Decimal Calculator
$3\frac{3}{4} = 3.75$

Solution 1:

$3\frac{3}{4} = 3 + \frac{3}{4} = \frac{3}{1} + \frac{3}{4} = \left(\frac{3}{1} \times \frac{4}{4} \right) + \frac{3}{4} = \frac{12}{4} + \frac{3}{4} = \frac{15}{4} = 15 \div 4 = 3.75$

Solution 2:

$3\frac{3}{4} = 3 + \frac{3}{4} = 3 + (3 \div 4) = 3 + 0.75 = 3.75$
## Calculator Use
Convert mixed numbers, fractions or integers to decimal numbers. This online calculator will convert a mixed number, fraction, integer or whole number to an improper fraction, reduce that fraction if it can be reduced, then perform the division of the numerator by the denominator to find the decimal equivalent of the mixed number. You can enter mixed numbers, fractions or integers.
## How to convert a mixed number to a decimal
A mixed number such as 7 1/4 can be converted to a decimal. It is implied that 7 1/4 is really 7 + 1/4 and that 7 = 7/1, therefore we are first adding the fraction 7/1 + 1/4. Since 4 is the denominator in the original fraction part we will use it as our common denominator. 7/1 * 4/4 = 28/4. Then, 28/4 + 1/4 = 29/4. 29/4 = 29 ÷ 4 = 7.25.
7 1/4 = 7/1 + 1/4 = 28/4 + 1/4 = 29/4 = 29 ÷ 4 = 7.25
As you may have noticed, 7.25 is just the whole part of 7 1/4 added to 1 ÷ 4 = 0.25. This is certainly another valid approach to converting mixed numbers to decimals. Just be aware of mixed numbers that contain improper fractional parts. For example, 7 5/4 can be converted in 2 different ways:
7 5/4 = 7/1 + 5/4 = 28/4 + 5/4 = 33/4 = 33 ÷ 4 = 8.25, or
7 5/4 = 7 + (5 ÷ 4) = 7 + 1.25 = 8.25
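Both approaches above can be sketched in a few lines of Python. The helper below is a hypothetical illustration (not part of the calculator itself), using the standard-library `Fraction` type to keep the intermediate improper fraction exact:

```python
from fractions import Fraction

def mixed_to_decimal(whole, num, den):
    """Convert a mixed number (whole num/den) to its decimal value."""
    # Common-denominator route: whole/1 * den/den + num/den = (whole*den + num)/den
    improper = Fraction(whole * den + num, den)
    return float(improper)

print(mixed_to_decimal(7, 1, 4))   # 7.25
print(mixed_to_decimal(3, 3, 4))   # 3.75
print(mixed_to_decimal(7, 5, 4))   # improper fractional part: 8.25
```

Because the improper fraction is built first, mixed numbers with improper fractional parts such as 7 5/4 need no special handling.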
Cite this content, page or calculator as:
Furey, Edward "Mixed Number to Decimal Calculator"; from http://www.calculatorsoup.com - Online Calculator Resource.
|
{}
|
# How do you use partial fractions to find the integral int (x^2-x+2)/(x^3-x^2+x-1)dx?
Dec 17, 2016
$\ln | x - 1 | - \arctan x + C$
#### Explanation:
The denominator can be factored as ${x}^{2} \left(x - 1\right) + 1 \left(x - 1\right) = \left({x}^{2} + 1\right) \left(x - 1\right)$.
$\frac{A x + B}{{x}^{2} + 1} + \frac{C}{x - 1} = \frac{{x}^{2} - x + 2}{\left({x}^{2} + 1\right) \left(x - 1\right)}$
$\left(A x + B\right) \left(x - 1\right) + C \left({x}^{2} + 1\right) = {x}^{2} - x + 2$
$A {x}^{2} + B x - A x - B + C {x}^{2} + C = {x}^{2} - x + 2$
$\left(A + C\right) {x}^{2} + \left(B - A\right) x + \left(C - B\right) = {x}^{2} - x + 2$
Then, we can write a system of equations.
$\left\{\begin{matrix}A + C = 1 \\ B - A = - 1 \\ C - B = 2\end{matrix}\right.$
Solve to get $A = 0 , B = - 1 , C = 1$.
Therefore, the partial fraction decomposition is $- \frac{1}{{x}^{2} + 1} + \frac{1}{x - 1}$.
The integral becomes $\int \left(\frac{1}{x - 1} - \frac{1}{{x}^{2} + 1}\right) \mathrm{dx}$.
Note that $\frac{d}{\mathrm{dx}} \left(\arctan x\right) = \frac{1}{{x}^{2} + 1}$ and that $\frac{d}{\mathrm{dx}} \left(\ln x\right) = \frac{1}{x}$.
Therefore, the integral is $\ln | x - 1 | - \arctan x + C$.
Hopefully this helps!
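The decomposition and the antiderivative can be checked symbolically. A minimal sketch using SymPy (assumed available; not part of the original answer):

```python
from sympy import symbols, integrate, apart, simplify

x = symbols('x')
f = (x**2 - x + 2) / (x**3 - x**2 + x - 1)

# Partial fraction decomposition: 1/(x - 1) - 1/(x**2 + 1)
print(apart(f))

# The antiderivative differentiates back to f, confirming
# log(x - 1) - atan(x) up to a constant of integration
F = integrate(f, x)
assert simplify(F.diff(x) - f) == 0
print(F)
```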
|
{}
|
# Tag Info
2
As @theGD already pointed out in the comment, scaling is often not needed for spectroscopic data as the features already have a common intensity axis. Here's my guess what's happening when you scale: You have spectra with very nice zero baselines. In other words, all those features outside your analyte signal are constant mean + some noise. If you scale ...
2
Assuming your independent variable matrix is $m\times n$, that you have $m$ observations and $n$ variables. For each PLS component (AKA latent variable), you get a loading vector ($n \times 1$), so for $h$ components the size of loading matrix ($P$) is $n \times h$. These loadings are calculated for both interpretation and algorithmic purposes but they have ...
2
I was also looking for information on these parameters and found a good explanation in the book Eriksson et al. Multi- and Metavariate Data Analysis Principles and Applications. In general, I think you have the right idea. According to Eriksson et al, the fit tells us how well we are able to mathematically reproduce the data of the training set. The $R^2$ ...
2
There are many possible reasons, but it seems like you may not have enough rows to estimate the model accurately. You have enough degrees of freedom to vastly overfit, and because PLS regression finds the latent space that best models the covariance between the regressors and the target, it will find a space that overfits the data. As you expand the model ...
2
Some questions that may help digging down to what the actual issue is. I don't think cross validation itself is the problem here - it's probably just exposing problems in the model. All your tentative models do not show much improvement over the 0 component model: even at 10 latent variables, $RMSE_{CV}$* is still within 95 % of the $RMSE_{CV}$ always ...
2
First of all, PLS-DA means that you perform a PLS regression and then apply a threshold to assign class labels. Now, there are two very different situations where this is done: the underlying nature of the problem is metric, and the classes mean that the modeled property is above or below some threshold or limit. Presence/absence of an analyte (...
1
Both are supervised classification methods. LDA aims to find projections that aims to minimize within class distance while maximizing between class distance. PLS-DA is basically PLS regression to class information (I think this kind of class information is called one-hot-encoding of classes in machine learning) and aims to maximize covariance between ...
1
Different sources indicate that a PLS regression takes into account the variability of the dependent variables (while PCR doesn't). Why is this aspect so important and why it is considered to be an advantage over PCR? You may have confounders that contribute large variance to $\mathbf X$, but as they are confounders that variance does not help but rather ...
1
pls::plsr centers both $\mathbf X$ and $\mathbf Y$, and the corresponding intercepts are in `$Xmeans` and `$Ymeans`. So in order to predict using the coefficients that map $\mathbf{Y_c} = \mathbf{X_c} \mathbf B$, you need to center $\mathbf X$: $\mathbf X_c = \mathbf X - \bar x$ ($\bar x$ is `$Xmeans`), matrix-multiply by $\mathbf B$: $\mathbf Y_c = \mathbf ...
1
That (our) paper applies independent of the field, but wrt. to test sample size it is only about figures of merit that are proportions of test cases (sensitivity, specificity, ...). You'll find that these figures of merit are not recommended for many situations, among other reasons (they are no proper scoring rules) because they have high variance. This ...
1
My Question Is: I'm confused why a single component solution is working equally well compared to 3 component solution for my simulated data below- as I think i've simulated 3 independent components explaining variance in Y, and a 4th component which is independent of Y. Your simulation has only one data generating process built into $FactorY$: FactorY=...
1
PLS-DA is closely related to LDA: for n > p the full rank PLS-DA (i.e. using all latent variables) is the same as LDA. For 1 latent variable, PLS-DA yields the same classification as closest (Euclidean) distance in feature space. I.e. the regularization "squeezes" the pooled covariance matrix into spherical shape. A two class problem with both classes ...
1
Further research has thrown up this webpage at purdue.edu which links to source code for various variants of PLS. On the latter page, the PLS1 method appears to be very similar to the algorithm shown on the PLS regression Wikipedia page. The purdue.edu implementation cites "Overview and Recent Advances in Partial Least Squares" by Roman Rosipal and Nicole ...
1
It has nothing to do with PLS-DA, it is related to autoscaling specifically. While taking derivative (or smoothing) is applied per spectrum, the autoscaling does the following: Calculate the mean of each variable using all calibration set samples Subtract this mean from each variable on both calibration and validation set Calculate standard ...
1
I'm going to break down my answer in a different way. 1. when is PCA or PLS preferred? PCA is an unsupervised data reduction, i.e. the data is compressed into its underlying components without any guidance from data external ($Y$) to the $X$ data. The top ranked components returned are those that dominate the variation in the $X$ data as it has been pre-...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
# Cauchy sequence
The plot of a Cauchy sequence (xn), shown in blue, as n versus xn. If the space containing the sequence is complete, the "ultimate destination" of this sequence, that is, the limit, exists.
A sequence that is not Cauchy. The elements of the sequence fail to get close to each other as the sequence progresses.
In mathematics, a Cauchy sequence, named after Augustin Cauchy, is a sequence whose elements become close to each other as the sequence progresses. To be more precise, by dropping enough (but still only a finite number of) terms from the start of the sequence, it is possible to make the maximum of the distances from any of the remaining elements to any other such element smaller than any preassigned positive value.
In other words, suppose a pre-assigned positive real value ε is chosen. However small ε is, starting from a Cauchy sequence and eliminating terms one by one from the start, after a finite number of steps, any pair chosen from the remaining terms will be within distance ε of each other.
Because Cauchy sequences require the notion of distance, they can only be defined in a metric space. Their utility lies in the fact that in a complete metric space (one where all such sequences are known to converge to a limit), they give a criterion for convergence which depends only on the terms of the sequence itself. This is often exploited in algorithms, both theoretical and applied, where an iterative process can be shown relatively easily to produce a Cauchy sequence, consisting of the iterates.
The notions above are not as unfamiliar as might at first appear. The customary acceptance of the fact that any real number x has a decimal expansion is an implicit acknowledgment that a particular Cauchy sequence of rational numbers (whose terms are the successive truncations of the decimal expansion of x) has the real limit x. In some cases it may be difficult to describe x independently of such a limiting process involving rational numbers.
Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of the Cauchy filter and the Cauchy net.
Cauchy sequence of real numbers
A sequence
$x_1, x_2, x_3, \ldots$
of real numbers is called Cauchy, if for every positive real number ε > 0 there is a positive integer N such that for all natural numbers m, n > N
$|x_m - x_n| < \varepsilon,$
where the vertical bars denote the absolute value.
In a similar way one can define Cauchy sequences of complex numbers.
Cauchy sequence in a metric space
To define Cauchy sequences in any metric space, the absolute value | xm − xn | is replaced by the distance d(xm, xn) between xm and xn.
Formally, given a metric space (M, d), a sequence
$x_1, x_2, x_3, \ldots$
is Cauchy, if for every positive real number ε > 0 there is a positive integer N such that for all natural numbers m, n > N, the distance
d(xm,xn)
is less than ε. Roughly speaking, the terms of the sequence are getting closer and closer together in a way that suggests that the sequence ought to have a limit in M. Nonetheless, such a limit does not always exist within M.
Completeness
A metric space X in which every Cauchy sequence has a limit (in X) is called complete.
Examples
The real numbers are complete, and one of the standard constructions of the real numbers involves Cauchy sequences of rational numbers.
A rather different type of example is afforded by a metric space X which has the discrete metric (where any two distinct points are at distance 1 from each other). Any Cauchy sequence of elements of X must be constant beyond some fixed point, and converges to the eventually repeating term.
Counter-example: rational numbers
The rational numbers Q are not complete (for the usual distance): there are sequences of rationals that converge (in R) to irrational numbers; these are Cauchy sequences having no limit in Q. In fact, if a real number x is irrational, then the sequence (xn), whose n-th term is the truncation to n decimal places of the decimal expansion of x, gives a Cauchy sequence of rational numbers with irrational limit x. Irrational numbers certainly exist, for example:
• The sequence defined by x0 = 1, xn+1 = (xn + 2/xn)/2 consists of rational numbers (1, 3/2, 17/12,...), which is clear from the definition; however it converges to the irrational square root of two, see Babylonian method of computing square root.
• The sequence xn = Fn / Fn − 1 of ratios of consecutive Fibonacci numbers which, if it converges at all, converges to a limit φ satisfying φ² = φ + 1, and no rational number has this property. If one considers this as a sequence of real numbers, however, it converges to the real number $\phi = (1+\sqrt{5})/2$, the Golden ratio, which is irrational.
• The values of the exponential, sine and cosine functions, exp(x), sin(x), cos(x), are known to be irrational for any rational value of x≠0, but each can be defined as the limit of a rational Cauchy sequence, using, for instance, the Maclaurin series.
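The first example above can be checked numerically. A minimal sketch using exact rational arithmetic (`fractions.Fraction` from the Python standard library):

```python
from fractions import Fraction

# x0 = 1, x_{n+1} = (x_n + 2/x_n)/2: every term is rational,
# but the sequence converges to the irrational sqrt(2).
x = Fraction(1)
terms = [x]
for _ in range(5):
    x = (x + 2 / x) / 2
    terms.append(x)

print(terms[:3])   # Fraction(1, 1), Fraction(3, 2), Fraction(17, 12), as stated

# Successive gaps shrink (the Cauchy property) ...
gaps = [abs(float(terms[i + 1] - terms[i])) for i in range(len(terms) - 1)]
assert all(b < a for a, b in zip(gaps, gaps[1:]))

# ... yet the limit, sqrt(2), lies outside Q.
print(float(terms[-1]))
```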
Other properties
• Every convergent sequence (with limit s, say) is a Cauchy sequence, since, given any real number r > 0, beyond some fixed point, every term of sequence is within distance r/2 of s, so any two terms of the sequence are within distance r of each other.
• Every Cauchy sequence of real (or complex) numbers is bounded (since for some N, all terms of the sequence from the N-th onwards are within distance 1 of each other, and if M is the largest absolute value of the terms up to and including the N-th, then no term of the sequence has absolute value greater than M+1).
• In any metric space, a Cauchy sequence which has a convergent subsequence with limit s is itself convergent (with the same limit), since, given any real number r > 0, beyond some fixed point in the original sequence, every term of the subsequence is within distance r/2 of s, and any two terms of the original sequence are within distance r/2 of each other, so every term of the original sequence is within distance r of s.
These last two properties, together with a lemma used in the proof of the Bolzano-Weierstrass theorem, yield one standard proof of the completeness of the real numbers, closely related to both the Bolzano-Weierstrass theorem and the Heine–Borel theorem. The lemma in question states that every bounded sequence of real numbers has a convergent subsequence. Given this fact, every Cauchy sequence of real numbers is bounded, hence has a convergent subsequence, hence is itself convergent. It should be noted, though, that this proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. The alternative approach, mentioned above, of constructing the real numbers as the completion of the rational numbers, makes the completeness of the real numbers tautological.
One of the standard illustrations of the advantage of being able to work with Cauchy sequences and make use of completeness is provided by consideration of the summation of an infinite series of real numbers (or, more generally, of elements of any complete normed linear space, or Banach space). Such a series $\sum_{n=1}^{\infty} x_{n}$ is considered to be convergent if and only if the sequence of partial sums (sm) is convergent, where $s_{m} = \sum_{n=1}^{m} x_{n}$. It is a routine matter to determine whether the sequence of partial sums is Cauchy or not, since for positive integers p > q,
$s_{p} - s_{q} = \sum_{n=q+1}^{p} x_{n}$.
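For instance, for the geometric series with terms 1/2ⁿ, the difference of partial sums above can be bounded directly. A minimal numeric sketch:

```python
# s_m = sum_{n=1}^{m} x_n for x_n = 1/2**n; the series converges to 1.
def partial_sum(m):
    return sum(1 / 2**n for n in range(1, m + 1))

# s_p - s_q = sum_{n=q+1}^{p} x_n is at most 2**-q, so the partial
# sums form a Cauchy sequence and the series converges.
p, q = 40, 30
assert abs(partial_sum(p) - partial_sum(q)) < 2**-30
print(partial_sum(50))   # close to 1
```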
If f is a uniformly continuous map between the metric spaces M and N and (xn) is a Cauchy sequence in M, then (f(xn)) is a Cauchy sequence in N. If (xn) and (yn) are two Cauchy sequences in the rational, real or complex numbers, then the sum (xn + yn) and the product (xnyn) are also Cauchy sequences.
Generalizations
Cauchy sequences in topological vector spaces
There is also a concept of Cauchy sequence for a topological vector space X: Pick a local base B for X about 0; then (xk) is a Cauchy sequence if for all members V of B, there is some number N such that whenever n, m > N, xn − xm is an element of V. If the topology of X is compatible with a translation-invariant metric d, the two definitions agree.
Cauchy sequences in groups
There is also a concept of Cauchy sequence in a group $G$: let $H = (H_r)$ be a decreasing sequence of normal subgroups of $G$ of finite index. Then a sequence $(x_n)$ in $G$ is said to be Cauchy (with respect to $H$) if and only if for any $r$ there is $N$ such that $\forall m, n > N,\; x_n x_m^{-1} \in H_r$.
The set $C$ of such Cauchy sequences forms a group (under the componentwise product), and the set $C_0$ of null sequences (those with $\forall r,\ \exists N,\ \forall n > N,\; x_n \in H_r$) is a normal subgroup of $C$. The factor group $C / C_0$ is called the completion of $G$ with respect to $H$.
One can then show that this completion is isomorphic to the inverse limit of the sequence $(G / H_r)$.
An example of this construction, familiar in number theory and algebraic geometry, is the construction of the $p$-adic completion of the integers with respect to a prime $p$. In this case, $G$ is the integers under addition, and $H_r$ is the additive subgroup consisting of integer multiples of $p^r$.
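To make the definition concrete (a standard example, not in the original text), the partial sums $x_m = \sum_{k=0}^{m} p^k$ form a Cauchy sequence in this sense: for any $r$, take $N = r$; then whenever $m, n > r$,

```latex
x_n - x_m \;=\; \pm \sum_{k=\min(m,n)+1}^{\max(m,n)} p^{k} \;\equiv\; 0 \pmod{p^{\,r+1}},
```

so $x_n - x_m \in H_r$, and the sequence converges in the $p$-adic completion $\mathbb{Z}_p$.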
If $H$ is a cofinal sequence (i.e., any normal subgroup of finite index contains some $H_r$), then this completion is canonical in the sense that it is isomorphic to the inverse limit of $(G / H)_H$, where $H$ varies over all normal subgroups of finite index. For further details, see ch. I.10 in Lang's "Algebra".
In constructive mathematics
In constructive mathematics, Cauchy sequences often must be given with a modulus of Cauchy convergence to be useful. If $(x_1, x_2, x_3, \dots)$ is a Cauchy sequence in the set $X$, then a modulus of Cauchy convergence for the sequence is a function $\alpha$ from the set of natural numbers to itself, such that $\forall k\ \forall m, n > \alpha(k),\; |x_m - x_n| < 1/k$.
Clearly, any sequence with a modulus of Cauchy convergence is a Cauchy sequence. The converse (that every Cauchy sequence has a modulus) follows from the well-ordering property of the natural numbers (let $\alpha(k)$ be the smallest possible $N$ in the definition of Cauchy sequence, taking $r$ to be $1/k$). However, this well-ordering property does not hold in constructive mathematics (it is equivalent to the principle of excluded middle). On the other hand, this converse also follows (directly) from the principle of dependent choice (in fact, it will follow from the weaker AC00), which is generally accepted by constructive mathematicians. Thus, moduli of Cauchy convergence are needed directly only by constructive mathematicians who (like Fred Richman) do not wish to use any form of choice.
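For a concrete illustration (a standard example, not in the original text), the sequence $x_n = 1/n$ admits the modulus $\alpha(k) = k$: for all $m, n > k$,

```latex
|x_m - x_n| \;=\; \left|\frac{1}{m} - \frac{1}{n}\right| \;\le\; \max\!\left(\frac{1}{m}, \frac{1}{n}\right) \;<\; \frac{1}{k}.
```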
That said, using a modulus of Cauchy convergence can simplify both definitions and theorems in constructive analysis. Perhaps even more useful are regular Cauchy sequences, sequences with a given modulus of Cauchy convergence (usually $\alpha(k) = k$ or $\alpha(k) = 2^k$). Any Cauchy sequence with a modulus of Cauchy convergence is equivalent (in the sense used to form the completion of a metric space) to a regular Cauchy sequence; this can be proved without using any form of the axiom of choice. Regular Cauchy sequences were used by Errett Bishop in his Foundations of Constructive Analysis, but they have also been used by Douglas Bridges in a non-constructive textbook (ISBN 978-0-387-98239-7). However, Bridges also works on mathematical constructivism; the concept has not spread far outside of that milieu.
References
• Bourbaki, Nicolas (1972). Commutative Algebra, English translation, Addison-Wesley. ISBN 0-201-00644-8.
• Lang, Serge (1997). Algebra, 3rd ed., reprint w/ corr., Addison-Wesley. ISBN 978-0-201-55540-0.
• Spivak, Michael (1994). Calculus, 3rd ed., Berkeley, CA: Publish or Perish. ISBN 0-914098-89-6.
• Troelstra, A. S. and D. van Dalen. Constructivism in Mathematics: An Introduction. (for uses in constructive mathematics)
# Tight Approximation Algorithms for p-Mean Welfare Under Subadditive Valuations
### Abstract
We develop polynomial-time algorithms for the fair and efficient allocation of indivisible goods among $n$ agents that have subadditive valuations over the goods. We first consider the Nash social welfare as our objective and design a polynomial-time algorithm that, in the value oracle model, finds an $8n$-approximation to the Nash optimal allocation. Subadditive valuations include XOS (fractionally subadditive) and submodular valuations as special cases. Our result, even for the special case of submodular valuations, improves upon the previously best known $O(n \log n)$-approximation ratio of Garg et al. (2020). More generally, we study maximization of $p$-mean welfare. The $p$-mean welfare is parameterized by an exponent term $p \in (-\infty, 1]$ and encompasses a range of welfare functions, such as social welfare $(p = 1)$, Nash social welfare ($p \to 0$), and egalitarian welfare ($p \to -\infty$). We give an algorithm that, for subadditive valuations and any given $p \in (-\infty, 1]$, computes (in the value oracle model and in polynomial time) an allocation with $p$-mean welfare at least $\frac{1}{8n}$ times the optimal. Further, we show that our approximation guarantees are essentially tight for XOS and, hence, subadditive valuations. We adapt a result of Dobzinski et al. (2010) to show that, under XOS valuations, an $O (n^{1-\varepsilon})$ approximation for the $p$-mean welfare for any $p \in (-\infty,1]$ (including the Nash social welfare) requires exponentially many value queries; here, $\varepsilon>0$ is any fixed constant.
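For readers unfamiliar with the objective, here is an illustrative (not from the paper) computation of the generalized $p$-mean that parameterizes the welfare functions above: $p = 1$ gives the average (social welfare), the limit $p \to 0$ gives the geometric mean (Nash welfare), and $p \to -\infty$ tends to the minimum (egalitarian welfare).

```cpp
#include <cmath>
#include <vector>

// Generalized p-mean of a vector of (positive) agent values.
// p == 0 is handled as the limit case, i.e. the geometric mean.
double p_mean(const std::vector<double>& v, double p) {
    const double n = static_cast<double>(v.size());
    if (p == 0.0) {                        // p -> 0 limit: geometric mean
        double log_sum = 0.0;
        for (double x : v) log_sum += std::log(x);
        return std::exp(log_sum / n);
    }
    double sum = 0.0;
    for (double x : v) sum += std::pow(x, p);
    return std::pow(sum / n, 1.0 / p);
}
```

For values {2, 8}: the arithmetic mean (p = 1) is 5, the geometric mean (p = 0) is 4, and the harmonic mean (p = -1) is 3.2, illustrating how smaller p weighs the worse-off agent more heavily.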
Type
Publication
In ESA 2020
##### Anand Krishna
###### Game Theory Lab & Approximation Algorithms Lab
My research interests include Optimization, Fair Division and RL.
# Fixed Point Arithmetics in C++ using templates
I am trying to create a Fixed Point Arithmetic library: I call fixed point a number which has some bits reserved for its decimal part.
Here is the code :
#ifndef FIXEDPOINTNUMBER_HPP
#define FIXEDPOINTNUMBER_HPP
#include <type_traits>
#include <cstdint>
///////////////////////////////////////////////////////////
//////////////////// DECLARATION ////////////////////
///////////////////////////////////////////////////////////
/**
* @brief Provides fixed-point number calculations.
* @author Julien Vernay (JDM)
* @date 01-01-2018 (dd-mm-yyyy)
* @arg @c T Underlying type, no overhead
* @arg @c N Number of bits used for decimal part
* @details Fixed-Point Number uses an int value, so we only need integer manipulation with bitshift tricks instead of floating-point arithmetic.
* @details The underlying value @c val can be represented by : <em>VALUE = val / (2^N)</em>
*/
template<typename T, unsigned char N>
class FixedPointNumber {
public:
FixedPointNumber(); /**< @brief Constructs with 0 */
FixedPointNumber(T value, bool raw = 0); /**< @brief Constructs with a @c T value */
FixedPointNumber(float value); /**< @brief Constructs with a @c float value */
operator T() const; /**< @brief Casts to integer value of type @c T (eventually flooring) */
operator float() const; /**< @brief Casts to a float value */
T raw() const; /**< @brief Returns @c val without any casting */
template<unsigned char N2>
operator FixedPointNumber<T, N2>() const; /**< @brief Casts to another FixedPointNumber with same underlying type */
template<typename T2>
operator FixedPointNumber<T2, N>() const; /**< @brief Casts to another FixedPointNumber with same decimal part bits */
template<typename T2, unsigned char N2>
operator FixedPointNumber<T2, N2>() const; /**< @brief Casts to another FixedPointNumber */
FixedPointNumber<T, N>& operator+=(FixedPointNumber<T, N> const& rhs);
FixedPointNumber<T, N>& operator-=(FixedPointNumber<T, N> const& rhs);
FixedPointNumber<T, N>& operator*=(FixedPointNumber<T, N> const& rhs);
FixedPointNumber<T, N>& operator/=(FixedPointNumber<T, N> const& rhs);
FixedPointNumber<T, N> operator-() const;
bool operator==(FixedPointNumber<T, N> const& rhs) const;
bool operator>(FixedPointNumber<T, N> const& rhs) const;
private:
std::enable_if_t<std::is_integral_v<T>, T> val;
};
template<typename T, unsigned char N>
FixedPointNumber<T, N> operator+(FixedPointNumber<T, N> lhs, FixedPointNumber<T, N> const& rhs);
template<typename T, unsigned char N>
FixedPointNumber<T, N> operator-(FixedPointNumber<T, N> lhs, FixedPointNumber<T, N> const& rhs);
template<typename T, unsigned char N>
FixedPointNumber<T, N> operator*(FixedPointNumber<T, N> lhs, FixedPointNumber<T, N> const& rhs);
template<typename T, unsigned char N>
FixedPointNumber<T, N> operator/(FixedPointNumber<T, N> lhs, FixedPointNumber<T, N> const& rhs);
template<typename T, unsigned char N>
bool operator!=(FixedPointNumber<T, N> const& lhs, FixedPointNumber<T, N> const& rhs);
template<typename T, unsigned char N>
bool operator<(FixedPointNumber<T, N> const& lhs, FixedPointNumber<T, N> const& rhs);
template<typename T, unsigned char N>
bool operator>=(FixedPointNumber<T, N> const& lhs, FixedPointNumber<T, N> const& rhs);
template<typename T, unsigned char N>
bool operator<=(FixedPointNumber<T, N> const& lhs, FixedPointNumber<T, N> const& rhs);
///////////////////////////////////////////////////////////
//////////////////// DEFINITIONS ////////////////////
///////////////////////////////////////////////////////////
template<typename T, unsigned char N>
FixedPointNumber<T, N>::FixedPointNumber() : val(0) {}
template<typename T, unsigned char N>
FixedPointNumber<T, N>::FixedPointNumber(T value, bool raw) : val(raw ? value : value << N) {}
template<typename T, unsigned char N>
FixedPointNumber<T, N>::FixedPointNumber(float value) {
std::uint32_t value_int = *reinterpret_cast<std::uint32_t*>(&value);
std::uint32_t mantissa = (value_int & 0x007FFFFF) | 0x00800000;
std::int8_t exponent = ((value_int >> 23) & 0x000000FF) - 150 + N;
if (exponent >= 0)
mantissa <<= exponent;
else
mantissa >>= -exponent;
val = (value_int & 0x80000000) ? -static_cast<T>(mantissa) : static_cast<T>(mantissa);
}
template<typename T, unsigned char N>
FixedPointNumber<T, N>::operator T() const {
return static_cast<T>(val >> N); // arithmetic shift floors, matching the doc comment
}
template<typename T, unsigned char N>
FixedPointNumber<T, N>::operator float() const {
if (val == 0) return 0.f; //trivial case, needed to prevent infinite loops for CLZ
std::uint32_t mantissa = (val >= 0) ? val : -val;
std::uint8_t fbs = 31; //first bit set : fbs = floor(log2(mantissa))
#if defined(__GNUC__) //g++ compiler
fbs -= __builtin_clz(mantissa);
#elif defined(_MSC_VER) //MSVC compiler
fbs -= __lzcnt(mantissa);
#else //unknown compiler : using naive algorithm
for (std::uint32_t copy = mantissa; !(copy & 0x80000000); --fbs) copy <<= 1;
#endif
if (fbs <= 23)
mantissa <<= 23 - fbs;
else
mantissa >>= fbs - 23;
mantissa &= 0x007FFFFF; //keeping mantissa
mantissa |= (val < 0) ? 0x80000000 : 0; //sign
mantissa |= static_cast<std::uint32_t>(127 + fbs - N) << 23; //exponent
return *reinterpret_cast<float*>(&mantissa);
}
template<typename T, unsigned char N>
T FixedPointNumber<T, N>::raw() const {
return val;
}
template<typename T, unsigned char N>
template<unsigned char N2>
FixedPointNumber<T, N>::operator FixedPointNumber<T, N2>() const {
if (N >= N2)
return { static_cast<T>(val >> (N - N2)), true };
else
return { static_cast<T>(val << (N2 - N)), true };
}
template<typename T, unsigned char N>
template<typename T2>
FixedPointNumber<T, N>::operator FixedPointNumber<T2, N>() const {
return { static_cast<T2>(val), true };
}
template<typename T, unsigned char N>
template<typename T2, unsigned char N2>
FixedPointNumber<T, N>::operator FixedPointNumber<T2, N2>() const {
if (N >= N2)
return { static_cast<T2>(static_cast<T2>(val) >> (N - N2)), true };
else
return { static_cast<T2>(static_cast<T2>(val) << (N2 - N)), true };
}
template<typename T, unsigned char N>
FixedPointNumber<T, N>& FixedPointNumber<T, N>::operator+=(FixedPointNumber<T, N> const& rhs) {
val += rhs.val;
return *this;
}
template<typename T, unsigned char N>
FixedPointNumber<T, N>& FixedPointNumber<T, N>::operator-=(FixedPointNumber<T, N> const& rhs) {
val -= rhs.val;
return *this;
}
template<typename T, unsigned char N>
FixedPointNumber<T, N>& FixedPointNumber<T, N>::operator*=(FixedPointNumber<T, N> const& rhs) {
val = ((+val) * (+rhs.val)) >> N;
return *this;
}
template<typename T, unsigned char N>
FixedPointNumber<T, N>& FixedPointNumber<T, N>::operator/=(FixedPointNumber<T, N> const& rhs) {
val = ((+val) << N) / rhs.val;
return *this;
}
template<typename T, unsigned char N>
FixedPointNumber<T, N> FixedPointNumber<T, N>::operator-() const {
return { static_cast<T>(-val), true };
}
template<typename T, unsigned char N>
bool FixedPointNumber<T, N>::operator==(FixedPointNumber<T, N> const& rhs) const {
return val == rhs.val;
}
template<typename T, unsigned char N>
bool FixedPointNumber<T, N>::operator>(FixedPointNumber<T, N> const& rhs) const {
return val > rhs.val;
}
template<typename T, unsigned char N>
FixedPointNumber<T, N> operator+(FixedPointNumber<T, N> lhs, FixedPointNumber<T, N> const& rhs) {
return lhs += rhs;
}
template<typename T, unsigned char N>
FixedPointNumber<T, N> operator-(FixedPointNumber<T, N> lhs, FixedPointNumber<T, N> const& rhs) {
return lhs -= rhs;
}
template<typename T, unsigned char N>
FixedPointNumber<T, N> operator*(FixedPointNumber<T, N> lhs, FixedPointNumber<T, N> const& rhs) {
return lhs *= rhs;
}
template<typename T, unsigned char N>
FixedPointNumber<T, N> operator/(FixedPointNumber<T, N> lhs, FixedPointNumber<T, N> const& rhs) {
return lhs /= rhs;
}
template<typename T, unsigned char N>
bool operator!=(FixedPointNumber<T, N> const& lhs, FixedPointNumber<T, N> const& rhs) {
return !(lhs == rhs);
}
template<typename T, unsigned char N>
bool operator<(FixedPointNumber<T, N> const& lhs, FixedPointNumber<T, N> const& rhs) {
return rhs > lhs;
}
template<typename T, unsigned char N>
bool operator>=(FixedPointNumber<T, N> const& lhs, FixedPointNumber<T, N> const& rhs) {
return !(rhs > lhs);
}
template<typename T, unsigned char N>
bool operator<=(FixedPointNumber<T, N> const& lhs, FixedPointNumber<T, N> const& rhs) {
return !(lhs > rhs);
}
#endif
So to sum it up, the class FixedPointNumber contains only one variable of integral type T, in which the N least significant bits are the decimal part and the remaining 8 * sizeof(T) - N most significant bits are the integral part.
The first aim was to have non-integer values smaller than a float, with all the syntax and casting to go with them.
Operators +,-,*,/ and !=,<, >=, <= are not methods of the class in order to have better encapsulation.
Here is an example for a main.cpp file :
#include "FixedPointNumber.hpp"
#include <iostream>
using namespace std;
using Nbr8 = FixedPointNumber<uint8_t, 8>; //domain : [0, 1[ epsilon = 1/256 8 bits
using Nbr16A = FixedPointNumber<int16_t, 11>; //domain : [-16, 16[ epsilon = 1/2048 16 bits
using Nbr16B = FixedPointNumber<uint16_t, 15>; //domain : [-1, 1[ epsilon = 1/32768 16 bits
int main() {
{
Nbr8 a = 0.37f, b = 0.52f;
float af = 0.37f, bf = 0.52f;
cout << af << " -> " << float(a) << "\t\tError (%) : " << 100 * (af - float(a)) / af << endl;
cout << bf << " -> " << float(b) << "\t\tError (%) : " << 100 * (bf - float(b)) / bf << endl;
cout << af + bf << " -> " << float(a + b) << "\t\tError (%) : " << 100 * (af + bf - float(a + b)) / (af + bf) << endl;
cout << "Nbr8 recap : 25% bits for about 0.5% error if interval correctly chosen" << endl << endl;
}
{
Nbr16B a = 0.37f, b = 0.52f;
float af = 0.37f, bf = 0.52f;
cout << af << " -> " << float(a) << "\t\tError (%) : " << 100 * (af - float(a)) / af << endl;
cout << bf << " -> " << float(b) << "\t\tError (%) : " << 100 * (bf - float(b)) / bf << endl;
cout << af + bf << " -> " << float(a + b) << "\t\t\tError (%) : " << 100 * (af + bf - float(a + b)) / (af + bf) << endl;
cout << "Nbr16 recap : 50% bits for about 0.002% error if interval correctly chosen" << endl << endl;
Nbr16A a2 = a, b2 = b;
cout << "switching point position, less precision but wider domain !" << endl;
cout << "a2 = " << float(a2) << " b2 = " << float(b2) << " AGAINST a1 = " << float(a) << " b1 = " <<float(b) << endl;
cout << "Notice that gap between two consecutive values is constant in a domain, contrary to floating point numbers." << endl << endl;
}
return 0;
}
Complete and updated code can be found here.
• Please copy your code here, as we're not sure if the link will be alive in the years to come. Also, it would be helpful to add some explanations and concerns about your code – Incomputable Jan 1 '18 at 20:45
• It is now completed and I added small description :) – Julien Vernay Jan 1 '18 at 21:21
• Great, voted for reopen. The only thing left would be to provide small example main() to demonstrate the usage of the class. No need to cover all of the library, just something that is "selling" point of your library. It might be precision of your values against built-in, or anything else. – Incomputable Jan 1 '18 at 21:25
• Not enough for a full on review, because some other folks already did this, but they haven't mentioned those two points: 1) constexpr and noexcept 2) Some of your operators aren't found by ADL. The way to do this normally is to make them friends. – Rakete1111 Jan 2 '18 at 23:52
• @JulienVernay I would make them constexpr, but now I realized that you use compiler builtins, and I don't know if they are constexpr too. I mean, the main reason is "why not?" IMO. Might make your variables more optimization friendly, and then you can use them in constant expressions, which is nice. noexcept can also help the compiler by making optimizations if you are using exceptions (don't know if you are, if you are not, I don't think it makes a big difference). Nice that you ask, because I don't know. On further investigation, it doesn't matter. Ignore that part and cheers :) – Rakete1111 Jan 3 '18 at 0:14
Looks really good!
A few improvements:
## Confusing construction.
Construction from raw is making your constructors more complicated than they need to be.
Your class would be easier to work with if constructing from a value ALWAYS constructs by actual value.
Also consider that boolean arguments are often hard to understand at the call site. Can someone not familiar with your library tell what the following line does without having to sift through your header? No, and it's a problem here since raw construction is clearly going to be an unfamiliar edge case.
FixedPointNumber<char, 4> val(12, true);
To fix these issues, I would instead add a static member function, and get rid of the (T,bool) constructor entirely:
static FixedPointNumber<T, N> from_raw(T data);
My example becomes proper self-documenting code at the call site:
auto val = FixedPointNumber<char, 4>::from_raw(12);
## No conversion to/from double
A bit of a no-brainer, but that would definitely be nice.
In fact... it would be nice to support arbitrary floating point formats through a traits type.
## Concerns about overflow behavior of multiply/division operation
val = ((+val) * (+rhs.val)) >> N;
I don't really like how inconsistent you are being with the overflow behavior. char and short get promoted, but not int or long? I would rather see everything get promoted, or nothing.
Edit: followup:
I'm not sure to understand what you meant by "through a traits type"
Imagine that you had a type that looks very roughly like this, and your conversion functions used these values to build the resulting float value.
template<typename T>
struct float_traits;
template<>
struct float_traits<float> {
static constexpr int mantissa_offset = 0;
static constexpr int mantissa_bits = 23;
static constexpr int exponent_offset = 23;
static constexpr int exponent_bits = 8;
static constexpr int sign_bit_offset = 31;
};
Adding support for double or half-precision floats would just be a matter of adding the proper specialization of float_traits, which a user can even do within their own codebase.
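For instance, a double specialization might look like this (a sketch only; the member names mirror the trait above, with the IEEE 754 binary64 field widths filled in):

```cpp
// Primary template left undefined: using an unsupported type fails to compile.
template<typename F>
struct float_traits;

template<>
struct float_traits<float> {           // IEEE 754 binary32 layout
    static constexpr int mantissa_offset = 0;
    static constexpr int mantissa_bits = 23;
    static constexpr int exponent_offset = 23;
    static constexpr int exponent_bits = 8;
    static constexpr int sign_bit_offset = 31;
};

template<>
struct float_traits<double> {          // IEEE 754 binary64 layout
    static constexpr int mantissa_offset = 0;
    static constexpr int mantissa_bits = 52;
    static constexpr int exponent_offset = 52;
    static constexpr int exponent_bits = 11;
    static constexpr int sign_bit_offset = 63;
};
```

The conversion functions would then be written once against `float_traits<F>` and work for any specialized `F`.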
• The "static constructor" for raw is what I was searching, because yes, it looks ugly to have this bool value in the constructor x) . Yes, double would be nice ! I'll give it a try. I'm not sure to understand what you meant by "through a traits type" ? Can you explain please ? Yes overflow is a problem, but even if int can be promoted to long long, how can I promote long long ? Thanks for your review ! – Julien Vernay Jan 2 '18 at 21:18
• @JulienVernay I've ammended the answer itself with the answer to your traits question. – Frank Jan 2 '18 at 21:58
• I understand what you would say, and I think it could be nice to add it ! I'll work on it ! Thanks – Julien Vernay Jan 2 '18 at 22:30
I recently had my interest in fixed point math piqued as part of a side project I was doing, so this is great to see! Having a C++ class for fixed point numbers would make things so easy!
I'm guessing that the weird formatting is due to copy/paste issues and that your actual code uses indentation. If not, it definitely should.
# Description
I notice throughout your comments you say "decimal part". However, you don't use any decimal representation. I was thinking that maybe you were working in BCD or something like that. I would change all references to "decimal" to be "fractional" to be more precise.
# Don't Use using namespace std
You've written:
using namespace std;
in your main.cpp file. If that line ever migrates into a header, every file which includes that header has all of the std namespace defined, too. If I have my own max() function that's not in std and I include such a header, I will get conflicts on my max() function. See here for more details on why this isn't a good idea.
# Usage
Seeing the examples of how to use this type in your example main.cpp file left me very confused, even with a comment describing the range. The way you have it now, I have to know how many bits are in a given type's representation, then subtract from that the number of bits I want for the fractional part in order to figure out the range I'll end up with. I also have to make sure that the type I supply has enough bits for the representation. (What happens if I do using Nbr32 = FixedPointNumber<uint8_t, 17>;?)
Furthermore, the type I supply may be unsigned, but the range of the resulting type can still cover negative values. Does it make sense to have an unsigned fixed point type and a signed fixed point type? I'm not sure. But it is odd to supply an unsigned type and have it end up being signed.
I'm not an expert at templates, so I'm not entirely sure what's possible. I think it would be better to have the template take the number of bits for the integral part and the number of bits for the fractional part and choose the type it uses internally based on that and not require a caller to figure it out.
In other words, I'd like to use it like this:
using Fixed16_16 = FixedPointNumber<16, 16>;
using Fixed8_4 = FixedPointNumber<8, 4>;
As I say, I don't know whether it's possible to make that work, but it sure would be nice. If you can get closer to that, it would be great.
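It is in fact possible; a minimal sketch of the idea (hypothetical names, not part of the reviewed library) deduces the storage type from the requested bit counts with `std::conditional_t`:

```cpp
#include <cstdint>
#include <type_traits>

// Pick the smallest signed fixed-width type that holds the requested bits.
template<unsigned Bits>
using storage_t =
    std::conditional_t<(Bits <= 8),  std::int8_t,
    std::conditional_t<(Bits <= 16), std::int16_t,
    std::conditional_t<(Bits <= 32), std::int32_t, std::int64_t>>>;

// The caller specifies integral and fractional bit counts; the underlying
// type is an implementation detail.
template<unsigned IntBits, unsigned FracBits>
struct FixedPoint {
    static_assert(IntBits + FracBits <= 64, "at most 64 bits are supported");
    using storage = storage_t<IntBits + FracBits>;
    storage val;
};
```

With this, `FixedPoint<16, 16>` would use a 32-bit integer and `FixedPoint<8, 4>` a 16-bit one, without the caller naming either type. As Frank's comment below notes, the remaining care is masking unused bits when the requested width does not exactly fill the storage type.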
# Naming
I'm all for long descriptive names. However, I do feel like FixedPointNumber is too long. We don't call float a floatingPointNumber or int an intNumber. I think it would be fine as FixedPoint or even Fixed (though be aware that Apple has used that type name in the past for 16.16 fixed point numbers).
# Future Directions
I'd love to see a whole math library for this type. Things like trigonometric and transcendental functions would be helpful. (You might look into cordics if you plan to pursue something like that.)
• Regarding infering the storage type from the requested number of bits: It's pretty easy to do at face value, but a fair bit of care would be required to ensure that the behavior of the class is consistent. Specifically, I suspect most operations would require explicitly masking out of any unused bits, which would end up being a nasty performance hit. – Frank Jan 2 '18 at 20:10
• I used decimal because I didn't know how to express it in english, thanks for advice. I use using namespace std only in main.cpp so it should be fine ? I will implement static methods like lower() and upper() (constexpr ?) to have access easily to the domain (and maybe integer_bits() and fractional_bits() ?). Unsigned/signed seems correct to me, as unsigned can cover signed domain. In unsigned case, val is specified as unsigned so every operation uses unsigned. For instance, in float conversion, the test val < 0 (for specifying sign of float) is always false, so float > 0. – Julien Vernay Jan 2 '18 at 21:02
• About using more bits for the fractional part than available in the underlying type, I will try for example that the domain of FixedPointNumber<uint8, 9> is [0, 0.5[ (as if the bit not present was 0), and if I can't, I'll do a comparison sizeof(T)>=N to produce a compile-time error. I will add a template<uint TotalBits, uint FracBits> FixedPointNumber version which determines the underlying type. The name is too long, I agree, but I think it is not important because people will probably use using or typedef, because it is unlikely to use many types at the same moment for number representation. – Julien Vernay Jan 2 '18 at 21:12
• I think CORDICs can be implemented, but not in the near future ^^. Thanks for your review! PS: Indeed, no indentation results from bad copy/pasting x) – Julien Vernay Jan 2 '18 at 21:21
• Re "Usage": I'd say that is an API decision, do you want an expert interface (the caller can choose all options) or a user-friendly "it just works" interface (the caller tells what he wants and gets something that works). Currently, the interface seems a bit in between those extremes (can choose underlying type and number of fractional bits, cannot choose signedness and "integral bits", though the latter is implied). Of course, one can always put a user-friendly interface on top of an expert one – hoffmale Jan 2 '18 at 23:01
# Find the minimum value of $\left(\frac{a}{x} + \frac{b}{y} + \frac{c}{z}\right)\sqrt{yz + zx + xy}$
Going back a few more years, you can find more and more interesting problems as time turns back. I am still surprised at how easy this competition has become. Then I came across this problem, which goes as follows.
$$x$$, $$y$$ and $$z$$ are positive variables and $$a = F_{n - 1}$$, $$b = F_{n + 1}$$ are positive parameters ($$F_n$$ is the $$n^{th}$$ Fibonacci number).
Find the minimum value of $$\left(\dfrac{1}{x} + \dfrac{a}{y} + \dfrac{b}{z}\right)\sqrt{yz + zx + xy}$$.
It was simple, yet difficult. I wished to find a solution without using Lagrange multipliers but found no results. I would be grateful if you have a solution like so.
• In the general case the answer is very ugly and it's just impossible to write it. – Michael Rozenberg Mar 23 at 7:18
• By the way, for $(a,b,c)=(1,2,5)$ we can get a nice answer. – Michael Rozenberg Mar 23 at 7:37
• Now that's what they asked in the competition for the participants in the lower grade that same year. – Lê Thành Đạt Mar 23 at 10:33
• “It was simple, yet difficult.” – What is that supposed to mean? – Martin R Mar 23 at 10:52
• There are only 2 lines to ask for the problem but nobody can solve it. – Lê Thành Đạt Mar 23 at 10:53
Hint: $$\frac{a}{x}+\frac{b}{y}+\frac{c}{z}\geq 3\sqrt[3]\frac{abc}{xyz}$$ and $$yz+zx+xy\geq 3\sqrt[3]{(xyz)^2}$$ Putting things together we obtain$$\left(\frac{a}{x}+\frac{b}{y}+\frac{c}{z}\right)\sqrt{xy+yz+zx}\geq 3\sqrt{3}\sqrt [3]{abc}$$
• For equality this needs $a=b=c$, which is not assured as these are given parameters, hence this does not give the minimum, only a lower bound. – Macavity Mar 23 at 6:48
• @Sonnhard Your reasoning is total wrong. Try to understand when does the equality occur for different $a$, $b$ and $c$. – Michael Rozenberg Mar 23 at 7:20
# NAG Toolbox: nag_roots_contfn_cntin_rcomm (c05ax)
## Purpose
nag_roots_contfn_cntin_rcomm (c05ax) attempts to locate a zero of a continuous function using a continuation method based on a secant iteration. It uses reverse communication for evaluating the function.
## Syntax
[x, c, ind, ifail] = c05ax(x, fx, tol, ir, c, ind, 'scal', scal)
[x, c, ind, ifail] = nag_roots_contfn_cntin_rcomm(x, fx, tol, ir, c, ind, 'scal', scal)
## Description
nag_roots_contfn_cntin_rcomm (c05ax) uses a modified version of an algorithm given in Swift and Lindfield (1978) to compute a zero $\alpha$ of a continuous function $f(x)$. The algorithm used is based on a continuation method in which a sequence of problems
$$f(x) - \theta_r f(x_0) = 0, \quad r = 0, 1, \dots, m$$
are solved, where $1 = \theta_0 > \theta_1 > \cdots > \theta_m = 0$ (the value of $m$ is determined as the algorithm proceeds) and where $x_0$ is your initial estimate for the zero of $f(x)$. For each $\theta_r$ the current problem is solved by a robust secant iteration using the solution from earlier problems to compute an initial estimate.
You must supply an error tolerance tol. tol is used directly to control the accuracy of solution of the final problem ($\theta_m = 0$) in the continuation method, and $\sqrt{\mathrm{tol}}$ is used to control the accuracy in the intermediate problems ($\theta_1, \theta_2, \dots, \theta_{m-1}$).
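The continuation idea can be sketched in a few lines of Python. This is a toy illustration, not the NAG implementation: the library chooses $m$ and the $\theta$ sequence adaptively and applies extra safeguards, whereas the fixed theta list and helper names here are invented for the example.

```python
import math

def secant(g, x0, x1, eps, max_iter=50):
    """Plain secant iteration for g(x) = 0, stopping when |dx| <= eps."""
    f0, f1 = g(x0), g(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break  # flat secant: cannot advance further
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, g(x2)
        if abs(x1 - x0) <= eps:
            break
    return x1

def continuation_zero(f, x0, tol, thetas=(1.0, 0.5, 0.25, 0.0)):
    """Solve the sequence of problems f(x) - theta*f(x0) = 0 for a
    decreasing theta sequence ending in 0, warm-starting each problem
    from the previous solution (a sketch of the continuation method)."""
    fx0 = f(x0)
    x = x0
    for theta in thetas:
        g = lambda x, t=theta: f(x) - t * fx0
        # tol for the final problem, sqrt(tol) for intermediate ones:
        eps = tol if theta == 0.0 else math.sqrt(tol)
        x = secant(g, x, x + 1e-4, eps)
    return x

root = continuation_zero(lambda x: x - math.exp(-x), 1.0, 1e-10)
print(root)  # ≈ 0.5671432904
```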
## References
Swift A and Lindfield G R (1978) Comparison of a continuation method for the numerical solution of a single nonlinear equation Comput. J. 21 359–362
## Parameters
Note: this function uses reverse communication. Its use involves an initial entry, intermediate exits and re-entries, and a final exit, as indicated by the parameter ind. Between intermediate exits and re-entries, all parameters other than fx must remain unchanged.
### Compulsory Input Parameters
1: x – double scalar
On initial entry: an initial approximation to the zero.
2: fx – double scalar
On initial entry: if ind = 1, fx need not be set.
If ind = −1, fx must contain $f(x)$ for the initial value of x.
On intermediate re-entry: must contain $f(x)$ for the current value of x.
3: tol – double scalar
On initial entry: a value that controls the accuracy to which the zero is determined. tol is used in determining the convergence of the secant iteration used at each stage of the continuation process. It is used directly when solving the last problem ($\theta_m = 0$ in Section [Description]), and $\sqrt{\mathrm{tol}}$ is used for the problem defined by $\theta_r$, $r < m$. Convergence to the accuracy specified by tol is not guaranteed, and so you are recommended to find the zero using at least two values for tol to check the accuracy obtained.
Constraint: tol > 0.0.
4: ir – int64/int32/nag_int scalar
On initial entry: indicates the type of error test required, as follows. Solving the problem defined by $\theta_r$, $1 \le r \le m$, involves computing a sequence of secant iterates $x_r^0, x_r^1, \dots$. This sequence will be considered to have converged only if:
for ir = 0,
$$|x_r^{(i+1)} - x_r^{(i)}| \le \mathit{eps} \times \max(1.0, |x_r^{(i)}|),$$
for ir = 1,
$$|x_r^{(i+1)} - x_r^{(i)}| \le \mathit{eps},$$
for ir = 2,
$$|x_r^{(i+1)} - x_r^{(i)}| \le \mathit{eps} \times |x_r^{(i)}|,$$
for some $i > 1$; here $\mathit{eps}$ is either tol or $\sqrt{\mathrm{tol}}$ as discussed above. Note that there are other subsidiary conditions (not given here) which must also be satisfied before the secant iteration is considered to have converged.
Constraint: ir = 0, 1 or 2.
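The three error tests can be written out explicitly. This is a sketch only: the library's subsidiary convergence conditions are omitted, and the function name is invented for illustration.

```python
def converged(x_new, x_old, eps, ir):
    """Error test selected by ir: 0 = mixed, 1 = absolute, 2 = relative."""
    dx = abs(x_new - x_old)
    if ir == 0:
        return dx <= eps * max(1.0, abs(x_old))
    if ir == 1:
        return dx <= eps
    if ir == 2:
        return dx <= eps * abs(x_old)
    raise ValueError("ir must be 0, 1 or 2")

print(converged(100.00001, 100.0, 1e-6, 2))  # relative test passes: True
print(converged(100.00001, 100.0, 1e-6, 1))  # absolute test fails: False
```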
5: c(26) – double array
(c(5) contains the current $\theta_r$; this value may be useful in the event of an error exit.)
6: ind – int64/int32/nag_int scalar
On initial entry: must be set to 1 or −1.
ind = 1
fx need not be set.
ind = −1
fx must contain $f(x)$.
Constraint: on entry ind = −1, 1, 2, 3 or 4.
### Optional Input Parameters
1: scal – double scalar
On initial entry: a factor for use in determining a significant approximation to the derivative of $f(x)$ at $x = x_0$, the initial value. A number of difference approximations to $f'(x_0)$ are calculated using
$$f'(x_0) \approx (f(x_0 + h) - f(x_0)) / h$$
where $|h| < |\mathrm{scal}|$ and $h$ has the same sign as scal. A significance (cancellation) check is made on each difference approximation and the approximation is rejected if insignificant.
Suggested value: $\sqrt{\epsilon}$, where $\epsilon$ is the machine precision returned by nag_machine_precision (x02aj).
Default: $\sqrt{\text{machine precision}}$
Constraint: scal must be sufficiently large that $x + \mathrm{scal} \ne x$ on the computer.
### Input Parameters Omitted from the MATLAB Interface
None.
### Output Parameters
1: x – double scalar
On intermediate exit: the point at which $f$ must be evaluated before re-entry to the function.
On final exit: the final approximation to the zero.
2: c(26) – double array
3: ind – int64/int32/nag_int scalar
On intermediate exit: contains 2, 3 or 4. The calling program must evaluate $f$ at x, storing the result in fx, and re-enter nag_roots_contfn_cntin_rcomm (c05ax) with all other parameters unchanged.
On final exit: contains 0.
4: ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).
## Error Indicators and Warnings
Errors or warnings detected by the function:
ifail = 1
On entry, tol ≤ 0.0, or ir ≠ 0, 1 or 2.
ifail = 2
The parameter ind is incorrectly set on initial or intermediate entry.
ifail = 3
scal is too small, or significant derivatives of $f$ cannot be computed (this can happen when $f$ is almost constant and nonzero, for any value of scal).
ifail = 4
The current problem in the continuation sequence cannot be solved; see c(5) for the value of $\theta_r$. The most likely explanation is that the current problem has no solution, either because the original problem had no solution or because the continuation path passes through a set of insoluble problems. This latter reason for failure should occur rarely, and not at all if the initial approximation to the zero is sufficiently close. Other possible explanations are that tol is too small and hence the accuracy requirement is too stringent, or that tol is too large and the initial approximation too poor, leading to successively worse intermediate solutions.
ifail = 5
Continuation away from the initial point is not possible. This error exit will usually occur if the problem has not been properly posed or the error requirement is extremely stringent.
ifail = 6
The final problem (with $\theta_m = 0$) cannot be solved. It is likely that too much accuracy has been requested, or that the zero is at $\alpha = 0$ and ir = 2.
## Accuracy
The accuracy of the approximation to the zero depends on tol and ir. In general, decreasing tol will give more accurate results. Care must be exercised when using the relative error criterion (ir = 2).
If the zero is at x = 0, or if the initial value of x and the zero bracket the point x = 0, it is likely that an error exit with ifail = 4, 5 or 6 will occur.
It is possible to request too much or too little accuracy. Since it is not possible to achieve more than machine accuracy, a value of tol smaller than machine precision should not be input and may lead to an error exit with ifail = 4, 5 or 6. For the reasons discussed under ifail = 4 in Section [Error Indicators and Warnings], tol should not be taken too large, say no larger than tol = 1.0e−3.
For most problems, the time taken on each call to nag_roots_contfn_cntin_rcomm (c05ax) will be negligible compared with the time spent evaluating $f(x)$ between calls to nag_roots_contfn_cntin_rcomm (c05ax). However, the initial value of x and the choice of tol will clearly affect the timing. The closer x is to the root, the fewer evaluations of $f$ are required. The effect of the choice of tol will not be large, in general, unless tol is very small, in which case the timing will increase.
## Example
```function nag_roots_contfn_cntin_rcomm_example
fx = 0;
c = zeros(26, 1);
for k = 3:4
    x = 1;
    tol = 10^-k;
    ir = int64(0);
    ind = int64(1);
    while (ind ~= 0)
        [x, c, ind, ifail] = nag_roots_contfn_cntin_rcomm(x, fx, tol, ir, c, ind);
        fx = x - exp(-x);
    end
    if ifail == 4 || ifail == 6
        fprintf('FTol = %11.4e, final value = %11.4e, theta = %10.2e\n', tol, x, c(5));
    elseif ifail == 0
        fprintf('Tol is %11.4e, Root is %11.4e\n', tol, x);
    end
end
```
```
Tol is 1.0000e-03, Root is 5.6715e-01
Tol is 1.0000e-04, Root is 5.6715e-01
```
|
{}
|
Code Loading
Julia has two mechanisms for loading code:
1. Code inclusion: e.g. include("source.jl"). Inclusion allows you to split a single program across multiple source files. The expression include("source.jl") evaluates the contents of the file source.jl in the global scope of the module where the include call occurs. If include("source.jl") is called multiple times, source.jl is evaluated multiple times. The included path, source.jl, is interpreted relative to the file where the include call occurs. This makes it simple to relocate a subtree of source files. In the REPL, included paths are interpreted relative to the current working directory, pwd().
2. Package loading: e.g. import X or using X. The import mechanism makes a package—i.e. an independent, reusable collection of Julia code, wrapped in a module—available by the name X inside the importing module. If the same package X is imported multiple times in the same Julia session, subsequent imports refer to the module loaded the first time. Note, however, that import X can load different packages in different contexts: X can refer to one package named X in the main project, but potentially to different packages also named X in each dependency. More on this mechanism below.
Federation of packages
Julia supports federated package management: multiple independent parties can maintain both public and private packages and registries of packages, and projects can depend on a mix of public and private packages from different registries. Packages from various registries are installed and managed using a common set of tools and workflows. The Pkg package manager that ships with Julia lets you install and manage your project's dependencies. It assists in creating and manipulating project files (which describe what other projects your project depends on) and manifest files (which snapshot the exact versions of your project's complete dependency graph).
Environments
1. A project environment is a directory with a project file and an optional manifest file, and forms an explicit environment. The project file determines the names and identities of a project's direct dependencies. The manifest file, if present, gives a complete dependency graph, including all direct and indirect dependencies, the exact version of each dependency, and sufficient information to locate and load the correct version.
2. A package directory is a directory containing the source trees of a set of packages as subdirectories, and forms an implicit environment. If X is a subdirectory of a package directory and X/src/X.jl exists, then the package X is available in the package directory environment and X/src/X.jl is the source file by which it is loaded.
• Project environments provide reproducibility. By checking a project environment into version control—e.g. a git repository—along with the rest of the project's source code, you can reproduce the exact state of the project and all of its dependencies. The manifest file, in particular, captures the exact version of every dependency, identified by a cryptographic hash of its source tree, which makes it possible for Pkg to retrieve the correct versions and be sure that you are running the exact code that was recorded for all dependencies.
• Package directories are convenient when a full, carefully tracked project environment is unnecessary. They are useful when you want to put a set of packages somewhere and be able to use them directly, without needing to create a project environment for them.
• Stacked environments allow for adding tools to the primary environment. You can push an environment containing development tools onto the end of the stack to make them available from the REPL and scripts, but not from inside packages.
• roots: name::Symbol ⟶ uuid::UUID
An environment's roots map assigns package names to UUIDs for all the top-level dependencies that the environment makes available to the main project (i.e. the ones that can be loaded in Main). When Julia encounters import X in the main project, it looks up the identity of X as roots[:X].
• graph: context::UUID × name::Symbol ⟶ uuid::UUID
An environment's graph is a multilevel map which assigns, for each context UUID, a map from names to UUIDs—similar to the roots map but specific to that context. When Julia runs import X in the code of the package whose UUID is context, it looks up the identity of X as graph[context][:X]. In particular, this means that import X can refer to different packages depending on context.
• paths: uuid::UUID × name::Symbol ⟶ path::String
The paths map assigns to each package UUID-name pair the location of that package's entry-point source file. After the identity of X in import X has been resolved to a UUID via roots or graph (depending on whether it is loaded from the main project or from a dependency), Julia determines which file to load to acquire X by looking up paths[uuid,:X] in the environment. Including this file should define a module named X. Once the package is loaded, any subsequent import resolving to the same uuid simply creates a binding to the already-loaded package module.
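The interplay of the three maps can be sketched with plain dictionaries. The UUIDs, package names, and paths below are hypothetical, invented for illustration; real resolution also handles missing entries, environment stacking, and caching of loaded modules.

```python
from uuid import UUID

# Hypothetical environment: the main project sees X; package X depends on Y.
roots = {"X": UUID("11111111-1111-1111-1111-111111111111")}
graph = {
    UUID("11111111-1111-1111-1111-111111111111"): {
        "Y": UUID("22222222-2222-2222-2222-222222222222"),
    },
}
paths = {
    (UUID("11111111-1111-1111-1111-111111111111"), "X"): "/depot/packages/X/src/X.jl",
    (UUID("22222222-2222-2222-2222-222222222222"), "Y"): "/depot/packages/Y/src/Y.jl",
}

def resolve(name, context=None):
    """import `name`: use roots in the main project, graph[context] inside
    the package whose UUID is `context`; then look up the entry point."""
    table = roots if context is None else graph[context]
    uuid = table[name]
    return uuid, paths[(uuid, name)]

uuid_x, path_x = resolve("X")                  # import X from Main
uuid_y, path_y = resolve("Y", context=uuid_x)  # import Y from inside X
print(path_x)  # /depot/packages/X/src/X.jl
print(path_y)  # /depot/packages/Y/src/Y.jl
```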
Project environments
The roots map of a project environment is determined by the contents of its project file: specifically, its top-level name and uuid entries and its [deps] section (all optional). Consider the following example project file for the hypothetical application App, as described earlier:
name = "App"
uuid = "8f986787-14fe-4607-ba5d-fbff2944afa9"
[deps]
Priv = "ba13f791-ae1d-465a-978b-69c3ad90f72b"
Pub = "c07ecb7d-0dc9-4db7-8803-fadaaeaf08e1"
This project file implies the following roots map, represented here as a Julia dictionary:
roots = Dict(
    :App  => UUID("8f986787-14fe-4607-ba5d-fbff2944afa9"),
    :Priv => UUID("ba13f791-ae1d-465a-978b-69c3ad90f72b"),
    :Pub  => UUID("c07ecb7d-0dc9-4db7-8803-fadaaeaf08e1"),
)
The manifest file for App, which records the full dependency graph, might look like this:
[[Priv]] # the private one
deps = ["Pub", "Zebra"]
path = "deps/Priv"
uuid = "ba13f791-ae1d-465a-978b-69c3ad90f72b"
[[Priv]] # the public one
uuid = "2d15fe94-a1f7-436c-a4d8-07a9a496e01c"
git-tree-sha1 = "1bf63d3be994fe83456a03b874b409cfd59a6373"
version = "0.1.5"
[[Pub]]
git-tree-sha1 = "9ebd50e2b0dd1e110e842df3b433cb5869b0dd38"
version = "2.1.4"
[Pub.deps]
Priv = "2d15fe94-a1f7-436c-a4d8-07a9a496e01c"
Zebra = "f7a24cb4-21fc-4002-ac70-f0e3a0dd3f62"
[[Zebra]]
uuid = "f7a24cb4-21fc-4002-ac70-f0e3a0dd3f62"
git-tree-sha1 = "e808e36a5d7173974b90a15a353b564f3494092f"
version = "3.4.2"
• The application uses two different packages named Priv: a private one, which is a root dependency, and a public one, which is an indirect dependency through Pub. They are distinguished by different UUIDs and they have different dependencies:
• The private Priv depends on the Pub and Zebra packages.
• The public Priv has no dependencies.
• The application also depends on the Pub package, which in turn depends on the public Priv and on the same Zebra package that the private Priv depends on.
graph = Dict(
    # Priv – the private one:
    UUID("ba13f791-ae1d-465a-978b-69c3ad90f72b") => Dict(
        :Pub   => UUID("c07ecb7d-0dc9-4db7-8803-fadaaeaf08e1"),
        :Zebra => UUID("f7a24cb4-21fc-4002-ac70-f0e3a0dd3f62"),
    ),
    # Priv – the public one:
    UUID("2d15fe94-a1f7-436c-a4d8-07a9a496e01c") => Dict(),
    # Pub:
    UUID("c07ecb7d-0dc9-4db7-8803-fadaaeaf08e1") => Dict(
        :Priv  => UUID("2d15fe94-a1f7-436c-a4d8-07a9a496e01c"),
        :Zebra => UUID("f7a24cb4-21fc-4002-ac70-f0e3a0dd3f62"),
    ),
    # Zebra:
    UUID("f7a24cb4-21fc-4002-ac70-f0e3a0dd3f62") => Dict(),
)
When Julia runs import Priv in the code of the Pub package, it looks up graph[UUID("c07ecb7d-0dc9-4db7-8803-fadaaeaf08e1")][:Priv] and finds the UUID of the public Priv, so inside Pub the name Priv refers to the public package, not the private one.
The paths map of a project environment is extracted from the project and manifest files. The path of the package with a given uuid and name X is determined as follows:
1. If the project file in the environment matches uuid and name X, then either:
• The project file has a top-level path entry: uuid is mapped to that path, interpreted relative to the directory containing the project file.
• Otherwise, uuid is mapped to src/X.jl relative to the directory containing the project file.
2. If the above is not the case, the project file has a corresponding manifest file, and the manifest contains a stanza matching uuid, then:
• If it has a path entry, that path is used (relative to the directory containing the manifest file).
• If it has a git-tree-sha1 entry, a deterministic hash of uuid and git-tree-sha1 is computed—call this function slug—and a directory named packages/X/$slug is looked for in each directory in the global Julia DEPOT_PATH array. The first such directory that exists is used.
If, on the other hand, Julia was loading the other Priv package—the one with UUID 2d15fe94-a1f7-436c-a4d8-07a9a496e01c—it finds its stanza in the manifest, sees that it does not have a path entry, but that it does have a git-tree-sha1 entry. It then computes the slug for this UUID/SHA-1 pair, which is HDkrT (the exact details of this computation aren't important, but it is consistent and deterministic). This means that the path to this Priv package will be packages/Priv/HDkrT/src/Priv.jl in one of the package depots. Suppose the contents of DEPOT_PATH is ["/home/me/.julia", "/usr/local/julia"]; then Julia will look at the following paths to see if they exist:
1. /home/me/.julia/packages/Priv/HDkrT
2. /usr/local/julia/packages/Priv/HDkrT
Julia uses the first of these that exists to try to load the public Priv package from the file packages/Priv/HDkrT/src/Priv.jl in the depot where it was found.
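The depot search itself is just a first-match scan over DEPOT_PATH. A small sketch (the function name is invented, and the slug computation is not shown):

```python
import os

def find_entry_point(depot_path, name, slug):
    """Return the entry-point file from the first depot that contains
    packages/<name>/<slug>, or None if no depot has it installed."""
    for depot in depot_path:
        candidate = os.path.join(depot, "packages", name, slug)
        if os.path.isdir(candidate):
            return os.path.join(candidate, "src", name + ".jl")
    return None

# On a machine without these depots this prints None:
print(find_entry_point(["/home/me/.julia", "/usr/local/julia"], "Priv", "HDkrT"))
```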
Here is a representation of a possible paths map for our example App project environment, as provided in the Manifest given above for the dependency graph, after searching the local file system:
paths = Dict(
    # Priv – the private one:
    (UUID("ba13f791-ae1d-465a-978b-69c3ad90f72b"), :Priv) =>
        # relative entry-point inside App repo:
        "/home/me/projects/App/deps/Priv/src/Priv.jl",
    # Priv – the public one:
    (UUID("2d15fe94-a1f7-436c-a4d8-07a9a496e01c"), :Priv) =>
        # package installed in the system depot:
        "/usr/local/julia/packages/Priv/HDkrT/src/Priv.jl",
    # Pub:
    (UUID("c07ecb7d-0dc9-4db7-8803-fadaaeaf08e1"), :Pub) =>
        # package installed in the user depot:
        "/home/me/.julia/packages/Pub/oKpw/src/Pub.jl",
    # Zebra:
    (UUID("f7a24cb4-21fc-4002-ac70-f0e3a0dd3f62"), :Zebra) =>
        # package installed in the system depot:
        "/usr/local/julia/packages/Zebra/me9k/src/Zebra.jl",
)
This example map includes three different kinds of package locations (the first and third are part of the default load path):
1. The private Priv package is "vendored" inside the App repository.
2. The public Priv and Zebra packages are in the system depot, where packages installed and managed by the system administrator live. These are available to all users on the system.
3. The Pub package is in the user depot, where packages installed by the user live. These are only available to the user who installed them.
Package directories
In a package directory, the entry point for a package named X is searched for among the following paths, in order:
• X.jl
• X/src/X.jl
• X.jl/src/X.jl
Which dependencies a package in a package directory can import depends on whether the package contains a project file:
• If it has a project file, it can only import those packages which are identified in the [deps] section of the project file.
• If it does not have a project file, it can import any top-level package—i.e. the same packages that can be loaded in Main or the REPL.
The roots map is determined by examining the contents of the package directory to generate a list of all packages that exist. Additionally, a UUID will be assigned to each entry as follows: For a given package found inside the folder X...
1. If X/Project.toml exists and has a uuid entry, then uuid is that value.
2. If X/Project.toml exists but does not have a top-level UUID entry, uuid is a dummy UUID generated by hashing the canonical (real) path to X/Project.toml.
3. Otherwise (if Project.toml does not exist), then uuid is the all-zero nil UUID.
The dependency graph of a project directory is determined by the presence and contents of project files in the subdirectory of each package. The rules are:
• If a package subdirectory has no project file, then it is omitted from graph and import statements in its code are treated as top-level, the same as the main project and REPL.
• If a package subdirectory has a project file, then the graph entry for its UUID is the [deps] map of the project file, which is considered to be empty if the section is absent.
As an example, suppose a package directory has the following structure and content:
Aardvark/
src/Aardvark.jl:
import Bobcat
import Cobra
Bobcat/
Project.toml:
[deps]
Cobra = "4725e24d-f727-424b-bca0-c4307a3456fa"
Dingo = "7a7925be-828c-4418-bbeb-bac8dfc843bc"
src/Bobcat.jl:
import Cobra
import Dingo
Cobra/
Project.toml:
uuid = "4725e24d-f727-424b-bca0-c4307a3456fa"
[deps]
Dingo = "7a7925be-828c-4418-bbeb-bac8dfc843bc"
src/Cobra.jl:
import Dingo
Dingo/
Project.toml:
uuid = "7a7925be-828c-4418-bbeb-bac8dfc843bc"
src/Dingo.jl:
# no imports
Here is a corresponding roots structure, represented as a dictionary (Bobcat's dummy UUID, written here as bobcat_uuid, is generated by hashing the real path of Bobcat/Project.toml):
roots = Dict(
    :Aardvark => UUID("00000000-0000-0000-0000-000000000000"), # no project file, nil UUID
    :Bobcat   => bobcat_uuid,                                  # dummy UUID, hashed from path
    :Cobra    => UUID("4725e24d-f727-424b-bca0-c4307a3456fa"), # UUID from project file
    :Dingo    => UUID("7a7925be-828c-4418-bbeb-bac8dfc843bc"), # UUID from project file
)
Here is the corresponding graph structure, represented as a dictionary:
graph = Dict(
    # Bobcat (keyed by its dummy UUID, hashed from the path to its project file):
    bobcat_uuid => Dict(
        :Cobra => UUID("4725e24d-f727-424b-bca0-c4307a3456fa"),
        :Dingo => UUID("7a7925be-828c-4418-bbeb-bac8dfc843bc"),
    ),
    # Cobra:
    UUID("4725e24d-f727-424b-bca0-c4307a3456fa") => Dict(
        :Dingo => UUID("7a7925be-828c-4418-bbeb-bac8dfc843bc"),
    ),
    # Dingo:
    UUID("7a7925be-828c-4418-bbeb-bac8dfc843bc") => Dict(),
)
A few general rules to note:
1. A package without a project file can depend on any top-level dependency, and since every package in a package directory is available at the top-level, it can import all packages in the environment.
2. A package with a project file cannot depend on one without a project file since packages with project files can only load packages in graph and packages without project files do not appear in graph.
3. A package with a project file but no explicit UUID can only be depended on by packages without project files since dummy UUIDs assigned to these packages are strictly internal.
Observe the following specific instances of these rules in our example:
• Aardvark can import any of Bobcat, Cobra or Dingo; it does import Bobcat and Cobra.
• Bobcat can and does import both Cobra and Dingo, which both have project files with UUIDs and are declared as dependencies in Bobcat's [deps] section.
• Bobcat cannot depend on Aardvark since Aardvark does not have a project file.
• Cobra can and does import Dingo, which has a project file and UUID, and is declared as a dependency in Cobra's [deps] section.
• Cobra cannot depend on Aardvark or Bobcat since neither have real UUIDs.
• Dingo cannot import anything because it has a project file without a [deps] section.
The paths map in a package directory is simple: it maps subdirectory names to their corresponding entry-point paths. In other words, if the path to our example package directory is /home/me/AnimalPackages then the paths map could be represented by this dictionary:
paths = Dict(
    (UUID("00000000-0000-0000-0000-000000000000"), :Aardvark) =>
        "/home/me/AnimalPackages/Aardvark/src/Aardvark.jl",
    (bobcat_uuid, :Bobcat) => # Bobcat's dummy UUID, hashed from its project file path
        "/home/me/AnimalPackages/Bobcat/src/Bobcat.jl",
    (UUID("4725e24d-f727-424b-bca0-c4307a3456fa"), :Cobra) =>
        "/home/me/AnimalPackages/Cobra/src/Cobra.jl",
    (UUID("7a7925be-828c-4418-bbeb-bac8dfc843bc"), :Dingo) =>
        "/home/me/AnimalPackages/Dingo/src/Dingo.jl",
)
Since all packages in a package directory environment are, by definition, subdirectories with the expected entry-point files, their paths map entries always have this form.
Environment stacks
The third and final kind of environment is one that combines other environments by overlaying several of them, making the packages in each available in a single composite environment. These composite environments are called environment stacks. The Julia LOAD_PATH global defines an environment stack—the environment in which the Julia process operates. If you want your Julia process to have access only to the packages in one project or package directory, make it the only entry in LOAD_PATH. It is often quite useful, however, to have access to some of your favorite tools—standard libraries, profilers, debuggers, personal utilities, etc.—even if they are not dependencies of the project you're working on. By adding an environment containing these tools to the load path, you immediately have access to them in top-level code without needing to add them to your project.
The mechanism for combining the roots, graph and paths data structures of the components of an environment stack is simple: they are merged as dictionaries, favoring earlier entries over later ones in the case of key collisions. In other words, if we have stack = [env₁, env₂, …] then we have:
roots = reduce(merge, reverse([roots₁, roots₂, …]))
graph = reduce(merge, reverse([graph₁, graph₂, …]))
paths = reduce(merge, reverse([paths₁, paths₂, …]))
The subscripted rootsᵢ, graphᵢ and pathsᵢ variables correspond to the subscripted environments, envᵢ, contained in stack. The reverse is present because merge favors the last argument rather than first when there are collisions between keys in its argument dictionaries. There are a couple of noteworthy features of this design:
1. The primary environment—i.e. the first environment in a stack—is faithfully embedded in a stacked environment. The full dependency graph of the first environment in a stack is guaranteed to be included intact in the stacked environment including the same versions of all dependencies.
2. Packages in non-primary environments can end up using incompatible versions of their dependencies even if their own environments are entirely compatible. This can happen when one of their dependencies is shadowed by a version in an earlier environment in the stack (either by graph or path, or both).
Since the primary environment is typically the environment of a project you're working on, while environments later in the stack contain additional tools, this is the right trade-off: it's better to break your development tools but keep the project working. When such incompatibilities occur, you'll typically want to upgrade your dev tools to versions that are compatible with the main project.
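The same merge semantics, with earlier environments winning on key collisions, can be sketched with plain Python dicts (the environment contents are invented for illustration):

```python
from functools import reduce

def merge_stack(maps):
    """Merge per-environment maps; earlier environments shadow later ones.
    Reversing first makes the earliest map the last (winning) argument."""
    return reduce(lambda acc, m: {**acc, **m}, reversed(list(maps)), {})

primary = {"Tool": "v1, pinned by the project"}   # first entry in the stack
dev_tools = {"Tool": "v2", "Profiler": "p1"}      # tools environment stacked later
merged = merge_stack([primary, dev_tools])
print(merged["Tool"])      # v1, pinned by the project
print(merged["Profiler"])  # p1
```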
Conclusion
Federated package management and precise software reproducibility are difficult but worthy goals in a package system. In combination, these goals lead to a more complex package loading mechanism than most dynamic languages have, but it also yields scalability and reproducibility that is more commonly associated with static languages. Typically, Julia users should be able to use the built-in package manager to manage their projects without needing a precise understanding of these interactions. A call to Pkg.add("X") will add the package X to the appropriate project and manifest files, selected via Pkg.activate("Y"), so that a future call to import X will load X without further thought.
|
{}
|
# Proof that a^0 = 1
I'm trying to prove that a^0 = 1.
So if I define a^1 to be = (a)(1)
and a^n to be = (1)(a)(a)...(a) with the product being taken n times
and a^m to be = (1)(a)(a)...(a) with the product being taken m times
a^n * a^m would then = (1)[(a)(a)...(a) with the product taken n times][(a)(a)...(a) with the product taken m times]
which clearly gives a^n * a^m = a^(n+m)
if m = 0, a^n * a^0 = a^(n+0) = a^n, so a^0 = 1
For some reason this does make sense to me, but I have a feeling the result is not satisfying enough.
jbriggs444
Homework Helper
which clearly gives a^n * a^m = a^(n+m)
But only for n and m both non-zero positive integers.
PeroK
Homework Helper
Gold Member
2020 Award
##a^0 =1 \ (a \ne 0)## by definition.
This definition is chosen so that you have:
##a^na^m = a^{n + m}##
Demystifier
But only for n and m both non-zero positive integers.
Damn it, I forgot to state this. But my proof still doesn't satisfy me for some reason.
##a^0 =1 \ (a \ne 0)## by definition.
This definition is chosen so that you have:
##a^na^m = a^{n + m}##
Are you saying the proof is not valid if I start from a^n*a^m ??
PeroK
Homework Helper
Gold Member
2020 Award
Are you saying the proof is not valid if I start from a^n*a^m ??
I'm saying that, essentially, you cannot prove it. Any more than you can prove that ##0! = 1##.
Sure, if you assume that
##a^na^m = a^{n + m}##
Then ##a^0 = 1## follows from that. But, that's more a motivation for a definition than a proof.
jim mcnamara and jbriggs444
I'm saying that, essentially, you cannot prove it. Any more than you can prove that ##0! = 1##.
Sure, if you assume that
##a^na^m = a^{n + m}##
Then ##a^0 = 1## follows from that. But, that's more a motivation for a definition than a proof.
Wouldn't
"So if I define a^1 to be = (a)(1)
and a^n to be = (1)(a)(a)...(a) with the product being taken n times
and a^m to be = (1)(a)(a)...(a) with the product being taken m times
"
allow it to be constituted as a proof, though, since I'm defining it in that way?
jbriggs444
Homework Helper
Wouldn't
"So if I define a^1 to be = (a)(1)
and a^n to be = (1)(a)(a)...(a) with the product being taken n times
and a^m to be = (1)(a)(a)...(a) with the product being taken m times
"
allow it to be constituted as a proof, though, since I'm defining it in that way?
A proof of what? That is an adequate definition of a^n for n an integer greater than zero. It says nothing about a^0.
It is also redundant. If you define a^n, you've defined a^m. The "n" and the "m" are dummy variables.
Edit: I do not think I was understanding what you were trying to express.
You want to define a^n as the result of evaluating "1(a)...(a)" where there are n a's. In the case of n=0, this means zero a's and it is just "1".
Under this definition, a^0 = 1 by definition (even when a=0) and there is nothing to prove.
Last edited:
Delta2
hilbert2
Gold Member
I think the appropriate way would be to assume that ##f(x)=a^x## is continuous and then show that the sequence
##a^{1/2},a^{1/4},a^{1/8},\dots##
or
##\sqrt{a},\sqrt[4]{a},\sqrt[8]{a},\dots##
has limit 1 when ##a\neq 0##.
It's just a matter of choosing some properties you want the exponential function to have, and then showing that the only value of ##a^0## that is logically compatible with those properties is 1.
Last edited:
scottdave and Demystifier
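The repeated-square-root sequence from the post above is easy to check numerically: for any a > 0 it tends to 1, consistent with defining a^0 = 1 (a quick illustration, not a proof).

```python
def root_sequence(a, terms=60):
    """a**(1/2), a**(1/4), a**(1/8), ...: repeated square roots of a."""
    seq, x = [], float(a)
    for _ in range(terms):
        x = x ** 0.5
        seq.append(x)
    return seq

print(root_sequence(5.0)[-1])  # converges to 1 from above
print(root_sequence(0.2)[-1])  # converges to 1 from below
```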
PeroK
Homework Helper
Gold Member
2020 Award
I think the appropriate way would be to assume that ##f(x)=a^x## is continuous and then show that the sequence
##a^{1/2},a^{1/4},a^{1/8},\dots##
or
##\sqrt{a},\sqrt[4]{a},\sqrt[8]{a},\dots##
has limit 1 when ##a\neq 0##.
It's just a matter of choosing some properties you want the exponential function to have, and then showing that the only value of ##a^0## that is logically compatible with those properties is 1.
That's fine, but perhaps a physicist's view. The function ##a^x## for real ##x## is a more advanced construction. You might hope to resolve what ##a^0## should be while you are still dealing with integer powers.
hilbert2
Gold Member
That's fine, but perhaps a physicist's view. The function ##a^x## for real ##x## is a more advanced construction. You might hope to resolve what ##a^0## should be while you are still dealing with integer powers.
Yes, in my version of the proof the property ##a^{1/n} = \sqrt[n]{a}## is assumed as an "axiom", while a simpler choice could also be possible. I guess we're playing the game of "inventing the exponential for the first time" here, instead of relying on commonly accepted sets of rules.
PeroK
Homework Helper
Gold Member
2020 Award
Yes, in my version of the proof the property ##a^{1/n} = \sqrt[n]{a}## is assumed as an "axiom", while a simpler choice could also be possible. I guess we're playing the game of "inventing the exponential for the first time" here, instead of relying on commonly accepted sets of rules.
That "game" is called pure mathematics!
FactChecker
Gold Member
Sure, if you assume that
##a^na^m = a^{n + m}##
This just looks to me like the associative law of multiplication: ##(aa...a)_{n\text{ times}}(aa...a)_{m\text{ times}} = (aa...a)_{n+m\text{ times}}##
PeroK
Homework Helper
Gold Member
2020 Award
This just looks to me like the associative law of multiplication: ##(aa...a)_{n\text{ times}}(aa...a)_{m\text{ times}} = (aa...a)_{n+m\text{ times}}##
That doesn't get you to ##a^0=1##.
The issue is, for example, that you could verify this for positive integers:
##a^3 a^2 = a^5##
But, if you try to verify this for any integers you have:
##a^2 a^{-2} = 1##
But, you can't verify that ##a^0 =1## as "a multiplied by itself 0 times" is not immediately defined. You have to define ##a^0 =1## in order for your law of indices to extend to integers.
And that's what is done.
scottdave
mathman
##a^{-1}=\frac{1}{a}##. Therefore ##a^1\cdot a^{-1}=a^{1-1}=a^0=\frac{a}{a}=1##.
scottdave, Wes Turner, Delta2 and 1 other person
jbriggs444
Homework Helper
##a^{-1}=\frac{1}{a}##
Only if you define it thus.
PeroK
Homework Helper
Gold Member
2020 Award
##a^{-1}=\frac{1}{a}##. Therefore ##a^1\cdot a^{-1}=a^{1-1}=a^0=\frac{a}{a}=1##.
If you, a priori, assume that a basic law holds when you extend the numbers involved, then:
##a^n b^n = (ab)^n##
Extends to:
##(-1)^{1/2}(-1)^{1/2} =1^{1/2} = 1##
Which is then a "proof" that ##-1 = 1##.
In general, one can think of the expression ##A^B## as the set of all mappings ##f:B\to A##. For arbitrary cardinalities it holds that ##\left\lvert A^B\right\rvert = \left\lvert A\right\rvert ^{\left\lvert B \right\rvert}##. Thus ##a^0 = \left\lvert \{a\}^\emptyset \right\rvert = 1##, as for any set ##A##, there is exactly one mapping ##f:\emptyset\to A##.
One can also use this to semi-prove things like ##0! = 1##. The factorial counts permutations, and there is exactly one "empty permutation". Still, it is safer to simply define ##0! = 1##.
All of this depends on where you are operating. In a group, for instance, we just define ##g^0## to be equal to the identity as the expression ##g^0 ## doesn't really make sense, otherwise. In a semigroup ##s^0 ## might be an ill-defined array of symbols.
In the real numbers, one could think ##\log _a1 = 0 ## iff ##a^0 =1 ##. Which was first, the egg or the chicken? It's a lot of boring debate, to be honest. Let's just say that if the structure permits it, we define ##a^0 = 1 ## where the meanings of the symbols depend on context.
Last edited:
ZeGato, dextercioby and Delta2
Infrared
Gold Member
Maybe this is too basic, but hopefully it helps. Consider the sequence $2,4,8,16,32,\ldots$. The $n$-th term of this sequence is $2^n$. In going from the $n$-th term to the $(n+1)$-st term, we multiply by $2$. If we want this pattern to hold for all integers $n$, then we are forced to have $2^0=1$, $2^{-1}=1/2$, etc. so that our sequence is $\ldots 1/4,1/2,1,2,4,8,\ldots$.
Another way of phrasing this is just that we want $2^{n+1}=2^1\cdot 2^n$ since $2^{a+b}=2^{a}2^b$ is a law that we would like to keep.
Probably you can't prove that $a^n$ is the right thing for nonpositive exponents since usually exponentials are defined first for only when the exponent is a positive integer, and then you extend the definition to integer exponents in the above way, and then to rationals, and then to reals.
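The doubling pattern described above is easy to tabulate exactly with rationals (a small illustration of the extension, not a proof; the function name is made up for the example):

```python
from fractions import Fraction

def doubling_pattern(n_min, n_max):
    """2**n for n in n_min..n_max: each term is twice the previous one,
    so extending the pattern backwards through zero forces 2**0 == 1."""
    return [Fraction(2) ** n for n in range(n_min, n_max + 1)]

print(doubling_pattern(-2, 3))  # 1/4, 1/2, 1, 2, 4, 8
```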
mathman
Only if you define it thus.
How else would you define it?
jbriggs444
How else would you define it?
In a conversation where one is discussing the definition of ##a^0##, introducing a definition of ##a^{-1}## seems premature.
So at what point can I just define something without proving it's true, and have a result that's true regardless of whether the definition is true or not?
I guess a better way to say it is: when can I make an assumption?
PeroK
So at what point can I just define something without proving it's true, and have a result that's true regardless of whether the definition is true or not?
I guess a better way to say it is: when can I make an assumption?
The fundamental issue is that when you use some mathematical symbols, you must define what you mean by that arrangement of symbols. Until you know what you mean by those symbols, you cannot start to do mathematics using them. In this case, for example, you might write:
##2^0##
But, what does that mean? There's no immediate way to "multiply 2 by itself 0 times". Unlike ##2^1, 2^2, 2^3 \dots ##, which have a simple, clear definition.
My recommended approach is to define ##2^0 = 1## before you go any further. Then you know what those symbols mean.
Now, of course, you need to be careful that a definition is consistent with other definitions, and you need to understand the implications of a certain definition.
In this case, the only other candidate might be to define ##2^0 = 0##. But, when you look at the way powers work, you see that defining ##2^0 =1## is logical and consistent.
jack action
My recommended approach is to define ##2^0 = 1## before you go any further (why?). Then you know what those symbols mean.
Now, of course, you need to be careful that a definition is consistent with other definitions, and you need to understand the implications of a certain definition.
In this case, the only other candidate might be to define ##2^0 = 0##. But, when you look at the way powers work (you define how powers work afterward?), you see that defining ##2^0 =1## is logical and consistent.
But that means that you basically define what ##2^{-1}## means and then you adjust your "guess" to ##2^0 = 1##.
In a conversation where one is discussing the definition of ##a^0##, introducing a definition of ##a^{-1}## seems premature.
Not only is defining ##a^{-1}## not premature, it is essential. Otherwise, you are only guessing arbitrarily, as @PeroK explained, and you modify your guess as you (finally) define ##a^{-1}##. ##a^0 = 1## makes sense only if ##a^{-n} =\frac{1}{a^n}##; then a simple limit approach proves the definition of ##a^0##. Therefore, I tend to support @mathman 's approach:
##a^{-1}=\frac{1}{a}##. Therefore ##a^1\cdot a^{-1}=a^{1-1}=a^0=\frac{a}{a}=1##.
---------------------------------------
If you, a priori, assume that a basic law holds when you extend the numbers involved, then:
##a^n b^n = (ab)^n##
Extends to:
##(-1)^{1/2}(-1)^{1/2} =1^{1/2} = 1##
Which is then a "proof" that ##-1 = 1##.
Doesn't that only prove that ##-1 \times -1 = 1##?
##a^n## and ##a^{-n}## both have distinct definitions, so stating that both «source» values ##a## are the same because they give the same result is as fair as saying that since ##\sin\frac{\pi}{2} = 1## and ##\cos 0=1##, then ##\frac{\pi}{2} = 0## must be true.
PeroK
|
{}
|
# R Dataset / Package HistData / ZeaMays
Documentation
## Darwin's Heights of Cross- and Self-fertilized Zea May Pairs
### Description
Darwin (1876) studied the growth of pairs of Zea mays (aka corn) seedlings, one produced by cross-fertilization and the other produced by self-fertilization, but otherwise grown under identical conditions. His goal was to demonstrate the greater vigour of the cross-fertilized plants. The data recorded are the final height (inches, to the nearest 1/8th) of the plants in each pair.
In the Design of Experiments, Fisher (1935) used these data to illustrate a paired t-test (well, a one-sample test on the mean difference, cross - self). Later in the book (section 21), he used this data to illustrate an early example of a non-parametric permutation test, treating each paired difference as having (randomly) either a positive or negative sign.
### Usage
data(ZeaMays)
### Format
A data frame with 15 observations on the following 5 variables.
pair
pair number, a numeric vector
pot
pot, a factor with levels 1 2 3 4
cross
height of cross fertilized plant, a numeric vector
self
height of self fertilized plant, a numeric vector
diff
cross - self for each pair
### Details
In addition to the standard paired t-test, several types of non-parametric tests can be contemplated:
(a) Permutation test, where the values of, say self are permuted and diff=cross - self is calculated for each permutation. There are 15! permutations, but a reasonably large number of random permutations would suffice. But this doesn't take the paired samples into account.
(b) Permutation test based on assigning each abs(diff) a + or - sign, and calculating the mean(diff). There are 2^{15} such possible values. This is essentially what Fisher proposed. The p-value for the test is the proportion of absolute mean differences under such randomization which exceed the observed mean difference.
(c) Wilcoxon signed rank test: tests the hypothesis that the median signed rank of the diff is zero, or that the distribution of diff is symmetric about 0, vs. a location shifted alternative.
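Fisher's sign-flip randomization, option (b) above, is short enough to sketch directly. The sketch below is in Python rather than R, and the paired differences are illustrative placeholders, NOT the actual ZeaMays cross - self values:

```python
# Sign-flip randomization test (Fisher): enumerate all 2^15 ways of
# assigning a +/- sign to each absolute paired difference, and count how
# often the resulting mean is at least as extreme as the observed one.
from itertools import product

# Illustrative paired differences (cross - self); placeholders, NOT the
# actual ZeaMays values.
diffs = [6.1, -8.4, 1.0, 2.0, 0.7, 2.9, 3.5, 5.1,
         1.8, 3.6, 7.0, 3.0, 9.3, 7.5, -6.0]
observed = sum(diffs) / len(diffs)

exceed = 0
total = 0
for signs in product((-1, 1), repeat=len(diffs)):
    m = sum(s * abs(d) for s, d in zip(signs, diffs)) / len(diffs)
    total += 1
    if abs(m) >= abs(observed):
        exceed += 1

p_value = exceed / total  # two-sided p-value over all 2^15 sign assignments
```

The complete enumeration is feasible here only because 2^15 = 32768 is small; for larger samples one would draw random sign vectors instead.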
### Source
Darwin, C. (1876). The Effect of Cross- and Self-fertilization in the Vegetable Kingdom, 2nd Ed. London: John Murray.
Andrews, D. and Herzberg, A. (1985) Data: a collection of problems from many fields for the student and research worker. New York: Springer. Data retrieved from: https://www.stat.cmu.edu/StatDat/
### References
Fisher, R. A. (1935). The Design of Experiments. London: Oliver & Boyd.
### See Also
wilcox.test
independence_test in the coin package, a general framework for conditional inference procedures (permutation tests)
### Examples
```r
data(ZeaMays)

##################################
## Some preliminary exploration ##
##################################

boxplot(ZeaMays[,c("cross", "self")], ylab="Height (in)", xlab="Fertilization")

# examine large individual diff/ces
largediff <- subset(ZeaMays, abs(diff) > 2*sd(abs(diff)))
with(largediff, segments(1, cross, 2, self, col="red"))

# plot cross vs. self. NB: unusual trend and some unusual points
with(ZeaMays, plot(self, cross, pch=16, cex=1.5))
abline(lm(cross ~ self, data=ZeaMays), col="red", lwd=2)

# pot effects ?
anova(lm(diff ~ pot, data=ZeaMays))

##############################
## Tests of mean difference ##
##############################

# Wilcoxon signed rank test
# signed ranks:
with(ZeaMays, sign(diff) * rank(abs(diff)))
wilcox.test(ZeaMays$cross, ZeaMays$self, conf.int=TRUE, exact=FALSE)

# t-tests
with(ZeaMays, t.test(cross, self))
with(ZeaMays, t.test(diff))

mean(ZeaMays$diff)

# complete permutation distribution of diff, for all 2^15 ways of assigning
# one value to cross and the other to self (thx: Bert Gunter)
N <- nrow(ZeaMays)
allmeans <- as.matrix(expand.grid(as.data.frame(
                matrix(rep(c(-1,1), N), nrow=2)))) %*% abs(ZeaMays$diff) / N

# upper-tail p-value
sum(allmeans > mean(ZeaMays$diff)) / 2^N
# two-tailed p-value
sum(abs(allmeans) > mean(ZeaMays$diff)) / 2^N

hist(allmeans, breaks=64, xlab="Mean difference, cross-self",
     main="Histogram of all mean differences")
abline(v=c(1, -1)*mean(ZeaMays$diff), col="red", lwd=2, lty=1:2)

plot(density(allmeans), xlab="Mean difference, cross-self",
     main="Density plot of all mean differences")
abline(v=c(1, -1)*mean(ZeaMays$diff), col="red", lwd=2, lty=1:2)
```
--
Dataset imported from https://www.r-project.org.
Picostat Manual
###### How To Register With a Username
1. Go to the user registration page.
4. Click Submit.
5. Click the link that was sent to the email address you registered with.
6. Clicking the link will open another page on Picostat where you can select a password.
7. Click Save and enter any profile details you wish to enter.
###### How To Register With Google Single Sign On (SSO)
1. Go to the user login page.
5. Google will redirect you back to Picostat with your new account created and you will be logged in.
6. Enter any profile details you wish to share.
1. Go to the user login page.
3. Click "Login". You will be redirected to your user homepage authenticated.
1. Go to the user login page.
3. If you already registered with Picostat via Google SSO, you will be redirected to your user homepage authenticated.
###### How To Import a Dataset
1. Create a Picostat account or login with your existing picostat account (see above).
2. Go to the dataset import page.
3. Select a license for the dataset. The default is "No License", which still allows Picostat to host a copy of the dataset as per the privacy policy. You may wish to uncheck the "Public" option if you do not wish to share your dataset with others. R datasets that come with the R distribution have a GNU General Public License v3.0, which may also be selected from the Picostat dropdown.
4. Enter a title for the dataset.
5. Choose a dataset input method. Available options include:
• Random data - this populates your dataset with random numbers between 0 and 100. You can specify the number of rows and columns for the random dataset.
• CSV, TSV or TXT file - you will have the option to upload a file within the current file size limit and also specify the header and whether or not the dataset is a contingency table. With contingency tables, the first column becomes a label for the rows. Currently, Picostat has limited support for contingency tables. Choose "Yes" for the Header option if the first line of the data contains titles for the columns. Also choose the separator for the dataset: a separator is what breaks the data up; in many cases, a comma separates the data values in a row. You will also have the option to add documentation in the form of keyboarded text and uploaded documentation attachments. You can also specify a license for the documentation.
• Copy and Paste. This selection contains many of the same fields as importing a file with an additional textarea to copy and paste data to.
• Empty dataset. Start with a blank dataset and manually add data with the Picostat dataset editor.
• Excel file - Choose this option if you would like to convert your Excel spreadsheet to a Picostat dataset. With this selection, you will have the option to specify whether to use the first row in the Excel file as column names. If you would like to use a specific sheet, you can specify it by entering its name in the text input.
• sas7bdat file - SAS is a powerful statistical software package that has its own proprietary file format. Choose this option if you are importing a SAS file.
• SPSS sav file - SPSS is a statistical package owned by IBM. You can import SPSS files by choosing this option.
6. Choose whether or not the dataset contains a header. Some of the dataset input methods allow you to specify whether or not a Header exists on the file. Sometimes dataset files contain a Header as the first row which names the columns. If you choose "Yes" to this, the first row in the dataset will become column headers.
7. You can also add documentation and specify a documentation license. This can be used to help explain your dataset to those unfamiliar with it.
8. Choose whether or not to upload any supporting attachments.
9. Pass the captcha. To prevent spam submissions, Picostat has a captcha which is used to prevent automated submissions by bots.
10. Choose a privacy setting for the dataset. You can also specify whether or not the dataset is Public. If you uncheck this setting, only you and the Picostat administrator will be able to view the dataset.
11. Submit the form. Once the form is validated, you will be redirected to the dataset homepage where you can choose to edit or perform statistical operations on the dataset.
###### How To Perform Statistical Analysis with Picostat
1. Go to any dataset homepage. You can get a full list at the dashboard.
2. Near the top of the page there will be two drop-downs: one for analysis and one for education. Here we will choose Analysis. Choose from one of the following:
• Numerical Summaries - Here you can get the:
1. Arithmetic mean
2. Median
3. Quartiles
4. Minimum and Maximum
5. Stem-and-leaf plot
6. Standard deviation and Variance
7. IQR
8. Cumulative frequencies
• Plot - a plot of two columns on the cartesian coordinate system
• Boxplot - a Boxplot (box-and-whisker plot) of a column.
• Correlation Coefficient - Compute the correlation coefficient between two columns.
• Cumulative Frequency Histogram - Display a cumulative frequency histogram
• Dotplot
• Hollow Histogram - Plot two columns on the same histogram with a different color for each column.
• Pie Chart
• Regression - Perform a simple linear regression and compute the p-value and regression line. Also plots the data with the regression line.
• Stem and Leaf Plots - Plot a one or two-sided stem-and-leaf plot from one or two columns respectively.
• Visual Summaries - plots the following:
1. Frequency Histogram
2. Relative Frequency Histogram
3. Cumulative Frequency Histogram
4. Boxplot (Box-and-whisker plot)
5. Dotplot
|
{}
|
# Visual Revelations, Howard Wainer
I’m starting to recognize several clusters of data visualization books. These include:
(Of course this list calls out for a flowchart or something to visualize it!)
Howard Wainer’s Visual Revelations falls in this last category. And it’s no surprise Wainer’s book emulates Tufte’s, given how often the author refers back to Tufte’s work (including comments like “As Edward Tufte told me once…”). And The Visual Display of Quantitative Information is still probably the best introduction to the genre. But Visual Revelations is different enough to be a worthwhile read too if you enjoy such books, as I do.
Most of all, I appreciated that Wainer presents many bad graph examples found “in the wild” and follows them with improvements of his own. Not all are successful, but even so I find this approach very helpful for learning to critique and improve my own graphics. (Tufte’s classic book critiques plenty, but spends less time on before-and-after redesigns. On the other hand, Kosslyn’s book is full of redesigns, but his “before” graphs are largely made up by him to illustrate a specific point, rather than real graphics created by someone else.)
Of course, Wainer covers the classics like John Snow’s cholera map and Minard’s plot of Napoleon’s march on Russia (well-trodden by now, but perhaps less so in 1997?). But I was pleased to find some fascinating new-to-me graphics. In particular, the Mann Gulch Fire section (p. 65-68) gave me shivers: it’s not a flashy graphic, but it tells a terrifying story and tells it well.
[Edit: I should point out that Snow's and Minard's plots are so well-known today largely thanks to Wainer's own efforts. I also meant to mention that Wainer is the man who helped bring into print an English translation of Jacques Bertin's seminal Semiology of Graphics and a replica volume of William Playfair's Commercial and Political Atlas and Statistical Breviary. He has done amazing work at unearthing and popularizing many lost gems of historical data visualization!
See also Alberto Cairo's review of a more recent Wainer book.]
Finally, Wainer’s tone overall is also much lighter and more humorous than Tufte’s. His first section gives detailed advice on how to make a bad graph, for example. I enjoyed Wainer’s jokes, though some might prefer more gravitas.
Below are my notes-to-self, with things-to-follow-up in bold:
• p. 11: “When looking at a good graph, your response should never be ‘what a great graph!’ but ‘what interesting data!’” It’s a matter of taste and context, but my personal interests align with Wainer’s here. I’m currently much less interested in artsy visualizations that do not aid understanding; I’m reminded of one recently highlighted on FlowingData with the comment, “I can’t say how accurate it is or if the described mechanisms are accurate, but it sure is fun to play with.”
• p. 43: “after more than two hundred practice exercises with [bivariate choropleth] maps, graduate students in perception at Johns Hopkins University were unable to internalize the legend.” Read the study: Wainer and Francolini (1980)
• p. 47: Sandy Zabell used graphs to highlight “inconsistencies, clerical errors, and a remarkable amount of other information” that earlier researchers had missed in the London Bills of Mortality. I’d love to find these graphs: Zabell, 1976, “Arbuthnot, Heberden and the Bills of Mortality,” Technical Report #40, Department of Statistics, University of Chicago.
• p. 47: data graphics were uncommon, even in scientific journals, before William Playfair — but how & when did journals start including graphics?
• p. 52: the famous O-ring example is a case of plotting the wrong data for the question at hand. In the plot used for decision-making, they showed failures vs. temperature only for those space shuttle flights with no failures. That is, if $x$ is the number of failures and $T$ is the temperature, they plotted $(x | x>0)$ vs $T$, rather than all $x$ vs $T$. Hence, they had a distorted view of $p(x>0 | T)$. Perhaps a related idea is key to Wald’s study of armoring airplanes (p. 58): consider not just when you’ve observed the event of interest, but also when you haven’t.
• p. 55: “Good graphs can make difficult problems trivial.” For a great example, see the inclined-plane question on p. 71-72, which can be answered either with trigonometry and calculus… or at a glance with the right graph. Also related to Colin Ware’s focus on “external cognition”: how resources outside the mind can be used to boost the mind.
• p. 80: “a reasonable strategy in what ought to be an iterative process. Sometimes one has a data-related question and then draws a graph to try to answer it. After drawing the graph a new question might suggest itself, and hence a different graph, better suited to this new question (perhaps with additional data), is drawn. This in turn suggests something else, and so on, until either the data or the grapher is exhausted. [...] My experience suggests that if you begin with a general-purpose plot there is a greater chance of finding what you had not expected.” This is my experience as well, and reminds me also of Hadley Wickham’s description of statistics as iterating between models and graphics.
• p. 84: Futurism that actually came true, for once! “Indeed, it is easy to imagine a general-purpose device that might have (among many other things) all of the Los Angeles bus routes inside [...] I see no reason why StreetmapTM-like software won’t become available eventually for cheap pocket computers of the sort now called ‘personal organizers.’”
• p. 93-94: examples of misuse of double y-axes, and a comment that it would only be okay if “the same dependent variable can be represented in a transformed way. For example, plot log of per pupil expenditures on the left and per pupil expenditures on the right, the latter spaced to match the left-hand scale [...] Ironically, no graphics package I know of allows this latter use to be done easily, whereas the misuse is often a touted option.”
• p. 97: Wainer really wants us to round the data for presentation: Readers rarely comprehend more than 2 digits easily, statisticians can rarely justify more than 2 digits of precision, and more than 2 digits are rarely of practical use.
I love this part: “The standard error of any statistic is proportional to one over the square root of the sample size. God did this, and there is nothing we can do to change it.” (Say you print 2 digits of a correlation. That implies its standard error is less than 0.005, which requires a sample size on the order of 40,000 — do you really have that much data?)
And then on p. 99: “Round the numbers, and if you must, insert a footnote proclaiming that the unrounded details are available from the author. Then sit back and wait for the deluge of requests.”
• p. 101: nice example of spacing rows of a table by the values of one column, showing clusters in the data.
• Ch. 11 and 12: he argues in favor of Nightingale roses and trilinear plots but I don’t find them of much use, except maybe the example on p. 116.
• p. 111: people have been complaining about the size and complexity of big data for centuries! William Playfair’s classic 1786 Commercial and Political Atlas was a response to these kinds of concerns.
• p. 121-123: I love these implicit graphs or nomographs, explicitly making handy tools out of data graphics. Jonathan Rougier has an example of using nomograms to turn a predictive statistical model into something easily used in the field by non-math-savvy folks.
• p. 128: Besides graphics, Wainer has a strong interest in education and standardized testing: “Basing a characterization of an examinee’s ability to understand graphical displays on a question paired with a flawed display is akin to characterizing someone’s ability to read by asking questions about a passage full of spelling and grammatical errors. What are we really testing?”
• p. 138: great back-to-back stem and leaf plot, instead of an unhelpful table, for comparing test scores in US states vs. international countries.
• p. 147, 149: I’m not too pleased with either Cleveland’s clean-but-boring computer-defaults plot or with Wainer’s cheesy Playfair-style remake. This is where I and many other statisticians feel a huge gap in our data-graphics skillset: once you’re happy with the content and inherent form of your graph, how do you make it look nice too, without being either bland or tacky?
• Ch. 20: good advice on making readable slides, still aimed at overhead transparencies but largely applicable to PowerPoint etc. too. “If you can’t read it when you are against the back wall, either redo the ineffectual overheads or have as many of the back rows of chairs removed as necessary.”
And of course limit the number of fonts, colors, significant digits, and equations in your talk.
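Wainer's p. 97 rounding argument above is easy to verify numerically (a back-of-envelope sketch, using his approximation se ≈ 1/√n for a correlation near zero):

```python
# Two printed decimal digits of a correlation imply its standard error is
# below 0.005; with se ≈ 1/sqrt(n), that calls for n on the order of 40,000.
n = 40_000
se = 1 / n ** 0.5
print(se)  # 0.005
```

So unless the sample runs to tens of thousands, the second decimal place of a reported correlation is noise.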
|
{}
|
mersenneforum.org > Math > Smallest prime of the form a^2^m + b^2^m, m>=14
2018-03-15, 17:26 #12
paulunderwood
Sep 2002
Database er0rr
D6C₁₆ Posts
Quote:
Originally Posted by JeppeSN The next line takes more time. The question is: Hasn't this been considered before?! /JeppeSN
Are you computing with PFGW?
Edit:
I ran PFGW for bases less than or equal to 50 and found no PRP:
Code:
cat Jeppe.abc2
ABC2 $a^16384+$b^16384
a: from 2 to 50 step 2
b: from 3 to 50 step 2
Code:
./pfgw64 -f -N Jeppe.abc2
Last fiddled with by paulunderwood on 2018-03-15 at 18:24
2018-03-15, 19:00 #13
science_man_88
"Forget I exist"
Jul 2009
Dumbassville
8,369 Posts
Quote:
Originally Posted by JeppeSN Me: To be explicit, this is what brute force finds: Code: m=0, 2^1 + 1^1 m=1, 2^2 + 1^2 m=2, 2^4 + 1^4 m=3, 2^8 + 1^8 m=4, 2^16 + 1^16 m=5, 9^32 + 8^32 m=6, 11^64 + 8^64 m=7, 27^128 + 20^128 m=8, 14^256 + 5^256 m=9, 13^512 + 2^512 m=10, 47^1024 + 26^1024 m=11, 22^2048 + 3^2048 m=12, 53^4096 + 2^4096 m=13, 72^8192 + 43^8192 The next line takes more time. The question is: Hasn't this been considered before?! /JeppeSN
Probably has; think of modular tricks like Fermat's little theorem and extensions. Mod 8 it becomes a^0+b^0, for example; mod 9 it's a^4+b^4. These work for all bases coprime to 8 or 9 respectively.
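The brute-force table quoted above can be reproduced with a short script. This is a sketch, not the code any poster actually ran (they used PARI/GP, Mathematica or PFGW); it uses a self-contained Miller-Rabin test, and the search bound `limit` is an assumption:

```python
# For each m, find the pair a > b >= 1 minimizing a^(2^m) + b^(2^m)
# over prime values, as in JeppeSN's table.
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def smallest_prime_pair(m, limit=60):
    """Pair (a, b), a > b >= 1, giving the least prime a^(2^m) + b^(2^m)."""
    N = 2 ** m
    best = None
    for a in range(2, limit + 1):
        for b in range(1, a):
            v = a ** N + b ** N
            if is_probable_prime(v) and (best is None or v < best[0]):
                best = (v, a, b)
    return None if best is None else (best[1], best[2])
```

For m up to about 7 this reproduces the table in seconds; beyond that, sieving plus PFGW (as discussed below in the thread) is the practical route.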
2018-03-16, 06:18 #14
JeppeSN
"Jeppe"
Jan 2016
Denmark
240₈ Posts
Quote:
Originally Posted by paulunderwood Are you computing with PFGW
See A291944 in OEIS; it is not public yet, so see its history.
I used PARI/GP ispseudoprime in a loop, like the code shown there, and I suspect Robert G. Wilson v used Mathematica. Maybe PFGW is faster?
There is no point in all of us running the same tests, except whoever uses the best tools will "win" the competition. I just thought maybe this had been established already.
/JeppeSN
2018-03-16, 08:30 #15
axn
Jun 2003
2·13·181 Posts
Quote:
Originally Posted by JeppeSN There is no point in all of us running the same tests, except whoever uses the best tools will "win" the competition. I just thought maybe this had been established already.
I will try to write a custom sieve during the weekend. After that I can post the sieve output here, so that interested people can test ranges.
PFGW should indeed be faster than Pari or Mathematica.
EDIT:- Testing 71^16384+46^16384, PFGW took about 20s, while Pari took 2mins and change. So PFGW is about 6x faster.
Last fiddled with by axn on 2018-03-16 at 08:33
2018-03-16, 10:33 #16
paulunderwood
Sep 2002
Database er0rr
2²·859 Posts
Quote:
Originally Posted by axn I will try to write a custom sieve during the weekend. After that I can post the sieve output here, so that interested people can test ranges. PFGW should indeed be faster than Pari or Mathematica. EDIT:- Testing 71^16384+46^16384, PFGW took about 20s, while Pari took 2mins and change. So PFGW is about 6x faster.
PFGW will be more than 6x faster with much bigger numbers.
2018-03-16, 10:36 #17
JeppeSN
"Jeppe"
Jan 2016
Denmark
2⁵·5 Posts
Something to note: Use here the convention $$a > b > 0$$. There is a slight chance that the smallest odd prime $$a^{16384}+b^{16384}$$ does not minimize $$a$$. As an example, $$677 < 678$$, but still $$677^{128}+670^{128} > 678^{128}+97^{128}$$ (both of these sums of like powers are prime). However, for the smallest one with that exponent, $$27^{128}+20^{128}$$, the value $$a=27$$ is also minimal. And I think this will be the case generally, because the bases $$a$$ and $$b$$ will be relatively small (I conjecture). But we will check for that with 16384 once axn's excellent initiative has come to fruition. /JeppeSN
2018-03-16, 12:19 #18
ATH
Einyen
Dec 2003
Denmark
B95₁₆ Posts
Quote:
Originally Posted by axn I will try to write a custom sieve during the weekend. After that I can post the sieve output here, so that interested people can test ranges. PFGW should indeed be faster than Pari or Mathematica. EDIT:- Testing 71^16384+46^16384, PFGW took about 20s, while Pari took 2mins and change. So PFGW is about 6x faster.
I'm already working on a<b<=1000, or b<a<=1000, whichever convention you use.
I used fbncsieve to sieve the factors k*2^14+1. It took only ~2min up to k=10^9.
Then I used these prime factors in a quickly written GMP program to sieve an array 1000x1000 of a,b. First I removed all values where b>=a, a<2, b<2, a%2=b%2 (both odd or both even), and gcd(a,b)>1. Down to 61K candidates at k=462M.
I'm running pfgw while continuing to trial factor. So far no PRP in 2<=b<=16 and b<a<=1000.
2018-03-16, 13:15 #19
axn
Jun 2003
2·13·181 Posts
Quote:
Originally Posted by ATH I'm already working on a<b<=1000 ... First I removed all values where b>=a, a<2, b<2, a%2=b%2 (both odd or both even), and gcd(a,b)>1. Down to 61K candidates at k=462M.
Cool. But a custom sieve will be much more efficient. Hopefully that will be useful for m >= 15.
Quote:
Originally Posted by ATH I'm running pfgw while continuing to trial factor. So far no PRP in 2<=b<=16 and b<a<=1000.
Since the objective is to find the smallest, you should test in a different order.
2<=a<=1000, 1<=b<a
Last fiddled with by axn on 2018-03-16 at 13:15
2018-03-16, 13:51 #20
a1call
"Rashid Naimi"
Oct 2015
Remote to Here/There
19·101 Posts
Stating the obvious for the sake of having it stated:
a+b | a^q + b^q for all odd q
And
a+bi | a^q + b^q for all even q
So the result will be definitely not prime over the imaginary field.
Corrections are welcome.
2018-03-16, 13:53 #21
JeppeSN
"Jeppe"
Jan 2016
Denmark
2⁵×5 Posts
Quote:
Originally Posted by axn Since the objective is to find the smallest, you should test in a different order.
I was about to say just that. If the (a,b) space is visualized as this triangle:
Code:
(2,1)
(3,2)
(4,1) (4,3)
(5,2) (5,4)
(6,1) (6,3) (6,5)
(7,2) (7,4) (7,6)
(8,1) (8,3) (8,5) (8,7)
. .
. .
. .
it is best to search from the top and down, not from left to right.
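The triangle rows can be generated directly. A sketch (the parity filter is the only constraint the triangle shows: if a and b have the same parity, a^N + b^N is even and greater than 2, hence composite):

```python
# Enumerate candidate pairs (a, b) row by row, as in the triangle above:
# a = 2, 3, 4, ... with b < a and a + b odd.
def pairs(max_a):
    for a in range(2, max_a + 1):
        for b in range(1, a):
            if (a + b) % 2 == 1:
                yield (a, b)
```

Consuming this generator row by row is exactly the "top down" search order recommended above.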
/JeppeSN
2018-03-16, 14:21 #22
axn
Jun 2003
2×13×181 Posts
Quote:
Originally Posted by axn But a custom sieve will be much more efficient.
That was somewhat presumptuous of me. What is the rate at which your sieve is progressing thru the factor candidates?
If you can post your sieve source, I can use that as a starting point.
|
{}
|
# soundness and completeness
Lecture 39: soundness and completeness.

In the last two lectures, we have looked at propositional formulas from two perspectives: truth and provability. Let φ1, φ2, …, φn and ψ be formulas of propositional logic. There are two ways to establish such a formula:

- Syntactic method (⊢ φ): prove the validity of formula φ through natural deduction rules, i.e., a proof system.
- Semantic method (⊨ φ): show that φ evaluates to true under every interpretation.

Our goal now is to (meta) prove that the two interpretations match each other. We will prove:

1. Soundness: if φ1, φ2, …, φn ⊢ ψ, then φ1, φ2, …, φn ⊨ ψ. A proof system is sound if everything that is provable is in fact true.
2. Completeness: if φ1, φ2, …, φn ⊨ ψ, then φ1, φ2, …, φn ⊢ ψ. A system is complete if and only if every valid formula can be derived from the axioms and the inference rules; equivalently, the proof system can derive as conclusion every formula that is a logical consequence of the set of premises.

Together these say that φ1, φ2, …, φn ⊢ ψ holds iff φ1, φ2, …, φn ⊨ ψ holds.

Soundness implies consistency; consider the case of propositional logic: no formula and its negation are both tautologies. Soundness is also useful for ensuring the non-existence of a proof for a given sequent: exhibit one interpretation satisfying the premises but not the conclusion.

Completeness is the property of being able to prove all true things. It should not be confused with deciding every statement. For example, we can't prove "it is raining", but nor can we prove "it is not raining"; in some interpretations it is raining, and in others it is not. However, we do believe that mathematical statements are either true or false; there should only be one interpretation of "isZero", and a number either is zero or it isn't. Gödel's theorem says that a system strong enough for arithmetic cannot prove every such truth.

In tool terms (think of a type checker or program analysis): for the system to be sound, it need not prevent false positives, but only false negatives; to prevent false positives, it must be complete. A perfect tool would achieve both, but by Gödel's theorem that is not possible in general.

(For the philosophical vocabulary: a deductive argument is valid if and only if its form makes it impossible for the premises to be true and the conclusion nevertheless to be false; otherwise it is invalid. An argument is sound if and only if it is both valid and all of its premises are actually true.)

Soundness is proved by induction on the structure of the derivation. Representative cases:

- (absurd): from A ⊢ φ and A ⊢ ¬φ conclude A ⊢ ψ. Inductively, we assume that every I satisfying A has I ⊨ φ and I ⊨ ¬φ. No interpretation satisfies both, so the conclusion "for all I satisfying A, I ⊨ ψ" is vacuously true: there are no interpretations satisfying A.
- (∧ intro): from A ⊢ φ and A ⊢ ψ conclude A ⊢ φ ∧ ψ. The rules for evaluating φ∧ψ[I] immediately show that I ⊨ φ∧ψ, as required.
- (LEM): we wish to show that in any I satisfying the assumptions, I ⊨ φ ∨ ¬φ; this holds because every interpretation makes φ either true or false.

Completeness is proved in three steps:

Step 1: We show that ⊨ φ1 → (φ2 → (φ3 → (…(φn → ψ)…))) holds.
Step 2: We show that ⊢ φ1 → (φ2 → (φ3 → (…(φn → ψ)…))) is valid.
Step 3: Finally, we show that φ1, φ2, …, φn ⊢ ψ is valid.
|
{}
|
PREPRINT
# The Dynamical Mass of the Coma Cluster from Deep Learning
Matthew Ho, Michelle Ntampaka, Markus Michael Rau, Minghan Chen, Alexa Lansberry, Faith Ruehle, Hy Trac
arXiv:2206.14834
Submitted on 29 June 2022
## Abstract
In 1933, Fritz Zwicky's famous investigations of the mass of the Coma cluster led him to infer the existence of dark matter \cite{1933AcHPh...6..110Z}. His fundamental discoveries have proven to be foundational to modern cosmology; as we now know, such dark matter makes up 85\% of the matter and 25\% of the mass-energy content in the universe. Galaxy clusters like Coma are massive, complex systems of dark matter, in addition to hot ionized gas and thousands of galaxies, and serve as excellent probes of the dark matter distribution. However, empirical studies show that the total mass of such systems remains elusive and difficult to precisely constrain. Here, we present new estimates for the dynamical mass of the Coma cluster based on Bayesian deep learning methodologies developed in recent years. Using our novel data-driven approach, we predict Coma's $M_{200\mathrm{c}}$ mass within a radius of its center. We show that our predictions are rigorous across multiple training datasets and statistically consistent with historical estimates of Coma's mass. This measurement reinforces our understanding of the dynamical state of the Coma cluster and advances rigorous analyses and verification methods for empirical applications of machine learning in astronomy.
## Preprint
Comment: 15 pages, 3 figures, 1 table, accepted for publication at Nature Astronomy, see https://www.nature.com/articles/s41550-022-01711-1
Subject: Astrophysics - Cosmology and Nongalactic Astrophysics
|
{}
|
Nuclear modification factor and isolated photon cross section for 5.02 TeV pp and p-Pb
Description
These are figures created for the isolated photon cross section and RpPb analysis at 5 TeV in pp and p-Pb. The details of the analysis are in the analysis note: Measurement of isolated photon cross section and RpPb in 5 TeV pp and p-Pb (ANA-1072: https://alice-notes.web.cern.ch/node/1072)
|
{}
|
# What is this code doing?
## Recommended Posts
union
{
enum
{
MAX_BITS_INDEX = 16,
MAX_BITS_MAGIC = 16,
MAX_INDEX = ( 1 << MAX_BITS_INDEX ) - 1,
MAX_MAGIC = ( 1 << MAX_BITS_MAGIC ) - 1
};
struct
{
unsigned IndexNum : MAX_BITS_INDEX;
unsigned MagicNum : MAX_BITS_MAGIC;
};
ulong HandleID;
};
I have been using this code for a while because it worked and I thought I understood it. The union can only hold one type of data, so does that mean that if I set IndexNum and MagicNum, it affects HandleID? And initially IndexNum and MagicNum are set to 16, correct? Also: what does ( 1 << MAX_BITS_MAGIC ) - 1 do? I have never used << besides in console and file output. Thanks
##### Share on other sites
I think that code is based on the resource manager demonstrated by (I think it was) Scott Bilas in Game Programming Gems 1.
Think of the enums as old-style static const variables. They are basically used here to limit the range of IndexNum and MagicNum to a specific bit size, in order to make a handle instance 4 bytes in size (on x86) so it's efficient to pass a handle by value.
The union is simply there (look at the rest of the code from Scott) to allow for easy zeroing out of the handle, for instance: HandleID = 0 causes IndexNum and MagicNum to be zero as well. HandleID itself is composed of the first few bits, which come from IndexNum, and then the next bits, from MagicNum.
##### Share on other sites
Quote:
Original post by biohaz: The union can only hold one type of data, does that mean if I input IndexNum and MagicNum, does this affect HandleID?
Yes. They're two names for the same piece of memory. How they are related is implementation-dependent.
Quote:
and initially IndexNum and MagicNum are set to 16, correct?
No, they are not initialized in the code snippet you posted. They are set to 16 bits in size, that's all. The colon in this case is the bitfield delimiter, not the initializer delimiter.
Quote:
also: what does ( 1 << MAX_BITS_MAGIC ) - 1 do? I have never used << besides in console and fileoutput.
That chunk of code is trying to determine the maximum integral value that could fit in a field containing MAX_BITS_MAGIC number of bits, assuming a two's-complement binary representation of integers. It's a compile-time computation, so the compiler can optimize the actual computation away.
Please note that this tidbit of code is highly implementation-dependent and unlikely to work on, say, a different version of the same compiler, let alone different compilers or different platforms. It doesn't, for example, take into account the current struct packing (alignment) rules.
##### Share on other sites
A union is a user-defined type which allows different "views"/"representations" of the contained data.
What I mean is, when you use this union, it reserves a memory area of sizeof(ulong), since that is the largest datatype contained in the union. Then, you can access it using the "representation" you want.
union BitConverter
{
    struct
    {
        WORD highWord;
        WORD lowWord;
    };
    DWORD longNumber;
};
Using this BitConverter union you can easily swap the high and low parts (16 bits each) of a (32-bit) dword. (I used to use something like this to convert from little-endian to big-endian ordering.)
The << operator left-shifts the bits of an expression, and the >> operator right-shifts the bits of an expression.
i.e.: (00010010 << 1) gives 00100100.
##### Share on other sites
ahh... I think I understand unions now.
So would
union
{
    struct
    {
        WORD IndexNum;
        WORD MagicNum;
    };
    DWORD HandleID;
};
be more portable? Now that I understand unions, it makes a lot more sense. Could you not also use shorts and longs? Though this would make it dependent again - it would still work, correct?
##### Share on other sites
Oops... my last post isn't rendering correctly...
Here's the end of it...
The operator << left-shifts the bits of an expression.
|
{}
|
Chapter 16.4, Problem 2E
### Calculus: Early Transcendentals
8th Edition
James Stewart
ISBN: 9781285741550
Textbook Problem
# Evaluate the line integral by two methods: (a) directly and (b) using Green’s Theorem.
2. ∮_C y dx − x dy, where C is the circle with center the origin and radius 4
(a)
To determine
To evaluate: the line integral in direct method.
Explanation
Given data:
Line integral is ∮_C y dx − x dy, and curve C is the circle with center the origin and radius 4.
Formula used:
Write the equation of a circle with center the origin:
x² + y² = r²   (1)
Consider parametric equations of curve C, for 0 ≤ t ≤ 2π:
x = 4 cos t   (2)
y = 4 sin t   (3)
Substitute 4 cos t for x, 4 sin t for y, and 4 for r in equation (1):
(4 cos t)² + (4 sin t)² = 4²
16 cos²t + 16 sin²t = 16
16(cos²t + sin²t) = 16   {cos²t + sin²t = 1}
16(1) = 16
16 = 16
The LHS equals the RHS. Hence, the parametric equations represent the circle C with the origin as center and radius 4.
Differentiate equation (2) with respect to t:
dx/dt = d/dt(4 cos t) = −4 sin t   {d/dt(cos t) = −sin t}
dx = −4 sin t dt
Differentiate equation (3) with respect to t:
dy/dt = d/dt(4 sin t) = 4 cos t   {d/dt(sin t) = cos t}
dy = 4 cos t dt
Find the value of the line integral ∮_C y dx − x dy
(b)
To determine
To evaluate: The line integral using Green’s Theorem.
|
{}
|
• ### Machine learning in APOGEE: Unsupervised spectral classification with $K$-means(1801.07912)
The data volume generated by astronomical surveys is growing rapidly. Traditional analysis techniques in spectroscopy either demand intensive human interaction or are computationally expensive. In this scenario, machine learning, and unsupervised clustering algorithms in particular, offer interesting alternatives. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) offers a vast data set of near-infrared stellar spectra, which is perfect for testing such alternatives. We apply an unsupervised classification scheme based on $K$-means to the massive APOGEE data set, and explore whether the data are amenable to classification into discrete classes. We apply the $K$-means algorithm to 153,847 high resolution spectra ($R\approx22,500$). We discuss the main virtues and weaknesses of the algorithm, as well as our choice of parameters. We show that a classification based on normalised spectra captures the variations in stellar atmospheric parameters, chemical abundances, and rotational velocity, among other factors. The algorithm is able to separate the bulge and halo populations, and distinguish dwarfs, sub-giants, RC and RGB stars. However, a discrete classification in flux space does not result in a neat organisation in the parameter space. Furthermore, the lack of obvious groups in flux space causes the results to be fairly sensitive to the initialisation, and disrupts the efficiency of commonly-used methods to select the optimal number of clusters. Our classification is publicly available, including extensive online material associated with the APOGEE Data Release 12 (DR12). Our description of the APOGEE database can enormously help with the identification of specific types of targets for various applications. We find a lack of obvious groups in flux space, and identify limitations of the $K$-means algorithm in dealing with this kind of data.
• The Astropy Project (http://astropy.org) is, in its own words, "a community effort to develop a single core package for Astronomy in Python and foster interoperability between Python astronomy packages." For five years this project has been managed, written, and operated as a grassroots, self-organized, almost entirely volunteer effort while the software is used by the majority of the astronomical community. Despite this, the project has always been and remains to this day effectively unfunded. Further, contributors receive little or no formal recognition for creating and supporting what is now critical software. This paper explores the problem in detail, outlines possible solutions to correct this, and presents a few suggestions on how to address the sustainability of general purpose astronomical software.
|
{}
|
# Imaginary unit
i in the complex or Cartesian plane. Real numbers lie on the horizontal axis, and imaginary numbers lie on the vertical axis
The imaginary unit or unit imaginary number, denoted as i, is a mathematical concept which extends the real number system to the complex number system ℂ, which in turn provides at least one root for every polynomial P(x) (see algebraic closure and fundamental theorem of algebra). The imaginary unit's core property is that i² = −1. The term "imaginary" is used because there is no real number having a negative square.
There are in fact two complex square roots of −1, namely i and −i, just as there are two complex square roots of every other real number, except zero, which has one double square root.
In contexts where i is ambiguous or problematic, j or the Greek ι (see alternative notations) is sometimes used. In the disciplines of electrical engineering and control systems engineering, the imaginary unit is often denoted by j instead of i, because i is commonly used to denote electric current.
For the history of the imaginary unit, see Complex number: History.
## Definition
The powers of i return cyclic values:
... (repeats the pattern from the blue area)
i⁻³ = i
i⁻² = −1
i⁻¹ = −i
i⁰ = 1
i¹ = i
i² = −1
i³ = −i
i⁴ = 1
i⁵ = i
i⁶ = −1
... (repeats the pattern from the blue area)
The imaginary number i is defined solely by the property that its square is −1:
${\displaystyle i^{2}=-1\ .}$
With i defined this way, it follows directly from algebra that i and −i are both square roots of −1.
Although the construction is called "imaginary", and although the concept of an imaginary number may be intuitively more difficult to grasp than that of a real number, the construction is perfectly valid from a mathematical standpoint. Real number operations can be extended to imaginary and complex numbers by treating i as an unknown quantity while manipulating an expression, and then using the definition to replace any occurrence of i² with −1. Higher integral powers of i can also be replaced with −i, 1, i, or −1:
${\displaystyle i^{3}=i^{2}i=(-1)i=-i\,}$
${\displaystyle i^{4}=i^{3}i=(-i)i=-(i^{2})=-(-1)=1\,}$
${\displaystyle i^{5}=i^{4}i=(1)i=i\,}$
Similarly, as with any non-zero real number:
${\displaystyle i^{0}=i^{1-1}=i^{1}i^{-1}=i^{1}{\frac {1}{i}}=i{\frac {1}{i}}={\frac {i}{i}}=1\,}$
As a complex number, i is equal to 0 + i, having a unit imaginary component and no real component (i.e., the real component is zero). In polar form, i is cis π/2, having an absolute value (or magnitude) of 1 and an argument (or angle) of π/2. In the complex plane (also known as the Cartesian plane), i is the point located one unit from the origin along the imaginary axis (which is at a right angle to the real axis).
## i and −i
Being a quadratic polynomial with no multiple root, the defining equation x² = −1 has two distinct solutions, which are equally valid and which happen to be additive and multiplicative inverses of each other. More precisely, once a solution i of the equation has been fixed, the value −i, which is distinct from i, is also a solution. Since the equation is the only definition of i, it appears that the definition is ambiguous (more precisely, not well-defined). However, no ambiguity results as long as one or other of the solutions is chosen and labelled as "i", with the other one then being labelled as −i. This is because, although −i and i are not quantitatively equivalent (they are negatives of each other), there is no algebraic difference between i and −i. Both imaginary numbers have equal claim to being the number whose square is −1. If all mathematical textbooks and published literature referring to imaginary or complex numbers were rewritten with −i replacing every occurrence of +i (and therefore every occurrence of −i replaced by −(−i) = +i), all facts and theorems would continue to be equivalently valid. The distinction between the two roots ±i of x² + 1 = 0, with one of them labelled with a minus sign, is purely a notational relic; neither root can be said to be more primary or fundamental than the other, and neither of them is "positive" or "negative".
The issue can be a subtle one. The most precise explanation is to say that although the complex field, defined as ℝ[x]/(x² + 1) (see complex number), is unique up to isomorphism, it is not unique up to a unique isomorphism: there are exactly 2 field automorphisms of ℝ[x]/(x² + 1) which keep each real number fixed, namely the identity and the automorphism sending x to −x. See also Complex conjugate and Galois group.
A similar issue arises if the complex numbers are interpreted as 2 × 2 real matrices (see matrix representation of complex numbers), because then both
${\displaystyle X={\begin{pmatrix}0&-1\\1&\;\;0\end{pmatrix}}}$ and ${\displaystyle X={\begin{pmatrix}\;\;0&1\\-1&0\end{pmatrix}}}$
are solutions to the matrix equation
${\displaystyle X^{2}=-I=-{\begin{pmatrix}1&0\\0&1\end{pmatrix}}={\begin{pmatrix}-1&\;\;0\\\;\;0&-1\end{pmatrix}}.\ }$
In this case, the ambiguity results from the geometric choice of which "direction" around the unit circle is "positive" rotation. A more precise explanation is to say that the automorphism group of the special orthogonal group SO(2, ℝ) has exactly 2 elements: the identity and the automorphism which exchanges "CW" (clockwise) and "CCW" (counter-clockwise) rotations. See orthogonal group.
All these ambiguities can be solved by adopting a more rigorous definition of complex number, and explicitly choosing one of the solutions to the equation to be the imaginary unit. For example, the ordered pair (0, 1), in the usual construction of the complex numbers with two-dimensional vectors.
## Proper use
The imaginary unit is sometimes written √−1 in advanced mathematics contexts (as well as in less advanced popular texts). However, great care needs to be taken when manipulating formulas involving radicals. The radical sign notation is reserved either for the principal square root function, which is only defined for real x ≥ 0, or for the principal branch of the complex square root function. Attempting to apply the calculation rules of the principal (real) square root function to manipulate the principal branch of the complex square root function will produce false results:
${\displaystyle -1=i\cdot i={\sqrt {-1}}\cdot {\sqrt {-1}}={\sqrt {(-1)\cdot (-1)}}={\sqrt {1}}=1}$ (incorrect).
Attempting to correct the calculation by specifying both the positive and negative roots only produces ambiguous results:
${\displaystyle -1=i\cdot i=\pm {\sqrt {-1}}\cdot \pm {\sqrt {-1}}=\pm {\sqrt {(-1)\cdot (-1)}}=\pm {\sqrt {1}}=\pm 1}$ (ambiguous).
Similarly:
${\displaystyle {\frac {1}{i}}={\frac {\sqrt {1}}{\sqrt {-1}}}={\sqrt {\frac {1}{-1}}}={\sqrt {\frac {-1}{1}}}={\sqrt {-1}}=i}$ (incorrect).
The calculation rules
${\displaystyle {\sqrt {a}}\cdot {\sqrt {b}}={\sqrt {a\cdot b}}}$
and
${\displaystyle {\frac {\sqrt {a}}{\sqrt {b}}}={\sqrt {\frac {a}{b}}}}$
are only valid for real, non-negative values of a and b.
These problems are avoided by writing and manipulating expressions like i√7, rather than expressions like √−7. For a more thorough discussion, see Square root and Branch point.
## Properties
### Square roots
The two square roots of i in the complex plane
The square root of i can be expressed as either of two complex numbers[nb 1]
${\displaystyle {\sqrt {i}}=\pm \left({\frac {\sqrt {2}}{2}}+{\frac {\sqrt {2}}{2}}i\right)=\pm {\frac {\sqrt {2}}{2}}(1+i).}$
Indeed, squaring the right-hand side gives
{\displaystyle {\begin{aligned}\left(\pm {\frac {\sqrt {2}}{2}}(1+i)\right)^{2}\ &=\left(\pm {\frac {\sqrt {2}}{2}}\right)^{2}(1+i)^{2}\ \\&={\frac {1}{2}}(1+2i+i^{2})\\&={\frac {1}{2}}(1+2i-1)\ \\&=i.\ \\\end{aligned}}}
This result can also be derived with Euler's formula
${\displaystyle e^{ix}=\cos(x)+i\sin(x)\,}$
by substituting x = π/2, giving
${\displaystyle e^{i(\pi /2)}=\cos(\pi /2)+i\sin(\pi /2)=0+i1=i\,\!.}$
Taking the square root of both sides gives
${\displaystyle {\sqrt {i}}=\pm e^{i(\pi /4)}\,\!,}$
which, through application of Euler's formula to x = π/4, gives
{\displaystyle {\begin{aligned}{\sqrt {i}}&=\pm (\cos(\pi /4)+i\sin(\pi /4))\\&={\frac {1}{\pm {\sqrt {2}}}}+{\frac {i}{\pm {\sqrt {2}}}}\\&={\frac {1+i}{\pm {\sqrt {2}}}}\\&=\pm {\frac {\sqrt {2}}{2}}(1+i).\\\end{aligned}}}
Similarly, the square root of −i can be expressed as either of two complex numbers using Euler's formula:
${\displaystyle e^{ix}=\cos(x)+i\sin(x)\,}$
by substituting x = 3π/2, giving
${\displaystyle e^{i(3\pi /2)}=\cos(3\pi /2)+i\sin(3\pi /2)=0-i1=-i\,\!.}$
Taking the square root of both sides gives
${\displaystyle {\sqrt {-i}}=\pm e^{i(3\pi /4)}\,\!,}$
which, through application of Euler's formula to x = 3π/4, gives
{\displaystyle {\begin{aligned}{\sqrt {-i}}&=\pm (\cos(3\pi /4)+i\sin(3\pi /4))\\&=-{\frac {1}{\pm {\sqrt {2}}}}+i{\frac {1}{\pm {\sqrt {2}}}}\\&={\frac {-1+i}{\pm {\sqrt {2}}}}\\&=\pm {\frac {\sqrt {2}}{2}}(i-1).\\\end{aligned}}}
Multiplying the square root of i by i also gives the square root of −i:
{\displaystyle {\begin{aligned}{\sqrt {-i}}&=(i)\cdot \left(\pm {\frac {1}{\sqrt {2}}}(1+i)\right)\\&=\pm {\frac {1}{\sqrt {2}}}(i+i^{2})\\&=\pm {\frac {\sqrt {2}}{2}}(i-1)\\\end{aligned}}}
### Multiplication and division
Multiplying a complex number by i gives:
${\displaystyle i\,(a+bi)=ai+bi^{2}=-b+ai.}$
(This is equivalent to a 90° counter-clockwise rotation of a vector about the origin in the complex plane.)
Dividing by i is equivalent to multiplying by the reciprocal of i:
${\displaystyle {\frac {1}{i}}={\frac {1}{i}}\cdot {\frac {i}{i}}={\frac {i}{i^{2}}}={\frac {i}{-1}}=-i.}$
Using this identity to generalize division by i to all complex numbers gives:
${\displaystyle {\frac {a+bi}{i}}=-i\,(a+bi)=-ai-bi^{2}=b-ai.}$
(This is equivalent to a 90° clockwise rotation of a vector about the origin in the complex plane.)
### Powers
The powers of i repeat in a cycle expressible with the following pattern, where n is any integer:
${\displaystyle i^{4n}=1\,}$
${\displaystyle i^{4n+1}=i\,}$
${\displaystyle i^{4n+2}=-1\,}$
${\displaystyle i^{4n+3}=-i.\,}$
This leads to the conclusion that
${\displaystyle i^{n}=i^{n{\bmod {4}}}\,}$
where mod represents the modulo operation. Equivalently:
${\displaystyle i^{n}=\cos(n\pi /2)+i\sin(n\pi /2)}$
#### i raised to the power of i
Making use of Euler's formula, ii is
${\displaystyle i^{i}=\left(e^{i(\pi /2+2k\pi )}\right)^{i}=e^{i^{2}(\pi /2+2k\pi )}=e^{-(\pi /2+2k\pi )}}$
The principal value (for k = 0) is e−π/2 or approximately 0.207879576...[1]
### Factorial
The factorial of the imaginary unit i is most often given in terms of the gamma function evaluated at 1 + i:
${\displaystyle i!=\Gamma (1+i)\approx 0.4980-0.1549i.}$
Also,
${\displaystyle |i!|={\sqrt {\pi \over \sinh \pi }}}$[2]
### Other operations
Many mathematical operations that can be carried out with real numbers can also be carried out with i, such as exponentiation, roots, logarithms, and trigonometric functions. However, all of the following functions are complex multi-valued functions, and it should be clearly stated which branch of the Riemann surface the function is defined on in practice. Listed below are results for the most commonly chosen branch.
A number raised to the ni power is:
${\displaystyle \!\ x^{ni}=\cos(\ln x^{n})+i\sin(\ln x^{n}).}$
The nith root of a number is:
${\displaystyle \!\ {\sqrt[{ni}]{x}}=\cos(\ln {\sqrt[{n}]{x}})-i\sin(\ln {\sqrt[{n}]{x}}).}$
The imaginary-base logarithm of a number is:
${\displaystyle \log _{i}(x)={{2\ln x} \over i\pi }.}$
As with any complex logarithm, the log base i is not uniquely defined.
The cosine of i is a real number:
${\displaystyle \cos(i)=\cosh(1)={{e+1/e} \over 2}={{e^{2}+1} \over 2e}\approx 1.54308064....}$
And the sine of i is purely imaginary:
${\displaystyle \sin(i)=i\sinh(1)\,={{e-1/e} \over 2}\,i={{e^{2}-1} \over 2e}\,i\approx 1.17520119\,i....}$
## Matrices
When 2 × 2 real matrices m are used for a source, and the number one (1) is identified with the identity matrix, and minus one (−1) with the negative of the identity matrix, then there are many solutions to m2 = −1. In fact, there are many solutions to m2 = +1 and m2 = 0 also. Any such m can be taken as a basis vector, along with 1, to form a planar algebra.
## Notes
1. To find such a number, one can solve the equations
(x + iy)² = i
x² + 2ixy − y² = i
Because the real and imaginary parts are always separate, we regroup the terms:
x² − y² + 2ixy = 0 + i
and get a system of two equations:
x² − y² = 0
2xy = 1
Substituting y = 1/(2x) into the first equation, we get
x² − 1/(4x²) = 0
x² = 1/(4x²)
4x⁴ = 1
Because x is a real number, this equation has two real solutions for x: x = 1/√2 and x = −1/√2. Substituting both of these results into the equation 2xy = 1 in turn, we will get the same results for y. Thus, the square roots of i are the numbers (1 + i)/√2 and −(1 + i)/√2. (University of Toronto Mathematics Network: What is the square root of i? URL retrieved March 26, 2007.)
## References
1. "The Penguin Dictionary of Curious and Interesting Numbers" by David Wells, Page 26.
2. "abs(i!)", WolframAlpha.
|
{}
|
• Solvent effects in the reaction between piperazine and benzyl bromide
• # Fulltext
https://www.ias.ac.in/article/fulltext/jcsc/119/06/0613-0616
• # Keywords
Solvation; solvent electrophilicity; hydrogen bond donor ability; linear solvation energy relationship (LSER).
• # Abstract
The reaction between piperazine and benzyl bromide was studied conductometrically and the second order rate constants were computed. These rate constants, determined in 12 different protic and aprotic solvents, indicate that the rate of the reaction is influenced by electrophilicity (𝐸), hydrogen bond donor ability (𝛼) and dipolarity/polarizability ($\pi^\ast$) of the solvent. The LSER derived from the statistical analysis indicates that the transition state is more solvated than the reactants due to hydrogen bond donation and polarizability of the solvent, while the reactant is more solvated than the transition state due to electrophilicity of the solvent. Study of the reaction in methanol/dimethyl formamide mixtures suggests that the rate is maximum when dipolar interactions between the two solvents are maximum.
• # Author Affiliations
1. Department of Chemistry, Kakatiya University, Warangal 506 009
• # Journal of Chemical Sciences
{}
|
# Publications by Siddharth Parameswaran
## Quantum oscillations probe the Fermi surface topology of the nodal-line semimetal CaAgAs
Physical Review Research American Physical Society 2 (2020) 012055(R)
YH Kwan, P Reiss, Y Han, M Bristow, D Prabhakaran, D Graf, A McCollam, S Ashok Parameswaran, AI Coldea
Nodal semimetals are a unique platform to explore topological signatures of the unusual band structure that can manifest by accumulating a nontrivial phase in quantum oscillations. Here we report a study of the de Haas–van Alphen oscillations of the candidate topological nodal line semimetal CaAgAs using torque measurements in magnetic fields up to 45 T. Our results are compared with calculations for a toroidal Fermi surface originating from the nodal ring. We find evidence of a nontrivial π phase shift only in one of the oscillatory frequencies. We interpret this as a Berry phase arising from the semiclassical electronic Landau orbit which links with the nodal ring when the magnetic field lies in the mirror (ab) plane. Furthermore, additional Berry phase accumulates while rotating the magnetic field for the second orbit in the same orientation which does not link with the nodal ring. These effects are expected in CaAgAs due to the lack of inversion symmetry. Our study experimentally demonstrates that CaAgAs is an ideal platform for exploring the physics of nodal line semimetals and our approach can be extended to other materials in which trivial and nontrivial oscillations are present.
## Classical dimers on Penrose tilings
Physical Review X American Physical Society 10 (2020) 011005
F Flicker, SH Simon, Parameswaran
## Erratum: Charge Transport in Weyl Semimetals (Physical Review Letters (2012) 108 (046602) DOI: 10.1103/PhysRevLett.108.046602)
Physical Review Letters 123 (2019)
P Hosur, SA Parameswaran, A Vishwanath
© 2019 American Physical Society. This erratum corrects errors in numerical factors in Eqs. (1), (7), and (8), and the overall scale of the dc resistivity plotted in Fig. 2. We recently discovered an algebraic error in Eq. (7), which led to incorrect numerical factors in Eqs. (1) and (8). The correct Eqs. (1), (7) and (8), respectively, are (Formula Presented). An error was also found in the overall scale of ρdc = 1/σdc calculated from (1) and plotted in Fig. 2 of the Letter. With these corrections our theory underestimates ρdc of the samples in Ref. [12] of the Letter, which is understandable since the samples are polycrystalline while our theory specializes to single crystals. However, correcting both errors gives excellent agreement with recent experiments on Eu0.96Bi0.04Ir2O7 [1] for reasonable values of parameters, as shown in Fig. 1. Moreover, Ref. [1] finds ρdc(T) ∼ 1/T, as predicted by our theory, only at low temperatures, which is where our theory is best applicable since it contains only Coulomb scattering but ignores phonon scattering. Thus, it is likely that the low-temperature transport in Eu0.96Bi0.04Ir2O7 is dominated by Coulomb scattering. We thank Surjeet Singh and Prachi Telang for bringing the error in the computation of ρdc to our attention (Figure Presented).
## Topology and symmetry-protected domain wall conduction in quantum Hall nematics
Physical review B: Condensed matter and materials physics American Physical Society 100 (2019) 165103
K Agarwal, MT Randeria, A Yazdani, SL Sondhi, S Ashok Parameswaran
## Topological 'Luttinger' invariants for filling-enforced non-symmorphic semimetals
Journal of Physics: Condensed Matter IOP Publishing 31 (2019) 104001-
S Parameswaran
Luttinger’s theorem is a fundamental result in the theory of interacting Fermi systems: it states that the volume inside the Fermi surface is left invariant by interactions, if the number of particles is held fixed. Although this is traditionally justified in terms of analytic properties of Green’s functions, it can be viewed as arising from a momentum balance argument that examines the response of the ground state to the insertion of a single flux quantum [M. Oshikawa, Phys. Rev. Lett. 84, 3370 (2000)]. This reveals that the Fermi volume is a topologically protected quantity, whose change requires a phase transition. However, this sheds no light on the stability or lack thereof of interacting semimetals, which either lack a Fermi surface, or have perfectly compensated electron and hole pockets and hence vanishing net Fermi volume. Here, I show that semimetallic phases in non-symmorphic crystals possess additional topological ‘Luttinger invariants’ that can be nonzero even though the Fermi volume vanishes. The existence of these invariants is linked to the inability of non-symmorphic crystals to host band insulating ground states except at special fillings. I exemplify the use of these new invariants by showing that they distinguish various classes of two- and three-dimensional semimetals.
## Quantum Brownian motion in a quasiperiodic potential
Physical review B: Condensed matter and materials physics American Physical Society 100 (2019) 060301
A Friedman, R Vasseur, A Lamacraft, S Ashok Parameswaran
We consider a quantum particle subject to Ohmic dissipation, moving in a bichromatic quasiperiodic potential. In a periodic potential the particle undergoes a zero-temperature localization-delocalization transition as dissipation strength is decreased. We show that the delocalized phase is absent in the quasiperiodic case, even when the deviation from periodicity is infinitesimal. Using the renormalization group, we determine how the effective localization length depends on the dissipation. We show that a similar problem can emerge in the strong-coupling limit of a mobile impurity moving in a periodic lattice and immersed in a one-dimensional quantum gas.
## Interacting multi-channel topological boundary modes in a quantum Hall valley system
Nature Springer Nature 566 (2019) 363–367-
MT Randeria, K Agarwal, BE Feldman, H Ding, H Ji, RJ Cava, SL Sondhi, S Parameswaran, A Yazdani
Symmetry and topology are central to understanding quantum Hall ferromagnets (QHFMs), two-dimensional electronic phases with spontaneously broken spin or pseudospin symmetry whose wavefunctions also have topological properties1,2. Domain walls between distinct broken-symmetry QHFM phases are predicted to host gapless one-dimensional modes—that is, quantum channels that emerge because of a topological change in the underlying electronic wavefunctions at such interfaces. Although various QHFMs have been identified in different materials3,4,5,6,7,8, interacting electronic modes at these domain walls have not been probed. Here we use a scanning tunnelling microscope to directly visualize the spontaneous formation of boundary modes at domain walls between QHFM phases with different valley polarization (that is, the occupation of equal-energy but quantum mechanically distinct valleys in the electronic structure) on the surface of bismuth. Spectroscopy shows that these modes occur within a topological energy gap, which closes and reopens as the valley polarization switches across the domain wall. By changing the valley flavour and the number of modes at the domain wall, we can realize different regimes in which the valley-polarized channels are either metallic or develop a spectroscopic gap. This behaviour is a consequence of Coulomb interactions constrained by the valley flavour, which determines whether electrons in the topological modes can backscatter, making these channels a unique class of interacting one-dimensional quantum wires. QHFM domain walls can be realized in different classes of two-dimensional materials, providing the opportunity to explore a rich phase space of interactions in these quantum wires.
## Kosterlitz-Thouless scaling at many-body localization phase transitions
Physical Review B: Condensed matter and materials physics American Physical Society 99 (2019) 094205
P Dumitrescu, A Goremykina, S Ashok Parameswaran, M Serbyn, R Vasseur
<p>We propose a scaling theory for the many-body localization (MBL) phase transition in one dimension, building on the idea that it proceeds via a “quantum avalanche.” We argue that the critical properties can be captured at a coarse-grained level by a Kosterlitz-Thouless (KT) renormalization group (RG) flow. On phenomenological grounds, we identify the scaling variables as the density of thermal regions and the length scale that controls the decay of typical matrix elements. Within this KT picture, the MBL phase is a line of fixed points that terminates at the delocalization transition. We discuss two possible scenarios distinguished by the distribution of rare, fractal thermal inclusions within the MBL phase. In the first scenario, these regions have a stretched exponential distribution in the MBL phase. In the second scenario, the near-critical MBL phase hosts rare thermal regions that are power-law-distributed in size. This points to the existence of a second transition within the MBL phase, at which these power laws change to the stretched exponential form expected at strong disorder. We numerically simulate two different phenomenological RGs previously proposed to describe the MBL transition. Both RGs display a universal power-law length distribution of thermal regions at the transition with a critical exponent α<sub>c</sub> = 2, and continuously varying exponents in the MBL phase consistent with the KT picture.</p>
## Signatures of information scrambling in the dynamics of the entanglement spectrum
Physical Review B: Condensed Matter and Materials Physics American Physical Society 100 (2019) 125115
T Rakovsky, S Gopalakrishnan, S Ashok Parameswaran, F Pollmann
We examine the time evolution of the entanglement spectrum of a small subsystem of a nonintegrable spin chain following a quench from a product state. We identify signatures in this entanglement spectrum of the distinct dynamical velocities (related to entanglement and operator spreading) that control thermalization. We show that the onset of level repulsion in the entanglement spectrum occurs on different timescales depending on the “entanglement energy”, and that this dependence reflects the shape of the operator front. Level repulsion spreads across the entire entanglement spectrum on a timescale that is parametrically shorter than that for full thermalization of the subsystem. This timescale is also close to when the mutual information between individual spins at the ends of the subsystem reaches its maximum. We provide an analytical understanding of this phenomenon and show supporting numerical data for both random unitary circuits and a microscopic Hamiltonian.
## Quantum Hall valley nematics
Journal of Physics: Condensed Matter IOP Publishing 31 (2019) 273001
S Ashok Parameswaran, BE Feldman
Two-dimensional electron gases in strong magnetic fields provide a canonical platform for realizing a variety of electronic ordering phenomena. Here we review the physics of one intriguing class of interaction-driven quantum Hall states: quantum Hall valley nematics. These phases of matter emerge when the formation of a topologically insulating quantum Hall state is accompanied by the spontaneous breaking of a point-group symmetry that combines a spatial rotation with a permutation of valley indices. The resulting orientational order is particularly sensitive to quenched disorder, while quantum Hall physics links charge conduction to topological defects. We discuss how these combine to yield a rich phase structure, and their implications for transport and spectroscopy measurements. In parallel, we discuss relevant experimental systems. We close with an outlook on future directions.
## Topological entanglement entropy of fracton stabilizer codes
Physical Review B American Physical Society 97 (2018) 1-16
H Ma, AT Schmitz, S Parameswaran, M Hermele, R Nandkishore
Entanglement entropy provides a powerful characterization of two-dimensional gapped topological phases of quantum matter, intimately tied to their description by topological quantum field theories (TQFTs). Fracton topological orders are three-dimensional gapped topologically ordered states of matter that lack a TQFT description. We show that three-dimensional fracton phases are nevertheless characterized, at least partially, by universal structure in the entanglement entropy of their ground state wave functions. We explicitly compute the entanglement entropy for two archetypal fracton models - the X-cube model and Haah's code - and demonstrate the existence of a non-local contribution that scales linearly in subsystem size. We show via Schrieffer-Wolff transformations that this piece of the entanglement entropy of fracton models is robust against arbitrary local perturbations of the Hamiltonian. Finally, we argue that these results may be extended to characterize localization-protected fracton topological order in excited states of disordered fracton models.
## Correlation function diagnostics for type-I fracton phases
Physical Review B: Condensed Matter and Materials Physics American Physical Society 97 (2018) 041110-
T Devakul, SA Parameswaran, SL Sondhi
Fracton phases are recent entrants to the roster of topological phases in three dimensions. They are characterized by subextensively divergent topological degeneracy and excitations that are constrained to move along lower dimensional subspaces, including the eponymous fractons that are immobile in isolation. We develop correlation function diagnostics to characterize Type I fracton phases which build on their exhibiting partial deconfinement. These are inspired by similar diagnostics from standard gauge theories and utilize a generalized gauging procedure that links fracton phases to classical Ising models with subsystem symmetries. En route, we explicitly construct the spacetime partition function for the plaquette Ising model which, under such gauging, maps into the X-cube fracton topological phase. We numerically verify our results for this model via Monte Carlo calculations.
## Many-body localization, symmetry, and topology
Reports on Progress in Physics IOP Publishing 81 (2018) 082501
S Parameswaran, R Vasseur
We review recent developments in the study of out-of-equilibrium topological states of matter in isolated systems. The phenomenon of many-body localization, exhibited by some isolated systems usually in the presence of quenched disorder, prevents systems from equilibrating to a thermal state where the delicate quantum correlations necessary for topological order are often washed out. Instead, many-body localized systems can exhibit a type of eigenstate phase structure wherein their entire many-body spectrum is characterized by various types of quantum order, usually restricted to quantum ground states. After introducing many-body localization and explaining how it can protect quantum order, we then explore how the interplay of symmetry and dimensionality with many-body localization constrains its role in stabilizing topological phases out of equilibrium.
## Strong-disorder renormalization group for periodically driven systems
Physical Review B: Condensed Matter and Materials Physics American Physical Society 98 (2018) 174203
W Berdanier, M Kolodrubetz, SGA Parameswaran, R Vasseur
Quenched randomness can lead to robust non-equilibrium phases of matter in periodically driven (Floquet) systems. Analyzing transitions between such dynamical phases requires a method capable of treating the twin complexities of disorder and discrete time-translation symmetry. We introduce a real-space renormalization group approach, asymptotically exact in the strong-disorder limit, and exemplify its use on the periodically driven interacting quantum Ising model. We analyze the universal physics near the critical lines and multicritical point of this model, and demonstrate the robustness of our results to the inclusion of weak interactions.
## Localization-protected order in spin chains with non-Abelian discrete symmetries
Physical Review B American Physical Society 98 (2018) 064203
AJ Friedman, R Vasseur, AC Potter, S Parameswaran
We study the nonequilibrium phase structure of the three-state random quantum Potts model in one dimension. This spin chain is characterized by a non-Abelian D3 symmetry recently argued to be incompatible with the existence of a symmetry-preserving many-body localized (MBL) phase. Using exact diagonalization and a finite-size scaling analysis, we find that the model supports two distinct broken-symmetry MBL phases at strong disorder that either break the Z3 clock symmetry or a Z2 chiral symmetry. In a dual formulation, our results indicate the existence of a stable finite-temperature topological phase with MBL-protected parafermionic end zero modes. While we find a thermal symmetry-preserving regime for weak disorder, scaling analysis at strong disorder points to an infinite-randomness critical point between two distinct broken-symmetry MBL phases.
## Floquet quantum criticality
Proceedings of the National Academy of Sciences National Academy of Sciences 115 (2018) 9491-9496
W Berdanier, M Kolodrubetz, S Parameswaran, R Vasseur
We study transitions between distinct phases of one-dimensional periodically driven (Floquet) systems. We argue that these are generically controlled by infinite-randomness fixed points of a strong-disorder renormalization group procedure. Working in the fermionic representation of the prototypical Floquet Ising chain, we leverage infinite randomness physics to provide a simple description of Floquet (multi)criticality in terms of a distinct type of domain wall associated with time translational symmetry-breaking and the formation of “Floquet time crystals.” We validate our analysis via numerical simulations of free-fermion models sufficient to capture the critical physics.
## Recoverable information and emergent conservation laws in fracton stabilizer codes
Physical Review B American Physical Society 97 (2018) 134426
A Schmitz, H Ma, R Nandkishore, S Parameswaran
We introduce a new quantity, that we term "recoverable information", defined for stabilizer Hamiltonians. For such models, the recoverable information provides a measure of the topological information, as well as a physical interpretation, which is complementary to topological entanglement entropy. We discuss three different ways to calculate the recoverable information, and prove their equivalence. To demonstrate its utility, we compute recoverable information for "fracton models" using all three methods where appropriate. From the recoverable information, we deduce the existence of emergent Z2 Gauss-law type constraints, which in turn imply emergent Z2 conservation laws for point-like quasiparticle excitations of an underlying topologically ordered phase.
## Non-Fermi glasses: Localized descendants of fractionalized metals
Physical Review Letters American Physical Society 119 (2017) 1-5
S Parameswaran, S Gopalakrishnan
Non-Fermi liquids are metals that cannot be adiabatically deformed into free fermion states. We argue for the existence of "non-Fermi glasses" phases of interacting disordered fermions that are fully many-body localized (MBL), yet cannot be deformed into an Anderson insulator without an eigenstate phase transition. We explore the properties of such non-Fermi glasses, focusing on a specific solvable example. At high temperature, non-Fermi glasses have qualitatively similar spectral features to Anderson insulators. We identify a diagnostic, based on ratios of correlators, that sharply distinguishes between the two phases even at infinite temperature. Our results and diagnostic should generically apply to the high-temperature behavior of MBL descendants of fractionalized phases.
## Filling-enforced nonsymmorphic Kondo semimetals in two dimensions
Physical Review B 96 (2017)
JH Pixley, S Lee, B Brandom, SA Parameswaran
© 2017 American Physical Society. We study the competition between Kondo screening and frustrated magnetism on the nonsymmorphic Shastry-Sutherland Kondo lattice at a filling of two conduction electrons per unit cell. This model is known to host a set of gapless partially Kondo screened phases intermediate between the Kondo-destroyed paramagnet and the heavy Fermi liquid. Based on crystal symmetries, we argue that (i) both the paramagnet and the heavy Fermi liquid are semimetals protected by a glide symmetry; and (ii) partial Kondo screening breaks the symmetry, removing this protection and allowing the partially Kondo screened phase to be deformed into a Kondo insulator via a Lifshitz transition. We confirm these results using large-N mean-field theory and then use nonperturbative arguments to derive a generalized Luttinger sum rule constraining the phase structure of two-dimensional nonsymmorphic Kondo lattices beyond the mean-field limit.
## Viewpoint: Topological insulators turn a corner
Physics American Physical Society 10 (2017) 1-3
S Parameswaran, Y Wan
|
{}
|
# Introduction to Semiconductor Devices
This post contains my lecture notes of taking the course VE320: Introduction to Semiconductor Devices at UM-SJTU Joint Institute.
## Chapter 1: The Crystal Structure of Solids
• Unit cell vs. primitive cell
• primitive cell: smallest unit cell possible
• Three lattice types
• Simple cubic
• Body-centered cubic
• Face-centered cubic
• Volume density = $\dfrac{\text{\# of atoms per unit cell}}{a^3}$
• Unit: 1 Angstrom = 1e-10 m
• Miller Index: $(\frac{1}{x\text{-intersect}},\frac{1}{y\text{-intersect}},\frac{1}{z\text{-intersect}})\times\text{lcm}(x,y,z)$
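The Miller-index recipe above can be sketched numerically. This is an illustrative helper of my own (not from the notes): take the reciprocal of each intercept, then clear fractions to the smallest integer triple.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def miller_indices(x, y, z):
    """Compute (hkl) from the plane's intercepts along the crystal axes.

    Pass float('inf') for an axis the plane never crosses (reciprocal -> 0).
    """
    recips = [Fraction(0) if i == float("inf") else 1 / Fraction(i)
              for i in (x, y, z)]
    # Clear the fractions: multiply by the LCM of the denominators.
    denoms = [r.denominator for r in recips]
    lcm = reduce(lambda a, b: a * b // gcd(a, b), denoms)
    ints = [int(r * lcm) for r in recips]
    # Reduce to the smallest integers with the same ratio.
    g = reduce(gcd, (abs(i) for i in ints if i != 0), 0) or 1
    return tuple(i // g for i in ints)
```

For example, a plane with intercepts (3, 1, 2) has reciprocals (1/3, 1, 1/2), which clear to the (263) plane.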
• (100) plane, (110) plane, (111) plane
• [100] direction, [110] direction, [111] direction
## Chapter 2: Introduction to Quantum Mechanics
• Planck’s constant: $h=6.625\times10^{-34}~\text{J-s}$
• Photon’s energy $E=h\nu=\hbar\omega$
• Photon’s momentum $p=\frac{h}{\lambda}$ (de Broglie wavelength)
### Schrodinger’s Equation
Total wave function: $\Psi(x,t)=\psi(x)\phi(t)=\psi(x)e^{-j(E/\hbar)t}$.
### Time-Independent Wave Function
$\frac{\partial^2\psi(x)}{\partial x^2}+\frac{2m}{\hbar^2}(E-V(x))\psi(x)=0.$
• $|\psi(x)|^2$ is the probability density function. $\int_{-\infty}^{\infty}|\psi(x)|^2\mathrm{d}x=1$.
• Boundary conditions for solving $\psi(x)$:
• $\psi(x)$ must be finite, continuous, single-valued.
• $\partial\psi/\partial x$ must be finite, continuous, single-valued.
• $k$ is wave number. $p=\hbar k$.
• Steps of solving wave function:
• Consider Schrodinger’s equation in each region. Solve separately.
• Apply boundary conditions to determine coefficients.
### Electron Energy in Single Atom
$V(r)=\frac{-e^2}{4\pi\epsilon_0 r},\quad \nabla^2\psi(r,\theta,\phi)+\frac{2m_0}{\hbar^2}(E-V(r))\psi(r,\theta,\phi)=0$
• Solution: $E_n=\dfrac{-m_0 e^4}{(4\pi\epsilon_0)^22\hbar^2n^2}$, $n$ is the principal quantum number.
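As a quick numerical check of this result (a sketch; the constants below are rounded CODATA-style values I supply, not ones from the notes), the formula reproduces the familiar $-13.6$ eV hydrogen ground state:

```python
import math

# Rounded physical constants in SI units (assumed values):
m0   = 9.109e-31   # electron rest mass [kg]
e    = 1.602e-19   # elementary charge [C]
eps0 = 8.854e-12   # vacuum permittivity [F/m]
hbar = 1.055e-34   # reduced Planck constant [J*s]

def E_n_eV(n):
    """E_n = -m0 e^4 / ((4 pi eps0)^2 * 2 hbar^2 * n^2), converted to eV."""
    E_joules = -m0 * e**4 / ((4 * math.pi * eps0) ** 2 * 2 * hbar**2 * n**2)
    return E_joules / e
```

Evaluating at n = 1 gives roughly -13.6 eV, and n = 2 gives roughly -3.4 eV, i.e. the energy scales as $1/n^2$.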
## Chapter 3: Introduction to the Quantum Theory of Solids
### Formation of Energy Band
• Silicon: $1s^22s^22p^63s^23p^2$. Energy levels split and formed two large energy bands: valence band & conduction band.
• $E$ vs. $k$ curve for free particle: $E=\dfrac{k^2\hbar^2}{2m}$. (parabolic relation)
• $E$ vs. $k$ diagram in reduced zone representation (for Si crystal, derived from Kronig-Penney model).
### Electrical Conduction in Solids
• Drift Current:
• $J=qNv_d$ $(\text{A}/\text{cm}^2)$, $N$ being the volume density.
• If considering individual carrier velocities, $J=q\sum_{i=1}^{N}v_i$.
• Electron effective mass:
• $\dfrac{\mathrm{d}^2E}{\mathrm{d}k^2}=\dfrac{\hbar^2}{m^*}$, which defines the effective mass $m^*$.
### Metals, Insulators, Semiconductors
• Metals:
• Has a partially filled energy band.
• Insulators:
• Has an either completely filled or completely empty energy band.
### Density of States Function
Consider 1-D crystal with $N$ quantum wells, each well of length $a$.
• Number of states of the whole crystal within $(0,\pi/a)$: $\dfrac{N}{\pi/a}$
• Number of states of the whole crystal within $\Delta k$: $\dfrac{N}{\pi/a}\times\Delta k$
• Number of states per unit volume within $\Delta k$: $\dfrac{N}{\pi/a}\times\Delta k\dfrac{1}{Na}=\dfrac{\Delta k}{\pi}$
For $E$ vs. $k$ relation:
$E=E_c+\frac{\hbar^2}{2m^*_n}k^2,\qquad k=\pm\frac{\sqrt{2m^*_n(E-E_c)}}{\hbar}$
Extend to 3-D sphere:
$g(E)=\frac{1}{8}\frac{\mathrm{d}(4\pi/3\times(k/\pi)^3)}{\mathrm{d}E}$
Final conclusion:
$\text{Conduction band: }g_c(E)=\frac{4\pi(2m^*_n)^{3/2}}{h^3}\sqrt{E-E_c}$
$\text{Valence band: }g_v(E)=\frac{4\pi(2m^*_p)^{3/2}}{h^3}\sqrt{E_v-E}$
• $m^*_n$ is the density of states effective mass for electrons.
• $m^*_p$ is the density of states effective mass for holes.
• $E_c$ is the bottom edge of conduction band.
• $E_v$ is the top edge of valence band.
### Statistical Mechanics
Fermi-Dirac distribution:
$f_F(E)=\frac{1}{1+\exp\left(\dfrac{E-E_F}{kT}\right)}$
• represents the probability that a quantum state at energy $E$ is occupied by an electron.
• $k$ is the Boltzmann Constant. k = 8.62e-5 eV/K.
When $E-E_F\gg kT$, the 1 in the denominator can be neglected, and the Fermi-Dirac distribution reduces to the Maxwell-Boltzmann distribution.
$f_F(E)=\frac{1}{\exp\left(\dfrac{E-E_F}{kT}\right)}=\exp\left(-\frac{E-E_F}{kT}\right)$
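A small sketch (function names are mine) makes the approximation concrete: a few $kT$ above the Fermi level, the two distributions are already nearly indistinguishable.

```python
import math

K_EV = 8.62e-5  # Boltzmann constant [eV/K]

def fermi_dirac(E, E_F, T):
    """Occupation probability of a state at energy E [eV] at temperature T [K]."""
    return 1.0 / (1.0 + math.exp((E - E_F) / (K_EV * T)))

def maxwell_boltzmann(E, E_F, T):
    """Boltzmann approximation, valid when E - E_F >> kT."""
    return math.exp(-(E - E_F) / (K_EV * T))
```

At $T=300$ K ($kT\approx 0.026$ eV), a state 0.2 eV above $E_F$ sits about $7.7\,kT$ up, and the two expressions agree to better than 0.1%.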
## Chapter 4: Semiconductor in Equilibrium
### Thermal-Equilibrium Carrier Concentration
Electron (use Maxwell-Boltzmann approximation):
$n_0=\int g_c(E)f_F(E)\mathrm{d}E\quad\Rightarrow\quad n_0=2\left(\frac{2\pi m^*_n kT}{h^2}\right)^{3/2}\exp\left[-\frac{E_c-E_F}{kT}\right]$
For simplicity,
$n_0=N_c\exp\left[-\frac{E_c-E_F}{kT}\right]$
• $N_c$ is called effective density of states function in the conduction band.
Hole (use Maxwell-Boltzmann approximation):
$p_0=2\left(\frac{2\pi m^*_p kT}{h^2}\right)^{3/2}\exp\left[-\frac{E_F-E_v}{kT}\right]$
For simplicity,
$p_0=N_v\exp\left[-\frac{E_F-E_v}{kT}\right]$
• $N_v$ is called effective density of states function in the valence band.
### Intrinsic Carrier Concentration
$n_i=n_0=p_0$ for intrinsic semiconductors. Therefore,
$n_i=\sqrt{n_0p_0}=\sqrt{N_c N_v}\exp\left(-\frac{E_g}{2kT}\right)$
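This formula is easy to evaluate numerically. The sketch below uses illustrative room-temperature silicon parameters that I am assuming (common textbook figures, not values stated in these notes):

```python
import math

K_EV = 8.62e-5   # Boltzmann constant [eV/K]
# Assumed silicon parameters at 300 K:
N_C = 2.8e19     # effective density of states, conduction band [cm^-3]
N_V = 1.04e19    # effective density of states, valence band  [cm^-3]
E_G = 1.12       # band gap [eV]

def intrinsic_concentration(T):
    """n_i = sqrt(Nc*Nv) * exp(-Eg / (2kT)) in cm^-3.  Nc and Nv are held
    fixed here for simplicity, although they really scale as T^(3/2)."""
    return math.sqrt(N_C * N_V) * math.exp(-E_G / (2 * K_EV * T))
```

With these numbers the result at 300 K is of order $10^{10}~\text{cm}^{-3}$, and $n_i$ rises steeply with temperature because of the exponential factor.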
### Intrinsic Fermi Level Position
For intrinsic semiconductors, $n_0=p_0$, $E_F=E_{Fi}$. Therefore, equating the previous $n_0$ and $p_0$ expressions, we get
$E_{Fi}-E_\text{midgap}=\frac{3}{4}kT\ln\left(\frac{m^*_p}{m^*_n}\right),\quad\text{where}\quad \frac{1}{2}(E_c+E_v)=E_\text{midgap}$
### Dopant Atoms and Energy Levels
Introduce $E_d$ in $n$-type semiconductor, which is the discrete donor energy state. Therefore, $E_c-E_d$ is the ionization energy.
Introduce $E_a$ in $p$-type semiconductor, which is the discrete acceptor energy state. Therefore, $E_a-E_v$ is the ionization energy.
### The Extrinsic Semiconductor
An extrinsic semiconductor is defined as a semiconductor in which controlled amounts of specific dopant or impurity atoms have been added so that the thermal-equilibrium electron and hole concentrations are different from the intrinsic carrier concentration.
#### Equilibrium Distribution of Electrons and Holes
$n_0=n_i\exp\left[\frac{E_F-E_{Fi}}{kT}\right]$
$p_0=n_i\exp\left[-\frac{E_F-E_{Fi}}{kT}\right]$
• $n_0p_0=n_i^2$ still holds.
#### Degenerate and Non-degenerate Semiconductor
When the donor concentration is high enough to split the discrete donor energy state, the semiconductor is called a degenerate $n$-type semiconductor.
When the acceptor concentration is high enough to split the discrete acceptor energy state, the semiconductor is called a degenerate $p$-type semiconductor.
#### Statistics of Donors and Acceptors
$f_D(E)=\frac{1}{1+\dfrac{1}{2}\exp\left(\dfrac{E_d-E_F}{kT}\right)}.$
• Reason for $1/2$: in conduction band energy states, each state can be occupied by at most 2 electrons (spin up & spin down); while in donor energy state, each state can only be occupied by 1 electron (either spin up or spin down).
$n_d=\frac{N_d}{1+\dfrac{1}{2}\exp\left(\dfrac{E_d-E_F}{kT}\right)},\qquad n_d=N_d-N_d^+.$
• $n_d$ represents the density of electrons occupying the donor state.
• $n_d=N_d\times f_D(E_d)$.
$p_a=\frac{N_a}{1+\dfrac{1}{g}\exp\left(\dfrac{E_F-E_a}{kT}\right)},\qquad p_a=N_a-N_a^-.$
• $p_a$ represents the density of holes occupying the acceptor state.
• $g$, the degeneracy factor, is normally taken as 4.
Complete ionization: all donor/acceptor atoms have donated an electron/a hole to the conduction band/valence band.
Freeze-out: all donor/acceptor energy states are filled with electrons/holes.
### Charge Neutrality
A compensated semiconductor is one that contains both donor and acceptor impurity atoms in the same region.
• $n$-type compensated: $N_d>N_a$
• $p$-type compensated: $N_a>N_d$
#### Equilibrium Electron and Hole Concentrations
At equilibrium, overall charge is neutral.
$n_0+N_a^-=p_0+N_d^+\quad\Leftrightarrow\quad n_0+(N_a-p_a)=p_0+(N_d-n_d)$
Assume complete ionization:
$n_0+N_a=\frac{n_i^2}{n_0}+N_d\quad\Rightarrow\quad n_0=\frac{(N_d-N_a)}{2}+\sqrt{\left(\frac{N_d-N_a}{2}\right)^2+n_i^2}.$
• Only valid for an $n$-type semiconductor!
Assume complete ionization:
$\frac{n_i^2}{p_0}+N_a=p_0+N_d\quad\Rightarrow\quad p_0=\frac{(N_a-N_d)}{2}+\sqrt{\left(\frac{N_a-N_d}{2}\right)^2+n_i^2}.$
• Only valid for a $p$-type semiconductor!
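The two complete-ionization formulas can be combined into a single helper that picks the majority-carrier branch first and then applies mass action, $n_0p_0=n_i^2$ (a sketch with assumed names, not code from the notes):

```python
import math

def equilibrium_concentrations(N_d, N_a, n_i):
    """(n0, p0) under complete ionization, all in the same units (e.g. cm^-3).

    Solve for the majority carrier with the quadratic-formula expression,
    then get the minority carrier from n0*p0 = n_i^2.
    """
    d = (N_d - N_a) / 2.0
    if d >= 0:   # n-type (or exactly compensated/intrinsic)
        n0 = d + math.sqrt(d * d + n_i * n_i)
        p0 = n_i * n_i / n0
    else:        # p-type
        p0 = -d + math.sqrt(d * d + n_i * n_i)
        n0 = n_i * n_i / p0
    return n0, p0
```

For $N_d=10^{16}~\text{cm}^{-3}$, $N_a=0$, and $n_i=1.5\times10^{10}~\text{cm}^{-3}$, this gives $n_0\approx10^{16}$ and $p_0\approx2.25\times10^{4}~\text{cm}^{-3}$.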
Incomplete ionization:
• When $T$ is very high, $n_i\gg N_d^+$. Therefore $n_0=n_i=\sqrt{N_cN_v}\exp\left(\dfrac{-E_g}{2kT}\right)$;
• When $T$ is not high:
$n_0=N_d^+=N_d\left(1-\frac{1}{1+\dfrac{1}{2}\exp\left(\dfrac{E_d-E_F}{kT}\right)}\right)$
$n_0=\frac{N_d}{1+2\exp\left(-\dfrac{E_d-E_F}{kT}\right)}$
$n_0=\frac{N_d}{1+2\exp\left(\dfrac{E_c-E_d}{kT}\right)\exp\left(\dfrac{E_F-E_c}{kT}\right)}$
$n_0=\frac{N_d}{1+2\exp\left(\dfrac{E_c-E_d}{kT}\right)\dfrac{n_0}{N_c}}$
$2\exp\left(\frac{E_c-E_d}{kT}\right)n_0^2+N_cn_0-N_dN_c=0$
Final conclusion:
$n_0=N_c\times\frac{-1+\sqrt{1+\dfrac{8N_d}{N_c}\exp\left(\dfrac{E_c-E_d}{kT}\right)}}{4\exp\left(\dfrac{E_c-E_d}{kT}\right)}.$
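This closed form is the positive root of the quadratic above and can be coded directly (an illustrative helper; the sample numbers in the usage note are assumed, not from the notes):

```python
import math

def n0_incomplete_ionization(N_c, N_d, E_ion, kT):
    """Positive root of 2*exp(E_ion/kT)*n0^2 + N_c*n0 - N_d*N_c = 0,
    where E_ion = E_c - E_d is the donor ionization energy [eV] and kT is
    the thermal energy [eV].  This is the closed form derived above."""
    x = math.exp(E_ion / kT)
    return N_c * (-1.0 + math.sqrt(1.0 + 8.0 * (N_d / N_c) * x)) / (4.0 * x)
```

For instance, with $N_c=2.8\times10^{19}$, $N_d=10^{16}~\text{cm}^{-3}$, $E_c-E_d=0.045$ eV, and $kT=0.0259$ eV, the result is slightly below $N_d$, i.e. most (but not all) donors are ionized; the ionized fraction increases with temperature.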
### Position of Fermi Energy Level
With respect to intrinsic Fermi energy level:
$n_0=n_i\exp\left(\dfrac{E_F-E_{Fi}}{kT}\right)\quad\Rightarrow\quad E_F-E_{Fi}=kT\ln\left(\frac{n_0}{n_i}\right)$
$p_0=n_i\exp\left(\dfrac{E_{Fi}-E_F}{kT}\right)\quad\Rightarrow\quad E_{Fi}-E_F=kT\ln\left(\frac{p_0}{n_i}\right)$
With respect to valence band energy level:
$p_0=N_v\exp\left(-\dfrac{E_F-E_v}{kT}\right)\quad\Rightarrow\quad E_F-E_v=kT\ln\left(\frac{N_v}{p_0}\right)$
• if we assume $N_a\gg n_i$, then $E_F-E_v=kT\ln\left(\dfrac{N_v}{N_a}\right)$
With respect to conduction band energy level:
$E_F-E_c=kT\ln\left(\frac{n_0}{N_c}\right).$
• Note. $n_0$ can be derived from complete ionization OR incomplete ionization formulas.
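The first pair of relations reduces to one helper, since the electron and hole forms are mirror images (a sketch; the default $kT$ is an assumed room-temperature convenience value):

```python
import math

def fermi_offset_eV(carrier_conc, n_i, kT=0.0259):
    """E_F - E_Fi = kT ln(n0/n_i) when called with the electron
    concentration; the same call with p0 instead gives E_Fi - E_F.
    Energies in eV, concentrations in any consistent unit."""
    return kT * math.log(carrier_conc / n_i)
```

For $n_0=10^{16}$ and $n_i=1.5\times10^{10}~\text{cm}^{-3}$ this places $E_F$ about 0.35 eV above $E_{Fi}$, and it returns 0 in the intrinsic case $n_0=n_i$.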
|
{}
|
# Two constructed triangles with common vertex have tangent circumcircles at this vertex.
Here's a complicated-sounding geometry problem, Any help would be appreciated :)
Let $\triangle ABC$ be an obtuse-angled triangle with circumcentre $O$, circumcircle $\Gamma$ and $\angle ABC > 90^\circ$. Let $AB$ intersect the line through $C$ perpendicular to $AC$ in $D$. Let $l$ be the line through $D$ perpendicular to $AO$, and let $E$ be the intersection of $l$ and $AC$, and let $F$ be the point between $D$ and $E$ where $l$ intersects $\Gamma$. Can you prove that the circumcircles of triangles $\triangle BFE$ and $\triangle CFD$ are tangent at $F$?
Thanks :D
• Well, I wasn't able to solve it and I have no more time today. So I'll list some simple observations which may or may not help anyone solving the problem. First, it seems that the centers of the two circles and $F$ are collinear on the line $AF$, proving this may be one possible approach to tackling the problem. Secondly, the triangles $\triangle ABC$, $\triangle AED$ and $\triangle BEC$ are all similar. Finally, $A$, $C$, $D$ and the intersection of $l$ with $AO$ are all concyclic on the circle with diameter $AD$. – EuYu Feb 24 '13 at 21:17
• Woah, that's much further than I got. Thanks for the hints, now maybe I can get something going! :D Thanks a lot!!! – BittersweetNostalgia Feb 24 '13 at 21:22
• Oh thanks for the edit, it looks cooler and simpler now. I haven't had a chance to look at it yet, but has anyone made any progress? – BittersweetNostalgia Feb 25 '13 at 13:32
Let $l$ intersect $AO$ at $H$, and $AO$ intersect the circle again at $P$. Since $AP$ is a diameter, $PC \perp AC$, so $P, C, D$ are collinear. Now the 2 altitudes $AC, DH$ of $\triangle APD$ intersect at $E$, so $E$ is the orthocenter of $\triangle APD$. Thus $PE \perp AB$. Since $AP$ is a diameter, $PB \perp AB$, so $P, E, B$ are collinear.
$$\angle{PFE}=\angle{PFH}=90^{\circ}-\angle{HPF}=90^{\circ}-\angle{APF}=\angle{PBD}-\angle{DBF}=\angle{PBF}=\angle{EBF}$$
$$\angle{PFC}=\angle{PAC}=90^{\circ}-\angle{APC}=90^{\circ}-\angle{HPD}=\angle{HDP}=\angle{FDC}$$
Thus $PF$ is tangent to both the circumcircle of $\triangle BFE$ and the circumcircle of $\triangle CFD$, so the 2 circumcircles are tangent at $F$.
Hint: The diagram looks somewhat like this. You can solve the problem using simple geometry, I suppose.
• Thanks for the diagram Inceptio. Could you please prove the question for me? To me the geometry isn't so simple :/ Thanks! – BittersweetNostalgia Feb 24 '13 at 10:54
• Well, I'm on to it. Will post the answer once done. – Inceptio Feb 24 '13 at 10:55
• Hey Inceptio, not rushing you or pushing you or anything, but if it's fine with you, I kinda need the answer quite soon... Could you maybe post a solution quite soon? Thanks... :P – BittersweetNostalgia Feb 24 '13 at 14:07
• I'm trying it since then bro. I'm ain't really getting it. I got what to prove . But I'm not getting how to start it. -_- – Inceptio Feb 24 '13 at 16:46
• @Inceptio "You can solve the problem using simple geometry, I suppose." But most IMO geometry questions can be solved using "simple geometry". That doesn't necessarily mean that the question is easy to solve. – Ivan Loh Mar 29 '13 at 8:46
|
{}
|
quantitykind:MassFractionOfWater
Type
Description
Properties
"Mass Fraction of Water} is one of a number of \textit{Concentration" quantities defined by ISO 8000.
$$w_{H_2O} = \frac{u}{1+u}$$, where $$u$$ is the mass ratio of water to dry matter.
w_{H_2O}
Annotations
Mass Fraction of Water(en)
|
{}
|
# How to get the structural formula of C3H6Cl2?
chocolatte009911
please post homework questions in the homework forum and fill the template
The molecular formula is C3H6Cl2; what is the structural formula for it?
Mentor
Show what you get when you try to draw the molecule.
|
{}
|
### Home > MC2 > Chapter 8 > Lesson 8.2.2 > Problem8-47
8-47.
Using the order of operations, first simplify the terms in parentheses,
then simplify the terms with exponents.
$\frac{25}{64}$
Follow the order of operations. First simplify any terms in parentheses, then any exponents, then terms being multiplied or divided, and lastly, combine any terms that are being added or subtracted.
See part (b).
$2 \frac{10}{14}$
Find a common denominator and combine.
$-\frac{164}{120}\text{ or } -1\frac{11}{30}$
See part (b).
$-\frac{3}{8}$
See part (d).
|
{}
|
# Let $A$ be a real symmetric matrix with rank $1$ , then can all the diagonal entries of $A$ be $0$ ?
Let $A$ be a square real symmetric matrix with rank $1$ , then can all the diagonal entries of $A$ be $0$ ? I know that real symmetric matrices are diagonalizable . Also if all the diagonal entries be $0$ then sum of all the eigenvalues will be $0$ . But so what ? Please help . Thanks in advance
• What does "rank $1$" tell you about the matrix? – Daniel Fischer Feb 6 '16 at 11:41
• If an $n \times n$ Matrix has rank 1, what is the dimension of its kernel? What are the dimensions of the eigenspace for the eigenvalue $0$? Are there other eigenvalues? – Roland Feb 6 '16 at 11:42
• @Roland : yes , but what does the dimension of the kernel tell me ? – user228169 Feb 6 '16 at 11:57
• No, rank $1$ means that the dimension of the range is $1$, and hence the kernel has dimension $n-1$. Not every (nonzero) vector is an eigenvector if $n > 1$. So there are $n-1$ eigenvalues $0$, and one nonzero eigenvalue. – Daniel Fischer Feb 6 '16 at 12:04
• @DanielFischer : ah yes , yes right . – user228169 Feb 6 '16 at 12:06
The quick answer to your question: note that the only diagonalizable matrix whose eigenvalues are all $0$ is the zero-matrix, and that a rank $1$ matrix can have at most one non-zero eigenvalue.
Another approach:
Note that any rank $1$ matrix can be written in the form $uv^T$ for column vectors $u,v$, and that a rank-$1$ matrix is symmetric if and only if it can be written in the form $\pm uu^T$ for some nonzero vector $u$.
Now, if $A = \pm uu^T$, then the diagonal entries are given by $A_{ii} = \pm u_i^2$, which cannot all vanish unless $u = 0$.
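A quick numeric illustration of this (a sketch in plain Python; the helper names are mine): for $A=uu^T$ the diagonal entries are the squares $u_i^2$, so they all vanish only when $u=0$, i.e. when $A$ is the zero matrix, contradicting rank $1$.

```python
def outer(u):
    """Build the rank-1 symmetric matrix A = u u^T as a list of rows."""
    return [[ui * uj for uj in u] for ui in u]

def diagonal(A):
    """Diagonal entries A_ii."""
    return [row[i] for i, row in enumerate(A)]
```

For $u=(1,-2,3)$ the diagonal of $uu^T$ is $(1,4,9)$: nonnegative, and all zero only for the zero vector.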
The matrix is diagonalizable by the spectral theorem. Indeed, the dimension of the eigenspace for the eigenvalue $0$ is $n-1$. Note that the eigenspace for the eigenvalue $0$ is the kernel of the map.
• This looks like a hint, rather than an answer. (Namely it's missing that the trace is the sum of the diagonal elements of $A$ and the sum of the eigenvalues of $A$. If $\lambda$ is the nonzero eigenvalue, the latter is $(n-1)*0 + 1*\lambda$, thus the sum of the diagonal elements is equal to $\lambda \neq 0$, regardless if $A$ is in diagonal form or not. – Roland Feb 6 '16 at 12:47
• @Roland yes..... it's a little hint – Domenico Vuono Feb 6 '16 at 12:49
|
{}
|
# How do you factor 25a^2-36b^2?
Using the identity ${a}^{2} - {b}^{2} = \left(a - b\right) \cdot \left(a + b\right)$ we have that
$25 {a}^{2} - 36 {b}^{2} = {\left(5 a\right)}^{2} - {\left(6 b\right)}^{2} = \left(5 a - 6 b\right) \cdot \left(5 a + 6 b\right)$
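As a sanity check of the factorization (an illustrative sketch, with function names of my choosing), both sides agree for any sample integer values of $a$ and $b$:

```python
def lhs(a, b):
    """Original expression 25a^2 - 36b^2."""
    return 25 * a**2 - 36 * b**2

def rhs(a, b):
    """Factored form (5a - 6b)(5a + 6b)."""
    return (5 * a - 6 * b) * (5 * a + 6 * b)
```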
|
{}
|
Q 46.
Expert-verified
Found in: Page 809
### Precalculus Enhanced with Graphing Utilities
Book edition 6th
Author(s) Sullivan
Pages 1200 pages
ISBN 9780321795465
# In Problems 37–50, a sequence is defined recursively. Write down the first five terms.${a}_{1}=-1;{a}_{2}=1;{a}_{n}={a}_{n-2}+n{a}_{n-1}$
The first five terms of the recursively defined sequence are $-1,1,2,9,\mathrm{and}47$.
See the step by step solution
## Step 1. Write the given information.
The given recursively defined sequence is:
${a}_{1}=-1\phantom{\rule{0ex}{0ex}}{a}_{2}=1\phantom{\rule{0ex}{0ex}}{a}_{n}={a}_{n-2}+n{a}_{n-1}$
## Step 2. Determine the first and second terms and find out the third term.
The first term is $\left({a}_{1}\right)=-1$.
The second term is $\left({a}_{2}\right)=1$
Now substitute 3 for n in the given formula ${a}_{n}={a}_{n-2}+n{a}_{n-1}$ to get the third term,
${a}_{3}={a}_{3-2}+3{a}_{3-1}\phantom{\rule{0ex}{0ex}}={a}_{1}+3{a}_{2}\phantom{\rule{0ex}{0ex}}=\left(-1\right)+3\left(1\right)\phantom{\rule{0ex}{0ex}}=2$
## Step 3. Find the 4th and 5th terms.
Similarly substitute 4 and 5 for n in the given formula to get the fourth and fifth term,
${a}_{4}={a}_{4-2}+4{a}_{4-1}\phantom{\rule{0ex}{0ex}}={a}_{2}+4{a}_{3}\phantom{\rule{0ex}{0ex}}=1+4\left(2\right)\phantom{\rule{0ex}{0ex}}=9\phantom{\rule{0ex}{0ex}}{a}_{5}={a}_{5-2}+5{a}_{5-1}\phantom{\rule{0ex}{0ex}}={a}_{3}+5{a}_{4}\phantom{\rule{0ex}{0ex}}=2+5\left(9\right)\phantom{\rule{0ex}{0ex}}=47$
Therefore, the first five terms are $-1,1,2,9,47$.
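The hand computation above can be verified with a short loop (an illustrative sketch, not part of the textbook solution):

```python
def sequence_terms(count):
    """First `count` terms of a1 = -1, a2 = 1, a_n = a_{n-2} + n*a_{n-1}."""
    terms = [-1, 1]
    for n in range(3, count + 1):
        # terms[n-3] is a_{n-2}, terms[n-2] is a_{n-1} (0-based list)
        terms.append(terms[n - 3] + n * terms[n - 2])
    return terms[:count]
```

Calling it with `count=5` reproduces the terms worked out in Steps 2 and 3.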
|
{}
|
The horizontal line test answers the question "does a function have an inverse?" It states that a function has an inverse function if and only if every horizontal line intersects its graph at most once. If a horizontal line intersects the graph at more than one point, the function is not one-to-one and does not have an inverse function; a function has an inverse function only if it is one-to-one. (A curve is a one-to-one function if it passes both the vertical line test, which confirms it is a function at all, and the horizontal line test.)

Example #1: Use the horizontal line test to determine whether the function y = x², graphed below, is invertible. Horizontal lines above the x-axis each intersect the parabola at two points, so with all real x values this is NOT an invertible function: if you did the horizontal line test with the graph, you'd know there's no inverse function as it stands. However, a function that is not one-to-one initially can still have an inverse function if we sufficiently restrict the domain — the x values that can go into the function. If we alter the situation slightly and look for an inverse to the function x² with domain only x > 0, the graph is one-to-one on that side of the vertex, every horizontal line intersects it at most once, and the function becomes invertible.
The inverse of this restricted function is f⁻¹(x) = +√x. We choose +√x instead of −√x because the range of an inverse function — the values coming out — is the same as the domain of the original function, which we set up as x > 0; only the + sign complies with this. In general, if (x, y) is a point on the graph of the original function, then (y, x) is a point on the graph of the inverse function, so the two graphs are reflections of each other in the line y = x.

As a quick contrast: f(x) = x² + 1 is not one-to-one, since (2)² + 1 = 5 and (−2)² + 1 = 5, while f(x) = 2x − 1 passes the horizontal line test and does have an inverse. For quadratics, an effective method for finding the inverse is the quadratic formula: set y equal to the function, rearrange into the form ax² + bx + c = 0, and solve for x.

A side note on definitions: strictly speaking, invertibility depends on the codomain as well as the rule. The mapping sin: ℝ → ℝ is not invertible — it fails the horizontal line test, and there are also elements of the codomain that are not in the range. To be really pedantic, there should be two functions: sin(x) with domain the reals, and Sin(x), a principal branch with domain (−π/2, π/2), which is one-to-one and does have an inverse. This might seem like splitting hairs, but it's a matter of precise language and correct mathematical thinking.
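The horizontal line test can also be approximated numerically: sample the function and check whether any output value is produced by more than one input. This is only a rough sketch (the function names and sample grids are mine, and sampling can miss crossings between sample points), but it mirrors the graphical test:

```python
def passes_horizontal_line_test(f, xs, tol=1e-9):
    """Return True if no two sample points share (approximately) the
    same output value, i.e. f looks one-to-one on this sample."""
    ys = sorted(f(x) for x in xs)
    return all(abs(b - a) > tol for a, b in zip(ys, ys[1:]))

xs_all = [x / 10 for x in range(-50, 51)]  # samples over [-5, 5]
xs_pos = [x / 10 for x in range(1, 51)]    # samples over (0, 5]

print(passes_horizontal_line_test(lambda x: x * x, xs_all))  # False: x^2 fails on all reals
print(passes_horizontal_line_test(lambda x: x * x, xs_pos))  # True: x^2 passes on x > 0
```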
To obtain the domain and the range of an inverse function, we switch around the domain and range of the original function.

Example #2: Find the inverse of f(x) = x² + 4, x < 0. The range of this function is y > 4, so the domain of the inverse will be x > 4, and since the original domain is x < 0, the inverse f⁻¹(x) = −√(x − 4) has range y < 0.

Example #3: Find the inverse of f(x) = x² + 4x − 1, x > −2. The graph is a parabola that is one-to-one on each side of its vertex at x = −2, so with the restricted domain x > −2 it passes the horizontal line test and is invertible. Set y = x² + 4x − 1 and rearrange to x² + 4x − (1 + y) = 0; now we have the form ax² + bx + c = 0, and the quadratic formula solves for x. We keep the + sign, ensuring that f⁻¹(x) produces values > −2 — the range of the inverse function has to correspond with the domain of the original function, here x > −2. The domain of the inverse also needs to be slightly restricted, to x > −5, matching the range of the original function. Plotting both graphs on the same axis shows them, as expected, reflected in the line y = x.

Example #4: Find the inverse of y = 2x − 5. Change f(x) to y, switch x and y to get x = 2y − 5, then solve for y by adding 5 to each side and dividing each side by 2: f⁻¹(x) = (x + 5)/2. Inverses can also be checked using compositions of functions: f and g are inverses when f(g(x)) = x and g(f(x)) = x.

Note that the horizontal line test can get a little tricky for specific functions, and remember that it is very possible for a function to have an inverse relation that is not itself a function — the inverse fails the vertical line test precisely when the original function fails the horizontal line test.
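The quadratic-formula inversion of f(x) = x² + 4x − 1 on x > −2 can be checked numerically. Solving x² + 4x − (1 + y) = 0 and keeping the + root gives f⁻¹(y) = −2 + √(y + 5); this short sketch (the function names are mine) verifies the round trip:

```python
import math

def f(x):
    return x * x + 4 * x - 1       # original function, domain x > -2

def f_inv(y):
    # Quadratic formula on x^2 + 4x - (1 + y) = 0, keeping the + root
    # so that outputs stay in the original domain x > -2.
    return -2 + math.sqrt(y + 5)   # domain of the inverse: y > -5

# Round trip: each composition should return its input.
print(f_inv(f(3.0)))   # 3.0
print(f(f_inv(4.0)))   # 4.0
```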
Game Loop
The game loop is the heartbeat of every game; no game can run without it. But unfortunately for every new game programmer, there aren't any good articles on the internet that provide the proper information on this topic. But fear not, because you have just stumbled upon the one and only article that gives the game loop the attention it deserves. Thanks to my job as a game programmer, I come into contact with a lot of code for small mobile games. And it always amazes me how many game loop implementations are out there. You might wonder how a simple thing like that can be written in so many different ways. Well, it can, and I will discuss the pros and cons of the most popular implementations, and give you the (in my opinion) best solution for implementing a game loop. (Thanks to Kao Cardoso Félix this article is also available in Brazilian Portuguese.)
The Game Loop
Every game consists of a sequence of getting user input, updating the game state, handling AI, playing music and sound effects, and displaying the game. This sequence is handled through the game loop. Just like I said in the introduction, the game loop is the heartbeat of every game. In this article I will not go into detail on any of the above mentioned tasks, but will concentrate on the game loop alone. That's also why I simplified the tasks to only 2 functions: updating the game and displaying it. Here is some example code of the game loop in its simplest form:
bool game_is_running = true;

while( game_is_running ) {
    update_game();
    display_game();
}
The problem with this simple loop is that it doesn't handle time: the game just runs. On slower hardware the game runs slower, and on faster hardware faster. Back in the old days when the speed of the hardware was known, this wasn't a problem, but nowadays there are so many hardware platforms out there that we have to implement some sort of time handling. There are many ways to do this, and I'll discuss them in the following sections. First, let me explain 2 terms that are used throughout this article:
FPS
FPS is an abbreviation for Frames Per Second. In the context of the above implementation, it is the number of times display_game() is called per second.
Game Speed
Game Speed is the number of times the game state gets updated per second, or in other words, the number of times update_game() is called per second.
FPS dependent on Constant Game Speed
Implementation
An easy solution to the timing issue is to just let the game run on a steady 25 frames per second. The code then looks like this:
const int FRAMES_PER_SECOND = 25;
const int SKIP_TICKS = 1000 / FRAMES_PER_SECOND;

DWORD next_game_tick = GetTickCount();
// GetTickCount() returns the current number of milliseconds
// that have elapsed since the system was started

int sleep_time = 0;

bool game_is_running = true;
while( game_is_running ) {
    update_game();
    display_game();

    next_game_tick += SKIP_TICKS;
    sleep_time = next_game_tick - GetTickCount();
    if( sleep_time >= 0 ) {
        Sleep( sleep_time );
    }
    else {
        // Shit, we are running behind!
    }
}
This is a solution with one huge benefit: it's simple! Since you know that update_game() gets called 25 times per second, writing your game code is quite straightforward. For example, implementing a replay function in this kind of game loop is easy. If no random values are used in the game, you can just log the input changes of the user and replay them later. On your testing hardware you can adapt FRAMES_PER_SECOND to an ideal value, but what will happen on faster or slower hardware? Well, let's find out.
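For illustration, here is the same constant-game-speed idea sketched in Python, using time.monotonic() in place of GetTickCount(). The stub functions and the bounded frame count are my additions so that the loop terminates; a real game would loop until game_is_running goes false:

```python
import time

FRAMES_PER_SECOND = 25
SKIP_SECONDS = 1.0 / FRAMES_PER_SECOND

updates = 0

def update_game():
    global updates
    updates += 1          # game-state update stub

def display_game():
    pass                  # rendering stub

next_game_tick = time.monotonic()
for _ in range(10):       # bounded stand-in for "while game_is_running"
    update_game()
    display_game()

    next_game_tick += SKIP_SECONDS
    sleep_time = next_game_tick - time.monotonic()
    if sleep_time > 0:
        time.sleep(sleep_time)
    # else: we are running behind — skip sleeping and catch up

print(updates)  # 10
```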
Slow hardware
If the hardware can handle the defined FPS, no problem. But the problems start when the hardware can't handle it. The game will run slower. In the worst case the game has some heavy chunks where it runs really slowly, and some chunks where it runs normally. The timing becomes variable, which can make your game unplayable.
Fast hardware
The game will have no problems on fast hardware, but you are wasting so many precious clock cycles. Running a game at 25 or 30 FPS when it could easily do 300 FPS… shame on you! You will lose a lot of visual appeal this way, especially with fast moving objects. On the other hand, with mobile devices this can be seen as a benefit: not letting the game constantly run at its limit can save some battery life.
Conclusion
Making the FPS dependent on a constant game speed is a solution that is quickly implemented and keeps the game code simple. But there are some problems: Defining a high FPS will pose problems on slower hardware, and defining a low FPS will waste visual appeal on fast hardware.
Game Speed dependent on Variable FPS
Implementation
Another implementation of a game loop is to let it run as fast as possible, and let the FPS dictate the game speed. The game is updated with the time difference since the previous frame.
DWORD prev_frame_tick;
DWORD curr_frame_tick = GetTickCount();

bool game_is_running = true;
while( game_is_running ) {
    prev_frame_tick = curr_frame_tick;
    curr_frame_tick = GetTickCount();
    update_game( curr_frame_tick - prev_frame_tick );
    display_game();
}
The game code becomes a bit more complicated because we now have to take the time difference into account in the update_game() function. But still, it's not that hard. At first sight this looks like the ideal solution to our problem. I have seen many smart programmers implement this kind of game loop. Some of them probably wish they had read this article before they implemented their loop. I will show you in a minute that this loop can have serious problems on both slow and fast (yes, FAST!) hardware.
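As a sketch of what update_game( delta ) looks like on the receiving end, here is a Python version where movement scales with the frame time. The speed constant and the bounded loop are assumptions for illustration:

```python
import time

position = 0.0
SPEED = 120.0   # units per second — an assumed example value

def update_game(delta_seconds):
    """Advance the game state by the elapsed wall-clock time."""
    global position
    position += SPEED * delta_seconds   # movement scales with frame time

prev_frame_tick = time.monotonic()
for _ in range(100):                    # bounded stand-in for the real loop
    curr_frame_tick = time.monotonic()
    update_game(curr_frame_tick - prev_frame_tick)
    prev_frame_tick = curr_frame_tick

# position now depends on elapsed time, not on how many frames ran
print(position >= 0.0)  # True
```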
Slow Hardware
Slow hardware can sometimes cause delays at points where the game gets "heavy". This can definitely occur in a 3D game, when too many polygons are shown at once. This drop in frame rate will affect the input response time, and therefore also the player's reaction time. The updating of the game will also feel the delay, and the game state will be updated in big time-chunks. As a result the reaction time of the player, and also that of the AI, will slow down, which can make a simple maneuver fail or even become impossible. For example, an obstacle that could be avoided with a normal FPS can become impossible to avoid with a slow FPS. A more serious problem with slow hardware is that when using physics, your simulation can even explode!
Fast Hardware
You are probably wondering how the above game loop can go wrong on fast hardware. Unfortunately, it can, and to show you, let me first explain something about math on a computer. The memory space of a float or double value is limited, so some values cannot be represented exactly. For example, 0.1 cannot be represented exactly in binary, and is therefore rounded when stored in a double. Let me show you using Python:
>>> 0.1
0.10000000000000001
This itself is not dramatic, but the consequences are. Let’s say you have a race-car that has a speed of 0.001 units per millisecond. After 10 seconds your race-car will have traveled a distance of 10.0. If you split this calculation up like a game would do, you have the following function using frames per second as input:
>>> def get_distance( fps ):
...     skip_ticks = 1000 / fps
...     total_ticks = 0
...     distance = 0.0
...     speed_per_tick = 0.001
...     while total_ticks < 10000:
...         distance += speed_per_tick * skip_ticks
...         total_ticks += skip_ticks
...     return distance
Now we can calculate the distance at 40 frames per second:
>>> get_distance( 40 )
10.000000000000075
Wait a minute… this is not 10.0??? What happened? Well, because we split the calculation up into 400 additions, the rounding errors accumulated. I wonder what will happen at 100 frames per second…
>>> get_distance( 100 )
9.9999999999998312
What??? The error is even bigger!! Well, because we have more additions at 100 fps, the rounding error has more chances to accumulate. So the game will differ when running at 40 or 100 frames per second:
>>> get_distance( 40 ) - get_distance( 100 )
2.4336088699783431e-13
You might think that this difference is too small to be seen in the game itself. But the real problem starts when you use this slightly incorrect value for further calculations. That way a small error can grow large and mess up your game at high frame rates. Chances of that happening? Big enough to consider it! I have seen a game that used this kind of game loop, and which indeed gave trouble at high frame rates. After the programmer found out that the problem was hiding in the core of the game, only a lot of code rewriting could fix it.
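One common remedy (a sketch of mine, not from the article; the function name is made up) is to keep time in integer ticks and derive the position from the total elapsed time with a single multiplication, instead of accumulating hundreds of small floating-point additions. The result is then identical at any frame rate:

```python
# Sketch: derive distance from total elapsed ticks instead of
# accumulating per-frame floating-point deltas.
def get_distance_stable(fps):
    skip_ticks = 1000 // fps       # integer tick step, no float involved
    total_ticks = 0
    while total_ticks < 10000:
        total_ticks += skip_ticks
    speed_per_tick = 0.001
    return speed_per_tick * total_ticks  # one multiplication, no drift

print(get_distance_stable(40) == get_distance_stable(100))  # True
```

Since both runs end at the same integer tick count, the single multiplication gives bit-identical results regardless of FPS.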
Conclusion
This kind of game loop may seem very good at first sight, but don’t be fooled. Both slow and fast hardware can cause serious problems for your game. And besides, the implementation of the game update function is harder than when you use a fixed frame rate, so why use it?
Constant Game Speed with Maximum FPS
Implementation
Our first solution, FPS dependent on Constant Game Speed, has a problem when running on slow hardware: both the game speed and the framerate drop in that case. A possible solution is to keep updating the game at the same rate, but reduce the rendering framerate. This can be done using the following game loop:
const int TICKS_PER_SECOND = 50;
const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
const int MAX_FRAMESKIP = 10;
DWORD next_game_tick = GetTickCount();
int loops;
bool game_is_running = true;
while( game_is_running ) {
    loops = 0;
    while( GetTickCount() > next_game_tick && loops < MAX_FRAMESKIP) {
        update_game();
        next_game_tick += SKIP_TICKS;
        loops++;
    }
    display_game();
}
The game will be updated at a steady 50 times per second, and rendering is done as fast as possible. Remark that when rendering is done more than 50 times per second, some subsequent frames will be identical, so actual visual frames will be displayed at a maximum of 50 frames per second. When running on slow hardware, the framerate can drop until the game update loop reaches MAX_FRAMESKIP. In practice this means that when our render FPS drops below 5 (= TICKS_PER_SECOND / MAX_FRAMESKIP), the actual game will slow down.
Slow hardware
On slow hardware the frames per second will drop, but the game itself will hopefully run at the normal speed. If the hardware still can’t handle this, the game itself will run slower and the framerate will not be smooth at all.
Fast hardware
The game will have no problems on fast hardware, but like the first solution, you are wasting so many precious clock cycles that can be used for a higher framerate. Finding the balance between a fast update rate and being able to run on slow hardware is crucial.
Conclusion
Using a constant game speed with a maximum FPS is a solution that is easy to implement and keeps the game code simple. But there are still some problems: Defining a high FPS can still pose problems on slow hardware (but not as severe as the first solution), and defining a low FPS will waste visual appeal on fast hardware.
Constant Game Speed independent of Variable FPS
Implementation
Would it be possible to improve the above solution even further, to run faster on slow hardware and be visually more attractive on fast hardware? Well, lucky for us, this is possible. The game state itself doesn't need to be updated 60 times per second. Player input, AI and the updating of the game state are fine at 25 updates per second. So let's try to call update_game() 25 times per second, no more, no less. The rendering, on the other hand, needs to be as fast as the hardware can handle. But a slow frame rate shouldn't interfere with the updating of the game. The way to achieve this is by using the following game loop:
const int TICKS_PER_SECOND = 25;
const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
const int MAX_FRAMESKIP = 5;
DWORD next_game_tick = GetTickCount();
int loops;
float interpolation;
bool game_is_running = true;
while( game_is_running ) {
    loops = 0;
    while( GetTickCount() > next_game_tick && loops < MAX_FRAMESKIP) {
        update_game();
        next_game_tick += SKIP_TICKS;
        loops++;
    }
    interpolation = float( GetTickCount() + SKIP_TICKS - next_game_tick )
                    / float( SKIP_TICKS );
    display_game( interpolation );
}
With this kind of game loop, the implementation of update_game() stays easy. But unfortunately, the display_game() function gets more complex: you will have to implement a prediction function that takes the interpolation as an argument. But don't worry, this isn't hard, it just takes a bit more work. I'll explain below how this interpolation and prediction work, but first let me show you why they are needed.
The Need for Interpolation
The game state gets updated 25 times per second, so if you don't use interpolation in your rendering, frames will also be displayed at this rate. Remark that 25 fps isn't as slow as some people think; movies, for example, run at 24 frames per second. So 25 fps should be enough for a visually pleasing experience, but for fast-moving objects we can still see an improvement at higher frame rates. So what we can do is make fast movements smoother in between the game updates. And this is where interpolation and a prediction function can provide a solution.
Interpolation and Prediction
Like I said, the game code runs at its own update rate, so when you draw/render your frames, it is possible that you are in between 2 game ticks. Let's say you have just updated your game state for the 10th time, and now you are going to render the scene. This render will be in between the 10th and 11th game update, so it might be at about game time 10.3. The 'interpolation' value then holds 0.3. Take this example: I have a car that moves every game tick like this:
position = position + speed;
If in the 10th game tick the position is 500, and the speed is 100, then in the 11th game tick the position will be 600. So where will you place your car when you render it? You could just take the position of the last game tick (in this case 500). But a better way is to predict where the car would be at exactly 10.3, and this happens like this:
view_position = position + (speed * interpolation)
The car will then be rendered at position 530. So basically the interpolation variable contains the value that is in between the previous game tick and the next one (previous = 0.0, next = 1.0). What you then have to do is make a "prediction" function for where the car/camera/… will be placed at render time. You can base this prediction function on the speed of the object, its steering, or its rotation speed. It doesn't need to be complicated, because we only use it to smooth things out in between the game ticks. It is indeed possible that an object gets rendered into another object right before a collision gets detected. But as we have seen before, the game is updated 25 times per second, so when this happens, the error is only shown for a fraction of a second, hardly noticeable to the human eye.
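The prediction above can be condensed into a few lines of Python (my sketch; the function name is made up, the formula is the one from the article):

```python
# Predict the render position between two game ticks
# (names are illustrative, not from the article)
def view_position(position, speed, interpolation):
    return position + speed * interpolation

# The car from the example: position 500, speed 100 per tick, rendered at tick 10.3
print(view_position(500, 100, 0.3))  # approximately 530
```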
Slow Hardware
In most cases, update_game() will take far less time than display_game(). In fact, we can assume that even on slow hardware the update_game() function can run 25 times per second. So our game will handle player input and update the game state without much trouble, even if the game will only display 15 frames per second.
Fast Hardware
On fast hardware, the game will still run at a constant pace of 25 updates per second, but the updating of the screen will be much faster than that. The interpolation/prediction method creates the visual impression that the game is running at a high frame rate. The good thing is that you kind of cheat with your FPS: because you don't update your game state every frame, only the visualization, your game will have a higher FPS than with the second method I described.
Conclusion
Making the game state independent of the FPS seems to be the best implementation for a game loop. However, you will have to implement a prediction function in display_game(), but this isn’t that hard to achieve.
Overall Conclusion
A game loop has more to it than you might think. We've reviewed 4 possible implementations, and there is one of them you should definitely avoid: the one where a variable FPS dictates the game speed. A constant frame rate can be a good and simple solution for mobile devices, but when you want to get everything the hardware has got, it is best to use a game loop where the FPS is completely independent of the game speed, using a prediction function for high framerates. If you don't want to bother with a prediction function, you can work with a maximum frame rate, but finding the right game update rate for both slow and fast hardware can be tricky.
Now go and start coding that fantastic game you are thinking of!
Koen Witters
If you enjoy making games like I do, then subscribe to this blog and/or follow me on twitter!
# Tag Info
9
You don't give any real details on the system, but I can make a guess from your scaling that either the code doesn't properly support openmp (which is unlikely) or your system is way too small to see a benefit. I suspect if you use a bigger system (that can't finish in about 10 seconds on a single core) you will see an improvement. Also keep in mind a ...
7
I think that there should be three considerations (and of course I could be wrong). If you have not compiled the software to work in parallel, specifying multiple (N number of) threads sometimes is said to cause the same job to run in serial mode but N times. Even though you have a multi-core calculation, the actual work being done may not really need more ...
6
This error has been resolved now. Though I am not an expert, here are a few thoughts. There may be several reasons for this error: it might appear due to numerical instability from overlapping atoms, or, as mentioned by @Phil Hasnip, if the S-matrix eigenvalues are really small. Some pseudopotentials may not fit the calculation, USPP giving non-...
6
Have you already tried the Pw_forum? I do not use QE and so cannot directly answer your question. However, I'd say that this is one of those things where you are better off finding either a tutorial (in places such as YouTube) or someone you know with experience in compiling software for linux (just to make sure that your compilation is successful). That ...
4
The source of the mistake is assuming one atom in the primitive cell of the hexagonal system. Actually there are two atoms; the atomic positions are:
ATOMIC_POSITIONS {alat}
Hf 0.000000000 0.000000000 0.000000000
Hf 0.666666667 0.333333333 0.790000000
4
Executables are soft-linked in the ./bin folder of the parent directory. If you did not compile, please follow these steps:
make hp
cd bin
After that, run the command:
./hp.x
If you are accessing the executable from the QE input file folder, then you have to specify the whole path of the executable:
/path-to-qe-folder/bin/hp.x
The best practice is to export the ...
4
ibrav = 2 in Quantum Espresso gives an fcc Bravais lattice, as mentioned in the answer by Tyberius, with the lattice vectors:
a(1) = ( -0.500000 0.000000 0.500000 )
a(2) = ( 0.000000 0.500000 0.500000 )
a(3) = ( -0.500000 0.500000 0.000000 )
With an fcc Bravais lattice, the primitive cell for fcc structures contains one atom. The cubic, ...
3
Please go through the user guide. At the bottom, it has some nice suggestions. Check this mail thread also. Does the program stop because too many errors have not converged, or does the calculation keep going? If the calculation did not stop and keeps going, then you can ignore these warnings. Please do some convergence tests before you proceed for vc-...
2
The energy should be variational with regard to encut, but this is not something to expect for kpoints. However, normally this results in the opposite of what you are observing with the kpoints showing some sort of oscillation. The result you see is due to numerical noise in the encut convergence calculations. Try to tighten the electronic cutoff and you ...
1
With the help of @Tyberius and @BrandonBocklund, I did the calculation of Cu in an FCC lattice. The calculated lattice constant is $a = 3.62613952\,\text{Å}$, whereas the experimental value is $a = 3.6149\,\text{Å}$. The major source of confusion is that I interpreted the question as saying the FCC primitive cell has 4 atoms. &CONTROL calculation = 'vc-relax' , ...
# derivative of a map of vector space of matrices
Question:
Let $A_{n\times n}$ be the vector space of all real $n\times n$ matrices. If I define a map $$g:A_{n\times n}\rightarrow A_{n\times n}$$ such that: $$g\left ( X \right )=X^{2}$$
In this case, what is the derivative of $g$?
I am wondering whether the formula $g'\left ( X \right )=2XX{'}$ works. But I have no idea how the derivative of a matrix-valued map is defined. Please help?
@Arturo Magidin: I meant $X^{'}$. – M.Krov Mar 2 '12 at 5:48
Is g really linear? – NKS Mar 2 '12 at 5:49
@NKS: Of course not; lapsus on my part. – Arturo Magidin Mar 2 '12 at 5:50
@m_p2009: How are you even defining the derivative of $g$ in the first place? Before you can try to come up with a formula, what is the definition? – Arturo Magidin Mar 2 '12 at 5:51
Are you taking $X$ to be a matrix-valued function $(a,b)\to A_{n\times n}$ so that we have $$\frac{d}{dt}g(X(t))=2X(t)X'(t)\;\; ?$$ – anon Mar 2 '12 at 6:45
If we consider $g$ as a function from $\mathbb{R}^{n^2}\to \mathbb{R}^{n^2}$, then its derivative makes sense and the procedure to find the derivative goes in the usual way.
So, find $g(X+H)-g(X)$ and recognize the linear term and the remainder term which when divided by $\|H\|$ goes to $\bf{0}$ as $H\to \bf{0}$.
You would see that for any matrix $X\in A_{n\times n}$, we have $g'(X)H=XH+HX$.
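A quick numerical sanity check of this answer (my own sketch, with pure-Python matrix helpers): since $g(X+tH)-g(X)-\big(X(tH)+(tH)X\big)=t^2H^2$, the remainder should shrink quadratically as $t\to 0$:

```python
# Hand-rolled 2x2 matrix helpers (illustrative, not from the answer)
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(A, t):
    return [[t * a for a in row] for row in A]

X = [[1.0, 2.0], [3.0, 4.0]]
H = [[0.5, -0.25], [0.125, 0.75]]

def remainder(t):
    """Largest entry of g(X+tH) - g(X) - (X(tH) + (tH)X)."""
    Ht = scale(H, t)
    gXtH = matmul(matadd(X, Ht), matadd(X, Ht))
    gX = matmul(X, X)
    lin = matadd(matmul(X, Ht), matmul(Ht, X))
    diff = matadd(gXtH, scale(matadd(gX, lin), -1.0))
    return max(abs(v) for row in diff for v in row)

# Shrinking t by a factor of 10 shrinks the remainder by about 100 (quadratic)
print(remainder(1e-2) / remainder(1e-3))  # approximately 100
```

This is exactly the defining property of the derivative being the linear map $H \mapsto XH + HX$.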
Again, I can only write here, since I don't have enough points.
As Ashok said, you may consider your map as a map from $\mathbb R^{n^2}$ to itself; or your matrices can be regarded as elements of a Fréchet space, whereby we can define the Gâteaux or Fréchet derivative as the linear map that best describes the change of the function. That is, the derivative is the linear map $L$ such that:
$$\lim_{\|h\|\to 0}\frac{\|f(x+h)-f(x)-L(x,h)\|_2}{\|h\|_1}=0.$$
# XY is a line parallel to side BC of a triangle ABC. If BE || AC and CF || AB meet XY at E and F respectively, show that ar (ABE) = ar (ACF)
Q: 8 XY is a line parallel to side BC of a triangle ABC. If $\small BE\parallel AC$ and $\small CF\parallel AB$ meet XY at E and F respectively, show that
$\small ar(ABE)=ar(ACF)$
We have a $\Delta$ABC such that BE || AC and CF || AB
Since XY || BC and BE || CY
Therefore, BEYC is a ||gm
Now, the ||gm BEYC and $\Delta$ABE are on the same base BE and between the same parallels AC and BE.
$\therefore$ ar($\Delta$AEB) = 1/2 .ar(||gm BEYC)..........(i)
Similarly, ar($\Delta$ACF) = 1/2 . ar(||gm BCFX)..................(ii)
Also, ||gm BEYC and ||gmBCFX are on the same base BC and between the same parallels BC and EF.
$\therefore$ ar (BEYC) = ar (BCFX).........(iii)
From eq (i), (ii) and (iii), we get
ar($\Delta$ ABE) = ar($\Delta$ACF)
Hence proved
## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE
Sender: Mailing list for the LaTeX3 project <[log in to unmask]>
Date: Sun, 23 Apr 2017 11:37:19 +0200
From: Frank Mittelbach <[log in to unmask]>
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>
Content-Type: text/plain; charset=utf-8; format=flowed
Parts/Attachments: text/plain (60 lines)

On 23.04.17 at 10:50, Benedikt Vitecek wrote:
> This is only a rough idea and I would need to figure out how to implement it, but it could solve (at least for me) the category code problem.

as I expected, babel does have an option to enable the shorthands already in the preamble: [KeepShorthandsActive]

however, as I also expected, that doesn't work in all cases, as packages loaded afterwards may have a problem with it. So

\usepackage{xparse}
\usepackage[KeepShorthandsActive]{babel}

solves your problem, but if babel is loaded before xparse, that will break.

On the other hand, what might be enough in your case is simply changing the catcode of < and > before making your definition, since we know that they will be active by the time of \begin{document} (i.e., when your definition is used):

\documentclass[spanish]{scrartcl}
\usepackage{xparse}
\usepackage{babel}
\catcode`\< = 13
\catcode`\> = 13
\NewDocumentCommand \Something { d<> m }
  { Optional: #1 \\ Mandatory: #2 }
\catcode`\< = 12
\catcode`\> = 12
\begin{document}
\Something{World}
\end{document}

Of course that only works because we know that spanish will activate them, so this is not robust in a general way. Guess the correct answer is drawing board: make xparse babel-aware and, long term, a better shorthand interface / integration

frank
# Error using msmFit in R
I am trying to analyze a time series data using the MS model with R.
I wrote the following codes and then got an error at the last line:
library(MSwM)
topix <- ts(tsdata$topix, start=c(1975,1), frequency=12)
plot(topix,type = "l", xlab="year", main="TOPIX")
topixrate<-diff(log(topix))
levelmod <- lm(topixrate~1)
levelswmod = msmFit(levelmod, k=2, sw=c(T,T), p=0)
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘msmFit’ for signature
‘"lm", "numeric", "integer", "numeric", "missing", "missing"’
Please show me how to solve this problem.
## stuck-help (one year ago): simplified form of the equation in the comments
1. stuck-help
$\left( \frac{ 3 }{ 2b ^{4}} \right)^{3}$
2. stuck-help
@satellite73
3. anonymous
you just have to raise each part to the power 3, like this: $\frac{3^3}{2^3\, b^{4\times 3}}$. Try to solve this.
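A quick numeric spot-check of the simplification (my sketch): $\left(\frac{3}{2b^4}\right)^3=\frac{27}{8b^{12}}$, which holds for any nonzero value of $b$:

```python
# Spot-check that (3 / (2*b**4))**3 == 27 / (8*b**12) at an arbitrary b
b = 1.7
lhs = (3 / (2 * b**4)) ** 3
rhs = 27 / (8 * b**12)
print(abs(lhs - rhs) < 1e-12)  # True
```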
## [1112.1565] Searching for Gravitational Waves with a Geostationary Gravitational Wave Interferometer
Authors: J. C. N. de Araujo, O. D. Aguiar, M. E. S. Alves, M. Tinto
Date: 7 Dec 2011
Abstract: We analyze the sensitivities of a geostationary gravitational wave interferometer mission operating in the sub-Hertz band. Our proposed Earth-orbiting detector is expected to meet some of the Laser Interferometer Space Antenna (LISA) mission science goals in the lower part of its accessible frequency band ($10^{-4} - 2 \times 10^{-2}$ Hz), and to outperform them by a large margin in the higher part of it ($2 \times 10^{-2} - 10$ Hz). Since our proposed interferometer will be more sensitive than LISA to supermassive black holes (SMBHs) of masses smaller than $\sim 10^{6}$ M$_{\odot}$, we will be able to more accurately probe scenarios that account for their formation.
#### Dec 20, 2011
1112.1565 (/preprints)
2011-12-20, 09:16
LXX International conference "NUCLEUS – 2020. Nuclear physics and elementary particle physics. Nuclear physics technologies"
Oct 11 – 17, 2020
Online
Europe/Moscow timezone
Nuclear inelastic scattering effect in supernova neutrino spectra
Oct 12, 2020, 6:15 PM
25m
Online
Online
Oral report Section 5. Neutrino physics and astrophysics.
Description
The neutrino scattering on nuclei in hot and dense matter relevant for core-collapse supernovae, neutron star mergers, and proto-neutron stars is considered, accounting for magnetization. At finite temperature neutrinos undergo exo- and endo-energetic scattering [1] on nuclei due to the neutral-current Gamow-Teller component. The energy transfer cross section in neutrino-nucleon scattering is shown to change from positive to negative values at neutrino energies four times the matter temperature. Effects on neutrino transport and spectra are discussed.
[1] V. N. Kondratyev, et al. // Phys. Rev. C 2019. V. 100, 045802
# LightCurveFile classes¶
Defines LightCurveFile classes, i.e. files that contain LightCurves.
## Classes¶
KeplerLightCurveFile(path[, quality_bitmask]): Defines a class for a given light curve FITS file from NASA's Kepler and K2 missions.
TessLightCurveFile(path[, quality_bitmask]): Defines a class for a given light curve FITS file from NASA's TESS mission.
## College Algebra (11th Edition)
Let x be the width, so the length is x + .42. Set up an equation for the appropriate conditions (the perimeter): $2(x+.42)+2x=5.96$. Distribute, combine like terms, and subtract .84 from both sides: $4x=5.12$. Divide both sides by 4 and solve for x; this gives you the value for the width. Simply add .42 to find the value for the length. Don't forget to check these values by substituting them into the original equation! $x=1.28$ cm (width), $x+.42=1.70$ cm (length)
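The arithmetic can be spot-checked in a couple of lines (my sketch):

```python
# Width from 4x = 5.12, then length = width + 0.42; both should satisfy
# the perimeter equation 2(x + 0.42) + 2x = 5.96
width = 5.12 / 4
length = width + 0.42
print(abs(2 * length + 2 * width - 5.96) < 1e-9)  # True
```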
# Differentiate the following w.r.t $x$ : $\sin (x^2)+\sin^2x+\sin^2(x^2)$.
Toolbox:
• Chain Rule: Suppose $f$ is a real-valued function which is a composite of two functions $u$ and $v$ (i.e., $f=u\circ v$); then $\large\frac{df}{dx}=\frac{dv}{dt}\times\frac{dt}{dx}$
Step 1:
$y=\sin (x^2)+\sin^2x+\sin^2(x^2)$.
Let us consider $y=\sin(x^2)$.
On differentiating $\sin(x^2)$ w.r.t $x$ we get $\cos(x^2)$, and on differentiating $x^2$ w.r.t $x$ we get $2x$. On combining both we get,
$\large\frac{dy}{dx}=\cos(x^2)\cdot 2x$------(1)
Step 2:
Next consider $y=\sin^2x$
On differentiating $\sin^2x$ w.r.t $x$ we get $2\sin x$ and then differentiating $\sin x$ w.r.t $x$ we get $\cos x$.
On combining both we get,
$\large\frac{dy}{dx}=2\sin x\cos x=\sin 2x$------(2)
Step 3:
Let us consider $y=\sin^2(x^2)$.
On differentiating $\sin^2(x^2)$ w.r.t $x$ we get $2\sin(x^2)$, on differentiating $\sin (x^2)$ w.r.t $x$ we get $\cos (x^2)$, and on differentiating $x^2$ w.r.t $x$ we get $2x$. Hence on combining we get,
$\large\frac{dy}{dx}=2\sin (x^2)\cos (x^2)\cdot 2x$------(3)
Step 4:
On combining equ(1),(2) and (3) we get,
$\large\frac{dy}{dx}=2x\cos (x^2)+\sin 2x+4x\sin(x^2)\cos(x^2)$
But $2\sin(x^2)\cos(x^2)=\sin(2x^2)$
Therefore $\large\frac{dy}{dx}=2x\cos (x^2)+\sin 2x+2x\sin(2x^2)$
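A numerical check of the final result (my own sketch) using a central difference:

```python
import math

def y(x):
    return math.sin(x**2) + math.sin(x)**2 + math.sin(x**2)**2

def dydx(x):
    # The result derived above: 2x cos(x^2) + sin(2x) + 2x sin(2x^2)
    return 2*x*math.cos(x**2) + math.sin(2*x) + 2*x*math.sin(2*x**2)

x, h = 0.8, 1e-6
numeric = (y(x + h) - y(x - h)) / (2 * h)
print(abs(numeric - dydx(x)) < 1e-6)  # True
```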
You calculate carrying cost by figuring storage space, handling costs, the cost of deterioration and the lost opportunity cost. Check back in a month or two. Thanks for your feedback But can we consider variable to be added to this calculator to include the minimum inventory to be kept in terms of weeks and the calculation modified it would probably be more realistic from the Inventory controller ‘s views . Calculating The Cost Of Holding Inventory | Best Freight Shipping says: August 31, 2011 at 2:39 am […] and now there is an website with a real-time Inventory Cost Calculator developed by the Hands On Group which can show management the real cost of the inventory they are holding. It was purposely designed to look like the national debt clock to show the cost impact of delay. The standard rate for stocks will be 2.5% debited on buy positions and 2.5% credited on sell positions. What is your inventory really costing your company? 8) Lead Time: There is a correlation between WIP inventory and response time: The more items in WIP, the longer it takes to move an item through. The amount will increase each second. As for weeks of inventory calculation, for now you’ll need to do the math yourself for your unique circumstance. This total inventory cost value can … The second excel sheet will perform this calculation for you - once you input your company's specific monthly holding costs. Robert, The most common mistake is to simply calculate the interest cost over the total cost of the project for the total time a project will take. Jack. I've spoken to many novice property developers about their holding cost calculations. If you would like a more exact estimate, give us a call. This is what you SHOULD EXPECT in the first 9-12 months. Another tremendous opportunity is in the movement of time-sensative drugs (MOST fall into this category). 
NOTE: If you are less than thrilled with your Lean Manufacturing results to date, you might want to check out our “Lean Bench Marking” article. What I don’t see and what I was taught, is now you have the money, how much return will you get by investing it into the business rather than dusty inventory? Inventory carrying cost: what it is and how to calculate it. The costs associated with inventory include both ordering and holding costs. Use this FREE calculator to calculate and demonstrate the real costs of inventory. The calculator also offers a visualization of the EOQ model in graphic form. EOQ Formula. 6) Material Handling: With lots of inventory we are always moving something to get to the item we actually need. This gives you the carrying cost as a percentage. It should be a significant net cash generator! Tom-next rates in the underlying market are based on the interest rate differential between the two currencies. The “Estimated Cash Windfall $” is the amount of tax-free cash that will result from the targeted inventory reduction. Current Inventory$: Input your current total inventory (dollars). Although companies will give a percentage of their capital cost, this figure may be an objective figure, derived from a calculation, or a subjective figure, derived from experience or industry standards. For example, at the default values of $5 mil inventory, and a 40% reduction target, the inventory reduction would equal$2 mil. Inventory carrying cost, also known as inventory holding cost, is the cost associated with holding inventory or stock in storage or a warehouse, in order to fulfill sales orders. Definition: Holding costs are the additional costs involved in storing and maintaining a piece of inventory over the course of a year. Cost Basis = Average cost per share ($48.58) x # of shares sold (5) =$242.90. You must calculate holding costs correctly. 
Manufacturing companies are generally between 7 – 12% C=Carrying cost per unit of inventory Q=Inventory order size (quantity) D=Total demand (units) F=Fixed cost per order. Drop us a note, or give us a call. 5) Insurance / Taxes: 3-9% the cost of putting up and picking material. The cost of carrying inventory (or cost of holding inventory) is the sum of the following: Cost of money tied up in inventory, such as the cost of capital or the opportunity cost of the money. It is important for companies to understand what factors influence the total cost they pay, so as to be able to minimize it. Ordering cost is £20 per order and holding cost is 25% of the value of inventory. - holding costs are reliant on average inventory - there is only one product involved in the calculation; The formula below is employed to calculate EOQ: Economic Order Quantity (EOQ) = (2 × D × S / H) 1/2. Use the total inventory cost calculator below to … In this regard, how do you calculate annual holding cost per unit? Change the parameters as you see fit, then hit the “calculate” button to reflect the new results. When inventory sits unsold, it costs you in storage, theft, deteriorating items and the loss of opportunity. I mean you aren’t going to put in the mattress are you? Inventory holding sum = Inventory service cost + Inventory risk cost + Capital cost + Storage cost. There are several different types of holding costs that are likely to apply with the maintenance of any type of inventory. H represents the carrying/holding cost per unit per annum. In the below calculator just enter the demand, cost per unit, order quantity, annual holding and storage cost per unit of inventory, cost of planning order/setup cost and submit to know the total inventory cost. It is often deemed the most illiquid of all current assets - thus, it is excluded from the numerator in the quick ratio calculation.. For most companies, Lean Manufacturing principles allow for the reduction of inventory by 20-40%. 
A Lean Transition should not only be self funding. The online calculator […], Very useful ready reckoner. To calculate the average cost, divide the total purchase amount ($2,750) by the number of shares purchased (56.61) to figure the average cost per share =$48.58. 3) Administrative Costs: People and systems costs required to manage the inventory. For example, if a company says that the capital cost is 35 percent of its total inventory costs, and the total inventory held is $6000, then the capital cost is$2100. Carrying cost is the expense of keeping inventory on hand. In addition to saving the “cost, you should attain an additional return on your capital. In marketing, carrying cost, carrying cost of inventory or holding cost refers to the total cost of holding inventory.This includes warehousing costs such as rent, utilities and salaries, financial costs such as opportunity cost, and inventory costs related to perishability, shrinkage and insurance. Annual Demand (D): Assuming demand to be constant. Inventory holding costs are a common fee businesses incur when storing inventory in a warehouse.. 2-5% Some companies also add the costs of all stock keeping, i.e. Literally “just in time” deliveries, single piece flow of internally generated parts, consignment vendor material held on site, … These are examples of virtual zero inventory. However, I don’t see any reason that we couldn’t add a few more fields for cost of sales and such, and have the calculator compute your days/weeks on hand as well as turns. 10) Opportunity Costs: What could we have earned if we had invested this money? Typically a large hospital will have $1.0 to 2.0 million on hand in their pharmacy. Inventory Reduction Goal %: Input the percentage reduction that you are targeting. 
D = annual demand (here this is 3600) S = setup cost (here that's £20) H = holding cost; P = Cost per unit (which is £3 here) I figured that I would have At 24 months the total cost of delay equals$2 mil * 2% * 24 mo’s = $960,000! In addition to saving the “cost, you should attain an additional return on your capital. Also referred to as a carrying cost, a holding cost is any expense that is incurred while maintaining an inventory of goods. Is that correct? H= Holding cost 2. i= Carrying cost 3. In these cases, holding costs may be ch… Carrying costs are typically between 24% to 48% per year. The inventory holding period shows the number of days on average that a business holds inventory. Holding rates for FXCFDs are based on the tom-next (tomorrow to next day) rate in the underlying market for the currency pair and are expressed as an annual percentage. Note: We have had three clients, to date, that have generated in excess of$150 million in tax free cash within the first eighteen months of kickoff. It includes warehouse supervision, cycle counting, inventory transaction processing, etc. The calculated number represents the carrying cost on the postponed inventory reduction for that period. for use in every day domestic and commercial use! Tom-next rates in the underlying market are based on the interest rate differential between the two currencies. 2) Cost of Space: The cost of the space (and utilities) tied up holding inventory. There is no charge for a discussion and I guarantee you won’t be disappointed. EOQ = ( 2 × Annual Demand × Ordering Cost / Holding Cost ) 1/2. Annual Holding Cost= (Q * H) / 2. EOQ is calculated on the basis of several assumptions, which include: The formula below is employed to calculate EOQ: Economic Order Quantity (EOQ) = (2 × D × S / H) 1/2. We might just add those features. H = i*C. Number of orders = D / Q. 
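The cost-of-delay arithmetic quoted above ($2 mil × 2% per month × 24 months = $960,000) can be sketched as follows (the function name is mine, not the calculator's):

```python
# Carrying cost of postponing an inventory reduction (illustrative sketch)
def cost_of_delay(inventory_reduction, monthly_carrying_rate, months):
    return inventory_reduction * monthly_carrying_rate * months

# A 24% annual carrying cost is 2% per month
print(cost_of_delay(2_000_000, 0.02, 24))  # approximately 960000
```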
The calculator therefore assumes no change in rent, ongoing costs or interest rates over time; there has … You are, of course, correct. A detailed explanation for each field is below. In the table below, the monthly holding cost amount is being multiplied by the holding period to calculate the Total Holding Costs. Holding costs associated with storing inventory are a major component of supply chain management because businesses must determine how much to keep in stock. This will result in sub-optimal inventory levels and potentially lead to lower service levels, or worse, lost sales. Our total annual inventory cost calculator helps you to calculate it with ease. I’ll let you choose what rate of return you think is reasonable. Collectively the different expenses are known as holding cost or inventory carrying cost. Have you worked with large health care systems in determining their inventory carring rate for their pharmacies. Example: If a company predicts sales of 10,000 units per year, the ordering cost is $100 per order, and holding cost is$50 per unit per year, what is the economic order quantity (in units) per order? If you don’t know your Inventory Carrying Cost, the following will assist you in calculating it. “Cost of Delay (Months)” is intended to help you get people off dead center. Economic Order Quantity is the ideal size of order that reduces the cost of holding adequate inventory and ordering costs to a minimum. Total Holding Costs: 12 days x $21.00 =$252. Total cost = Purchase cost + Ordering cost + Holding cost. Another 20% on top of these figures? 24% carrying cost = 2%/month. How do you calculate stock holding period? One item costs £3. Total inventory cost for company at EOQ is the ordering cost plus holding cost or $2,200 +$2,237.5 = $4,437.5. Usually, holding costs are fixed in nature. 
Holding Cost (H = I × C) is the direct cost that needs to be calculated to decide whether it is better to store inventory or to invest that capital elsewhere. Holding cost calculation (H): this refers to all the costs that are involved in storing or handling the items in your store or warehouse. Further carrying-cost components:

9) Shrinkage: The cost of misplaced, lost, or stolen material / parts.

11) Innovation Delay: Time to market is critical today; this is a real area of opportunity. Excess inventory can have a negative impact on market share and price, due to the competitive disadvantage in responsiveness.

An issue observable in the classical models concerns determining the economic order quantity and the economic production quantity; these are among the world's longest-used classical models for production scheduling. Holding costs are computed in the economic order quantity calculation, which businesses use to decide the optimal time to order new inventory. Here D represents the annual demand (in units), so for the example above, EOQ = (40,000)^(1/2) = 200 units per order. Assume instead that Company A orders 1,000 units at a time.

Carrying cost (%) = inventory holding sum / total value of inventory × 100. The cost of space covers the physical space occupied by the inventory, including rent, depreciation, utility costs, insurance, taxes, etc. Total inventory cost is the total cost associated with ordering and carrying inventory, not including the actual cost of the inventory itself. The total inventory cost formula is below, and there is now a website with a real-time Inventory Cost Calculator, developed by the Hands On Group, which can show management the real cost of the inventory they are holding.

Calculator inputs: Current Inventory $ (your current total inventory, in dollars); Carrying Cost of Inventory % (your annual carrying cost percentage); Ordering Cost (S), per order; Holding Cost (I). Then go after the low-hanging fruit. Our unique process generates large amounts of up-front cash — this is what you should expect in the first 9-12 months. We want to assure that the hospital is on a rigorous FIFO inventory control system and carefully tracks all expiration dates. Inventory of old parts or finished goods can delay new product introduction while we wait to use up the old parts before implementing the new design.

Buy position holding rate = tom-next rate % + 1%; sell position holding rate = tom-next rate % − 1%. Different rates are quoted for buy and sell positions and are actively traded between banks.
Finance cost: divide your monthly finance cost by the number of financed cars in inventory and the number of days in that month (or use 30 as a general number); if you don't finance, use the risk-free rate + 1%, times the cost of cars in inventory.

4) Obsolescence and Deterioration: Scrap and rework costs of inventory that is no longer "active": customer cancelation, product design change, damage and corrosion, etc. An explanation for each item is below the chart.

Inventory also delays discovery, thereby making it more difficult to uncover the real root cause of problems. To calculate your carrying cost, add up the component expenses and divide by the value of the inventory. To utilize the calculator, simply fill in all the fields below and then click the "Calculate EOQ" button. In these models, parameters like setup and holding costs, as well as the rate of demand, are fixed. The Economic Order Quantity is a set point designed to help companies minimize the cost of ordering and holding inventory (inventory is a current asset account found on the balance sheet, consisting of all raw materials, work-in-progress, and finished goods that a company has accumulated). This can have a dramatic impact on cash and space generation.

Annual ordering cost = (D × S) / Q.

When trading CFDs in stocks, the interbank rate of the share's corresponding currency forms the basis of the holding rate.
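The EOQ and annual-cost formulas above can be wired together in a few lines of Python (an illustrative sketch, not the site's calculator; the function and variable names are made up):

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    # EOQ = sqrt(2 * D * S / H)
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def annual_ordering_cost(annual_demand, order_cost, q):
    # (D * S) / Q
    return annual_demand / q * order_cost

def annual_holding_cost(q, holding_cost_per_unit):
    # (Q * H) / 2
    return q / 2 * holding_cost_per_unit

# Example from the text: D = 10,000 units/year, S = $100/order, H = $50/unit/year
q_star = eoq(10_000, 100, 50)   # -> 200.0 units per order
```

At Q = 1,000 instead of the 200-unit EOQ, the ordering cost falls to $1,000 per year but the holding cost rises to $25,000, which is exactly the trade-off the EOQ balances.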
The text books say 2-5%; we use 3-6%. Economic Order Quantity (EOQ) = (2 × D × S / H)^(1/2). The EOQ calculation relies on several assumptions:

- demand remains constant, and so does lead time
- order costs do not fluctuate depending on the size of the order
- holding costs are reliant on average inventory
- there is only one product involved in the calculation

(For property, the calculator assumes that the user will receive the same gross annual rent per year and will pay the same ongoing costs per year over the life of the loan.) To calculate the inventory holding period, we divide inventory by cost of sales and multiply the answer by 365 for the holding period in days, or by 12 for the holding period in months. This EOQ calculator can be used by a business to determine the optimum level of inventory units it should order from suppliers or, in the case of a manufacturing business, place in a production run. (Reader question: "What I want to do is calculate the EOQ, $EOQ = \sqrt{2DS/H}$, where D, S, and H are as defined above.") Models of inventory management contain different parameters. These fees may vary depending on factors like an underlying bank rate of less than the general 2.5%, or a share with a high borrowing cost on the market. On pharmacy-specific carrying rates: I don't think we can help you there. Holding cost is the cost of holding inventory in storage, and the inventory holding sum is simply the total of all four components of carrying cost. We've got a track record of continuing dramatic bottom-line successes since 1988!
"Cost of Delay (Months)": input the number of months that you have been discussing the transition to lean, but have not done anything substantial (corporate-wide). Holding all things equal, when carrying costs are higher than they could or should be, a forecasting model will say "buy less," because the higher rate increases the total cost per order. I'll let you choose what rate of return you think is reasonable. H = i × C.

Inventory Carrying Costs typically include the following components:

1) Cost of Capital: Your blended cost of corporate equity and debt financing.

(The remaining components — cost of space, administrative costs, obsolescence and deterioration, quality impact, shrinkage, opportunity cost, and innovation delay — are described above.) Excellent observation, Andrew!

This simple Economic Order Quantity (EOQ) calculator can be used for computing the economic (optimal) quantity of goods or services a firm needs to order. Now, let's see what happens if Company A orders more than the EOQ: the total number of orders will be 10 (10,000 / 1,000), and average inventory will be 500 (1,000 / 2). To calculate your holding cost, add up the expenses and divide by the inventory value.

If I'm understanding your inquiry correctly, you are looking for a means to calculate minimum inventory levels. There is no "minimum" inventory level, except zero. Typically, a reduction of 20-40% is reasonable. The long-range challenge is to get the entire hospital supply chain more responsive.

Buy position holding rate = tom-next rate % + 1%; sell position holding rate = tom-next rate % − 1%. Different rates are quoted for buy and sell positions and are actively traded between banks. As a general rule, if the interest rate of the first …
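The two headline numbers quoted in this article — the 35% capital-cost example and the $960,000 cost of delay — can be reproduced with a short sketch (illustrative only; the function names are assumptions, not part of the calculator):

```python
def carrying_cost_pct(holding_sum, total_inventory_value):
    # Carrying cost (%) = inventory holding sum / total value of inventory x 100
    return holding_sum / total_inventory_value * 100

def cost_of_delay(inventory_value, monthly_carrying_rate, months):
    # e.g. $2,000,000 held at 2%/month for 24 months
    return inventory_value * monthly_carrying_rate * months

print(carrying_cost_pct(2_100, 6_000))      # the capital-cost example: 35%
print(cost_of_delay(2_000_000, 0.02, 24))   # the 24-month delay example: $960,000
```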
Examples of holding costs include: the monthly rent for the shop or warehouse that you use to store items; employees' salaries and warehouse labor wages; and electricity and insurance costs. As inventory sits unsold, it costs you: storage, theft, deteriorating items, and the lost opportunity on the capital tied up. The warehoused inventory may be raw materials awaiting use in production, or finished goods that are awaiting sale and shipment. Annual total cost = annual ordering cost + annual holding cost. The calculator also offers a visualization of the EOQ in graphic form, and the sheet will perform the carrying-cost calculation for you.

In the short term, we have found that there is usually a considerable amount of excess / obsolete (E & O) inventory. Our typical first step is to do a "slice and dice" of the inventory to calculate days of supply; the tremendous opportunity is in reducing inventory by 20-40%, and the calculator shows the amount of tax-free cash that will result from the targeted inventory reduction. How much money "in the mattress" are you holding?

The holding rate for stocks is typically 2.5% debited on buy positions and 2.5% credited on sell positions.
# Tracer Timestep
The MOM6 code handles advection and lateral diffusion of all tracers. For potential temperature and salinity, it also timesteps the thermodynamics and vertical mixing (column physics). Since evaporation and precipitation are handled as volume changes, the layer thicknesses need to be updated:
$\frac{\partial h_k}{\partial t} = (P - E)_k$
The full tracer equation for tracer $$\theta$$ is:
$\frac{\partial}{\partial t} (h_k\theta_k) + \nabla_s \cdot (\vec{u}h_k \theta_k) = Q_k^\theta h_k + \frac{1}{h_k} \Delta \left( \kappa \frac{\partial \theta}{\partial z} \right) + \frac{1}{h_k} \nabla_s (h_k K \nabla_s \theta)$
Here, the advection is on the left hand side of the equation while the right hand side contains thermodynamic processes, vertical diffusion, and horizontal diffusion. There is more than one choice for vertical diffusion; these will be described elsewhere. Also, the lateral diffusion is handled in novel ways so as to avoid introduction of new extrema and to avoid instabilities associated with rotated mixing tensors. The lateral diffusion is described in Horizontal Diffusion.
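The split between the thickness update and the vertical-diffusion term can be sketched in a few lines (a hand-rolled illustration, not MOM6 code; the explicit flux-form discretization, the zero-flux top and bottom boundaries, and all names are assumptions):

```python
import numpy as np

def update_thickness(h, P, E, dt):
    """Layer thickness update, dh_k/dt = (P - E)_k."""
    return h + dt * (P - E)

def vertical_diffusion_step(theta, h, kappa, dt):
    """One explicit flux-form step of vertical diffusion of tracer theta.

    flux[k] is the diffusive tracer flux across interface k (k = 0 is the
    surface, k = nz the bottom); both boundary fluxes are held at zero, so
    the column integral of h*theta is conserved exactly.
    """
    nz = theta.size
    flux = np.zeros(nz + 1)
    for k in range(1, nz):
        dz = 0.5 * (h[k - 1] + h[k])          # distance between layer centers
        flux[k] = -kappa * (theta[k] - theta[k - 1]) / dz
    h_theta = h * theta - dt * (flux[1:] - flux[:-1])
    return h_theta / h                         # back to tracer concentration
```

Because the update is written in flux form for the quantity $h_k\theta_k$, the column total of $h_k\theta_k$ is conserved to round-off, mirroring why the continuous equation above is stated for $h_k\theta_k$ rather than $\theta_k$ alone.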
# Nonnegative Matrix Factorization with Local Similarity Learning
Existing nonnegative matrix factorization methods focus on learning global structure of the data to construct basis and coefficient matrices, which ignores the local structure that commonly exists among data. In this paper, we propose a new type of nonnegative matrix factorization method, which learns local similarity and clustering in a mutually enhancing way. The learned new representation is more representative in that it better reveals inherent geometric property of the data. Nonlinear expansion is given and efficient multiplicative updates are developed with theoretical convergence guarantees. Extensive experimental results have confirmed the effectiveness of the proposed model.
## I Introduction
High-dimensional data are ubiquitous in the learning community and it has become increasingly challenging to learn from such data [14]. For example, as one of the most important tasks in, for example, multimedia and data mining, information retrieval has drawn considerable attention in recent years [47, 18, 46], where there is often a need to handle high-dimensional data. Often times, it is desirable and demanding to seek a data representation that reveals latent structures of high-dimensional data, which is usually helpful for further data processing. It is thus a critical problem to find a suitable representation of the data [4, 20, 22, 37] in many learning tasks, such as single image super-resolution [48], image reconstruction [32], image clustering [34], foreground-background separation in surveillance video [5], matrix completion [28], etc. To this end, a number of methods for finding proper representations have been developed, among which matrix factorization has been widely used to handle high-dimensional data. Matrix factorization seeks two or more low-dimensional matrices to approximate the original data such that the high-dimensional data can be represented with reduced dimensions [23, 35].
For some types of data, such as images and documents that are widely used in real-world learning problems, the entries are naturally nonnegative. For such data, nonnegative matrix factorization (NMF) was proposed to seek two nonnegative factor matrices for approximation. In fact, seeking a nonnegative factorization for nonnegative data naturally leads to learning parts-based representations of the data [20]. Parts-based representation is believed to commonly exist in the human brain, with psychological and physiological evidence [33, 39, 25]. It overcomes a drawback of latent semantic indexing (LSI) [9], for which the interpretation of basis vectors is difficult due to mixed signs. When the number of basis vectors is large, NMF has been proven to be NP-hard [38]; moreover, [1] has recently given some conditions under which NMF is solvable. Recent studies have shown a close relationship between NMF and K-means [11], and further study has shown that both spectral clustering and kernel K-means [10] are particular cases of clustering with NMF under a doubly stochastic constraint [44]. This implies that NMF is especially suitable for clustering such data. In this paper, we will develop a novel NMF method, which focuses on the clustering capability.
Many variants of NMF have been developed in the past decades, which can be mainly categorized into four types: basic NMF [20], constrained NMF [12], structured NMF [43], and generalized NMF [2]. A fairly comprehensive review can be found in [41]. Among these methods, Semi-NMF [13] removes the nonnegative constraint on the data and basis vectors, such that its applications can be expanded to more fields; convex NMF (CNMF) [13] restricts the basis vectors to lie in the feature space of the input data so that they can be represented as convex combinations of data vectors; orthogonal NMF (ONMF) [12] imposes orthogonality constraints on factor matrices, which leads to a clustering interpretation. The classic NMF only considers the linear structures of the data by finding new data points with respect to the new basis and ignores the nonlinear structures of the data, which are usually important for many applications such as clustering. To learn the latent nonlinear structures of the data, graph regularized nonnegative matrix factorization (GNMF) considers the intrinsic geometrical structures of the data on a manifold by incorporating a Laplacian regularization [3]. By modeling the data space as a manifold embedded in an ambient space and performing NMF on this manifold, GNMF considers both linear and nonlinear relationships of the data points in the original instance space, and thus it is also more discriminating than ordinary NMF, which only considers the Euclidean structure of the data [3]. This renders GNMF more suitable for clustering than the original NMF. Based on GNMF, robust manifold nonnegative matrix factorization (RMNMF) constructs a structured sparsity-inducing norm-based robust formulation [17]. With an $\ell_{2,1}$-norm, RMNMF is insensitive to between-sample data outliers and improves the robustness of NMF [17]. Moreover, the relaxed requirement on signs of the data makes it a nonlinear version of Semi-NMF.
In recent years, the importance of preserving local manifold structure has drawn considerable attention in the research communities of machine learning, data mining, and pattern recognition [45, 29, 24, 7]. It has been shown that, besides pairwise sample similarity, local geometric structure of the data is also crucial in revealing the underlying structure of the data [24]: 1) In the transformed low-dimensional space, it is important to maintain the intrinsic information of high-dimensional data [40]; 2) It may be insufficient to represent the underlying structures of the data with a single characterization, and both global and local ones are necessary [6]; 3) In some ways, we can regard the local geometric structure of the data as data-dependent regularization, which helps avoid overfitting issues [24]. Despite its importance, local structure of data has yet to be exploited in NMF study. In this paper, we propose a new type of NMF method, which simultaneously learns similarity and geometric/clustering structures of the data such that the learned basis and coefficients well preserve discriminative information of the data. Recent studies reveal that high-dimensional data often reside in a union of low-dimensional subspaces and the data can be self-expressed by a low-dimensional representation [23, 15], which can be regarded as pairwise similarity of samples. Instead of simply using pairwise similarity of samples, in our method we transform the pairwise similarity into the similarity between the score vector of a sample on the basis and the representation of another sample in the same cluster, which integrates basis and coefficient learning into simultaneous similarity learning and clustering. A nonlinear model is developed to measure both local and global nonlinear relationships of the data.
The main contributions of this paper are as follows:
• For the first time, in an effective yet simple way, local similarity learning is embedded into learning matrix factorization, which allows our method to learn global and local structures of the data. The learned basis and representations well preserve the inherent structures of the data and are more representative;
• To our best knowledge, we are the first to integrate the orthogonality-constrained coefficient matrix into local similarity adaption, such that local similarity and clustering can mutually enhance each other and be learned simultaneously;
• Nonlinear extension is developed from kernel perspectives, which can be further expanded to cope with multiple-kernel scenario;
• Efficient multiplicative update rules are constructed to solve the proposed model and comprehensive theoretical analysis is provided to guarantee the convergence;
• Lastly, extensive experimental results have verified the effectiveness of our method.
The rest of this paper is organized as follows: In Section II, we briefly review some methods that are closely related to our research. Then we introduce our method in Section III. We provide an efficient alternating optimization procedure for the proposed method in Section IV, and then give detailed theoretical results for the convergence analysis in Section V. Next, we conduct comprehensive experiments and show the results in Section VI. Finally, we conclude the paper in Section VII.
Notation: For a matrix $M$, $M_{ij}$, $M_i$, and $M^i$ denote the $(i,j)$-th element, the $i$-th column, and the $i$-th row of $M$, respectively. $\operatorname{Tr}(\cdot)$ is the trace operator, and $\|\cdot\|_F$ and $\|\cdot\|_{2,1}$ are the Frobenius and $\ell_{2,1}$ norms. $I_n$ denotes the identity matrix of size $n$, and $\operatorname{Diag}(\cdot)$ is an operator that returns a diagonal matrix whose diagonal elements are identical to those of the input matrix.
## II Related Work
In this section, we briefly review some methods that are closely related with our research.
### II-A NMF
Given nonnegative data $X \in \mathbb{R}_{+}^{m \times n}$, with $m$ being the dimension and $n$ the sample size, NMF factors $X$ into $U$ (basis) and $G$ (coefficients) with the following optimization problem:
$$\min_{U \ge 0,\, G \ge 0} \|X - UG^T\|_F^2, \qquad (1)$$

where the number of columns of $U$ (the number of basis vectors) is chosen much smaller than $m$ and $n$, which enforces a low-rank approximation of the original data.
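Problem (1) is classically solved with the multiplicative update rules of Lee and Seung; below is a minimal NumPy sketch (the initialization, iteration count, and small `eps` safeguard are illustrative choices, not the updates derived later in this paper):

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-10, seed=0):
    """Minimize ||X - U G^T||_F^2 with U >= 0, G >= 0 via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.uniform(0.1, 1.0, (m, r))
    G = rng.uniform(0.1, 1.0, (n, r))
    for _ in range(n_iter):
        U *= (X @ G) / (U @ (G.T @ G) + eps)    # update basis
        G *= (X.T @ U) / (G @ (U.T @ U) + eps)  # update coefficients
    return U, G
```

Each update keeps both factors entrywise nonnegative and does not increase the objective, which is the style of monotonicity argument used in convergence analyses of NMF-type algorithms.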
### Ii-B Graph Laplacian
Graph Laplacian [8] is defined as
$$\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\|G_i - G_j\|_2^2 W^x_{ij} = \sum_{j=1}^{n} D^x_{jj} G_j^T G_j - \sum_{i=1}^{n}\sum_{j=1}^{n} W^x_{ij} G_i^T G_j = \mathrm{Tr}(G^T D^x G) - \mathrm{Tr}(G^T W^x G) = \mathrm{Tr}(G^T L^x G), \tag{2}$$
where $W^x$ is the weight matrix that measures the pair-wise similarities of the original data points, $D^x$ is a diagonal matrix with $D^x_{jj} = \sum_{i} W^x_{ij}$, and $L^x = D^x - W^x$. It is widely used to incorporate the geometrical structure of the data on a manifold. In particular, the manifold assumption enforces the smoothness of the data in linear and nonlinear spaces by minimizing (2), which leads to the effect that if two data points are close in the intrinsic geometry of the data distribution, then their new representations with respect to the new basis, $G_i$ and $G_j$, are also close [3]. This is closely related to spectral clustering (SC) [36, 27] and its further developments [31, 30].
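Identity (2) is easy to confirm numerically; the snippet below (an illustrative check with made-up data, not part of the paper) verifies that $\mathrm{Tr}(G^T L^x G)$ equals the weighted sum of squared row differences:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
G = rng.random((n, k))              # new representations, one row per sample
Wx = rng.random((n, n))
Wx = (Wx + Wx.T) / 2                # symmetric nonnegative pairwise weight matrix
Dx = np.diag(Wx.sum(axis=0))        # degree matrix, D^x_jj = sum_i W^x_ij
Lx = Dx - Wx                        # graph Laplacian

# left-hand side of (2): 0.5 * sum_ij ||G_i - G_j||^2 W^x_ij
lhs = 0.5 * sum(Wx[i, j] * np.sum((G[i] - G[j]) ** 2)
                for i in range(n) for j in range(n))
# right-hand side of (2): Tr(G^T L^x G)
rhs = np.trace(G.T @ Lx @ G)
```

Minimizing either side therefore pulls the rows of $G$ together exactly where $W^x$ says the samples are similar.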
## III Proposed Method
As aforementioned, existing NMF methods do not fully exploit local geometric structures, nor do they exploit the close interaction between local similarity and clustering. In this section, we propose an effective, yet simple, new method to overcome these two drawbacks.
CNMF restricts the basis of NMF to convex combinations of the columns of the data, i.e., $U = XW$, which gives rise to the following:
$$\min_{W \ge 0,\ G \ge 0} \|X - XWG^T\|_F^2. \tag{3}$$
By restricting $U = XW$, (3) has the advantage that it can interpret the columns of $U$ as weighted sums of certain data points, and these columns correspond to centroids [13]. It is natural to see that $W_{ik}$ reveals the importance of data point $x_i$ to the $k$-th basis vector.
It is noted that (3) is closely related to subspace clustering [23, 15]. The observation is that high-dimensional data usually reside in low-dimensional subspaces, and recovering such subspaces usually needs a self-expressiveness assumption, which refers to the assumption that the data can be approximately self-expressed as $X \approx XZ$ with a representation matrix $Z$. Local structures of the data are shown to be important [29], and it is necessary to take local similarity into consideration in learning tasks. A natural assumption is that if two data points $x_i$ and $x_j$ are close to each other, then their similarity, $Z_{ij}$, should be large; otherwise, small. This assumption leads to the following minimization:
$$\min_Z \sum_{ij} \|x_i - x_j\|_2^2 Z_{ij} \;\Leftrightarrow\; \min_Z \mathrm{Tr}(Z^T D), \tag{4}$$
where
$$D_{ij} = \|x_i - x_j\|_2^2,$$
or in matrix form,
$$D = \mathbf{1}_n \mathbf{1}_n^T \mathrm{diag}(X^T X) + \mathrm{diag}(X^T X) \mathbf{1}_n \mathbf{1}_n^T - 2 X^T X,$$
with $\mathbf{1}_n$ being a length-$n$ vector of 1s. It is noted that the minimization of (4) directly enforces $Z$ to reflect the pair-wise similarity information of the examples. Noticing that $W$ and $G$ are nonnegative and inspired by the self-expressiveness assumption, we take $WG^T$ as the similarity matrix $Z$, such that $Z = WG^T$. Here, $W_{i\cdot}$ is the score vector of example $x_i$ on the basis vectors, and $G_{j\cdot}$ is the coefficient vector of the $j$-th sample with respect to the new basis. If $x_i$ and $x_j$ are close on the data manifold or grouped into the same cluster, then it is natural that $W_{i\cdot}$ and $G_{j\cdot}$ have higher similarity; vice versa. This close relationship between the geometry of $x_i$ and $x_j$ on the data manifold and the similarity of $W_{i\cdot}$ and $G_{j\cdot}$ suggests that using $WG^T$ as $Z$ in (4) is indeed meaningful. To encourage the interaction between similarity learning and clustering, we incorporate (4) into (3) with $Z = WG^T$, obtaining the Local Similarity NMF (LS-NMF):
$$\min_{W, G}\ \frac{1}{2}\|X - XWG^T\|_F^2 + \lambda\,\mathrm{Tr}(W^T D G), \quad \text{s.t. } W \ge 0,\ G \ge 0, \tag{5}$$
where $\lambda \ge 0$ is a balancing parameter. Now, it is seen that the first term in the above model captures the global structure of the data by exploiting a linear representation of each example with respect to the overall data, while the second term exploits the local structure of the data through the connection between local geometric structure and pairwise similarity.
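Both ingredients of the second term can be checked numerically. The snippet below (illustrative code with made-up data; the variable names follow the paper) confirms that the one-line matrix formula for $D$ reproduces brute-force pairwise squared distances, and that $\mathrm{Tr}(W^T D G) = \sum_{ij} D_{ij} Z_{ij}$ with $Z = WG^T$, i.e., (5) really penalizes similarity between distant points:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, k = 4, 6, 2
X = rng.random((d, n))                       # columns are samples
ones = np.ones((n, 1))
g = np.diag(X.T @ X).reshape(-1, 1)          # squared norms of the columns

# matrix form: D = 1 1^T diag(X^T X) + diag(X^T X) 1 1^T - 2 X^T X
D = ones @ g.T + g @ ones.T - 2 * (X.T @ X)

# brute-force pairwise squared distances D_ij = ||x_i - x_j||^2
D_brute = np.array([[np.sum((X[:, i] - X[:, j]) ** 2) for j in range(n)]
                    for i in range(n)])

W = rng.random((n, k))
G = rng.random((n, k))
Z = W @ G.T                                  # similarity matrix Z = W G^T
lhs = np.trace(W.T @ D @ G)                  # regularizer as written in (5)
rhs = np.sum(D * Z)                          # sum_ij D_ij Z_ij, as in (4)
```

The trace identity follows from $\mathrm{Tr}(W^T D G) = \mathrm{Tr}(D\,G W^T) = \mathrm{Tr}(D (WG^T)^T)$.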
To allow for immediate interpretation of clustering from the coefficient matrix, we impose an orthogonality constraint on $G$, i.e., $G^T G = I_k$, leading to
$$\min_{W, G}\ \frac{1}{2}\|X - XWG^T\|_F^2 + \lambda\,\mathrm{Tr}(W^T D G), \quad \text{s.t. } W \ge 0,\ G \ge 0,\ G^T G = I_k. \tag{6}$$
Note that by enforcing $G^T G = I_k$, the problem of NMF is directly connected with clustering in that $G$ can be regarded as a relaxed cluster indicator matrix. More importantly, similarity learning and clustering are connected through such a matrix and can be mutually promoted through an iterative optimization process. At the end of the iterations, the optimized clustering results are directly given by $G$.
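Reading cluster labels off a relaxed indicator matrix is usually done by a row-wise argmax (this post-processing step is our illustration; the paper does not spell it out). When each row of $G$ is dominated by one entry, the assignment recovers the underlying partition:

```python
import numpy as np

# ground-truth memberships for a toy example (made up)
true_labels = np.array([0, 1, 2, 0, 1, 2])
G_exact = np.eye(3)[true_labels]          # exact cluster indicator rows; G^T G is diagonal

# a slightly perturbed, still row-dominant relaxed indicator matrix
rng = np.random.default_rng(3)
G_relaxed = G_exact + 0.1 * rng.random((6, 3))

labels = G_relaxed.argmax(axis=1)         # hard assignment: largest entry per row
```

The perturbation is kept below the dominance gap of the indicator rows, so the argmax is unaffected.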
Model (6) only learns linear relationships of the data and omits the nonlinear ones, which usually exist and are important. To take nonlinear relationships of the data into consideration, it is widely considered to seek data relationships in kernel space.
We define a kernel mapping $\phi: \mathbb{R}^d \rightarrow \mathbb{R}^{\tilde{d}}$, which maps the data points from the input space to $\phi(x)$ in a reproducing kernel Hilbert space $\mathcal{H}$, where $\tilde{d}$ is an arbitrary positive integer. After kernel mapping, we obtain the mapped data points $\phi(X) = [\phi(x_1), \ldots, \phi(x_n)]$. The similarity between each pair of data points is defined as the inner product of the mapped data in the Hilbert space, i.e., $\langle \phi(x_i), \phi(x_j) \rangle = \kappa(x_i, x_j)$, where $\kappa(\cdot, \cdot)$ is a reproducing kernel function. In the kernel space, (6) is reduced to
$$\min_{W, G}\ \frac{1}{2}\|\phi(X) - \phi(X)WG^T\|_F^2 + \lambda\,\mathrm{Tr}(W^T D^{\phi} G), \quad \text{s.t. } W \ge 0,\ G \ge 0,\ G^T G = I_k, \tag{7}$$
where $D^{\phi}$ extends $D$ in (6) from the instance space to the kernel space, defined as
$$D^{\phi} = \mathbf{1}_n \mathbf{1}_n^T \mathrm{diag}(\phi(X)^T \phi(X)) + \mathrm{diag}(\phi(X)^T \phi(X)) \mathbf{1}_n \mathbf{1}_n^T - 2\phi(X)^T \phi(X). \tag{8}$$
We expand (7) and replace $\phi(X)^T \phi(X)$ with $K$, the kernel matrix induced by the kernel function $\kappa$ associated with the mapping $\phi$, giving rise to the Kernel LS-NMF (KLS-NMF):
$$\min_{W, G}\ \frac{1}{2}\mathrm{Tr}(K - 2KWG^T + GW^TKWG^T) + \lambda\,\mathrm{Tr}(W^T D^K G), \quad \text{s.t. } W \ge 0,\ G \ge 0,\ G^T G = I_k, \tag{9}$$
where $D^K = \mathbf{1}_n \mathbf{1}_n^T \mathrm{diag}(K) + \mathrm{diag}(K) \mathbf{1}_n \mathbf{1}_n^T - 2K$.
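Note that $D^K$ is exactly the matrix of pairwise squared distances in feature space, since $\|\phi(x_i)-\phi(x_j)\|^2 = K_{ii} + K_{jj} - 2K_{ij}$. A small numerical check with an RBF kernel (kernel choice and bandwidth are illustrative assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 7, 3
X = rng.random((d, n))                               # columns are samples
sq = np.array([[np.sum((X[:, i] - X[:, j]) ** 2) for j in range(n)]
               for i in range(n)])                   # input-space squared distances
K = np.exp(-sq / 2.0)                                # RBF kernel matrix (bandwidth arbitrary)

kdiag = np.diag(K).reshape(-1, 1)
ones = np.ones((n, 1))
DK = ones @ kdiag.T + kdiag @ ones.T - 2 * K         # D^K from the formula above

# D^K should be a symmetric, entrywise-nonnegative distance matrix with zero diagonal
```

For a positive semi-definite $K$, each entry $D^K_{ij} = (e_i - e_j)^T K (e_i - e_j) \ge 0$, so the regularizer in (9) stays nonnegative.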
###### Remark 1.
In this paper, we aim at providing a new NMF method that takes both local and global nonlinear relationships of the data into consideration. It is also worth mentioning that our method can be extended to the multiple-kernel scenario. Since this extension is beyond the scope of this paper, we do not further explore it here.
## IV Optimization
We solve (9) using an iterative update algorithm and element-wise update $W$ and $G$ as follows:
$$W_{ik} \leftarrow W_{ik}\sqrt{\frac{(KG)_{ik}}{(KWG^TG)_{ik} + \lambda(D^K G)_{ik}}}, \tag{10}$$
$$G_{ik} \leftarrow G_{ik}\sqrt{\frac{(KW)_{ik} + \lambda(GG^T D^K W)_{ik}}{\lambda(D^K W)_{ik} + (GG^T K W)_{ik}}}. \tag{11}$$
By counting dominating multiplications, it is seen that the complexity of (10) and (11) per iteration is $\mathcal{O}(n^2 k)$. The correctness and convergence proofs of the updates are provided in the following section.
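A direct NumPy transcription of update (10) might look as follows (a sketch: the synthetic kernel, initialization, iteration count, and the small constant guarding against division by zero are choices of ours, not from the paper). Per Theorem V.2 below, the $W$-update with $G$ fixed should not increase objective (12):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, lam, eps = 10, 3, 0.1, 1e-12
M = rng.random((n, n))
K = M @ M.T + n * np.eye(n)                 # made-up SPD kernel matrix with nonneg entries
kdiag = np.diag(K).reshape(-1, 1)
ones = np.ones((n, 1))
DK = ones @ kdiag.T + kdiag @ ones.T - 2 * K  # D^K as defined for (9)
W = rng.random((n, k)) + 0.1
G = rng.random((n, k)) + 0.1

def obj_w(W, G):
    # objective (12): 0.5 Tr(-2 K W G^T + G W^T K W G^T) + lam Tr(W^T D^K G)
    return (0.5 * np.trace(-2 * K @ W @ G.T + G @ W.T @ K @ W @ G.T)
            + lam * np.trace(W.T @ DK @ G))

before = obj_w(W, G)
for _ in range(50):
    # multiplicative update (10) for W with G held fixed
    W *= np.sqrt((K @ G) / (K @ W @ (G.T @ G) + lam * (DK @ G) + eps))
after = obj_w(W, G)
```

All matrices entering the square root are entrywise nonnegative here ($K$ has nonnegative entries and $D^K_{ij} = (e_i-e_j)^T K (e_i-e_j) \ge 0$), which is what the monotonicity proof relies on.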
## V Correctness and Convergence
In this section, we will present theoretical results regarding the updates of (10) and (11), respectively.
### V-A Correctness and Convergence of (10)
We present two results regarding the update rule of (10): 1) When convergent, the limiting solution of (10) satisfies the KKT condition. 2) The iteration of (10) converges. The two results are established in Theorems V.1 and V.2, respectively.
###### Theorem V.1.
Fixing $G$, the limiting solution of the update rule in (10) satisfies the KKT condition.
###### Proof.
Fixing $G$, the subproblem for $W$ is
$$\min_{W \ge 0}\ \frac{1}{2}\mathrm{Tr}(-2KWG^T + GW^TKWG^T) + \lambda\,\mathrm{Tr}(W^T D^K G). \tag{12}$$
Imposing the non-negativity constraint $W \ge 0$, we introduce the Lagrangian multipliers $\Psi$ and the Lagrangian function
$$L_W = \frac{1}{2}\mathrm{Tr}(-2KWG^T + GW^TKWG^T) + \lambda\,\mathrm{Tr}(W^T D^K G) + \mathrm{Tr}(\Psi W^T), \tag{13}$$
$$\frac{\partial L_W}{\partial W} = -KG + \lambda D^K G + KWG^TG + \Psi. \tag{14}$$
For ease of notation, we denote $\bar{A} = KG$, $\bar{B} = D^K G$, $\bar{C} = K$, and $\bar{D} = G^T G$. By the complementary slackness condition, we obtain
$$(-\bar{A} + \lambda\bar{B} + \bar{C}W\bar{D})_{ik} W_{ik} = \psi_{ik} W_{ik} = 0. \tag{15}$$
Note that (15) provides the fixed-point condition that the limiting solution should satisfy. It is easy to see that the limiting solution of (10) satisfies (15), as described in the following. At convergence, (10) gives
$$W_{ik} = W_{ik}\sqrt{\frac{\bar{A}_{ik}}{(\bar{C}W\bar{D})_{ik} + \lambda\bar{B}_{ik}}}, \tag{16}$$
which is reduced to
$$(-\bar{A} + \lambda\bar{B} + \bar{C}W\bar{D})_{ik} W_{ik}^2 = 0, \tag{17}$$
by simple algebra. It is easy to see that (15) and (17) are identical in that both of them enforce either $(-\bar{A} + \lambda\bar{B} + \bar{C}W\bar{D})_{ik} = 0$ or $W_{ik} = 0$. ∎
Next, we prove the convergence of the iterative update as stated in Theorem V.2.
###### Theorem V.2.
For fixed $G$, (12), as well as (9), is monotonically decreasing under the update rule in (10).
In this proof, we use an auxiliary function approach [21] with the relevant definition and propositions given below.
###### Definition V.1.
A function $J(H, H')$ is called an auxiliary function of $L(H)$ if for any $H$ and $H'$ the following are satisfied:
$$J(H, H') \ge L(H), \quad J(H, H) = L(H). \tag{18}$$
###### Proposition V.1.
Given a function $L(H)$ and its auxiliary function $J(H, H')$, if we define a variable sequence $\{H^{(t)}\}$ with
$$H^{(t+1)} = \arg\min_H J(H, H^{(t)}), \tag{19}$$
then the value sequence, $\{L(H^{(t)})\}$, is decreasing due to the following chain of inequalities:
$$L(H^{(t)}) = J(H^{(t)}, H^{(t)}) \ge J(H^{(t+1)}, H^{(t)}) \ge L(H^{(t+1)}).$$
###### Proposition V.2 ([13]).
For any matrices $\Gamma \in \mathbb{R}_+^{n \times n}$, $\Omega \in \mathbb{R}_+^{k \times k}$, $S \in \mathbb{R}_+^{n \times k}$, and $S' \in \mathbb{R}_+^{n \times k}$, with $\Gamma$ and $\Omega$ being symmetric, the following inequality holds:
$$\sum_{i=1}^{n}\sum_{s=1}^{k} \frac{(\Gamma S' \Omega)_{is} S_{is}^2}{S'_{is}} \ge \mathrm{Tr}(S^T \Gamma S \Omega). \tag{20}$$
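Inequality (20) can be probed numerically before relying on it (a spot check on random nonnegative matrices, not a proof; equality holds at $S = S'$):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 6, 4
Gamma = rng.random((n, n))
Gamma = (Gamma + Gamma.T) / 2          # symmetric nonnegative, as the proposition requires
Omega = rng.random((k, k))
Omega = (Omega + Omega.T) / 2          # symmetric nonnegative
S = rng.random((n, k)) + 0.1
Sp = rng.random((n, k)) + 0.1          # S' in the proposition, strictly positive

lhs = np.sum((Gamma @ Sp @ Omega) * S**2 / Sp)   # sum_is (Γ S' Ω)_is S_is^2 / S'_is
rhs = np.trace(S.T @ Gamma @ S @ Omega)
```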
With the aid of Definition V.1 and Propositions V.1 and V.2, we prove Theorem V.2 in the following.
###### Proof of Theorem V.2.
For fixed $G$, the objective function in (12) can be written as
$$P(W) = \mathrm{Tr}\Big({-W^T\bar{A}} + \frac{1}{2}W^T\bar{C}W\bar{D} + \lambda W^T\bar{B}\Big) + \frac{1}{2}\mathrm{Tr}(\bar{C}).$$
First, we show that the function $\bar{P}(W, W')$ defined in (21) is an auxiliary function of $P(W)$:
$$\bar{P}(W, W') = \frac{1}{2}\mathrm{Tr}(\bar{C}) - \sum_{ik}\bar{A}_{ik}W'_{ik}\Big(1 + \log\frac{W_{ik}}{W'_{ik}}\Big) + \frac{1}{2}\sum_{ik}\frac{(\bar{C}W'\bar{D})_{ik}W_{ik}^2}{W'_{ik}} + \lambda\sum_{ik}\bar{B}_{ik}\frac{W_{ik}^2 + W_{ik}'^2}{2W'_{ik}}. \tag{21}$$
To show this, we find upper-bounds and lower-bounds for the positive and negative terms in $P(W)$, respectively. For the positive terms, we use Proposition V.2 and the inequality $a \le \frac{a^2 + b^2}{2b}$ for $a, b > 0$ to get the following upper-bounds:
$$\mathrm{Tr}(W^T\bar{B}) = \sum_{ik}\bar{B}_{ik}W_{ik} \le \sum_{ik}\bar{B}_{ik}\frac{W_{ik}^2 + W_{ik}'^2}{2W'_{ik}}, \qquad \mathrm{Tr}(W^T\bar{C}W\bar{D}) \le \sum_{ik}\frac{(\bar{C}W'\bar{D})_{ik}W_{ik}^2}{W'_{ik}}. \tag{22}$$
For the negative term, we use the inequality $a \ge b\big(1 + \log\frac{a}{b}\big)$ for $a, b > 0$ to get the following lower-bound:
$$\mathrm{Tr}(W^T\bar{A}) = \sum_{ik}\bar{A}_{ik}W_{ik} \ge \sum_{ik}\bar{A}_{ik}W'_{ik}\Big(1 + \log\frac{W_{ik}}{W'_{ik}}\Big). \tag{23}$$
Combining these bounds, we get the auxiliary function $\bar{P}(W, W')$ for $P(W)$. Next, we show that the update of (10) essentially follows (19); then, according to Proposition V.1, we can conclude the proof. To show this, the remaining problem is to find the global minimum of (21). For this, we first prove that (21) is convex.
The first-order derivative of $\bar{P}(W, W')$ is
$$\frac{\partial \bar{P}(W, W')}{\partial W_{ik}} = -\frac{\bar{A}_{ik}W'_{ik}}{W_{ik}} + \frac{(\bar{C}W'\bar{D})_{ik}W_{ik}}{W'_{ik}} + \lambda\frac{\bar{B}_{ik}W_{ik}}{W'_{ik}}. \tag{24}$$
Then the Hessian of $\bar{P}(W, W')$ can be obtained element-wise as
$$\frac{\partial^2 \bar{P}(W, W')}{\partial W_{ik}\partial W_{jl}} = \delta_{ij}\delta_{kl}\Big(\frac{\bar{A}_{ik}W'_{ik}}{W_{ik}^2} + \frac{(\bar{C}W'\bar{D})_{ik} + \lambda\bar{B}_{ik}}{W'_{ik}}\Big), \tag{25}$$
where $\delta_{ab}$ is the delta function that returns 1 if $a = b$ and 0 otherwise. It is seen that the Hessian matrix of $\bar{P}(W, W')$ has zero elements off the diagonal and nonzero elements on the diagonal, and thus is positive definite. Therefore, $\bar{P}(W, W')$ is convex and achieves the global optimum by its first-order optimality condition, i.e., (24) = 0, which gives rise to
$$\frac{\bar{A}_{ik}W'_{ik}}{W_{ik}} = \frac{(\bar{C}W'\bar{D})_{ik}W_{ik}}{W'_{ik}} + \lambda\frac{\bar{B}_{ik}W_{ik}}{W'_{ik}}. \tag{26}$$
(26) can be further reduced to
$$W_{ik} = W'_{ik}\sqrt{\frac{\bar{A}_{ik}}{(\bar{C}W'\bar{D})_{ik} + \lambda\bar{B}_{ik}}}. \tag{27}$$
Defining $L(W) = P(W)$ and $J(W, W') = \bar{P}(W, W')$ in Proposition V.1, we can see that (12) is decreasing under the update of (27). Substituting $\bar{A} = KG$, $\bar{B} = D^K G$, $\bar{C} = K$, and $\bar{D} = G^T G$, we recover (10). ∎
### V-B Correctness and Convergence of (11)
Fixing $W$, we need to solve the following optimization problem for $G$:
$$\arg\min_{G}\ \frac{1}{2}\mathrm{Tr}(-2KWG^T + GW^TKWG^T) + \lambda\,\mathrm{Tr}(W^T D^K G), \quad \text{s.t. } G \ge 0,\ G^T G = \Lambda, \tag{28}$$
where $\Lambda$ is nonnegative and diagonal. We introduce the Lagrangian multiplier matrix $\Theta$, which is symmetric and has size $k \times k$. Then the Lagrangian function to be minimized gives rise to
$$\begin{aligned} L_G =\ & \frac{1}{2}\mathrm{Tr}(-2KWG^T + GW^TKWG^T) + \lambda\,\mathrm{Tr}(W^T D^K G) + \frac{1}{2}\mathrm{Tr}(\Theta(G^TG - \Lambda)) \\ =\ & \frac{1}{2}\mathrm{Tr}(-2KWG^T + GW^TKWG^T + 2\lambda W^T D^K G + G\Theta G^T) - \xi \\ =\ & \frac{1}{2}\mathrm{Tr}(-2AG^T + GCG^T + 2\lambda BG^T + G\Theta G^T) - \xi \\ =\ & \frac{1}{2}\mathrm{Tr}\big({-2AG^T} + 2\lambda BG^T + G(C+\Theta)^{+}G^T - G(C+\Theta)^{-}G^T\big) - \xi, \end{aligned} \tag{29}$$
where for easier notation we define $A = KW$, $B = D^K W$, $C = W^T K W$, and $\xi = \frac{1}{2}\mathrm{Tr}(\Theta\Lambda)$, and $(C+\Theta)^{+}$ and $(C+\Theta)^{-}$ to be two nonnegative matrices for the matrix $C + \Theta$ such that $C + \Theta = (C+\Theta)^{+} - (C+\Theta)^{-}$. The gradient of $L_G$ is
$$\frac{\partial L_G}{\partial G} = -2A + 2GC + 2\lambda B + 2G\Theta. \tag{30}$$
Then the KKT complementarity condition gives
$$(-A + GC + \lambda B + G\Theta)_{ik} G_{ik} = 0, \tag{31}$$
which is a fixed-point relation that the local minimum for $G$ must hold. Following the previous subsection, noting that
$$C + \Theta = (C+\Theta)^{+} - (C+\Theta)^{-},$$
we give an update as follows:
$$G_{ik} \leftarrow G_{ik}\sqrt{\frac{A_{ik} + (G(C+\Theta)^{-})_{ik}}{\lambda B_{ik} + (G(C+\Theta)^{+})_{ik}}}. \tag{32}$$
To show that the update of (32) will converge to a local minimum, we will show two results: the convergence of the update algorithm and the correctness of the converged solution.
From (32), it is easy to show that, at convergence, the solution satisfies the following condition:
$$(-A + GC + \lambda B + G\Theta)_{ik} G_{ik}^2 = 0, \tag{33}$$
which is the fixed point condition in (31). Hence, the correctness of the converged solution can be verified.
The convergence is assured by the following theorem.
###### Theorem V.3.
For fixed $W$, the Lagrangian function $L_G$ is monotonically decreasing under the update rule in (32).
###### Proof.
To prove Theorem V.3, we use the auxiliary function approach. For ease of notation, we define $E = C + \Theta$.
First, we find upper-bounds for each positive term in $L_G$. By the inequality $a \le \frac{a^2 + b^2}{2b}$ for $a, b > 0$, we get
$$\mathrm{Tr}(G^T B) = \sum_{ik}B_{ik}G_{ik} \le \sum_{ik}B_{ik}\frac{G_{ik}^2 + G_{ik}'^2}{2G'_{ik}}. \tag{34}$$
Then, according to Proposition V.2, by setting $\Gamma$ or $\Omega$ to be identity matrices, we get the following upper-bound:
$$\mathrm{Tr}(GE^{+}G^T) \le \sum_{ik}\frac{(G'E^{+})_{ik}G_{ik}^2}{G'_{ik}}. \tag{35}$$
Then, by the inequality $a \ge b\big(1 + \log\frac{a}{b}\big)$ for $a, b > 0$, we get the following lower-bounds for the negative terms:
$$\begin{aligned} \mathrm{Tr}(G^T A) &\ge \sum_{ik}A_{ik}G'_{ik}\Big(1 + \log\frac{G_{ik}}{G'_{ik}}\Big), \\ \mathrm{Tr}(GE^{-}G^T) &\ge \sum_{ikl}E^{-}_{kl}G'_{ik}G'_{il}\Big(1 + \log\frac{G_{ik}G_{il}}{G'_{ik}G'_{il}}\Big). \end{aligned} \tag{36}$$
Hence, combining the above bounds, we construct an auxiliary function $J(G, G')$ for $L_G$:
$$\begin{aligned} J(G, G') =\ & -\sum_{ik}A_{ik}G'_{ik}\Big(1 + \log\frac{G_{ik}}{G'_{ik}}\Big) + \lambda\sum_{ik}B_{ik}\frac{G_{ik}^2 + G_{ik}'^2}{2G'_{ik}} + \frac{1}{2}\sum_{ik}\frac{(G'E^{+})_{ik}G_{ik}^2}{G'_{ik}} \\ & - \frac{1}{2}\sum_{ikl}E^{-}_{kl}G'_{ik}G'_{il}\Big(1 + \log\frac{G_{ik}G_{il}}{G'_{ik}G'_{il}}\Big) - \frac{\gamma}{2}\sum_{ikl}(W^x)_{kl}G'_{ki}G'_{li}\Big(1 + \log\frac{G_{ki}G_{li}}{G'_{ki}G'_{li}}\Big) \\ & + \frac{\gamma}{2}\sum_{ik}\frac{(D^xG')_{ik}}{G'_{ik}}G_{ik}^2 + \frac{1}{2}\mathrm{Tr}(X^TX). \end{aligned} \tag{37}$$
Taking the first-order derivative of (37), we get
$$\begin{aligned} \frac{\partial J(G, G')}{\partial G_{ik}} =\ & -\frac{A_{ik}G'_{ik}}{G_{ik}} + \lambda\frac{B_{ik}G_{ik}}{G'_{ik}} + \frac{(G'E^{+})_{ik}G_{ik}}{G'_{ik}} - \frac{(G'E^{-})_{ik}G'_{ik}}{G_{ik}} \\ & + \gamma\frac{(D^xG')_{ik}G_{ik}}{G'_{ik}} - \gamma\frac{(W^xG')_{ik}G'_{ik}}{G_{ik}}. \end{aligned} \tag{38}$$
Further, we can get the Hessian of (37) by taking the second-order derivative:
$$\begin{aligned} \frac{\partial^2 J(G, G')}{\partial G_{ik}\partial G_{jl}} = \delta_{ij}\delta_{kl}\Big( & \frac{A_{ik}G'_{ik}}{G_{ik}^2} + \lambda\frac{B_{ik}}{G'_{ik}} + \frac{(G'E^{+})_{ik}}{G'_{ik}} \\ & + \frac{(G'E^{-})_{ik}G'_{ik}}{G_{ik}^2} + \gamma\frac{(D^xG')_{ik}}{G'_{ik}} + \gamma\frac{(W^xG')_{ik}G'_{ik}}{G_{ik}^2}\Big). \end{aligned} \tag{39}$$
It is easy to verify that the Hessian matrix has zero elements off the diagonal and nonnegative values on the diagonal. Therefore, $J(G, G')$ is convex in $G$ and its global minimum is obtained by its first-order optimality condition, (38) = 0, which gives rise to
$$G_{ik} = G'_{ik}\sqrt{\frac{A_{ik} + (G'E^{-})_{ik}}{\lambda B_{ik} + (G'E^{+})_{ik}}}. \tag{40}$$
According to Proposition V.1, by setting $L(G) = L_G$ and $J(G, G')$ as in (37), we recover (32), and it is easy to see that $L_G$ is decreasing under (32). ∎
It is seen that in (32), the multiplier matrix $\Theta$ is yet to be determined. By the first-order optimality condition of $L_G$, i.e., (30) = 0, we can see that
$$\begin{aligned} G^T(-A + GC + \lambda B + G\Theta) &= -G^TA + G^TGC + \lambda G^TB + G^TG\Theta \\ &= -G^TA + C + \lambda G^TB + \Theta = 0, \end{aligned} \tag{41}$$
hence
$$E = G^TA - \lambda G^TB. \tag{42}$$
Note that by defining $E^{+} = G^TA$ and $E^{-} = \lambda G^TB$, we have $E = E^{+} - E^{-}$ and $E^{+} \ge 0$, $E^{-} \ge 0$. Substituting $A$, $B$, $E^{+}$, and $E^{-}$ into (32), we get the update rule in (11).
###### Remark 2.
So far, a conclusion can be drawn that by alternately updating $W$ and $G$, the objective function in (9) will decrease and the value sequence converges. We set $F = (W, G)$ and regard the updates of (10) and (11) as a mapping $F^{(t+1)} = \mathcal{M}(F^{(t)})$; then at convergence we have $F^* = \mathcal{M}(F^*)$. Following [13, 42], with the non-negativity constraint enforced, we expand $F^{(t+1)} \simeq F^* + \frac{\partial \mathcal{M}}{\partial F}(F^{(t)} - F^*)$, which indicates that $\|F^{(t+1)} - F^*\| \le \big\|\frac{\partial \mathcal{M}}{\partial F}\big\|\,\|F^{(t)} - F^*\|$ under an appropriate matrix norm. In general, $\frac{\partial \mathcal{M}}{\partial F} \neq 0$, hence the updates of (10) and (11) roughly have a first-order convergence rate.
## VI Experiments
In this section, we conduct experiments to verify the effectiveness of the proposed KLS-NMF. We will present the evaluation metrics, benchmark datasets, algorithms in comparison, and experimental results in detail.
16. Which diagram represents the set of all points $(x, y)$ satisfying $y^2-2y=x^2+2x$?
17. The positive integers $m$, $n$ and $p$ satisfy the equation $3m+\dfrac{3}{n+\frac{1}{p}}=17$.
What is the value of $p$?
18. Two circles $C_1$ and $C_2$ have their centres at the point (3,4) and touch a third circle, $C_3$. The centre of $C_3$ is at the point (0,0) and its radius is 2.
What is the sum of the radii of the two circles $C_1$ and $C_2$?
19. The letters $p$, $q$, $r$, $s$ and $t$ represent different positive single-digit numbers such that $p - q = r$ and $r - s = t$.
How many different values could $t$ have?
20. The real numbers $x$ and $y$ satisfy the equations $4^y=\dfrac{1}{8(\sqrt2)^{x+2}}$ and $4^y=\dfrac{1}{8(\sqrt2)^{x+2}}$.
What is the value of $5^{x+y}$?
21. When written out in full, the number $(10^{2020}+2020)^2$ has 4041 digits.
What is the sum of the digits of this 4041-digit number?
22. A square with perimeter 4 cm can be cut into two congruent right-angled triangles and two congruent trapezia as shown in the first diagram in such a way that the four pieces can be rearranged to form the rectangle shown in the second diagram.
What is the perimeter, in centimetres, of this rectangle?
23. A function $f$ satisfies $y^3f(x)=x^3f(y)$ and $f(3)\not=0.$
What is the value of $\dfrac{f(20)-f(2)}{f(3)}$
24. In the diagram shown, $M$ is the mid-point of $PQ$. The line $PS$ bisects $\angle{RPQ}$ and intersects $RQ$ at $S$. The line $ST$ is parallel to $PR$ and intersects $PQ$ at $T$.
The length of $PQ$ is 12 and the length of $MT$ is 1. The angle $SQT$ is $120^\circ$.
What is the length of $SQ$ ?
25. A regular $m$-gon, a regular $n$-gon and a regular $p$-gon share a vertex and pairwise share edges, as shown in the diagram.
What is the largest possible value of $p$ ?
anonymous 5 years ago Using the graph below, find values for the radius $r$, the angle $\theta$ (in both degrees and radians), and the coordinates for the point $P$ labeled. (Note: The angle $\theta$ is labeled $Q$ on the graph.) The radius is 8, the angle is 29 degrees and the arc is 4. I'll attach the graph.
1. anonymous
2. anonymous
it's supposed to say the angle theta
3. anonymous
I need help finding the points for p
# Annoying <TEXTAREA WRAP=HARD
Hi,
In Ticket/Update.html, TEXTAREA has WRAP=HARD, which is:
– hard to get used to, since normally you enter the linebreaks yourself,
and hence annoying
– When pasting text from some output log, lines are broken (terminal width
is normally 80 symbols, not 72)
– Opera browser inserts extra linebreaks
http://list.opera.com/pipermail/opera-users/2003-March/018395.html
– This tag is not HTML 4.01 standard
http://www.htmlcodetutorial.com/forms/_TEXTAREA_WRAP.html
Considering all this, is it possible to remove it from the output?
Maybe another configuration option would be useful?
Cheers,
Stan
Hi,
In Ticket/Update.html, TEXTAREA has WRAP=HARD, which is:
– hard to get used to, since normally you enter the linebreaks yourself,
and hence annoying
I certainly don’t have to. My mail client is smart enough to wrap text
on my behalf. (It just did so.) I suspect most users are also used to
mail clients that are similarly intelligent.
– When pasting text from some output log, lines are broken (terminal width
is normally 80 symbols, not 72)
A valid concern.
– Opera browser inserts extra linebreaks
http://list.opera.com/pipermail/opera-users/2003-March/018395.html
I’m sorry your browser is broken.
– This tag is not HTML 4.01 standard
http://www.htmlcodetutorial.com/forms/_TEXTAREA_WRAP.html
It is, however, supported by just about every browser on the planet.
Considering all this, is it possible to remove it from the output?
No.
Maybe another configuration option would be useful?
A local /Elements/MessageBox overlay sounds like it would fix this
behaviour for your local site.
Cheers,
Stan
rt-devel mailing list
rt-devel@lists.fsck.com
http://lists.fsck.com/mailman/listinfo/rt-devel
http://www.bestpractical.com/rt – Trouble Ticketing. Free.
Jesse wrote the following:
– When pasting text from some output log, lines are broken
(terminal width
is normally 80 symbols, not 72)
A valid concern.
This tends to be even worse for us, since we often have log lines that are
longer than 80 symbols, in fact we almost never have have log lines less
than that…
– Opera browser inserts extra linebreaks
http://list.opera.com/pipermail/opera-users/2003-March/018395.html
I’m sorry your browser is broken.
One bright note however, I just got opera 7.11 (thank you apt! :-)) and the
test on the following line shows that it does not seem to still have the
problem…
– This tag is not HTML 4.01 standard
http://www.htmlcodetutorial.com/forms/_TEXTAREA_WRAP.html
It is, however, supported by just about every browser on the planet.
One other point to consider is that at least some people (ie. us :-)) use
RT3 primarily as a web interface, with e-mail notification. Since most mail
clients are also smart enough to wrap long lines, and it tends to look
better when you let the browser deal with the wrapping for the web view,
would it be possible to change the WARP=HARD to WRAP=SOFT?
Maybe another configuration option would be useful?
A local /Elements/MessageBox overlay sounds like it would fix this
behaviour for your local site.
This is of course also possible
To do this, we just copy the current share/html/Elements/MessageBox into
local/html/Elements/MessageBox and then change the WRAP= line?
Paul
Jesse wrote the following:
– When pasting text from some output log, lines are broken
(terminal width
is normally 80 symbols, not 72)
A valid concern.
This tends to be even worse for us, since we often have log lines that are
longer than 80 symbols, in fact we almost never have have log lines less
than that…
A local /Elements/MessageBox overlay sounds like it would fix this
behaviour for your local site.
This is of course also possible
To do this, we just copy the current share/html/Elements/MessageBox into
local/html/Elements/MessageBox and then change the WRAP= line?
Right. Or, why the hell not, I’ll take a patch that lets you configure
text wrapping from the config file.
# What Kind of Country Have We Become...
#### Turd Ferguson
##### Well-Known Member
Correct. So read the definition of "Pilot of command" in FAR1.1 (Solo isn't defined).
For Pilot in command it says: "(3) Holds the appropriate category, class and type rating, if appropriate, for the conduct of the flight."
Only time category and class and type rating are required is for turbojet or weights > 12,500.
Only time a EAB requires category and class and type rating is if that EAB is turbojet or weights > 12,500
In practical terms, Ace Student Pilot Joe Moneybags can't fly his SubSonex jet. The OL's require category and class and type rating (or LOA) which a student pilot can not obtain.
Most all other EAB's can be flown solo by a student pilot with appropriate endorsements issued by an authorized instructor.
#### BBerson
##### Well-Known Member
HBA Supporter
Ok, so I should ignore comments from people on these forums that claim I need a seaplane rating to fly solo in a seaplane.
Thanks.
#### Turd Ferguson
##### Well-Known Member
Ok, so I should ignore comments from people on these forums that claim I need a seaplane rating to fly solo in a seaplane.
Thanks.
don't need a rating if you have an endorsement from an authorized instructor.
#### pictsidhe
##### Well-Known Member
I have an idea for you pictsidhe. Get a hold of chopper girl and the two of you can get her plane Dorothy in the air and look for who finds the "When" first. All the football gear and pads don't help much whether at 50 mph or 150. Heck if ya feel real gutsy jump a train and watch for a tree coming on the side of the tracks and jump! :shock: Tell me then the difference between a soft landing or a hard one. :gig:
Dorothy seems like a good plane to learn in. I'll scrub the planned glider training. Now, let's see someone try their first ever landing in, oh, an F-104?
#### Tiger Tim
##### Well-Known Member
Now, let's see someone try their first ever landing in, oh, an F-104?
Should be doable as long as no second landing is planned.
#### BBerson
##### Well-Known Member
HBA Supporter
don't need a rating if you have an endorsement from an authorized instructor.
Why would I need an endorsement if I don't need a rating?
If I do need an endorsement, what would that entail? Specific hull type Amphibian flight time? Or ground instruction only?
To get an endorsement in my aircraft could be problematic. I would need an endorsement prior to the test flight.
Or finding a suitable qualified test pilot is also problematic. I don't think I want someone else doing the test flight on water.
Edit: maybe you meant endorsement to take passengers under Light Sport. I was talking about solo EA-B (EA-B can be 3 seats).
Last edited:
#### pictsidhe
##### Well-Known Member
I'm beginning to suspect that a law degree is needed first?
#### BBerson
##### Well-Known Member
HBA Supporter
Ok, I found this in AC 61-65F, see below in brackets.
Page 48 lists the sample logbook endorsement for a Private Pilot seeking solo privileges in a different class of Type Certificated aircraft.
But I still don't think this applies to EA-B since no rating is required at all for EA-B, see 61.31(d)(1)
[2/25/16 AC 61-65F Appendix 1
70. To act as PIC of an aircraft in solo operations when the pilot does not hold an appropriate category/class rating: § 61.31(d)(2).
I certify that (First name, MI, Last name) has received the training as required by § 61.31(d)(2) to serve as a PIC in a (specific category and class of aircraft). I have determined that he/she is prepared to serve as PIC in that (make and model) aircraft. Limitations: (optional).
/s/ [date] J. J. Jones 987654321CFI Exp. 12-31-19]
#### Mark Z
##### Well-Known Member
*** The FAA sure does everything it can to birth new pilots by encouraging them. ***
#### Doggzilla
##### Well-Known Member
HBA Supporter
If we are going to encourage pilots, we need to develop a cheaper SEL and Instrument aircraft.
Everyone forgets that aircraft with 100hp can still be SEL or instrument rated. A tandem aircraft with an O-200 or other similar engine would allow schools to dramatically reduce the cost of flight hours.
Something like a miniature Tucano. Would probably be able to do over 120 knots with an O-200, very close to a Cessna 172. And would be much more comfortable than a 152, and maybe even better than a 172. Both of which have horrible leg room for taller pilots.
#### Turd Ferguson
##### Well-Known Member
Ok, I found this in AC 61-65F, see below in brackets.
Page 48 lists the sample logbook endorsement for a Private Pilot seeking solo privileges in a different class of Type Certificated aircraft.
But I still don't think this applies to EA-B since no rating is required at all for EA-B, see 61.31(d)(1)
You need a class rating or an endorsement.
§ 61.31
d)Aircraft category, class, and type ratings: Limitations on operating an aircraft as the pilot in command.
To serve as the pilot in command of an aircraft, a person must -
(1) Hold the appropriate category, class, and type rating (if a class or type rating is required) for the aircraft to be flown
The current EAB OL matrix just published a few months ago will require pilot to have category and class rating, in this example a SES class rating
Or, will have to have an training type endorsement for solo.
What you don't need is a type rating unless it fits under the FAA requirement for type rating.
#### Turd Ferguson
##### Well-Known Member
If we are going to encourage pilots, we need to
rip out the instrument panel and put a realtime social media interface in it's place so the pilot can snapchat or facebook his/her friends while they wist along amongst the clouds. All this training needs to be free, the rich people have plenty of money so can just add a tax which sends that revenue to flight schools to fund new pilot applicants. The machine also need to be completely autonomous or self-driving. At the end of every lesson, students will stand in a circle hold hands and sing kumbaya
#### Turd Ferguson
##### Well-Known Member
Something like a miniature Tucano.
I would like that idea but to get it miniature enough for an O-200 it would need miniature pilots to fly it.
BJC
#### BBerson
##### Well-Known Member
HBA Supporter
So back to "What Kind of Country Have We Become".
The FAA is directly violating the FAR's by putting contradictory Operating Limitations on EA-B. Does EAA know this is happening?
#### Doggzilla
##### Well-Known Member
HBA Supporter
I would like that idea but to get it miniature enough for an O-200 it would need miniature pilots to fly it.
If we completely ignore aircraft like the T-51 or Baby Mustang.
The O-200 powers Cessna 152 just fine. They just dont have the room, which a tandem will greatly improve.
#### Turd Ferguson
##### Well-Known Member
If we completely ignore aircraft like the T-51 or Baby Mustang.
The O-200 powers Cessna 152 just fine. They just dont have the room, which a tandem will greatly improve.
The Lycoming O-235 powers the Cessna 152.
With any scaled project, there are scale issues, if you scale a plane to 75%, the cockpit area has to be scaled out of proportion or it will require a 75% size pilot. Sometimes can make it work, sometimes not so much.
The reason planes like the C-150 became so popular was because of the side by side seating. In the early '70's when new trainers were being developed, e.g. PA-38, the overwhelming consensus from industry was side by side seating. Actually, the Tomahawk is quite comfortable for 2 people and it performs well enough on it's 115 hp. Unfortunately, nobody is going to build a plane like that today because there is no market for it.
#### Doggzilla
##### Well-Known Member
HBA Supporter
The Lycoming O-235 powers the Cessna 152.
With any scaled project, there are scale issues, if you scale a plane to 75%, the cockpit area has to be scaled out of proportion or it will require a 75% size pilot. Sometimes can make it work, sometimes not so much.
The reason planes like the C-150 became so popular was because of the side by side seating. In the early '70's when new trainers were being developed, e.g. PA-38, the overwhelming consensus from industry was side by side seating. Actually, the Tomahawk is quite comfortable for 2 people and it performs well enough on it's 115 hp. Unfortunately, nobody is going to build a plane like that today because there is no market for it.
They were still using them when I got my Private license back in 2006. The reason there is no market for aircraft in that class is because nobody is going to pay $150,000 for a piece of crap like the Skycatcher. I guarantee you that schools would be interested in a certified tandem in the same class for the same price, but certified as a SEL. Especially if they are saving$30 an hour on fuel, and have a payment 40% lower than a C172 or SR22. Schools often run on the skin of their teeth, any savings is a huge incentive.
There simply arent any aircraft under 180hp or under $250,000 that can accomplish what the C172 or SR22 can. Schools just dont throw that money away because they like to throw money away, its because they dont have a choice of anything lighter or cheaper. Theyre forced into it. #### Turd Ferguson ##### Well-Known Member They were still using them when I got my Private license back in 2006. The reason there is no market for aircraft in that class is because nobody is going to pay$150,000 for a piece of crap like the Skycatcher.
I guarantee you that schools would be interested in a certified tandem in the same class for the same price, but certified as a SEL.
If there was enough interest, they would be built. No manufacturer is going to put up that kind of investment and hope people show up to buy it. They spend a lot of effort with marketing surveys and such so they know what the market is doing.
Especially if they are saving $30 an hour on fuel, and have a payment 40% lower than a C172 or SR22. Schools often run on the skin of their teeth, any savings is a huge incentive. There simply arent any aircraft under 180hp or under$250,000 that can accomplish what the C172 or SR22 can.
Schools just dont throw that money away because they like to throw money away, its because they dont have a choice of anything lighter or cheaper. Theyre forced into it.
A mom & pop shop offering FBO type flight training can't afford a new anything. They scrape by with 1970's Cessna 172's or PA-28's that may see 200 hrs of use per yr. When I learned to fly in the '70's a rental C-150 on the flight line was doing 100hrs a month or it was sold off. Planes lose money when they sit around and a plane that didn't fly 100 hr a month was considered a liability. Get rid of it.
At the other end. the pilot mills and universities all want a glass panel Cirrus for training. A university program here where I'm at has 40 Cirrus on the flight line. They don't care about the cost because that is passed on the the student. They are salivating over getting a Cirrus Vision jet for future advanced / turbine / jet flight training. What about the cost I ask? If the students want the training, they will pay for it. And they are already lining up.
That same flight school has 2 C-150's that they use for National Intercollegiate Flying Association SAFECON competition. Those planes were getting long in the tooth so they had to either replace them or refurb them. They chose to refurb. $100k each. To refurb a C-150. Mom and Pop can't afford that with their low activity flight training.

Numerous studies have been done by alphabet groups and other industry groups and cost just isn't that big a deal for people that want to learn how to fly. Problem is, not many people want to learn how to fly anymore. There has been a cultural shift away from flying and flying your own plane and we will probably never see activity in GA like there was at the 1980 peak. Just the way it is.

#### Doggzilla

##### Well-Known Member

HBA Supporter

If there was enough interest, they would be built. No manufacturer is going to put up that kind of investment and hope people show up to buy it. They spend a lot of effort with marketing surveys and such so they know what the market is doing. A mom & pop shop offering FBO type flight training can't afford a new anything. They scrape by with 1970's Cessna 172's or PA-28's that may see 200 hrs of use per yr. When I learned to fly in the '70's a rental C-150 on the flight line was doing 100 hrs a month or it was sold off. Planes lose money when they sit around and a plane that didn't fly 100 hrs a month was considered a liability. Get rid of it.
Apparently not, if they managed to completely botch the Skycatcher. To say corporations understand anything about consumer demands is absurd. They are always the last to the game.
And the majority of new customers are schools who do not have any other options but to buy the C172 or SR22.
And who says cost isn't a factor? The majority of GA flight hours are students taking on loans. And flight schools make a very small profit margin. Both the students and the schools are interested in savings.
Those are some incredibly important things to ignore.
You are essentially telling me you disagree because you refuse to acknowledge my points. That is not an argument, that is an emotion.
#### BJC
##### Well-Known Member
HBA Supporter
...And who says cost isn't a factor? The majority of GA flight hours are students taking on loans. And flight schools make a very small profit margin. Both the students and the schools are interested in savings.
The majority of after high school career training, whether it be learning a trade or going to college, involves student loans. Why would one expect learning to fly to be any different?
As discussed in another thread last year, the cost of such a program has increased dramatically - much faster than inflation - since the advent of student loans.
BJC
2
|
{}
|
# Circular permutation - Arranging 4 persons around a circular table where 8 seats are there.
Suppose 4 persons $A, B, C$ and $D$ sit around a round table with 8 seats. Rotation by 8, 16, 24, ... seats defines the same arrangement, and other rotations give different arrangements. Find the number of ways that these four people can be seated at the round table.
My solution:
Place one person in any seat; that seat serves as a reference.
Now placing the remaining 3 persons in the other 7 seats gives $7\times6\times5$ arrangements (if the seats are not labeled),
or $8\times7\times6\times5$ (if the seats are labeled).
Is this approach right?
• Your answers are correct. – N. F. Taussig Mar 12 '16 at 14:44
Simply thinking your way: the chair for the first person can be any one of the $8$, that is ${8\choose 1}$ choices, so it should be $8\cdot 7\cdot 6\cdot 5=1680$. Alternatively, these are simple permutations, so it is just $^{8}P_{4}$, which yields the same answer.
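Both counts are easy to sanity-check by brute force. The sketch below (plain Python, not part of the original answer) enumerates seatings of 4 people in 8 labeled seats, then identifies the 8 full rotations of each seating:

```python
from itertools import permutations

# Labeled seats: an arrangement is an ordered choice of 4 of the 8 seats.
labeled = len(list(permutations(range(8), 4)))
print(labeled)  # 1680 = 8 * 7 * 6 * 5

# If the 8 rotations of a seating are identified, each equivalence
# class contains exactly 8 labeled representatives.
unlabeled = labeled // 8
print(unlabeled)  # 210 = 7 * 6 * 5
```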
|
{}
|
# Constraints on axionlike particles with H.E.S.S. from the irregularity of the PKS 2155-304 energy spectrum
7 APC - AHE - APC - Astrophysique des Hautes Energies
APC (UMR_7164) - AstroParticule et Cosmologie, Dipartimento di Astronomia, Universita degli Studi di Bologna
Abstract : Axionlike particles (ALPs) are hypothetical light (sub-eV) bosons predicted in some extensions of the Standard Model of particle physics. In astrophysical environments comprising high-energy gamma rays and turbulent magnetic fields, the existence of ALPs can modify the energy spectrum of the gamma rays for a sufficiently large coupling between ALPs and photons. This modification would take the form of an irregular behavior of the energy spectrum in a limited energy range. Data from the H.E.S.S. observations of the distant BL Lac object PKS 2155-304 (z = 0.116) are used to derive upper limits at the 95% C.L. on the strength of the ALP coupling to photons, $g_{\gamma a} < 2.1\times 10^{-11}$ GeV$^{-1}$ for an ALP mass between 15 neV and 60 neV. The results depend on assumptions on the magnetic field around the source, which are chosen conservatively. The derived constraints apply to both light pseudoscalar and scalar bosons that couple to the electromagnetic field.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00907630
Contributor : Claudine Bombar <>
Submitted on : Thursday, November 21, 2013 - 3:08:28 PM
Last modification on : Thursday, January 7, 2021 - 5:56:02 PM
### Citation
A. Abramowski, Fabio Acero, F. Aharonian, F. Ait Benkhali, A. G. Akhperjanian, et al.. Constraints on axionlike particles with H.E.S.S. from the irregularity of the PKS 2155-304 energy spectrum. Physical Review D, American Physical Society, 2013, 88, pp.102003. ⟨10.1103/PhysRevD.88.102003⟩. ⟨in2p3-00907630⟩
|
{}
|
Order of the filter
I can't seem to figure out the order of the filter. With the knowledge I have, "the order of the filter corresponds to the change, in multiples of 20 dB/decade, in the amplitude response."
If it were an additional 20 dB/decade I would understand. But apparently I see 30 dB/decade. I have no idea what the order of the filter is now!
The Circuit from where i got this response is:
From what I know, this is a band-pass filter formed by L1 and C2.
This resembles the old CryBaby wah pedal. It had a sweepable band-boost filter, or more precisely a sweepable high-pass filter with some resonance boost. This is an active filter where the result is formed by a feedback loop that can be varied by turning a pot. This is not a band-pass filter consisting of only L1 and C2.
In pure math the order is the total number of reactive components (inductors and capacitors) in the signal and feedback paths. If 2 reactive components of the same type happen to be purely in series or in parallel, they should be counted only as one.
In practice the most remarkable effect (here the wah) can be caused by a subcircuit. The others affect remarkably only at the ends of the frequency range. For example C1 only cuts some bass and makes a gap for DC.
The measures of "XXX decibels per octave or decade" are not good for this. They were developed for easy comparison of the steepness or selectivity of frequency-selective filters. This filter is an equalizer; it is not for killing some frequencies.
The formula 20 dB/decade is a kind of approximation, valid at higher frequencies ($\omega \gg 1$).
In a typical first-order low-pass filter, the attenuation is $$10\log(1+\omega ^{2}) \approx 10\log(\omega ^{2}) = 20\log(\omega)$$
This approximation is accurate provided that the frequency is large, and for large frequencies, from 1k to 10k, it looks like the roll-off is -20 dB/decade. Moreover, for lower frequencies the internal capacitances of the BJT play some role, but as the frequency increases, the gain added by these capacitances decreases.
• I rather think that for frequencies below the resonance peak the capacitive properties internally to the BJT plays NO role at all. These effects come into play for very large frequecies only (transit frequency of the BJT). – LvW Nov 30 '16 at 10:08
Yes - the whole circuit has a bandpass characteristic. However, the frequency response of the circuit - in particular, the rising part of the transfer curve for frequencies below the maximum - is determined both by the L-C resonant circuit as well as all coupling capacitors and the associated time constants within the circuit. Hence, it is not surprising that the transfer curve is not identical to a simple 2nd-order L-C bandpass. It is rather a "mixture" between bandpass and high-pass elements. This interpretation is supported by the fact that the falling characteristic (for large frequencies) approaches (nearly) the expected 20dB/dec drop.
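The -20 dB/decade asymptote discussed in the answers above is easy to verify numerically. The sketch below (plain Python, assuming a first-order low-pass normalized to unity cutoff) evaluates the magnitude one decade apart, well above cutoff:

```python
import math

def gain_db(w):
    # Magnitude of H(jw) = 1 / (1 + jw) in dB, cutoff normalized to w = 1
    return -10 * math.log10(1 + w ** 2)

# One decade well above cutoff: w = 100 -> w = 1000
slope = gain_db(1000) - gain_db(100)
print(round(slope, 3))  # approximately -20 dB per decade
```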
|
{}
|
# Tag Info
7
Add this to your notebook or init file: $PrePrint = If[MatrixQ[#], MatrixForm[#], #] &; Then all matrices will automatically display as MatrixForm. If you want to format lists as column vectors also, try $PrePrint = Which[MatrixQ[#], MatrixForm[#], VectorQ[#], ColumnForm[#], True, #] &; Now also ...
5
Since TreeForm produces a GraphPlot and takes the same options as GraphPlot, it can be done by using a custom vertex rendering function. encoding = {{{w, d}, {o, s}}, {{{e, q}, a}, {i, j}}}; TreeForm[encoding, VertexRenderingFunction -> (If[#2 === List, Inset[Text["\[FilledCircle]"], #], Inset[Framed[Text[Style[#2, 18]], Background ...
5
Mathematica will perform an exact calculation if possible. "E" means exactly e, rather than a numerical approximation i.e. 2.718 (with a few more decimal places). When you calculate f[1/10] it gives the exact value because all of the inputs are exact. If you want a numerical approximation, you can make any of the inputs approximate, or you can use N to ...
3
Let me throw a first version into the room: SetAttributes[echo, HoldAll]; echo[x___] := StringRiffle[{##}, ", "] & @@ Function[arg, ToString[Unevaluated[arg], InputForm], {HoldFirst}] /@ Hold[x]; Now you get Print[TemplateApply["><", echo[2^2, "foo", 2 + 2, None]]] (* >2^2, "foo", 2 + 2, None< *) Be aware that you might ...
3
Just to add some diversity, although I think m_goldberg's answer is very convenient and should be used in most cases. Nevertheless, always remember that you can easily de-structure Mathematica expressions, even the box-expressions that are used for displaying things in the front end. One possible way to start is to look at the box-expressions of a very ...
3
You could try something like the following: Table[dS[i] = -β[i]*S[i], {i, 4}]; To see the definitions for dS use the function Definition. Definition[dS] (* dS[1]=-S[1] β[1] dS[2]=-S[2] β[2] dS[3]=-S[3] β[3] dS[4]=-S[4] β[4] *) See that you can perform operations on dS. dS[1] + dS[2] (* -S[1] β[1] - S[2] β[2] *)
3
Slightly more handy approach (unless you want to work with negative integers): Grid[ {#, "+", #2, "=", Item["", ItemSize -> 3]} & @@@ RandomInteger[20, {10, 2}] , Alignment -> {{Right, Center, Right, Center, Center}}, BaseStyle -> {Italic, 20}, Frame -> True, Background -> {{}, {{GrayLevel@.9, GrayLevel@.95}}} ] A simplistic ...
3
Rough approach: Tooltip resources are stored in FileNameJoin[{ $InstallationDirectory, "SystemFiles", "FrontEnd", "TextResources", "ToolTip.tr"}] In order to not mess with installation directory you can copy this file to$UserBaseDirectory/SystemFiles... and replace labels you want. For example: @@resource ToolTipCut Cut (replace this line ...
2
You can use UnderBar instead of Style[...,Underlined]: Hyperlink[UnderBar[#], #] &@ "http://www.wolfram.com/Sine.html" Note: The issue you observe is mentioned in the docs Underlined >> Possible Issues: Underlined will recursively affect all elements of an expression
2
The purpose of using HoldForm and Plus is to allow automatic formatting rules for Plus to apply while preventing evaluation. Since you want custom formatting rules that method may be inapplicable. To get alignment we can either use a tabular format like Grid (as Kuba did) or we can pad the numbers themselves. One automatic approach to the latter is ...
2
Since you did not include your data, I am generating some fake data to play with: fakedata = Transpose@ Insert[ Transpose@ Insert[ RandomReal[{0, 1}, {6, 50}], Array["y lbl " <> ToString@# &, 50], 1 ], {""}~Join~Array["x lbl " <> ToString@# &, 6], 1 ]; The code first generates a 6-row by ...
2
As @kattern pointed out, MatrixForm will pretty print your lists to look like matrices. {{0}, {1}, {0}, {-1}} // MatrixForm A word of caution, however: MatrixForm can get in the way of your calculations if you are not careful. See this question and the related answers: Why does MatrixForm affect calculations?. For instance, you could get bitten by ...
1
I know I have answered a very similar question before but I can't find it now. Of what I can find my own question How can I get the unchanged Box form of an arbitrary expression? is probably closest, though more recently Why aren't parentheses ( ) an expression in Mathematica? maybe closer in application to what you need. For pursuing your goal it is ...
1
I'm running V10.1 on OS X 10.10.2 (Yosemite). There may be a problem on OS X. On all versions of OS X prior to Yosemite the system font was Lucida Grande. On Yosemite it is Helvetica Neue. So I tried both. ButtonLabelStyle[x_] := Style[x, 9, FontFamily -> "Helvetica Neue"] Button["//ContractBasis" // ButtonLabelStyle, Null, ImageSize -> 85] ...
1
Using the function SparseArrayExpressionToTree: ClearAll[trF] trF[s_: {0.01, .05}][e_, opts : OptionsPattern[Options[Graph]]] := Module[{saett = SparseArrayExpressionToTree[e], edges, vertices, vsizes, labels, vlabels}, edges = saett[[All, All, 2]]; vertices = DeleteDuplicates[Join @@ List @@@ edges]; labels = ArrayPad[Replace[saett[[All, ...
1
Second update (2015-05-11): Amandeep, you recently left a comment with the following expression: xpr = a1*D11*Cos[n*x] + a0*a1*D11*Cos[n*x] + 1/2*a1*a3*D11*Cos[n*x] + 1/2*a2*a4*D11*Cos[n*x] + a2*a3*a4*Sin[n*x] I believe that you may have left out a multiplication sign on the argument of the last Sin function. Once we add that back in, the approach using ...
1
I think you want Row: Subscript[a, Row@{b, b}] abb Consider also Indexed but beware that it is not an inert (formatting) function. Indexed[a, {b, b}] abb
1
list = {"b", "c", "d"}; Subscript[a, list[[1]] <> list[[1]]] // TraditionalForm or list = {b, c, d}; Subscript[a, ToString[list[[1]]] <> ToString[list[[1]]]] // TraditionalForm
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
Mathematics
OpenStudy (anonymous):
A nursery is selling one kind of grass mixture for $0.75/lb while another kind is going for $1.10/lb. How much of each kind would need to be mixed to produce a 50 lb mixture of seeds that will sell for $0.90/lb? Thanks.
OpenStudy (amistre64):
seed @ .75 = 50(.90 - 1.10)/(.75 - 1.10), I believe
OpenStudy (amistre64):
200/7 perhaps at .75 ; 28' 4/7
OpenStudy (amistre64):
we can either move the numbers about or simply determine what's left: +3/7, and add 21 more to get 50; that is 21' 3/7:
28' 4/7
21' 3/7
-------
49' 7/7
OpenStudy (anonymous):
You must express what you know in terms of equations. Let $x _{1}$ be the amount of seed 1 in pounds and $x _{2}$ be the amount of seed 2 in pounds. Then the total cost of the mixture will be: $0.75x _{1}+1.10x _{2} = 0.9\left( x_{1} + x_{2} \right)$ And the total weight of the mixture will be: $x_{1} + x_{2} = 50$ Substitute equation 2 in equation 1 to get (remember, this equation is about the cost of the mix): $0.75x _{1}+1.10x _{2} = 0.9*50 = 45$ Solve equation 1 for one of the variables $x_{1} = \frac{45 - 1.10x_{2}}{0.75} = 60 - 1.466^{-} x_{2}$ Substitute this proportion back in equation 2: $60 - 1.466^{-} x_{2} + x_2 = 50$ $0.466^{-} x_{2} = 10$ $x_{2} = \frac{10}{0.466^{-}} = 21.428571428571428571428571428571$ Substitute this value back in equation 2 to find the missing value: $x_{1} + 21.428571428571428571428571428571 = 50$ $x_{1} = 28.571428571428571428571428571429$
OpenStudy (amistre64):
a + b = t       ; total the amount needed
ax + by = tz    ; amounts * prices equals total amount * needed price

then eliminate the one you're not looking for:

 a +  b =  t   <-- * -x
-ax - bx = -tx
 ax + by =  tz
---------------
b(y - x) = t(z - x)   ; and solve for "b"

b@y = t(z - x)/(y - x)
OpenStudy (amistre64):
seed@.75 = 50 (.90 - 1.10) / (.75 - 1.10)
seed@1.10 = 50 (.90 - .75) / (1.10 - .75)
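amistre64's elimination formula above can be wrapped into a small function. The sketch below (plain Python, function name my own) reproduces both amounts:

```python
def mix_amounts(total, target, price_a, price_b):
    """Pounds of each seed so that `total` lb of blend sells at `target` per lb.

    Derived from: a + b = total and price_a*a + price_b*b = target*total.
    """
    a = total * (target - price_b) / (price_a - price_b)
    b = total - a
    return a, b

a, b = mix_amounts(50, 0.90, 0.75, 1.10)
print(round(a, 4), round(b, 4))  # about 28.5714 lb at $0.75 and 21.4286 lb at $1.10
```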
|
{}
|
Lemma 77.14.1. In the situation above, if all the morphisms $f_\phi$ are flat, then there exists a cardinal $\kappa$ such that every object $(\{ \mathcal{F}_ i\} _{i \in I}, \{ \alpha _\phi \} _{\phi \in \Phi })$ of $\textit{CQC}(X)$ is the directed colimit of its $\kappa$-generated submodules.
Proof. In the lemma and in this proof a submodule of $(\{ \mathcal{F}_ i\} _{i \in I}, \{ \alpha _\phi \} _{\phi \in \Phi })$ means the data of a quasi-coherent submodule $\mathcal{G}_ i \subset \mathcal{F}_ i$ for all $i$ such that $\alpha _\phi (f_\phi ^*\mathcal{G}_ i) = \mathcal{G}_{i'}$ as subsheaves of $\mathcal{F}_{i'}$ for all $\phi \in \Phi$. This makes sense because, $f_\phi$ being flat, the pullback $f^*_\phi$ is exact, i.e., preserves subsheaves. The proof will be a variant of the proof of Properties, Lemma 28.23.3. We urge the reader to read that proof first.
We claim that it suffices to prove the lemma in case all the schemes $X_ i$ are affine. To see this let
$J = \coprod \nolimits _{i \in I} \{ U \subset X_ i\text{ affine open}\}$
and let
\begin{align*} \Psi = & \coprod \nolimits _{\phi \in \Phi } \{ (U, V) \mid U \subset X_ i, V \subset X_{i'}\text{ affine open with } f_\phi (U) \subset V \} \\ & \amalg \coprod \nolimits _{i \in I} \{ (U, U') \mid U, U' \subset X_ i\text{ affine open with } U \subset U' \} \end{align*}
endowed with the obvious map $\Psi \to J \times J$. Then our $(\mathcal{F}, \alpha )$ induces a crystal in quasi-coherent sheaves $(\{ \mathcal{H}_ j\} _{j \in J}, \{ \beta _\psi \} _{\psi \in \Psi })$ on $Y = (J, \Psi )$ by setting $\mathcal{H}_{(i, U)} = \mathcal{F}_ i|_ U$ for $(i, U) \in J$ and setting $\beta _\psi$ for $\psi \in \Psi$ equal to the restriction of $\alpha _\phi$ to $U$ if $\psi = (\phi , U, V)$ and equal to $\text{id} : (\mathcal{F}_ i|_{U'})|_ U \to \mathcal{F}_ i|_ U$ when $\psi = (i, U, U')$. Moreover, submodules of $(\{ \mathcal{H}_ j\} _{j \in J}, \{ \beta _\psi \} _{\psi \in \Psi })$ correspond $1$-to-$1$ with submodules of $(\{ \mathcal{F}_ i\} _{i \in I}, \{ \alpha _\phi \} _{\phi \in \Phi })$. We omit the proof (hint: use Sheaves, Section 6.30). Moreover, it is clear that if $\kappa$ works for $Y$, then the same $\kappa$ works for $X$ (by the definition of $\kappa$-generated modules). Hence it suffices to prove the lemma for crystals in quasi-coherent sheaves on $Y$.
Assume that all the schemes $X_ i$ are affine. Let $\kappa$ be an infinite cardinal larger than the cardinality of $I$ or $\Phi$. Let $(\{ \mathcal{F}_ i\} _{i \in I}, \{ \alpha _\phi \} _{\phi \in \Phi })$ be an object of $\textit{CQC}(X)$. For each $i$ write $X_ i = \mathop{\mathrm{Spec}}(A_ i)$ and $M_ i = \Gamma (X_ i, \mathcal{F}_ i)$. For every $\phi \in \Phi$ with $j(\phi ) = (i, i')$ the map $\alpha _\phi$ translates into an $A_{i'}$-module isomorphism
$\alpha _\phi : M_ i \otimes _{A_ i} A_{i'} \longrightarrow M_{i'}$
Using the axiom of choice choose a rule
$(\phi , m') \longmapsto S(\phi , m')$
where the source is the collection of pairs $(\phi , m')$ such that $\phi \in \Phi$ with $j(\phi ) = (i, i')$ and $m' \in M_{i'}$ and where the output is a finite subset $S(\phi , m') \subset M_ i$ so that
$m' = \alpha _\phi \left(\sum \nolimits _{m \in S(\phi , m')} m \otimes a'_ m\right)$
for some $a'_ m \in A_{i'}$.
Having made these choices we claim that any section of any $\mathcal{F}_ i$ over any $X_ i$ is in a $\kappa$-generated submodule. To see this suppose that we are given a collection $\mathcal{S} = \{ S_ i\} _{i \in I}$ of subsets $S_ i \subset M_ i$ each with cardinality at most $\kappa$. Then we define a new collection $\mathcal{S}' = \{ S'_ i\} _{i \in I}$ with
$S'_ i = S_ i \cup \bigcup \nolimits _{(\phi , m'),\ j(\phi ) = (i, i'),\ m' \in S_{i'}} S(\phi , m')$
Note that each $S'_ i$ still has cardinality at most $\kappa$. Set $\mathcal{S}^{(0)} = \mathcal{S}$, $\mathcal{S}^{(1)} = \mathcal{S}'$ and by induction $\mathcal{S}^{(n + 1)} = (\mathcal{S}^{(n)})'$. Then set $S_ i^{(\infty )} = \bigcup _{n \geq 0} S_ i^{(n)}$ and $\mathcal{S}^{(\infty )} = \{ S_ i^{(\infty )}\} _{i \in I}$. By construction, for every $\phi \in \Phi$ with $j(\phi ) = (i, i')$ and every $m' \in S^{(\infty )}_{i'}$ we can write $m'$ as a finite linear combination of images $\alpha _\phi (m \otimes 1)$ with $m \in S_ i^{(\infty )}$. Thus we see that setting $N_ i$ equal to the $A_ i$-submodule of $M_ i$ generated by $S_ i^{(\infty )}$ the corresponding quasi-coherent submodules $\widetilde{N_ i} \subset \mathcal{F}_ i$ form a $\kappa$-generated submodule. This finishes the proof. $\square$
|
{}
|
# ASTERICS Wiki pages
### Site Tools
open:wp4:wp4techforum5:hackathon
# Differences
This shows you the differences between two versions of the page.
open:wp4:wp4techforum5:hackathon [2019/02/28 09:26]
nebot
open:wp4:wp4techforum5:hackathon [2019/03/04 09:51] (current)
morten
Line 11:
I ) ObsCore and other VO standards in the context of EST and solar data
-Participants Morten Frantz, Marco Guenter, Thomas Hederer, François Bonnarel
+Participants Morten Franz, Thomas Hederer, Carl Schaffer, François Bonnarel
Discussion was about how Solar data can be described and discovered using an ObsCore/EPNTAP strategy.
|
{}
|
# zbMATH — the first resource for mathematics
Cascades of homoclinic orbits to, and chaos near, a Hamiltonian saddle- center. (English) Zbl 0749.58022
The Hamiltonian two degrees of freedom system under investigation in this paper is modeled by
$H=\frac{1}{2}\omega \left(p_{1}^{2}+q_{1}^{2}\right)+\frac{1}{2}\lambda \left(p_{2}^{2}-q_{2}^{2}\right)+\alpha q_{1}^{3}+\beta q_{1}^{2}q_{2}+\gamma q_{1}q_{2}^{2}+\delta q_{2}^{3}.$
Assuming $\delta \ne 0$ and $\omega \lambda >0$, rescaling permits taking $\lambda =1$ and $\delta =\frac{1}{3}$. The system has a saddle-centre at the origin and, for $\gamma =0$, a homoclinic solution. Considering $\gamma$ as a small perturbation parameter, the authors first study homoclinic bifurcations on the zero energy surface. A Poincaré map is constructed as the composition of a Shil’nikov-type map and a global map, obtained via an excursion near the homoclinic solution. The reversibility of the system plays a crucial role in many of the subtle arguments and in fact makes the bifurcation problem in the end a codimension one phenomenon. It is shown that for each $n\ge 2$, there is a sequence of values for $\gamma$ (tending to zero), for which there are $n$-homoclinic orbits and that doubling sequences of $2n$-homoclinic values converge to each $n$-homoclinic value. Further, under some generic conditions, the existence of horseshoes is established, implying the existence of sets of $n$-periodic orbits and chaotic orbits. Among the applications, discussed in the final section, we find the Hénon-Heiles Hamiltonian, the orthogonal double pendulum and the plane restricted three-body problem.
Reviewer: W.Sarlet (Gent)
##### MSC:
37J99 Finite-dimensional Hamiltonian, Lagrangian, contact, and nonholonomic systems
37G99 Local and nonlocal bifurcation theory
34C25 Periodic solutions of ODE
|
{}
|
Chapter 11
# Floating-Point Arithmetic Instructions
The fixed-point number representation is appropriate for representing numbers with small numerical values that are considered as positive or negative integers; that is, the implied radix point is to the right of the low-order bit. The same algorithms for arithmetic operations can be employed if the implied radix point is to the immediate right of the sign bit, thus representing a signed fraction.
The range for a 16-bit fixed-point number is from $-(2^{15})$ to $+(2^{15} - 1)$, which is inadequate for some numbers; for example, the following operation:
$28,400,000,000 \times 0.0000000546$
This operation can also be written in scientific notation, as follows:
$\left(0.284×{10}^{11}\right)×\left(0.546×{10}^{-7}\right)$
where 10 is the base and 11 ...
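As a quick numeric check of the example above (plain Python, not from the book; Python floats are binary, so the result is only approximately the exact decimal product):

```python
a = 28_400_000_000.0   # 0.284 * 10**11
b = 0.0000000546       # 0.546 * 10**-7

# Multiplying the significands and adding the exponents gives ~1.55064 * 10**3
product = a * b
print(product)  # approximately 1550.64
```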
|
{}
|
Several cultures of Type 021N were characterised and although all had similar morphological properties they showed considerable variations in their substrate utilisation patterns and enzyme profiles.
|
{}
|
# Mesh Lighting
This topic is 3811 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hey. When I'm rendering multiple meshes, the lighting is rendered for each mesh. In my case, the meshes are rendering 4 parts of a map, and thus the lighting creates a colour difference between the meshes, which looks odd. The way I'm drawing the meshes is by setting the device world transform for the specific mesh and drawing it as a subset. C#
device.Transform.World = Matrix.Translation(-128,-128,0) * Matrix.RotationZ(zAngle);
meshA.DrawSubset(0);
device.Transform.World = Matrix.Translation(-1,-128,0) * Matrix.RotationZ(zAngle);
meshB.DrawSubset(0);
The lighting is set overall with 1 directional light on the device. C#
device.RenderState.Lighting = true;
device.Lights[0].Type = LightType.Directional;
device.Lights[0].Diffuse = Color.White;
device.Lights[0].Direction = new Vector3(0, 0.5f, -1);
device.Lights[0].Enabled = true;
Is this the correct way to do it in theory, or did I miss something? Thanks.
##### Share on other sites
Any chance of a picture showing this artifact? Could make it a little clearer [smile]
From your description it doesn't sound like you're doing anything obviously wrong. The sort of rendering approach you're using isn't unheard of and is a valid way of rendering a tiled map.
Have you inspected the data where the artifacts occur? Sometimes it'll be a discontinuity such that the algorithm is correct but the data isn't...
hth
Jack
##### Share on other sites
hey
Yes, I've checked the data before. If I render the entire map without using meshes, but simply raw triangles, the colour problem doesn't appear.
The problem can be seen here. When not using multiple meshes the colours match, unlike here, where it appears as if the lighting is drawn for each mesh.
The image shows the middle of the 4 meshes that render my area (zoomed well in).
##### Share on other sites
That screenshot looks like it's a case of texture coordinates/sampling rather than lighting. Try setting the ambient render state (device.RenderState.Ambient maybe) to white to effectively disable lighting and check again for artifacts. I'll bet they still exist.
If I'm correct then you need to look at your geometry creation, quite specifically how it differs from the case where you say it works and the case where you say it doesn't.
How are you putting the data into a mesh? Are you running any D3DX operators on the mesh?
hth
Jack
## what is .65 2/3 in fraction form?
Fractions, ratios, percentages, exponents, number patterns, word problems without variables, etc.
### what is .65 2/3 in fraction form?
what is .65 2/3 in fraction form?
confused
Posts: 20
Joined: Sun Feb 22, 2009 10:07 pm
By ".65 2/3", do you mean "65 2/3%" ("sixty-five and two-thirds percent") or "0.65666666..."?
Thank you!
stapel_eliz
Posts: 1783
Joined: Mon Dec 08, 2008 4:22 pm
### Re: what is .65 2/3 in fraction form?
it's .65666666...
confused
Posts: 20
Joined: Sun Feb 22, 2009 10:07 pm
### Re: what is .65 2/3 in fraction form?
confused wrote: it's .65666666...
We can break up the number into two parts:
$0.65 \,+\, 0.00666666...$
and in fraction form, those parts become:
$\frac{65}{100} \,+\, \frac{0.666666...}{100}$
$= \frac{65}{100} \,+\, \frac{\frac23}{100}$
Now, we need to get rid of the $\frac23$ in the numerator, so we multiply both fractions by $\frac33$, which leaves their values unchanged and gives both the same denominator (bottom number):
$= \frac{65}{100} \, \left(\frac33\right) \,+\, \frac{\frac23}{100} \, \left(\frac33\right)$
$= \frac{195}{300} \,+\, \frac2{300}$
$= \frac{197}{300}$
And as a check, we can divide 197 by 300 on a calculator to get:
$0.656666667$
... which is what we started with (with the last digit rounded up).
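The same bookkeeping can be checked exactly with Python's `fractions` module (a quick sketch, using the same split as the answer above):

```python
from fractions import Fraction

# 0.65666... = 0.65 + 0.00666..., exactly as split in the answer above
value = Fraction(65, 100) + Fraction(2, 3) / 100

print(value)         # 197/300
print(float(value))  # 0.65666...
```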
stapel_eliz wrote: By ".65 2/3", do you mean "65 2/3%" ("sixty-five and two-thirds percent") or "0.65666666..."?
Incidentally, $65 \frac23 \% \,=\, \frac{65 \frac23}{100} \,=\, \frac{65.666666...}{100} \,=\, \frac{197}{300} \,=\, 0.65666666...$
DAiv
DAiv
Posts: 44
Joined: Tue Dec 16, 2008 7:47 pm
# Is there a standard for automatic line-breaking for text with various alignment types?
The context is that I'd like to not have to manually choose newlines and rather eyeball the length of text and possibly specify the number of lines or horizontal width the text should take up. I'm interested in various alignment types (left, center, possibly right).
I realize this is a bit of a complex question because I'm not really sure what looks best in terms of different breaks. Take left alignment and the following sentence: The quick brown fox jumps over the lazy dog. Here are a few possibilities:
1:
The quick brown fox
jumps over
the lazy dog.
2:
The quick
brown fox jumps over
the lazy dog.
3:
The quick
brown fox
jumps over the lazy dog.
4:
The quick
brown fox jumps
over the lazy dog.
For left alignment, I personally think a "Towers of Hanoi" stacking looks best (e.g. #4), but I'm not sure if it makes sense for center alignment.
Other considerations may be punctuation in the text.
An acceptable answer to this question may very well be: this is a bad idea or very difficult. Alternatively, I can see it being something done with an external scripting language (which I haven't really gotten into yet). I'm really just curious if there is an existing method for this, as it is currently not a necessity for me.
Here is a more precise example dealing with author addresses and titles:
\documentclass[letterpaper]{article}
\usepackage[affil-it]{authblk}
\usepackage[english]{babel}
\usepackage{blindtext}
\title{An efficient method for exploiting midichlorians in weak life-forms}
\author[2,3]{Darth Sidious}
\affil[1]{Office of the Supreme Commander of the Imperial Forces, The Galactic Empire, The Bridge, Executor}
\affil[2]{Order of the Sith Lords, LiMerge Power Building, The Works, Coruscant}
\affil[3]{Office of the Emperor of the Galaxy, The Galactic Empire, 1000 Imperial Palace, 2 Main St. Coruscant}
\date{\today}
\begin{document}
\maketitle
I ran out of creative energy here ... \blindtext
\end{document}
This results in the following:
The title itself is a bit unbalanced, and the second author affiliation has only one word on the second line.
• It is pretty unclear to me, what you want to achieve. What is your optimization goal for the line breaking? – Heiko Oberdiek Apr 13 '14 at 23:40
• Moreover, TeX is all about automatic line-breaking. The standard for TeX relative to any given context and given any set of hyphenation patterns, penalties etc. just is the line-breaking TeX gives you. That is, TeX does the best it can whatever you give it within whatever parameters you specify. So you seem to want a non-TeX standard but that is off-topic for this site... – cfr Apr 14 '14 at 0:09
• I didn't have a particular optimization goal (which is also why I didn't think a MWE was necessary for this general and admittedly beginner's question), but cfr's hint is probably pointing me in the right direction - I'll look in to the parameters more, thanks! – bbarker Apr 14 '14 at 2:31
• David Carlisle's answer at tex.stackexchange.com/questions/70537/… shows how \nolinebreak penalties can be set between different words of a sentence in order to force LaTeX to make certain breaks less preferable. An example: The\nolinebreak[1] quick brown\nolinebreak[2] fox jumps over the lazy\nolinebreak[1] dog. would penalize line breaking at various points, after the leading "The", between "brown" and "fox", and between "lazy" and "dog". – Steven B. Segletes Apr 14 '14 at 13:11
Your question isn't very clear and lacks an example document, but this shows four different settings of the text: standard justified, sloppy, ragged right, and RaggedRight from the ragged2e package. As the text is so short it doesn't really show the differences well, so I repeat the settings with a longer paragraph with the text repeated. It still doesn't really show the differences; a major difference is the amount of hyphenation that is allowed, but these are short non-hyphenatable words.
Unless you are setting poetry where choice of linebreaking is part of the composition of the works and you want manual control over that, it should be rather rare to manually linebreak text at all when using TeX. So the initial line of your question seems strange with no additional context explaining why manual linebreaking is needed.
\documentclass{article}
\newcommand\qbf{The quick brown fox jumps over the lazy dog. }
\newcommand\qbff{\qbf\qbf\qbf\qbf}
\usepackage{ragged2e}
\begin{document}
\begin{minipage}[t]{3cm}\qbf\end{minipage}
\begin{minipage}[t]{3cm}\sloppy\qbf\end{minipage}
\bigskip
\begin{minipage}[t]{3cm}\raggedright\qbf\end{minipage}
\begin{minipage}[t]{3cm}\RaggedRight\qbf\end{minipage}
\bigskip\hrule\bigskip
\begin{minipage}[t]{3cm}\qbff\end{minipage}
\begin{minipage}[t]{3cm}\sloppy\qbff\end{minipage}
\bigskip
\begin{minipage}[t]{3cm}\raggedright\qbff\end{minipage}
\begin{minipage}[t]{3cm}\RaggedRight\qbff\end{minipage}
\end{document}
\documentclass{article}
\newcommand\qbf{The quick brown fox jumps over the lazy dog. }
\newcommand\qbff{\qbf\qbf\qbf\qbf}
\usepackage{ragged2e}
\begin{document}
\begin{minipage}[t]{3cm}\qbf\end{minipage}
\begin{minipage}[t]{3cm}\sloppy\qbf\end{minipage}
\bigskip
\begin{minipage}[t]{3cm}\raggedright\qbf\end{minipage}
\begin{minipage}[t]{3cm}\RaggedRight\qbf\end{minipage}
\bigskip\hrule\bigskip
\begin{minipage}[t]{3cm}\qbff\end{minipage}
\begin{minipage}[t]{3cm}\sloppy\qbff\end{minipage}
\bigskip
\begin{minipage}[t]{3cm}\raggedright\qbff\end{minipage}
\begin{minipage}[t]{3cm}\RaggedRight\qbff\end{minipage}
\end{document}
With the now-supplied MWE, you can get various effects depending on how you set the paragraph parameters. The (very old :-) package you are using doesn't give an interface to that, but basically it just redefines the tabular used by the article class's \maketitle to be center, so by redefining center you can cause various texts to move around, for example:
\documentclass[letterpaper]{article}
\usepackage[affil-it]{authblk}
\usepackage{ragged2e}
\makeatletter
\def\maketitle
{{\@flushglue=.25\textwidth minus.25\textwidth\z@skip
\hyphenpenalty\@M
\let\old@date\@date
\def\@date{\mbox{}\hskip\@flushglue\old@date\hskip\@flushglue\mbox{}\par}%hmm
\renewenvironment{tabular}[2][]{\par}
{\par}%
\AB@maketitle}}
\makeatother
\usepackage[english]{babel}
\usepackage{blindtext}
\title{An efficient method for exploiting midichlorians in weak life-forms}
\author[2,3]{Darth Sidious}
\affil[1]{Office of the Supreme Commander of the Imperial Forces, The Galactic Empire, The Bridge, Executor}
\affil[2]{Order of the Sith Lords, LiMerge Power Building, The Works, Coruscant}
\affil[3]{Office of the Emperor of the Galaxy, The Galactic Empire, 1000 Imperial Palace, 2 Main St. Coruscant}
\date{\today}
\begin{document}
\maketitle
\noindent X\dotfill X
I ran out of creative energy here ... \blindtext
\end{document}
Having said that, addresses, like my original comment about poetry, are really a special case where there are many social conventions about linebreaking. It's not unlikely that you end up having to specify linebreaks by hand in an address.
• To clarify my context a bit, which I agree I should have done and almost did (but deleted because I was wanting to start off with a broad view): I'm mainly thinking about titles, subtitles, and other headings. – bbarker Apr 14 '14 at 2:38
• @bbarker same comments apply, do you want to allow hyphenation, do you only want to allow manual line breaking? Why are your headings so long they need to linebreak:-) – David Carlisle Apr 14 '14 at 8:37
• I'm mainly concerned with a semblance of balance for line lengths, which to me is important in some situations. I've added to my question above to reflect the original inspiration for this question - hope this clears things up. I'm wanting to avoid manual line breaking, or at least I'm curious if it is possible. Often times I will change something (e.g. in my CV slightly) which will require me to readjust line-breaks. Manual still may be the way to go in such a small and important document. Substitute "Coruscant" with "UK" and it looks even worse in my opinion. – bbarker Apr 14 '14 at 19:18
• @bbarker the RaggedRight version is more likely to give balanced line lengths than the standard raggedright as it doesn't allow infinite stretch in rightskip – David Carlisle Apr 14 '14 at 19:21
• there may be 2 problems with this: one is that it is not centered, and the second is that it looks like this ( imgur.com/fJKS9KL ) when I do: \affil[2]{\protect \RaggedRight Order of the ... – bbarker Apr 14 '14 at 19:32
DirectInput Buffered Keyboard
Recommended Posts
FenixRoA 142
OK, so I switched to a buffered keyboard. The codes that come in, however, are almost nonsensical garbage. The keys do not match up to any ASCII or Unicode chart but rather to whatever the hardware sends. So I have been mapping each letter, number, and control key (Shift, Enter, Insert, etc.) into an array, and now I have a massive class that handles all things keyboard-related. The struct for each key is as follows:
public struct OneKey
{
    public enum State
    {
        Norm = 0,
        Shift = 1
    }

    public bool hasChar;
    public byte KeyCode;
    public char[] KeyChar;

    // key that produces a different character when Shift is held
    public OneKey(byte kCode, char kChar, char shiftChar)
    {
        hasChar = true;
        KeyCode = kCode;
        KeyChar = new char[2];
        KeyChar[(int)State.Norm] = kChar;
        KeyChar[(int)State.Shift] = shiftChar;
    }

    // key with a single character regardless of Shift
    public OneKey(byte kCode, char kChar)
    {
        hasChar = true;
        KeyCode = kCode;
        KeyChar = new char[1];
        KeyChar[(int)State.Norm] = kChar;
    }

    // control key (Shift, Enter, Insert, ...) with no character
    public OneKey(byte kCode)
    {
        hasChar = false;
        KeyCode = kCode;
        KeyChar = null;
    }
}
Is this the best way to go about this? Is there no function that will do this for me (give me the character (or string) represented by the key pressed or the keycode (as I require), affected by Caps Lock and Shift as necessary)?
Share on other sites
FenixRoA 142
Please guys... All I want to know is if there is any function or method that can convert each keycode to its appropriate letter/ASCII/Unicode form...
Share on other sites
FenixRoA 142
a simple yes or no answer would be fine at this point
Share on other sites
ph33r 380
To my knowledge there is no such dinput function or macro to handle this. You'll have to roll your own switch statement, à la
DWORD keyPressed;
switch( keyPressed )
{
case DIK_A:
case DIK_B:
.
.
.
case DIK_Z:
};
I just double-checked the dinput documentation and I didn't find any; regardless, it shouldn't take you longer than 10 mins to write a function to do the conversion for you.
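For illustration, the same table-driven conversion can be sketched in Python (a hypothetical sketch, not a DirectInput API; the hex values are the standard PC scan codes behind the DIK_* constants, and the shifted characters assume a US layout):

```python
# Minimal scan-code -> character table, the same idea as the switch
# statement above, shown for just a few keys.
SCANCODE_TO_CHAR = {
    0x1E: ('a', 'A'),   # DIK_A
    0x30: ('b', 'B'),   # DIK_B
    0x2C: ('z', 'Z'),   # DIK_Z
    0x02: ('1', '!'),   # DIK_1 ('!' assumes a US layout)
}

def to_char(scancode, shift=False):
    """Return the character for a scan code, or None for unmapped keys."""
    pair = SCANCODE_TO_CHAR.get(scancode)
    if pair is None:
        return None
    return pair[1] if shift else pair[0]

print(to_char(0x1E))              # a
print(to_char(0x02, shift=True))  # !
```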
- Dave Neubelt
Share on other sites
dweeb 128
DirectInput is not really designed to work with text input.
I suggest that you use the normal Windows events for regular text input and DirectX for controlling the character.
Share on other sites
FenixRoA 142
Which events do I have to hook into and what namespace are they in?
Share on other sites
ph33r 380
If you're going to go the winproc route of using text input you will want to catch the WM_CHAR message. Remember, in your message loop you need to call TranslateMessage to turn those WM_KEYDOWN messages into WM_CHARs. I believe the wParam of WM_CHAR tells you if shift is being held or capslock is on, but it could be the lParam; you'll have to double-check the docs.
But going against the previous poster's advice, I'd write a function that takes a DirectInput DWORD and converts it to a char; it's a quick copy-paste job.
- Dave Neubelt
Share on other sites
turnpast 1011
take a look at
http://www.gamedev.net/reference/programming/features/scan2asc/
Yuck.
To do this in Managed land you need to P/Invoke User32.dll. If you are interested in following this path, browse through the Common directory of the Managed samples in the Oct 2004 update; they do a bunch of similar garbage.
364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 ================== django-model-utils ================== Django model mixins and utilities. Installation ============ Install from PyPI with pip:: pip install django-model-utils or get the in-development version_:: pip install django-model-utils==tip .. _in-development version: http://bitbucket.org/carljm/django-model-utils/get/tip.tar.gz#egg=django_model_utils-tip To use django-model-utils in your Django project, just import and use the utility classes described below; there is no need to modify your INSTALLED_APPS setting. Dependencies ------------ django-model-utils is tested with Django_ 1.2 and later on Python 2.6 and 2.7. .. _Django: http://www.djangoproject.com/ Contributing ============ Please file bugs and send pull requests to the GitHub repository_ and issue tracker_. .. _GitHub repository: https://github.com/carljm/django-model-utils/ .. _issue tracker: https://github.com/carljm/django-model-utils/issues (Until January 2013 django-model-utils primary development was hosted at BitBucket_; the issue tracker there will remain open until all issues and pull requests tracked in it are closed, but all new issues should be filed at GitHub.) .. _BitBucket: https://bitbucket.org/carljm/django-model-utils/overview Choices ======= Choices provides some conveniences for setting choices on a Django model field:: from model_utils import Choices class Article(models.Model): STATUS = Choices('draft', 'published') # ... 
status = models.CharField(choices=STATUS, default=STATUS.draft, max_length=20) A Choices object is initialized with any number of choices. In the simplest case, each choice is a string; that string will be used both as the database representation of the choice, and the human-readable representation. Note that you can access options as attributes on the Choices object: STATUS.draft. But you may want your human-readable versions translated, in which case you need to separate the human-readable version from the DB representation. In this case you can provide choices as two-tuples:: from model_utils import Choices class Article(models.Model): STATUS = Choices(('draft', _('draft')), ('published', _('published'))) # ... status = models.CharField(choices=STATUS, default=STATUS.draft, max_length=20) But what if your database representation of choices is constrained in a way that would hinder readability of your code? For instance, you may need to use an IntegerField rather than a CharField, or you may want the database to order the values in your field in some specific way. In this case, you can provide your choices as triples, where the first element is the database representation, the second is a valid Python identifier you will use in your code as a constant, and the third is the human-readable version:: from model_utils import Choices class Article(models.Model): STATUS = Choices((0, 'draft', _('draft')), (1, 'published', _('published'))) # ... status = models.IntegerField(choices=STATUS, default=STATUS.draft) StatusField =========== A simple convenience for giving a model a set of "states." StatusField is a CharField subclass that expects to find a STATUS class attribute on its model, and uses that as its choices. Also sets a default max_length of 100, and sets its default value to the first item in the STATUS choices:: from model_utils.fields import StatusField from model_utils import Choices class Article(models.Model): STATUS = Choices('draft', 'published') # ... 
status = StatusField() (The STATUS class attribute does not have to be a Choices_ instance, it can be an ordinary list of two-tuples). StatusField does not set db_index=True automatically; if you expect to frequently filter on your status field (and it will have enough selectivity to make an index worthwhile) you may want to add this yourself. MonitorField ============ A DateTimeField subclass that monitors another field on the model, and updates itself to the current date-time whenever the monitored field changes:: from model_utils.fields import MonitorField, StatusField class Article(models.Model): STATUS = Choices('draft', 'published') status = StatusField() status_changed = MonitorField(monitor='status') (A MonitorField can monitor any type of field for changes, not only a StatusField.) SplitField ========== A TextField subclass that automatically pulls an excerpt out of its content (based on a "split here" marker or a default number of initial paragraphs) and stores both its content and excerpt values in the database. A SplitField is easy to add to any model definition:: from django.db import models from model_utils.fields import SplitField class Article(models.Model): title = models.CharField(max_length=100) body = SplitField() SplitField automatically creates an extra non-editable field _body_excerpt to store the excerpt. This field doesn't need to be accessed directly; see below. Accessing a SplitField on a model --------------------------------- When accessing an attribute of a model that was declared as a SplitField, a SplitText object is returned. The SplitText object has three attributes: content: The full field contents. excerpt: The excerpt of content (read-only). has_more: True if the excerpt and content are different, False otherwise. This object also has a __unicode__ method that returns the full content, allowing SplitField attributes to appear in templates without having to access content directly. 
Assuming the Article model above:: >>> a = Article.objects.all()[0] >>> a.body.content u'some text\n\n\n\nmore text' >>> a.body.excerpt u'some text\n' >>> unicode(a.body) u'some text\n\n\n\nmore text' Assignment to a.body is equivalent to assignment to a.body.content. .. note:: a.body.excerpt is only updated when a.save() is called Customized excerpting --------------------- By default, SplitField looks for the marker alone on a line and takes everything before that marker as the excerpt. This marker can be customized by setting the SPLIT_MARKER setting. If no marker is found in the content, the first two paragraphs (where paragraphs are blocks of text separated by a blank line) are taken to be the excerpt. This number can be customized by setting the SPLIT_DEFAULT_PARAGRAPHS setting. TimeFramedModel =============== An abstract base class for any model that expresses a time-range. Adds start and end nullable DateTimeFields, and a timeframed manager that returns only objects for whom the current date-time lies within their time range. StatusModel =========== Pulls together StatusField_, MonitorField_ and QueryManager_ into an abstract base class for any model with a "status." Just provide a STATUS class-attribute (a Choices_ object or a list of two-tuples), and your model will have a status field with those choices, a status_changed field containing the date-time the status was last changed, and a manager for each status that returns objects with that status only:: from model_utils.models import StatusModel from model_utils import Choices class Article(StatusModel): STATUS = Choices('draft', 'published') # ... a = Article() a.status = Article.STATUS.published # this save will update a.status_changed a.save() # this query will only return published articles: Article.published.all() InheritanceManager ================== This manager (contributed by Jeff Elmore_) should be attached to a base model class in a model-inheritance tree. 
It allows queries on that base model to return heterogenous results of the actual proper subtypes, without any additional queries. For instance, if you have a Place model with subclasses Restaurant and Bar, you may want to query all Places:: nearby_places = Place.objects.filter(location='here') But when you iterate over nearby_places, you'll get only Place instances back, even for objects that are "really" Restaurant or Bar. If you attach an InheritanceManager to Place, you can just call the select_subclasses() method on the InheritanceManager or any QuerySet from it, and the resulting objects will be instances of Restaurant or Bar:: from model_utils.managers import InheritanceManager class Place(models.Model): # ... objects = InheritanceManager() class Restaurant(Place): # ... class Bar(Place): # ... nearby_places = Place.objects.filter(location='here').select_subclasses() for place in nearby_places: # "place" will automatically be an instance of Place, Restaurant, or Bar The database query performed will have an extra join for each subclass; if you want to reduce the number of joins and you only need particular subclasses to be returned as their actual type, you can pass subclass names to select_subclasses(), much like the built-in select_related() method:: nearby_places = Place.objects.select_subclasses("restaurant") # restaurants will be Restaurant instances, bars will still be Place instances InheritanceManager also provides a subclass-fetching alternative to the get() method:: place = Place.objects.get_subclass(id=some_id) # "place" will automatically be an instance of Place, Restaurant, or Bar If you don't explicitly call select_subclasses() or get_subclass(), an InheritanceManager behaves identically to a normal Manager; so it's safe to use as your default manager for the model. .. note:: Due to Django bug #16572_, on Django versions prior to 1.6 InheritanceManager only supports a single level of model inheritance; it won't work for grandchild models. .. 
.. note::
    The implementation of ``InheritanceManager`` uses ``select_related``
    internally. Due to `Django bug #16855`_, this currently means that it
    will override any previous ``select_related`` calls on the QuerySet.

.. _contributed by Jeff Elmore: http://jeffelmore.org/2010/11/11/automatic-downcasting-of-inherited-models-in-django/
.. _Django bug #16855: https://code.djangoproject.com/ticket/16855
.. _Django bug #16572: https://code.djangoproject.com/ticket/16572

TimeStampedModel
================

This abstract base class just provides self-updating ``created`` and
``modified`` fields on any model that inherits from it.

QueryManager
============

Many custom model managers do nothing more than return a QuerySet that is
filtered in some way. ``QueryManager`` allows you to express this pattern with
a minimum of boilerplate::

    from django.db import models
    from model_utils.managers import QueryManager

    class Post(models.Model):
        ...
        published = models.BooleanField()
        pub_date = models.DateField()
        ...

        objects = models.Manager()
        public = QueryManager(published=True).order_by('-pub_date')

The kwargs passed to ``QueryManager`` will be passed as-is to the
``QuerySet.filter()`` method. You can also pass a ``Q`` object to
``QueryManager`` to express more complex conditions. Note that you can set the
ordering of the QuerySet returned by the ``QueryManager`` by chaining a call
to ``.order_by()`` on the ``QueryManager`` (this is not required).

PassThroughManager
==================

A common "gotcha" when defining methods on a custom manager class is that
those same methods are not automatically also available on the QuerySets
returned by that manager, so they are not "chainable". This can be
counterintuitive, as most of the public QuerySet API is mirrored on managers.
It is possible to create a custom ``Manager`` that returns QuerySets that have
the same additional methods, but this requires boilerplate code. The
``PassThroughManager`` class (`contributed by Paul McLanahan`_) removes this
boilerplate.

.. _contributed by Paul McLanahan: http://paulm.us/post/3717466639/passthroughmanager-for-django

To use ``PassThroughManager``, rather than defining a custom manager with
additional methods, define a custom ``QuerySet`` subclass with the additional
methods you want, and pass that ``QuerySet`` subclass to the
``PassThroughManager.for_queryset_class()`` class method. The returned
``PassThroughManager`` subclass will always return instances of your custom
QuerySet, and you can also call methods of your custom QuerySet directly on
the manager::

    from datetime import datetime

    from django.db import models
    from django.db.models.query import QuerySet

    from model_utils.managers import PassThroughManager

    class PostQuerySet(QuerySet):
        def by_author(self, user):
            return self.filter(user=user)

        def published(self):
            return self.filter(published__lte=datetime.now())

        def unpublished(self):
            return self.filter(published__gte=datetime.now())

    class Post(models.Model):
        user = models.ForeignKey(User)
        published = models.DateTimeField()

        objects = PassThroughManager.for_queryset_class(PostQuerySet)()

    Post.objects.published()
    Post.objects.by_author(user=request.user).unpublished()

ModelTracker
============

A ``ModelTracker`` can be added to a model to track changes in model fields.
A ``ModelTracker`` allows querying for field changes since a model instance
was last saved. An example of applying ``ModelTracker`` to a model::

    from django.db import models
    from model_utils import ModelTracker

    class Post(models.Model):
        title = models.CharField(max_length=100)
        body = models.TextField()

        tracker = ModelTracker()

Accessing a model tracker
-------------------------

There are multiple methods available for checking for changes in model
fields.

previous
~~~~~~~~

Returns the value of the given field during the last save::

    >>> a = Post.objects.create(title='First Post')
    >>> a.title = 'Welcome'
    >>> a.tracker.previous('title')
    u'First Post'

Returns ``None`` when the model instance isn't saved yet.

has_changed
~~~~~~~~~~~

Returns ``True`` if the given field has changed since the last save::

    >>> a = Post.objects.create(title='First Post')
    >>> a.title = 'Welcome'
    >>> a.tracker.has_changed('title')
    True
    >>> a.tracker.has_changed('body')
    False

Returns ``True`` if the model instance hasn't been saved yet.

changed
~~~~~~~

Returns a dictionary of all fields that have been changed since the last
save, mapped to the values of those fields during the last save::

    >>> a = Post.objects.create(title='First Post')
    >>> a.title = 'Welcome'
    >>> a.body = 'First post!'
    >>> a.tracker.changed()
    {'title': 'First Post', 'body': ''}

Returns ``{}`` if the model instance hasn't been saved yet.

Tracking specific fields
------------------------

A ``fields`` parameter can be given to ``ModelTracker`` to limit tracking to
the specified fields::

    from django.db import models
    from model_utils import ModelTracker

    class Post(models.Model):
        title = models.CharField(max_length=100)
        body = models.TextField()

        title_tracker = ModelTracker(fields=['title'])

An example using the model specified above::

    >>> a = Post.objects.create(title='First Post')
    >>> a.body = 'First post!'
    >>> a.title_tracker.changed()
    {}
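The tracker semantics documented above can be sketched in plain Python without Django. This is a toy stand-in for illustration only (it is not the model_utils implementation; all names here are invented), but it reproduces the documented behaviour of ``previous``, ``has_changed`` and ``changed``:

```python
class ToyTracker:
    """Minimal field-change tracker: snapshots selected attributes of an
    object at each 'save' and reports changes since the last snapshot."""

    def __init__(self, instance, fields):
        self.instance = instance
        self.fields = fields
        self.saved = None  # no snapshot until the first save

    def take_snapshot(self):
        self.saved = {f: getattr(self.instance, f) for f in self.fields}

    def previous(self, field):
        # None when the instance hasn't been "saved" yet
        return self.saved.get(field) if self.saved is not None else None

    def has_changed(self, field):
        if self.saved is None:
            return True  # mirrors ModelTracker: unsaved counts as changed
        return getattr(self.instance, field) != self.saved[field]

    def changed(self):
        if self.saved is None:
            return {}
        return {f: old for f, old in self.saved.items()
                if getattr(self.instance, f) != old}


class Post:
    def __init__(self, title='', body=''):
        self.title = title
        self.body = body
        self.tracker = ToyTracker(self, fields=['title', 'body'])

    def save(self):
        # a real ORM would write to the database here
        self.tracker.take_snapshot()
```

With this toy, creating a ``Post``, saving it, then mutating ``title`` gives ``previous('title')`` equal to the saved value and ``changed()`` containing only the mutated field, matching the documentation above.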
# How to write a limit in terms of finite summation
I managed to find$$\int\limits_0^\infty \frac{\ln^{2a}(x)\ln(1+x)}{\sqrt{x}(1+x)}\mathrm{d}x=-\pi\lim_{m\to \frac12 }\frac{d^{2a}}{d m^{2a}} \frac{\psi(1-m) + \gamma}{\sin(m\pi)}.$$
To show this relation, we proceed as follows:
Replacing $$n$$ by $$n-m$$ in the beta function $$\int_0^\infty\frac{x^{m-1}}{(1+x)^{m+n}}\mathrm{d}x=\operatorname{B}(m,n)=\frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)},$$ we have $$\int_0^\infty\frac{x^{m-1}}{(1+x)^{n}}\mathrm{d}x=\frac{\Gamma(m)\Gamma(n-m)}{\Gamma(n)}.$$ Differentiate both sides $$2a$$ times with respect to $$m$$ and once with respect to $$n$$: $$\begin{gather*} \frac{\partial^{2a}}{\partial m^{2a}} \frac{\partial}{\partial n} \frac{\Gamma(m)\Gamma(n-m)}{\Gamma(n)}=\frac{\partial^{2a}}{\partial m^{2a}} \frac{\partial}{\partial n}\int\limits_0^\infty \frac{x^{m-1}}{(1+x)^n}\mathrm{d}x\\ \{\text{use differentiation under the integral sign}\}\\ =\int\limits_0^\infty \frac{\partial^{2a}}{\partial m^{2a}} \frac{\partial}{\partial n}\frac{x^{m-1}}{(1+x)^n}\mathrm{d}x\\ =-\int\limits_0^\infty \frac{\ln^{2a}(x)\ln(1+x)x^{m-1}}{(1+x)^n}\mathrm{d}x. \end{gather*}$$ Now take the limit on both sides letting $$m\to 1/2$$ and $$n\to1$$: $$\begin{gather*} -\int\limits_0^\infty \frac{\ln^{2a}(x)\ln(1+x)}{\sqrt{x}(1+x)}\mathrm{d}x=\lim_{\substack{m\to 1/2 \\ n \to 1}}\frac{\partial^{2a}}{\partial m^{2a}} \frac{\partial}{\partial n} \frac{\Gamma(m)\Gamma(n-m)}{\Gamma(n)}\\ =\lim_{\substack{m\to 1/2 \\ n \to 1}}\frac{\partial^{2a}}{\partial m^{2a}}\Gamma(m)\left( \frac{\partial}{\partial n} \frac{\Gamma(n-m)}{\Gamma(n)}\right)\\ =\lim_{\substack{m\to 1/2 \\ n \to 1}}\frac{\partial^{2a}}{\partial m^{2a}} \Gamma(m)\left(\frac{\Gamma(n-m)[{\psi}(n-m) -\psi(n)]}{\Gamma(n)}\right)\\ \{\text{evaluate the limit as } n\to 1 \text{ and use } \psi(1)=-\gamma\}\\ =\lim_{m\to 1/2 }\frac{\partial^{2a}}{\partial m^{2a}} \Gamma(m)\Gamma(1-m)[\psi(1-m) + \gamma]\\ \left\{\text{use } \Gamma(m)\Gamma(1-m)=\frac{\pi}{\sin(m\pi)}\right\}\\ \left\{\text{and write } \frac{\partial}{\partial m} \text{ as } \frac{\mathrm{d}}{\mathrm{d}m}\text{, since only one variable is left}\right\}\\ =\pi\lim_{m\to \frac12 }\frac{\mathrm{d}^{2a}}{\mathrm{d} m^{2a}} \frac{\psi(1-m) + \gamma}{\sin(m\pi)}. \end{gather*}$$
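The reduced beta-function identity that starts this derivation can be sanity-checked numerically. Below is a sketch in Python (standard library only, my own choice of quadrature): the substitution $$x = t/(1-t)$$ maps $$(0,\infty)$$ to $$(0,1)$$, and the midpoint rule avoids evaluating at the endpoints.

```python
import math

def lhs(m, n, steps=200_000):
    """Midpoint-rule estimate of the integral of x^(m-1)/(1+x)^n over
    (0, infinity), via the substitution x = t/(1-t), dx = dt/(1-t)^2."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        x = t / (1.0 - t)
        total += x ** (m - 1) / (1.0 + x) ** n / (1.0 - t) ** 2
    return total * h

def rhs(m, n):
    """Gamma(m) Gamma(n-m) / Gamma(n)."""
    return math.gamma(m) * math.gamma(n - m) / math.gamma(n)

# e.g. m = 3/2, n = 3: both sides equal Gamma(3/2)^2 / Gamma(3) = pi/8
```

The same check works for any $$0 < m < n$$, which is the region of convergence of the integral.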
The question here is: can we write the limit in terms of a finite summation?
This problem deals with the relation
$$\int\limits_0^\infty \frac{\ln^{2a}(x)\ln(1+x)}{\sqrt{x}(1+x)}\mathrm{d}x=-\pi\lim_{m\to \frac12 }\frac{\mathrm{d}^{2a}}{\mathrm{d} m^{2a}} \frac{\psi(1-m) + \gamma}{\sin(m\pi)}\tag{*}$$
The solution to the problem of the OP (write the r.h.s. as a finite sum) is found in the section "finite sum" below.
Optionally, we also try to verify the relation $$(*)$$. The main task, the transformation of the integral to a sum (plus explicit terms), is done in the first section.
Transformation of the integral
EDIT 17.05.21
I just discovered that Mathematica solves the generating integral (version 8.0 immediately, 10.1 via the antiderivative)
$$\int_0^{\infty } \frac{x^z \log (x+1)}{x+1} \, dx=\pi \csc (\pi z) (\psi ^{(0)}(-z)+\gamma )$$
The powers of $$\log(x)$$ under the integral can be generated by differentiation.
End EDIT
We wish to evaluate the integral
$$i = \int_0^{\infty}\frac{ \log(x)^{2a}\log(1+x)}{\sqrt{x} (1+x)}\,dx\tag{1}$$
where $$2a$$ is specified implicitly in the OP as a positive even integer.
Splitting the integration region we can write
$$i=i_1 + i_2\tag{2}$$
with
$$i_1 = \int_0^{1}\frac{ \log(x)^{2a}\log(1+x)}{\sqrt{x} (1+x)}\,dx\tag{3a}$$
$$i_2 = \int_1^{\infty}\frac{ \log(x)^{2a}\log(1+x)}{\sqrt{x} (1+x)}\,dx\tag{3b}$$
Now letting $$x\to \frac{1}{y}$$ in $$i_2$$ we can write (skipping some steps)
$$i_2 = (-1)^{2a} i_1 + i_3\tag{4}$$
where
\begin{align}i_3 & = \int_0^1 \frac{(-\log (y))^{2 a+1}}{\sqrt{y} (y+1)} \, dy \\ & = 2^{-2 (a+1)} \left(\zeta \left(2 a+2,\frac{1}{4}\right)-\zeta \left(2 a+2,\frac{3}{4}\right)\right) \Gamma (2 a+2)\end{align}\tag{5}
Observing now that
$$\frac{\log(1+x)}{1+x} = -\sum_{n=1}^{\infty} H_n (-x)^n\tag{6}$$
We can write
$$i_1 =i_{1s}:=(-2)^{2 a+1}\Gamma (2 a+1) \sum_{n=1}^{\infty}(-1)^n \frac{H_n}{(2 n+1)^{2 a+1}}\tag{7a}$$
so that finally the integral $$(1)$$ can be written as
$$i=\left((-1)^{2 a}+1\right) i_{1s}+i_3$$
Hence the task of the OP reduces to writing the following sum $$s$$ in terms of a finite number of terms:
$$s = \sum_{n=1}^{\infty} (-1)^n\frac{ H_n}{ (1+2n)^{2a+1}}\tag{8}$$
Observation: this sum is evaluated in [1], identity $$(4.91)$$.
Finite sum
Here we show that the r.h.s. of $$(*)$$ can be written as a finite sum.
Starting directly from the formula in the OP, letting $$2a = k$$ I get for the first few limits these expressions (format {k, limit})
$$\begin{array}{c} \left\{0,-\pi \left(\psi ^{(0)}\left(\frac{1}{2}\right)+\gamma \right)\right\} \\ \left\{1,\frac{\pi ^3}{2}\right\} \\ \left\{2,-\pi \left(\pi ^2 \psi ^{(0)}\left(\frac{1}{2}\right)+\gamma \pi ^2+\psi ^{(2)}\left(\frac{1}{2}\right)\right)\right\} \\ \left\{3,\frac{5 \pi ^5}{2}\right\} \\ \left\{4,-\pi \left(5 \pi ^4 \psi ^{(0)}\left(\frac{1}{2}\right)+5 \gamma \pi ^4+6 \pi ^2 \psi ^{(2)}\left(\frac{1}{2}\right)+\psi ^{(4)}\left(\frac{1}{2}\right)\right)\right\} \\ \left\{5,\frac{61 \pi ^7}{2}\right\} \\ \end{array}\tag{9}$$
Now we could try to guess the rule for the polygamma functions and write the powers of $$\pi$$ in terms of $$\zeta$$-functions. But it is easier to proceed systematically.
We expand the $$k$$-th derivative of the two factor expression into a finite binomial sum
$$(\frac{d}{dm})^k (A B) = \sum_{j=0}^{k}\binom{k}{j} A^{(k-j)} B^{(j)}\tag{10}$$
where
$$A = \psi (1-m)+\gamma, B = \frac{1}{\sin{\pi m}}\tag{11}$$
Now using
$$\frac{\partial \psi ^{(p)}(1-m)}{\partial m}=-\psi ^{(p+1)}(1-m)\tag{12}$$
$$\frac{\pi}{\sin (\pi m)}=\sum _{n=-\infty }^{\infty } \frac{(-1)^n}{m-n}\tag{13}$$
we obtain for the j-th derivative the following expression
\begin{align}\frac{\partial ^j}{\partial m^j}\left(\frac{\pi }{\sin (\pi m)}\right)|_{m\to \frac{1}{2}}=-2^{-j-1}j! \left((-1)^j+1\right) \\ \times \left(-\zeta \left(j+1,\frac{1}{4}\right)+\zeta \left(j+1,-\frac{1}{4}\right)+4^{j+1}\right)\end{align}\tag{14}
With $$(10)$$, $$(11)$$, $$(12)$$, and $$(14)$$ we have all ingredients to expand the r.h.s. of $$(*)$$ into a finite sum.
I obtain for the finite sum requested in the OP (letting $$2a \to k$$) the following expression
$$r(k) := -\pi\lim_{m\to \frac12 }\frac{\mathrm{d}^{k}}{\mathrm{d} m^{k}} \frac{\psi(1-m) + \gamma}{\sin(m\pi)} \\=U(k) + \sum_{j=1}^{k} \binom{k}{j} U(k-j) V(j)\tag{15}$$
where
$$U(p) = -\pi \left(\gamma \delta _{0,p}+(-1)^p \psi ^{(p)}\left(\frac{1}{2}\right)\right) \tag{15a}$$
\begin{align}V(p) =-\frac{1}{\pi} \left(2^{-p-1} \left((-1)^p+1\right) p! \\ \times \left(-\zeta \left(p+1,\frac{1}{4}\right)+\zeta \left(p+1,-\frac{1}{4}\right)+4^{p+1}\right)\right) \end{align}\tag{15b}
Here $$\delta _{0,p}$$ is the Kronecker delta.
The first few $$r(k)$$ after simplification (FullSimplify) are given below in the format $$\{k,r(k)\}$$,
for even $$k$$
$$\begin{array}{c} \{0,\pi \log (4)\} \\ \left\{2,\pi ^3 \log (4)+14 \pi \zeta (3)\right\} \\ \left\{4,5 \pi ^5 \log (4)+84 \pi ^3 \zeta (3)+744 \pi \zeta (5)\right\} \\ \left\{6,61 \pi ^7 \log (4)+1050 \pi ^5 \zeta (3)+11160 \pi ^3 \zeta (5)+91440 \pi \zeta (7)\right\} \\ \left\{8,1385 \pi ^9 \log (4)+23912 \pi ^7 \zeta (3)+260400 \pi ^5 \zeta (5)+2560320 \pi ^3 \zeta (7)+20603520 \pi \zeta (9)\right\} \\ \end{array}\tag{16a}$$
for odd $$k$$
$$\begin{array}{l} \left\{1,\frac{\pi ^3}{2}\right\} \\ \left\{3,\frac{5 \pi ^5}{2}\right\} \\ \left\{5,\frac{61 \pi ^7}{2}\right\} \\ \left\{7,\frac{1385 \pi ^9}{2}\right\} \\ \left\{9,\frac{50521 \pi ^{11}}{2}\right\} \\ \end{array}\tag{16b}$$
This finalizes the solution of the problem in the OP.
Discussion
The numbers $$c_{n}$$ appearing here for both even and odd numbers
$$\{1, 1, 5, 61, 1 385, 50 521, 2 702 765, 199 360 981, ... \}$$
can be found in the impressively long entry https://oeis.org/A000364:
A000364 Euler (or secant or "Zig") numbers.
They are the absolute values of the Euler numbers at even index, and hence related to the Euler polynomials: $$c_{n}=|E_{2n}|=\left|2^{2n} E_{2n}\left(\tfrac{1}{2}\right)\right|$$
References
[1] Ali Shadhar Olaikhan, "An introduction to harmonic series and logarithmic integrals", April 2021, ISBN 978-1-7367360-0-5
• Thank you Wolf for the efforts... I have already related this integral to the sum you showed but i am asking if the limit can be written in terms of "finite" sum. Sorry I typed "definite sum" instead of "finite sum". May 15, 2021 at 0:24
• Thank you, Ali, I got it now, see my answer with the explicit solution. May 15, 2021 at 11:34
• Very nice Wolf. I like the idea in (11). I will investigate to see where the error is May 15, 2021 at 20:52
• Thank you, Ali. The problem comes from the summand with $j=0$. I have now carefully collected the ingredients so that it should be possible to compile it without errors. I keep on doing this. May 15, 2021 at 23:58
• I think you meant $a$ or $2a$ for $k$ in (15) based on the binomial theorem? May 16, 2021 at 8:30
$$I=2{\pi^{2a+1}}\ln2|E_{2a}|+(2a)!{\pi^{2a+1}}\sum_{k=1}^{a} \frac{|E_{2a-2k}|}{(2a-2k)!}{\pi^{-2k}}(2^{2k+1}-1)\zeta(2k+1)$$
where $$E_{2a}$$ are the Euler numbers: $$E_{0}=1,\ E_{2}=-1,\ E_{4}=5,\ E_{6}=-61.$$ Example: $$\int\limits_0^\infty \frac{\ln^{4}(x)\ln(1+x)}{\sqrt{x}(1+x)}\mathrm{d}x=10{\pi^5}\ln2+84{\pi^3}\zeta(3)+744{\pi}\zeta(5)$$
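The worked example can be checked numerically. The sketch below (Python, standard library only; the change of variables is mine, not from the answer) applies $$x=s^2$$ followed by $$s=e^{-t}$$ to both halves $$(0,1)$$ and $$(1,\infty)$$ of the integral, which folds it into a single smooth, exponentially decaying integrand on $$(0,\infty)$$; $$\zeta(3)$$ and $$\zeta(5)$$ are computed by direct summation.

```python
import math

def zeta(s, terms=200_000):
    """Direct-sum estimate of zeta(s), with an integral tail correction."""
    return sum(n ** -s for n in range(1, terms + 1)) + terms ** (1 - s) / (s - 1)

def dust_integral(a, upper=60.0, steps=60_000):
    """Simpson's rule for I(a) = integral over (0, inf) of
    ln^{2a}(x) ln(1+x) / (sqrt(x) (1+x)) dx.  After x = s^2 and s = e^{-t}
    (my own reduction), this equals
    2^{2a+2} * integral over (0, inf) of
        t^{2a} e^{-t} (ln(1 + e^{-2t}) + t) / (1 + e^{-2t}) dt."""
    def g(t):
        w = math.exp(-2.0 * t)
        return t ** (2 * a) * math.exp(-t) * (math.log1p(w) + t) / (1.0 + w)
    h = upper / steps
    total = g(0.0) + g(upper)
    for i in range(1, steps):
        total += g(i * h) * (4 if i % 2 else 2)
    return 2 ** (2 * a + 2) * total * h / 3.0
```

For $$a=2$$ the numerical value agrees with $$10\pi^5\ln2+84\pi^3\zeta(3)+744\pi\zeta(5)$$, and for $$a=1$$ with $$2\pi^3\ln2+14\pi\zeta(3)$$, to high precision.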
• Interesting results thank you. +1 May 16, 2021 at 17:00
• +1 Very nice and compact solution. How did you derive it? Just a typographical suggestion write \pi with lower case p. May 18, 2021 at 14:36
• @Dr.Wolfgang Hintze $\int_0^{\frac{\pi}{2}}(Log\tan (x))^{2n}\log(\cos x)dx=\frac{\pi}{4}\lim_{m\to 0 }\frac{\mathrm{d}^{2n}}{\mathrm{d}\ m^{2n}} \frac{\psi(\frac{1-m}{2}) - \psi(1)}{\cos(\frac{m\pi}{2})}.$ Apply Mac-Laurin for ${\psi(\frac{1-m}{2})}$ May 19, 2021 at 21:02
• Can you give a reference for the Euler numbers? Jun 27, 2021 at 15:30
• @Ali Shadhar see Table of Integrals, Series, and Products Fifth Edition By I.S.Gradshteyn and I.M.Ryzhik page XXXI Jun 29, 2021 at 21:45
Thanks to @user178256 for the reference for the Euler number $$( E_k)$$ which is the key to solve the problem.
In the question body, we showed that
$$\int\limits_0^\infty \frac{\ln^{2a}(x)\ln(1+x)}{\sqrt{x}(1+x)}\mathrm{d}x=-\pi\lim_{m\to \frac12 }\frac{d^{2a}}{d m^{2a}} \frac{\psi(1-m) + \gamma}{\sin(m\pi)}$$
using
$$\frac{d^a}{dm^a}(fg)=\sum_{k=0}^a \binom{a}{k} \frac{d^k}{dm^k}f\cdot\frac{d^{a-k}}{dm^{a-k}} g$$
we have
$$I=-\pi\sum_{k=0}^{2a} \binom{2a}{k} \lim_{m\to \frac12}\frac{d^{k}}{dm^k} (\psi(1-m)+\gamma)\cdot\lim_{m\to \frac12}\frac{d^{2a-k}}{dm^{2a-k}} \csc(m\pi)$$
since we have
$$\lim_{m\to \frac12}\frac{d^{0}}{dm^0} \csc(m\pi)=1$$
$$\lim_{m\to \frac12}\frac{d^{1}}{dm^1} \csc(m\pi)=0$$
$$\lim_{m\to \frac12}\frac{d^{2}}{dm^2} \csc(m\pi)=\pi^2$$
$$\lim_{m\to \frac12}\frac{d^{3}}{dm^3} \csc(m\pi)=0$$
$$\lim_{m\to \frac12}\frac{d^{4}}{dm^4} \csc(m\pi)=5\pi^4,$$ and, in general,
$$\lim_{m\to \frac12}\frac{d^{r}}{dm^r} \csc(m\pi)=|E_{r}|\pi^r,$$
we consider only the even-$$k$$ terms and so
$$I=-\pi\sum_{k=0}^{a} \binom{2a}{2k} \lim_{m\to \frac12}\frac{d^{2k}}{dm^{2k}} (\psi(1-m)+\gamma)\cdot\lim_{m\to \frac12}\frac{d^{2a-2k}}{dm^{2a-2k}} \csc(m\pi)$$
$$=-\pi^{2a+1}\sum_{k=0}^{a} \binom{2a}{2k} \lim_{m\to \frac12}\frac{d^{2k}}{dm^{2k}} (\psi(1-m)+\gamma)\cdot|E_{2a-2k}|\pi^{-2k}$$
Separating out the first ($$k=0$$) term using $$\psi(1/2)+\gamma=-2\ln(2)$$, we have
$$I=2\ln(2)|E_{2a}|\pi^{2a+1}-\pi^{2a+1}\sum_{k=1}^{a} \binom{2a}{2k} \lim_{m\to \frac12}\frac{d^{2k}}{dm^{2k}} (\psi(1-m)+\gamma)\cdot|E_{2a-2k}|\pi^{-2k}$$
using
$$\lim_{m\to \frac12}\frac{d^{2k}}{dm^{2k}} (\psi(1-m)+\gamma)=\psi^{(2k)}\left(\frac12\right)=-(2k)!(2^{2k+1}-1)\zeta(2k+1)$$
we finally get
$$I=2\ln(2)|E_{2a}|\pi^{2a+1}+(2a)!{\pi^{2a+1}}\sum_{k=1}^{a} \frac{|E_{2a-2k}|}{(2a-2k)!}{\pi^{-2k}}(2^{2k+1}-1)\zeta(2k+1).$$
Bonus: By using $$\frac{1}{(2n+1)^{2a+1}}=\frac{1}{(2a)!}\int_0^1 x^{2n}\ln^{2a}(x)\mathrm{d}x,$$ we have $$\begin{gather*} \sum_{n=1}^\infty\frac{(-1)^{n}H_n}{(2n+1)^{2a+1}}=\frac{1}{(2a)!}\int_0^1\ln^{2a}(x)\left(\sum_{n=1}^\infty H_n(-x^2)^n\right)\mathrm{d}x\\ =\frac{1}{(2a)!}\int_0^1\ln^{2a}(x)\left(-\frac{\ln(1+x^2)}{1+x^2}\right)\mathrm{d}x\\ =-\frac{1}{(2a)!}\int_0^1\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x \end{gather*}$$
where $$\begin{gather*} \int_0^1\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x =\left(\int_0^\infty-\int_1^\infty\right)\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x\\ =\int_0^\infty\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x-\underbrace{\int_1^\infty\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x}_{x\to 1/x}\\ =\int_0^\infty\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x-\int_0^1\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x\\ +2\int_0^1\frac{\ln^{2a}(x)\ln(x)}{1+x^2}\mathrm{d}x\\ \left\{\text{add } \int_0^1\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x \text{ to both sides, then divide by 2}\right\}\\ =\frac12\int_0^\infty\frac{\ln^{2a}(x)\ln(1+x^2)}{1+x^2}\mathrm{d}x+\int_0^1\frac{\ln^{2a+1}(x)}{1+x^2}\mathrm{d}x\\ \overset{x^2\to x}{=}4^{-a-1}\int_0^\infty\frac{\ln^{2a}(x)\ln(1+x)}{\sqrt{x}(1+x)}\mathrm{d}x-(2a+1)!\beta(2a+2). \end{gather*}$$
Therefore
$$\sum_{n=1}^\infty\frac{(-1)^{n}H_n}{(2n+1)^{2a+1}}=(2a+1)\beta(2a+2)-\frac{\ln(2)|E_{2a}|}{(2a)!}\left(\frac{\pi}{2}\right)^{2a+1}$$ $$-\frac12\left(\frac{\pi}{2}\right)^{2a+1}\sum_{k=1}^{a} \frac{|E_{2a-2k}|}{(2a-2k)!}{\pi^{-2k}}(2^{2k+1}-1)\zeta(2k+1).$$
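This final identity can also be checked numerically, e.g. for $$a=1$$ (where $$|E_2|=|E_0|=1$$). A sketch in Python, standard library only; $$\beta(4)$$, $$\zeta(3)$$ and the alternating harmonic sum are computed by direct summation, which converges well because the series alternate:

```python
import math

def dirichlet_beta(s, terms=100_000):
    """beta(s) = sum over n >= 0 of (-1)^n / (2n+1)^s, by direct summation."""
    return sum((-1) ** n / (2 * n + 1) ** s for n in range(terms))

def zeta(s, terms=100_000):
    """zeta(s) by direct summation with an integral tail correction."""
    return sum(n ** -s for n in range(1, terms + 1)) + terms ** (1 - s) / (s - 1)

def harmonic_sum(exponent, terms=200_000):
    """sum over n >= 1 of (-1)^n H_n / (2n+1)^exponent."""
    total, H = 0.0, 0.0
    for n in range(1, terms + 1):
        H += 1.0 / n
        total += (-1) ** n * H / (2 * n + 1) ** exponent
    return total

# right-hand side of the boxed identity for a = 1
a = 1
rhs = ((2 * a + 1) * dirichlet_beta(2 * a + 2)
       - math.log(2) / math.factorial(2 * a) * (math.pi / 2) ** (2 * a + 1)
       # single k = 1 term of the finite sum: |E_0|/0! * pi^{-2} * 7 * zeta(3)
       - 0.5 * (math.pi / 2) ** (2 * a + 1) * math.pi ** -2 * 7 * zeta(3))
```

Both sides come out as the same small negative number, confirming the formula for this case.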
In case the reader is curious about the Mathematica command for the Euler number $$E_r$$: it is `EulerE[r]`.
# Charge Distribution on a Parallel Plate Capacitor
If a parallel plate capacitor is formed by placing two infinite conducting sheets, one held at potential $V_1$ and the other at $V_2$, a distance $d$ away from each other, then the charge on either plate will lie entirely on its inner surface. I'm having a little trouble showing why this is true.
In the space between the two plates the potential satisfies Laplace's equation with the stated boundary conditions, giving a uniform field $E = ( V_1 - V_2 ) / d$, from which I can derive the surface charge density $\pm E / 4 \pi$ (Gaussian units). But what about the space above and below the capacitor? Certainly I can't just use superposition of the inner surface charge distributions to say that the field outside the capacitor is zero (and thus the outer surface charge density is zero), for this assumes there is no charge on the outer surfaces to begin with.
Any help clearing up this mental block would be greatly appreciated, thanks.
Off the bat, I would treat this problem as unsolvable because there's no such thing as an infinite capacitor, and even if one did exist, it could never be charged. Now, saying that a capacitor's radius (assume a circular plate...if it's big enough its shape doesn't really matter) compared to the plate separation is large is a different, yet much more realistic, way of characterizing the capacitor. – user11266 Jan 22 at 0:50
Ignore inner and outer surfaces. There is just one surface.
Imagine a single, infinite plane with some positive charge density. You can easily show there would be an electric field of constant strength*, perpendicularly out of the plane all the way to infinity in both directions.

Now imagine a single, infinite plane with a negative charge density of the same magnitude. There would be an electric field of constant strength perpendicularly into the plane all the way to infinity in both directions.
Put these two plates on top of each other, and these fields perfectly cancel.
Put these two plates in parallel, and because the field is constant strength it will perfectly cancel everywhere except between the two plates, where the electric field directions are the same and it will add to be twice as strong.
[*By constant strength I mean the electric field is just as strong no matter how far you are from the plate. Why is the field constant strength? Because the field lines can't ever diverge from one another. The way fields usually get weaker is that the equipotential surface the field lines are normal to gets bigger as you increase the distance from the object. So the same number of field lines piercing a bigger surface means the field lines are more spread out, and thus a weaker field. In this case, however, the equipotential surfaces are always a pair of infinite parallel planes, no matter what distance we are from the charged plane. No spreading means no change in field strength.]
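This argument can be made quantitative with the textbook on-axis field of a finite, uniformly charged disk (SI units): for a disk radius much larger than both the plate separation and the observation distance, the superposed field is nearly $\sigma/\epsilon_0$ between the plates and nearly zero outside. A sketch in Python; the numerical values below are illustrative, not from the original post.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def disk_axial_field(sigma, R, z):
    """On-axis E_z of a uniformly charged disk of radius R lying in the
    z = 0 plane (standard electrostatics result):
    |E| = sigma/(2*eps0) * (1 - |z|/sqrt(z^2 + R^2)),
    pointing away from the disk when sigma > 0."""
    if z == 0:
        return 0.0
    direction = 1.0 if z > 0 else -1.0
    return direction * sigma / (2 * EPS0) * (1 - abs(z) / math.sqrt(z * z + R * R))

def capacitor_field(sigma, R, d, z):
    """Superpose a +sigma disk at z = 0 and a -sigma disk at z = d."""
    return disk_axial_field(sigma, R, z) + disk_axial_field(-sigma, R, z - d)

sigma, R, d = 1e-6, 1000.0, 0.01             # C/m^2 and metres, illustrative
inside = capacitor_field(sigma, R, d, d / 2)   # midway between the plates
outside = capacitor_field(sigma, R, d, -d)     # just below the positive plate
```

As $R \to \infty$ each disk's field becomes exactly distance-independent, the outside residual vanishes, and the inside field tends to $\sigma/\epsilon_0$, which is the idealized infinite-plane result used in the answer.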
One could deal with the problem by being careful with how one constructs a mathematical interpretation of the physical system. I will treat the simplest case: treat the surfaces of the parallel plate capacitor as true two-dimensional surfaces. In this case there is no inner or outer surface charge, just a surface charge density defined on each surface.
Mathematically one could represent each conductor as an infinite plane, say $S_\pm \subset \mathbb{R}^3$, then there are two surface charge densities $\sigma_\pm$ each defined on the corresponding surface $S_\pm$. Alternatively, one may use the language of distributions and use a (volume) charge distribution defined on all of $\mathbb{R}^3$ such that $\rho(x, y, z) = \sigma_+ \delta(z - d/2) + \sigma_- \delta(z + d/2)$ where I have put $S_\pm$ on the planes $z = \pm d/2$.
More complicated models might assume each plate of the conductor has a finite thickness. One could then solve the more complicated problem and compute what happens in the limit as the thickness approaches zero.
# A 3D model of polarized dust emission in the Milky Way
Abstract

We present a three-dimensional model of polarized galactic dust emission that takes into account the variation of the dust density, spectral index and temperature along the line of sight, and contains randomly generated small-scale polarization fluctuations. The model is constrained to match observed dust emission on large scales, and match on smaller scales extrapolations of observed intensity and polarization power spectra. This model can be used to investigate the impact of plausible complexity of the polarized dust foreground emission on the analysis and interpretation of future cosmic microwave background polarization observations.

Keywords: polarization, dust, extinction, cosmic background radiation, cosmology: observations, diffuse radiation, submillimetre: ISM

1 INTRODUCTION

Since the discovery of the cosmic microwave background (CMB) in 1965 (Penzias & Wilson 1965), significant efforts have been devoted to precise characterization of its emission, and to understanding the cosmological implications of its tiny temperature and polarization anisotropies, detected first with COBE-DMR (Smoot et al. 1992) for temperature, and with DASI for polarization (Kovac et al. 2002). Many experiments have gradually improved the measurement of CMB temperature and polarization power spectra. Experiments on stratospheric balloons, notably Boomerang (de Bernardis et al. 2000; Jones et al. 2006), Maxima (Hanany et al. 2000), and Archeops (Benoit et al. 2002), detected with high significance the first acoustic peak in the CMB temperature power spectrum, and made the first measurements of the temperature power spectrum over a large range of angular scales. The WMAP satellite (Bennett et al. 2013) produced the first high signal-to-noise ratio full-sky CMB map and power spectrum from the largest scales to the third acoustic peak, opening the path to precision cosmology with the CMB.
These observations have been complemented by power spectrum measurements from many ground-based experiments, for instance ACBAR (Reichardt et al. 2009) and more recently ACT (Das et al. 2014) and SPT (Story et al. 2013) on scales smaller than observed with the balloons and space missions. Planck, the latest space mission to date, launched by ESA in 2009 (Tauber et al. 2010), has mapped CMB anisotropies with extraordinary precision down to ≃5 arcmin angular scale, providing a wealth of information on the cosmological scenario. Planck Collaboration XIII (2016c) showed that both the CMB temperature and E-mode polarization power spectra were remarkably consistent with a spatially flat cosmology specified by six parameters, the so-called Λ cold dark matter model, with cosmic structures seeded at very early times by quantum fluctuations of space–time during an epoch of cosmic inflation. The accurate measurement of CMB polarization, including inflationary and lensing B modes, is the next objective of CMB observations. Such a measurement offers a unique opportunity to confirm the inflationary scenario, through the detection of the imprint of primordial inflationary gravitational waves on CMB polarization B modes on large angular scales (see Kamionkowski & Kovetz 2016, for a review). CMB polarization also offers the opportunity to map the dark matter in the Universe that is responsible for slight distortions in polarization patterns by the process of gravitational lensing of the background CMB (Lewis & Challinor 2006; Challinor et al. 2017). In 2014, the BICEP2 collaboration claimed evidence for primordial CMB B modes with a tensor-to-scalar ratio r = 0.2 (BICEP2 Collaboration 2014). However, a joint analysis with Planck mission data (BICEP2/Keck & Planck Collaborations 2015) showed that the signal was mostly due to contamination of the observed map by polarized dust emission from the Milky Way rather than gravitational waves from inflation.
Future space missions such as COrE (The COrE Collaboration 2011) and its more recent version, CORE (with a capital ‘R’), proposed to ESA in 2016 October in answer to the ‘M5’ call for a medium-size mission (Delabrouille et al. 2017), PIXIE (Kogut et al. 2011), PRISM (André et al. 2014), LiteBIRD (Matsumura et al. 2014), and ground-based experiments such as CMB-S4 (Abazajian et al. 2016), plan to reach a sensitivity in r as low as r ∼ 0.001 (CORE Collaboration 2016). This requires subtracting at least 99 per cent of dust emission from the maps, or modelling the contribution of dust to the measured CMB B-mode angular power spectrum at the level of 10⁻⁴ precision or better. The feasibility of such dust-cleaning critically depends on the (unknown) complexity of dust emission down to that relative level, and on the number and central frequencies of frequency channels used in the observation (to be optimized in the design phase of future CMB experiments). Investigations of the feasibility of measuring CMB B modes in the presence of foreground astrophysical emission have been pursued by a number of authors (Tucci et al. 2005; Betoule et al. 2009; Dunkley et al. 2009; Efstathiou, Gratton & Paci 2009; Bonaldi & Ricciardi 2011; Errard & Stompor 2012; Bonaldi, Ricciardi & Brown 2014; Remazeilles et al. 2016; Stompor, Errard & Poletti 2016; Remazeilles et al. 2017), using component separation methods mostly developed in the context of the analysis of WMAP and Planck intensity and polarization observations (see e.g. Leach et al. 2008; Delabrouille & Cardoso 2009, for reviews and comparisons of component separation methods). Conclusions on the achievable limit on r drastically depend on the assumed complexity of the foreground emission model (see Delabrouille et al. 2013, for a widely used sky modelling tool), the number of components included, and on whether the component separation method that is used is or is not perfectly matched to the model used in the simulations.
In this paper we present a three-dimensional (3D) model of the polarized dust emission, constrained by observations, that considers the spatial variation of the spectral index and of the temperature along the line of sight (LOS), and can help give insight into the feasibility and complexity of dust-cleaning in future CMB observations in the presence of a model of dust emission more complex and more realistic than what has been used in previous work. The objective is not an accurate 3D model of dust emission, which cannot be obtained without additional observations of the 3D dust distribution, but a plausible 3D model that is compatible with observed dust emission and its spatial variations, and at the same time implements a complexity which, although not strictly necessary yet to fit current observations, is likely to be detectable in future sensitive CMB polarization surveys. This model can be used to infer properties such as decorrelation between frequencies and flattening of the spectral index at low frequencies, and also to test the possibility of separating CMB polarization from that of dust with future multifrequency observations of polarized emission at millimetre wavelengths. This paper is organized as follows. In Section 2, we justify the need for 3D modelling and discuss plausible consequences for the properties of dust maps across scales and frequencies. Section 3 presents the observations that are used in the construction of our dust model. In Section 4, we present the strategy that is used to make a 3D dust data cube in temperature and polarization using the (incomplete) observations at hand. As these available observations have limited angular resolution, we describe in Section 5 how to extend the model to smaller scales, in preparation for future high-resolution sensitive polarization experiments. Section 6 describes our prescription for scaling the dust emission across frequencies.
We compare simulated maps with existing observations and discuss implications of the 3D model in Section 7. We conclude in Section 8.

2 WHY A 3D MODEL?

Previous authors, e.g. Fauvet et al. (2011), O'Dea et al. (2012), and Vansyngel et al. (2017), have considered a 3D model of dust distribution and of the Galactic magnetic field (GMF) to model the spatial structure of dust polarization. Ghosh et al. (2017) complement this with an analysis of correlations of the direction of the GMF with the orientation of dust filaments, as traced by H i data. However, all of these approaches produce single templates of dust emission at a specific frequency but do not attempt at the same time to model the 3D dependence of the dust emission law. This misses one of the key aspects of dust emission that is crucial to disentangling its emission from that of CMB polarization (see Tassis & Pavlidou 2015). Dust is made of grains of different size and chemical composition absorbing and scattering light in the ultraviolet, optical and near-infrared, and re-radiating it in the mid- to far-infrared. Being made of structured baryonic matter (atoms, molecules, grains), dust interacts with the radiation field through many different processes. Empirically, at millimetre and submillimetre wavelengths, the observed emission in broad frequency bands is dominated by thermal emission at a temperature T, well fit in the optically thin limit by a modified blackbody (MBB) of the form

$$I_{\nu }=\tau (\nu _0) \left(\frac{\nu }{\nu _0}\right)^{\beta } B_{\nu }(T), \tag{1}$$

where Iν is the specific intensity at frequency ν and Bν(T) is the Planck blackbody function for dust at temperature T. In the frequency range we are considering, the optical depth τ(ν) scales as (ν/ν0)β, where β is a spectral index that depends on the chemical composition and structure of dust grains. Here, ν0 is a reference frequency at which a reference optical depth τ(ν0) is estimated (we use ν0 = 353 GHz throughout this paper).
Using dust template observations in the Planck 353, 545, and 857 GHz channels and the IRAS 100 μm map, it is possible to fit for τ(ν0), T and β in each pixel. This fit, performed by the Planck Collaboration XI (2014), shows clear evidence for a variation across the sky of the best-fitting temperature and spectral index, with T mostly ranging from about 15 K to about 27 K and β ranging from about 1.2 to about 2.2. Such variations are expected by reason of variations of dust chemical composition and size, and of variations of the stellar radiation field, as a function of local physical and environmental conditions. In this paper, we propose to revisit this model to make it 3D. Indeed, if dust properties vary across the sky, they must also vary along the LOS. This means that even if one single MBB is (empirically) a good fit to the average emission coming from a given region of the 3D Milky Way as observed with the best current signal-to-noise ratio, the integrated emission in a given LOS must be a superposition of several such MBB emissions with varying T(r) and β(r) (in fact, a continuum, weighted by a local elementary optical depth dτ(r, ν0)):

$$I_{\nu } = \int _{0}^{\infty } \mathrm{d}r \, \frac{\mathrm{d}\tau (r,\nu _0)}{\mathrm{d}r} \left(\frac{\nu }{\nu _0}\right)^{\beta (r)} \, B_\nu (T(r)), \tag{2}$$

where r is the distance along the LOS and where, again, τ(r, ν0) is an optical depth at frequency ν0, T(r) is a temperature, and β(r) a spectral index, now all dependent on the distance r from the observer. As a sum of MBBs is not a MBB, this mixture of dust emissions is at best only approximately a MBB. For instance, regions along the LOS with lower β contribute relatively more at low frequency than at high frequency. This would then naturally generate an effect of flattening of the observed dust spectral index at low ν, which precludes fits of dust emission performed at high frequency to be valid at lower frequencies.
To properly account for such LOS inhomogeneities, a 3D model of dust emission, with dust emission law variations both across and along the LOS, is needed. This 3D mixture of inhomogeneous emission would also naturally impact the polarized emission of galactic dust. The preferential alignment of elongated dust grains perpendicularly to the local magnetic field $$\boldsymbol B$$ results in a net sky polarization that is, on the plane of the sky, orthogonal to the component $$\boldsymbol B_{\perp }$$ of $$\boldsymbol B$$ that is perpendicular to the LOS. The efficiency of grain alignment depends on the local physical properties of the interstellar medium (density, which impacts the collisions between grains; irradiation). Each region emits polarized emission proportional to an intrinsic local polarization fraction p(r). Linear polarization Stokes parameters Q and U can be written as

$$Q_{\nu } = \int _{0}^{\infty } \mathrm{d}r \, p(r) \frac{\mathrm{d}\tau }{\mathrm{d}r} B_\nu (T(r)) \left(\frac{\nu }{\nu _0}\right)^{\beta (r)} \cos 2\psi (r) \sin ^k \alpha (r) \tag{3}$$

and

$$U_{\nu } = \int _{0}^{\infty } \mathrm{d}r \, p(r) \frac{\mathrm{d}\tau }{\mathrm{d}r} B_\nu (T(r)) \left(\frac{\nu }{\nu _0}\right)^{\beta (r)} \sin 2\psi (r) \sin ^k \alpha (r), \tag{4}$$

where, in the healpix CMB polarization convention,

$$\cos 2\psi = \frac{B_{\theta }^2-B_{\varphi }^2}{B_{\perp }^2}, \quad \sin 2\psi = \frac{2B_\theta B_\varphi }{B_{\perp }^2}, \quad \sin \alpha = {\frac{B_{\perp }}{B}}, \tag{5}$$

and where k is an exponent that takes into account depolarization and projection effects linked to the local geometry and the alignment of grains. In these equations, r is the distance to the observer, i.e. r, θ, and φ are spherical heliocentric coordinates.
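As a toy numerical illustration of equations (3) and (4): superposing just two clouds along one LOS with different spectral indices, temperatures and polarization angles already makes the net polarization angle rotate with frequency, even though each individual cloud's angle is frequency independent. All parameter values below are invented for illustration, and the geometric factor sin^k(alpha) is absorbed into the per-cloud amplitude. A sketch in Python:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
K_B = 1.380649e-23   # Boltzmann constant (J/K)
NU0 = 353e9          # reference frequency (Hz)

def planck(nu, T):
    """Planck function B_nu(T), up to the frequency-independent 2h/c^2
    factor, which cancels in polarization angles and ratios."""
    return nu ** 3 / (math.exp(H * nu / (K_B * T)) - 1.0)

# Two hypothetical clouds on one LOS:
# (amplitude = p * tau * geometry, beta, T [K], psi [rad])
clouds = [
    (1.0, 1.5, 15.0, 0.0),
    (1.0, 2.2, 25.0, math.radians(40.0)),
]

def qu(nu):
    """Stokes Q, U of the superposition, following eqs. (3)-(4)."""
    Q = U = 0.0
    for amp, beta, T, psi in clouds:
        w = amp * (nu / NU0) ** beta * planck(nu, T)
        Q += w * math.cos(2 * psi)
        U += w * math.sin(2 * psi)
    return Q, U

def pol_angle(nu):
    Q, U = qu(nu)
    return 0.5 * math.atan2(U, Q)
```

With these (invented) parameters, the recovered polarization angle at 100 GHz differs from that at 353 GHz by several degrees: the hotter, steeper-spectrum cloud dominates more at the higher frequency and pulls the mixture angle towards its own. This is precisely the frequency decorrelation mechanism described in the text.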
In equations (3) and (4), we recognize an overall intensity term (equal to the integrand in equation 2), multiplied by a polarization fraction p(r), an orientation term cos 2ψ(r) or sin 2ψ(r), and a geometrical term sin^k α(r) that depends on the direction of the magnetic field with respect to the LOS. In the absence of strong theoretical or observational constraints on the value of k, we follow Fauvet et al. (2011) and assume k = 3. This choice, although arguably somewhat arbitrary, does not much impact the rest of this work1 as it does not change the polarization angle on the sky, while the polarization maps will ultimately be re-normalized to match the total observed dust polarization at 353 GHz. This re-normalization somewhat corrects for possible inadequacy or inaccuracy of the assumption made for the geometrical term. Since all parameters (p, τ, T, β, ψ, and α) vary along the LOS, the total polarized emission is a superposition of emissions with different polarization angles and different emission laws. As a consequence, the polarization fraction will change with frequency (i.e. intensity and polarization have different emission laws); in addition, the polarization will rotate as a function of frequency, depending on the relative level of emission of various regions along the LOS. This polarization rotation effect would also naturally generate decorrelation of polarized emission between frequencies. Such an effect has been reported in Planck observations (Planck Collaboration L 2017), but is the object of debate following a subsequent analysis that does not confirm the statistical significance of the observed decorrelation (Sheehy & Slosar 2017).
3 OBSERVATIONS
Full-sky (or near-full-sky) dust emission is observed at submillimetre wavelength by Planck and IRAS. 
We process the Planck 2015 data release maps with a Generalized Needlet Internal Linear Combination (GNILC) method to separate dust emission from other astrophysical emissions and to reduce noise contamination. GNILC (Remazeilles, Delabrouille & Cardoso 2011) is a component separation method that extracts from noisy multifrequency observations a multiscale model of significant emissions, based on the comparison of auto and cross-spectra with the level of noise locally in needlet space. Needlets (Narcowich, Petrushev & Ward 2006; Faÿ et al. 2008; Marinucci et al. 2008) are a tight frame of space-frequency functions (which serve as a redundant decomposition basis). The use of needlets for component separation by Needlet Internal Linear Combination (NILC) was introduced in the analysis of WMAP 5-yr temperature data (Delabrouille et al. 2009). They were further used on the 7-yr and 9-yr temperature and polarization maps (Basak & Delabrouille 2012, 2013). GNILC has been used by the Planck collaboration to separate dust emission from Cosmic Infrared Background (Planck Collaboration XLVIII 2016e). We use the corresponding dust maps to constrain our model of dust emission in intensity. GNILC maps offer the advantage of reduced noise level (for both intensity and polarization), and of reduced contamination by the cosmic infrared background fluctuations (for intensity). However, different templates of dust emission in intensity and polarization could have been used instead, as long as those maps are not too noisy, nor contaminated by systematic effects such that, for instance, the intensity map is negative in some pixels, or that dust is a subdominant component in some pixels or at some angular scales (problems of that sort are usually present in maps that have not been processed to avoid these issues specifically). 
From now on, the single greybody ‘2D’ model of the form of equation (1) uses Planck maps of τ(ν0) and β that are obtained from a fit of the GNILC dust maps between 353 and 3000 GHz, obtained as described in Planck Collaboration XLVIII (2016e). For polarization, we apply GNILC independently to the Planck 30–353 GHz E and B polarization maps (Data Release 2; Planck Collaboration I 2016b) to obtain polarized galactic emission maps in the seven polarized Planck channels. These maps are specifically produced for the present analysis, and are not part of the Planck archive. Dust-dominated E and B polarization maps at ν = 143, 217, and 353 GHz are shown in Fig. 1. The polarization maps with the best dust signal-to-noise ratio are at ν = 353 GHz. The other polarization maps are not used further in our model.2 Here, the GNILC processing is mostly used as a pixel-dependent de-noising of the 353 GHz polarization map. A model that fully exploits the multifrequency information in the Planck data is postponed to future work. The needlet decomposition extends down to 5 arcmin angular resolution for intensity and 1° for polarization. Figure 1. T, E, and B maps at 353, 217, and 143 GHz, obtained with a generalized needlet ILC analysis of Planck HFI public data products. Three-dimensional maps of interstellar dust optical depth, as traced by starlight extinction, have been derived by Green et al. (2015) based on the reddening of 800 million stars detected by PanSTARRS 1 and 2MASS, covering three-quarters of the sky. The maps are grouped in 31 bins of distance modulus from 4 to 19 (corresponding to distances from 63 pc to 63 kpc) and have a hybrid angular resolution, with most maps at an angular resolution of 3.4–13.7 arcmin. 
These maps will be used to infer some information about the distribution of dust along the LOS, which will be used to generate our 3D model of polarized dust emission.
4 MULTILAYER MODELLING STRATEGY
We approximate the continuous integrals of equations (2)–(4) as discrete sums over independent layers of emission, indexed by i, so that we have, for the intensity, $$I_{\nu }(p) \, = \, \sum _{1}^{N} I_{\nu }^i(p) \, = \, \sum _{1}^{N} \tau _i(\nu _0) \! \left(\frac{\nu }{\nu _0}\right)^{\beta _i(p)} \! B_\nu (T_i(p)).$$ (6) Each layer is then characterized by maps of Stokes parameters $$I_\nu ^i(p)$$, $$Q_\nu ^i(p)$$, and $$U_\nu ^i(p)$$, with a frequency scaling, for each sky pixel p, in the form of a single MBB emission law with a temperature Ti(p) and a spectral index βi(p) (both assumed to be the same for all three Stokes parameters). We want to find a way to assign to each such layer plausible templates (full-sky pixelized maps) for I, Q, and U at some reference frequency ν0, as well as scaling parameter maps T and β, all such that the total emission matches the observed sky. By ‘layer’ we mean a component, loosely associated with different distances from us, but which could equally well be a component associated with a specific population of dust grains. The problem is clearly degenerate. Starting from only four dust-dominated maps of I (Planck and IRAS maps from 353 to 3000 GHz obtained after the GNILC analysis to remove CIB contamination), and one map each of Q and U (both at 353 GHz), for a total of six maps, we propose to model dust emission with 3N maps of Stokes parameters $$I_{\nu _0}^i(p)$$, $$Q_{\nu _0}^i(p)$$, and $$U_{\nu _0}^i(p)$$ and 2N maps of emission law parameters Ti(p) and βi(p), i.e. a total of 5N maps, where N is the number of layers used in the model. For any N ≥ 2, we need additional data or constraints. We thus use the 3D maps of dust extinction from Green et al. 
(2015) to decompose the observed intensity map I at some reference frequency as a sum of intensity maps Ii coming from different layers i. We group the dust extinction maps in six ‘layers’ (shown in Fig. 2) by simple coaddition of the corresponding optical depths. Six layers are sufficient for our purpose and provide a better estimate of the optical thickness associated with each layer than if we tried to use more. Three of these layers map the dust emission at high galactic latitude, while three map most of the emission close to the galactic plane. We choose the smallest possible homogeneous pixel size, corresponding to healpix Nside = 64. These choices could be revisited in the future, in particular when more data become available. Figure 2. Maps of starlight extinction tracing the interstellar dust optical depth in shells at different distances from the Sun (maps obtained from the 3D maps of Green et al. 2015). Grey areas correspond to regions that have not been observed. We then further use a 3D model of the GMF to generate Q and U maps for each layer. Finally, the total emission from all layers is readjusted so that the sum matches the observed sky at the reference frequency. We detail each of these steps in the following subsections.
4.1 Intensity layers
Although the general shape and density distribution of the Galaxy is known, the exact 3D density distribution of dust grains in the Galaxy is not. Simple models consider a galactocentric radius and height function: $$n_d(R,z) = n_0 \exp (-R/h_R)\, {\rm sech}^2(z/h_z),$$ (7) where (R, z) are cylindrical coordinates centred at the Galactic centre, and where hR = 3 kpc and hz = 0.1 kpc. 
Such models cannot reproduce the observed intermediate and small-scale structure of dust emission.3 On the other hand, the maps of Green et al. (2015) trace the dust density distribution, and are directly proportional to the optical depth τ at visible wavelength. We select six primary shells within distance moduli of 4, 7, 10, 13, 16, and 19 (corresponding to distances of 63, 251, 1000, 3881, 15849, and 63096 pc from the Sun), and use those maps to compute, in each pixel, an estimate of the fraction fi(p) of the total opacity associated with each layer (so that ∀p, ∑i fi(p) = 1). We then construct the opacity map for each layer as the product τi(ν0) = fi τ(ν0), where τ(ν0) is the opacity at 353 GHz obtained in the Planck MBB fit. For our 3D model, we must face the practical difficulty that the maps of Green et al. (2015) do not cover the full sky (Fig. 2). For a full-sky model, the missing sky regions must be filled in with a simulation or a best-guess estimate. We use the maps where they are defined to evaluate the relative fraction fi of dust in each shell i. For each pixel where the layers are not defined, we use symmetry arguments and copy the average fraction from regions centred on pixels at the same absolute Galactic latitude and longitude. This gives us a plausible dust fraction in the region not covered in the decomposition of Green et al. (2015). We then use these fractions of emission to decompose the total map of optical depth τ(ν0) at 353 GHz and obtain the six maps of extinction shown in Fig. 3. Figure 3. Full-sky optical depth layers at 353 GHz, scaled to match the total 353 GHz extinction map of Planck Collaboration XLVIII (2016e). The fraction of optical depth in each layer is obtained from the maps of Green et al. (2015), where missing sky pieces in the 3D model are filled in using symmetry arguments.
We then compute the corresponding brightness in a given layer by multiplying by the Planck function together with the spectral index correction (equation 8), using for this an average temperature and spectral index for each layer.4 We get, for each layer i, an initial estimate of the intensity $$\widetilde{I}^i_{\nu } = f_i \, \tau (\nu _0) \left(\frac{\nu }{\nu _0}\right)^{\beta _i} \, B_\nu (T_i).$$ (8) The sum $$\widetilde{I}_{\nu _0} = \sum _i \widetilde{I}^i_{\nu _0}$$ however does not exactly match the observed Planck map $$I_{\nu _0}$$ at ν0 = 353 GHz. We readjust the layers by redistributing the residual error in the various layers, with weights proportional to the fraction of dust in each layer, to get $${I^i_{\nu _0} = \widetilde{I}^i_{\nu _0} + f_i(I_{\nu _0}- \widetilde{I}_{\nu _0})},$$ (9) and by construction we now have $${I}_{\nu _0} = \sum _i {I}^i_{\nu _0}$$. The full model across frequencies is $$I^i_{\nu } = I^i_{\nu _0} \left(\frac{\nu }{\nu _0}\right)^{\beta _i} \, \frac{B_\nu (T_i)}{B_{\nu _0}(T_i)},$$ (10) with $$I^i_{\nu _0}$$ computed following equations (8) and (9). In this way, we have six different maps of dust intensity that add up to the observed Planck dust intensity emission at 353 GHz. We note that our model differs from that of Vansyngel et al. (2017), who instead make the simplifying assumption that the intensity template in all the layers they use is the same. The consequence of this approximation is that the fraction fi of emission in all the layers is constant over the sky. 
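The decomposition and readjustment steps can be sketched for a single pixel. Below, plain Python with made-up shell opacities and an arbitrary observed intensity (none of the numbers come from the data); the point is that the readjusted layers of equation (9) sum exactly to the observation:

```python
# Opacity contributed by six distance shells in one pixel (toy values).
shell_opacity = [0.05, 0.10, 0.30, 0.25, 0.20, 0.10]

# Fractions f_i of the total opacity in each layer; they sum to 1.
total = sum(shell_opacity)
f = [s / total for s in shell_opacity]

# Initial per-layer intensity estimates at nu0 (eq. 8), toy values that
# do NOT add up exactly to the observed intensity.
i_tilde = [0.04, 0.09, 0.28, 0.27, 0.22, 0.08]
i_obs = 1.05  # observed intensity at 353 GHz in this pixel (toy value)

# Redistribute the residual with weights f_i (eq. 9).
residual = i_obs - sum(i_tilde)
i_layer = [it + fi * residual for it, fi in zip(i_tilde, f)]

assert abs(sum(f) - 1.0) < 1e-12
assert abs(sum(i_layer) - i_obs) < 1e-12  # layers now sum to the observation
```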
This is not compatible with a truly 3D model: galactic structures cannot be expected to be spread over all layers of emission with a proportion that does not depend on the direction of observation. The decomposition we implement in our model is just one of many possible ways to separate the total map of dust optical depth into several contributions. A close look at what we obtain shows several potential inaccuracies. For instance, some compact structures are clearly visible in more than one map, while it is not very likely that they all happen to be precisely at the edge between layers or elongated along the LOS so that they extend over more than one layer. This ‘finger of God’ effect is likely to be due to errors in the determination of the distance or of the extinction of stars, which, as a result, spread the estimated source of extinction over a large distance span. The north polar spur (extending from the galactic plane at l ≃ 30°, left of the Galactic centre, towards the north Galactic pole) is clearly visible in both of the first two maps. According to Lallement et al. (2016), it should indeed extend over both layers. On the other hand, structures associated with the Orion–Eridanus bubble (right of the maps, below the Galactic plane) can be seen in all of the first three maps, from less than 60 pc to more than 250 pc, while most of the emission associated with Orion is at a distance of 150–400 pc. As discussed by Rezaei et al. (2017), future analyses of the Gaia satellite data are likely to drastically improve the 3D reconstruction of Galactic dust. For this work, we use the maps of Fig. 3, noting that for our purpose what really matters is not the actual distance of any structure, but whether such a structure is likely to emit with more than one single MBB emission law. 
Certainly, a complex region such as Orion cannot be expected to be in thermal equilibrium and constituted of homogeneous populations of dust grains, and thus modelling its emission with more than one map is in fact preferable for our purpose. The same holds for distant objects such as the Large and Small Magellanic Clouds and associated tidal structures, wrongly associated with nearby layers of emission by the procedure we use to fill the missing sky regions. Hence, the ‘layers’ presented here should be understood as layers of emission with roughly one single MBB (per pixel), originating mostly from a given range of distances from the Earth (see also Planck Collaboration XLIV 2016d, for a discussion of emission layers and their connection to spatial shells or different phases of the ISM). While this decomposition is not exact, it matches the purposes of this work.
4.2 Polarization layers
We model polarization using equations (3) and (4). Geometric terms depending on ψ and α are computed using a simple large-scale model of the GMF. This regular magnetic field is assumed to roughly follow the spiral arms of the Milky Way. Several plausible configurations have been proposed, based on rotational symmetry around the Galactic Centre, and on mirror symmetry with respect to the Galactic plane. A widely used parametrization, referred to in the literature as the bisymmetric spiral (BSS; Sofue & Fujimoto 1983; Han & Qiao 1993; Stanev 1997; Harari, Mollerach & Roulet 1999; Tinyakov & Tkachev 2002), defines the radial and azimuthal field components (in Galactocentric cylindrical coordinates) as $$B_r=B(r,\theta ,z)\sin q, \, \, \, \, \, \, B_{\theta }=-B(r,\theta ,z)\cos q,$$ (11) where q is the pitch angle of the logarithmic spiral, and where the function B(r, θ, z) is defined as $$B(r,\theta ,z) = - B_0(r)\cos \left(\theta + \beta \log \frac{r}{r_0} \right) \exp (-|z|/z_0),$$ (12) where β = 1/tan q. 
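A minimal implementation of the BSS parametrization of equations (11) and (12) follows (plain Python). The amplitude B0 is taken constant here, and B0, r0, z0, and the default pitch angle are placeholder values rather than fitted parameters; the field is set to zero for r ≤ 1 kpc as in the text below.

```python
import math

def bss_field(r_kpc, theta, z_kpc, b0=2.0, q_deg=-11.5,
              r0_kpc=10.5, z0_kpc=1.0):
    """Radial and azimuthal components (B_r, B_theta) of the BSS field,
    eqs. (11)-(12), in galactocentric cylindrical coordinates.
    Units: microgauss for b0, kpc for distances, radians for theta."""
    if r_kpc <= 1.0:            # field assumed to vanish at small radius
        return 0.0, 0.0
    q = math.radians(q_deg)
    beta = 1.0 / math.tan(q)    # beta = 1/tan(q), eq. (12)
    b = -b0 * math.cos(theta + beta * math.log(r_kpc / r0_kpc)) \
        * math.exp(-abs(z_kpc) / z0_kpc)
    return b * math.sin(q), -b * math.cos(q)

# Sample evaluation near the solar circle; |(B_r, B_theta)| = |B(r,theta,z)|
# since sin^2(q) + cos^2(q) = 1.
br, btheta = bss_field(8.5, 0.0, 0.0)
print(br, btheta)
```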
We model the regular magnetic field using such a BSS parametrization, in which we consider the z-component of the GMF to be zero. The model is restricted to r > 1 kpc to avoid divergence of the field at small radius (and is hence assumed to vanish for r ≤ 1 kpc). The value of the pitch angle of the spiral arms in the Milky Way is still a matter of debate in the community. Estimates of this angle range from −5° to −55° depending on the tracer used to determine it, with the most commonly cited value being around −11.5°. A possible explanation for the wide range of pitch angles determined from different data sets is that the pitch angle is not constant but varies with radius, meaning the spirals are not exactly logarithmic (e.g. slightly irregular). In our case, the model should reproduce as well as possible the polarized dust emission on large scales, and at high galactic latitude in particular. The simple large-scale density model of equation (7) together with the BSS large-scale magnetic field from equations (11) and (12) can be integrated following equations (2)–(4) to provide a first guess of dust intensity and polarization distribution for each layer ($$I^i_m,Q^i_m,U^i_m$$). We initially assume that the intrinsic local polarization fraction p(r) in equations (3) and (4) is constant and equal to 20 per cent. Since we already have layers of intensity emission ($$I^i_{353}$$), the polarized emission in each layer i can be generated as $$\widetilde{Q}^{i}_{353}=\left(\frac{Q^{i}_m}{I_m^i}\right)I^i_{353}, \, \, \, \, \, \, \, \, \, \, \, \, \widetilde{U}^{i}_{353}=\left(\frac{U^{i}_m}{I_m^i}\right)I^i_{353}.$$ (13) The best-fitting pitch angle q can be found by minimizing some function of the difference between the simple polarization model obtained from equation (13) and the observations. 
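This fit amounts to a one-dimensional search over the pitch angle. A toy sketch (plain Python): mock observed Q, U are generated from the model at a known pitch angle, which a grid search on an L1 objective then recovers. The forward model here is a deliberate stand-in (a simple sinusoid in the pitch angle), not the full LOS integration of the paper:

```python
import math

def toy_qu_model(q_deg, pixels):
    """Stand-in forward model: per-pixel (Q, U) as a function of pitch
    angle. In the real fit this is the BSS + layer model of eq. (13)."""
    q = math.radians(q_deg)
    return [(math.cos(2 * (phi + q)), math.sin(2 * (phi + q)))
            for phi in pixels]

pixels = [0.1 * k for k in range(60)]   # mock pixel "positions"
q_true = -33.0
obs = toy_qu_model(q_true, pixels)

def l1_objective(q_deg):
    """L1 norm of the Q and U differences, as in eq. (14)."""
    model = toy_qu_model(q_deg, pixels)
    return sum(abs(qm - qo) + abs(um - uo)
               for (qm, um), (qo, uo) in zip(model, obs))

# Grid search over the plausible pitch-angle range, -55 to 0 deg.
grid = [q / 2 for q in range(-110, 1)]  # 0.5 deg steps
q_best = min(grid, key=l1_objective)
print(q_best)  # recovers q_true = -33.0
```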
We minimize the L1 norm of the difference in Q and U, summed over all pixels at high galactic latitude: $$G(q)= \sum _{p} \left( \left|\widetilde{Q}^{{\rm model}}_{353}(q)- Q^{{\rm obs}}_{353}\right| + \left|\widetilde{U}^{{\rm model}}_{353}(q) -U^{{\rm obs}}_{353}\right| \right),$$ (14) where the dependence on the pitch angle q has been specified for clarity, and where the total modelled Q is the sum of the simple layer contributions from equation (13): $$\widetilde{Q}^{{\rm model}}_{353} = \sum _{i=1}^{N} \widetilde{Q}^{i}_{353}$$ (15) and similarly for U. We find that a pitch angle of −33° provides the best fit of the BSS model to the GNILC maps at galactic latitude |b| ≥ 15°, which is the region of the sky of most interest for CMB observations. Finally, to match the observations, we redistribute the residuals (observed emission minus modelled emission for q = −33°) among the modelled layers of emission, weighted by pixel-dependent weights Fi: $$Q^{i}_{353}=\left(\frac{Q^{i}_m}{I_m^i}\right)I^i_{353}+F_i \left[Q^{{\rm obs}}_{353}-\sum _{j=1}^{N}\left(\frac{Q_{m}^j}{I_m^j}\right)I^j_{353}\right].$$ (16) This guarantees that the model matches the observation at 353 GHz on the angular scales that are observed with good signal-to-noise ratio by Planck. However, these weights Fi must be such that the polarization fraction after the redistribution of residuals does not exceed some maximum value pmax, which is a free parameter of our model, and which we pick to be 25 per cent. We fix the value of Fi as Fi = Pi/∑jPj, i.e. proportionally to the polarized dust emission fraction in each layer, unless the resulting polarization fraction exceeds pmax. When this happens, we redistribute the polarization excess in neighbouring layers. The first term in the sum on the right-hand side of equation (16) is the predicted polarization of layer i, based on a polarization fraction predicted by the BSS magnetic field applied to an intensity map for that layer. 
The second term is the correction that is applied to force the sum of all layers' emissions to match the observed sky. The U Stokes parameter is modelled in a similar way. With this approach, we straightforwardly constrain the sum of emissions from all the layers to match the total observed emission for both Q and U. Fig. 4 shows the polarized layers $$Q_m^i$$ and $$U_m^i$$ given by the large-scale model of the magnetic field, while Fig. 5 shows the polarized layers after redistributing the residuals over the former layers. After adding the small-scale features (next section), we get the maps displayed in Fig. 6. A visual comparison with Fig. 4 shows that while the regular BSS field model does a reasonable job at predicting the very large scale polarization patterns (lowest modes of emission) at high galactic latitude (after picking the appropriate pitch angle), it fails at predicting most of the features of the observed polarized dust emission on intermediate scales. In addition, the amplitude of the modelled polarized emission at high Galactic latitude is seen to be too strong compared to the observations. It is thus important, for the modelled emission to be reasonably consistent with Planck data, to enforce that the model match the observations, as we do, and not just rely on a simple regular model of the magnetic field, which does not exactly capture the observed features of the real emission. Figure 4. U and Q dust emission layers obtained using a model of dust fraction in each layer based on a simple model of dust density distribution in the Galaxy, and a large-scale bisymmetric spiral model of the GMF to infer thermal dust polarization emission from dust intensity maps at 353 GHz.
Figure 5. U and Q dust emission layers after renormalization of the sum to match the observed sky (Planck HFI GNILC dust polarization maps at 353 GHz). Figure 6. U and Q layers after matching with the observed sky (as in Fig. 5), after adding random small-scale fluctuations at a level matching an extrapolation of the temperature and polarization angular power spectra and cross-spectra.
5 SMALL SCALES
The polarization maps we have generated are normalized to match the observed dust polarization in the GNILC 353 GHz maps obtained as described in Section 3. However, the polarization GNILC maps are produced at 1° angular resolution. In the galactic plane, where the polarized signal is strong, this is the actual resolution of the GNILC map. At high galactic latitude however, the amount of polarized sky emission power is low compared to noise even at intermediate scales. The GNILC processing then ‘filters’ the maps to subtract noise when no significant signal is locally detected. 
Hence, there is a general lack of small scales in the template E and B maps used to model polarized emission so far: everywhere on the sky on scales smaller than 1° (because the GNILC maps are produced at 1° angular resolution), but also on larger scales at high galactic latitude (because of the GNILC filtering). We must then complement the maps with small scales in a range of harmonic modes that depends on the layer considered, the first three layers covering most of the high galactic latitude sky, and the last three dominating the emission in the galactic plane and close to it, where the GNILC filtering removes less of the intermediate scales. Small angular scale polarized emission arises both from the small-scale distribution of matter in three dimensions, and from the fact that on small scales the magnetic field becomes gradually more irregular, tangled and turbulent. Fully characterizing the strength, direction, and structure of the GMF in the entire Milky Way is a daunting task, involving measurements of very different observational tracers (see Han 2017 for a recent review). This field can be considered as a combination of a regular field as discussed above, complemented by a turbulent field that is caused by local phenomena such as supernova explosions and shock waves. The GMF is altered by gas dynamics, magnetic reconnection, and turbulence effects. Observations constrain only one component of the magnetic field (e.g. strength or direction, parallel or perpendicular to the LOS) in one particular tracer (ionized gas, dense cold gas, dense dust, diffuse dust, cosmic ray electrons, etc.). This provides us with only partial information, making it extremely difficult to generate an accurate 3D picture. The small-scale magnetic field can be modelled with a combination of components that can be isotropic, or somewhat ordered with, e.g. a direction that does not vary on small scales while the sign of the B vector does, as illustrated in fig. 1 of Jaffe et al. (2010). 
The amplitude of these small-scale fields depends on the turbulent energy density. In both the Milky Way and in other spiral galaxies, the fields have been found to be more turbulent within the spiral arms than between them (Jaffe et al. 2010). Different strategies to constrain the strength of the random magnetic field (whether or not separating ordered and purely turbulent components) estimate an amplitude of the turbulent field of about the same order of magnitude as that of the regular part, ranging however from 0.7 to 4 μG for different estimates (Haverkorn 2015). In a typical model, the power spectrum of the random magnetic field is assumed to follow a Kolmogorov spectrum (with spectral index n = 5/3) with an outer scale of 100 pc. In our work, we do model the large-scale, regular magnetic field using the BSS model of equations (11) and (12) to get a first guess of the layer-dependent dust polarization, but we do not attempt to directly model the 3D turbulent magnetic field. Indeed, it is not possible to implement a description of the real field down to those small scales, for lack of observations. The alternative strategy of generating a random turbulent magnetic field, as in Fauvet et al. (2011), produces fluctuations with random phases and orientations, and dust polarization fluctuations that cannot be expected to match those observed in the real sky. Hence – as we detail next – we propose instead to rely on the observed polarized dust on scales where those observations are reliable, and extend the power spectra of our maps at high l in polarization, independently for each layer, to empirically model the effect of a small-scale turbulent component of the GMF, on scales missing or noisy in the GNILC 353 GHz map. To do so, we add small-scale fluctuations independently in each layer of our model, both for intensity and for polarization. 
In the case of intensity, we simply fit the power spectrum of the original map in the multipole interval 30 ≤ l ≤ 300, obtaining spectral indexes in harmonic space ranging from −2.2 to −3.2 as a function of the layer (steeper at further distances). We use these fitted spectral indexes to generate maps of fake intensity fluctuations, generated with a lognormal distribution in pixel space (so that the dust emission is never negative), with an amplitude proportional to the large-scale intensity, and globally adjusted to match the level of the angular power spectrum. We use a similar prescription for E and B, except that following the Planck results presented in Planck Collaboration XXX (2016a), we assume a power-law dependence for EE and BB power spectra at high l, of the form Cl = A(l/lfit)α with α = −2.42 for both E and B. We use a Gaussian distribution, instead of lognormal, for polarization fields. For each layer, we fix the amplitude A and lfit to match the power spectrum of the large-scale map for that layer in the range 30 ≤ l ≤ 100. The amplitude of the small-scale fluctuations is scaled by the polarized intensity map in each layer. The randomly generated T and E harmonic coefficients are drawn with 30 per cent correlation between the two, while B is uncorrelated with both T and E. We then make combined maps which use large scales from the observations, and the smallest scales from the simulations, as follows. For each layer, we have an observed map, with a beam window function bℓ for temperature and hℓ for polarization, i.e. $$a_{\ell m}^{T, \rm {obs}} = b_\ell \, a_{\ell m}^{T, \rm {sky}}; \quad a_{\ell m}^{E, \rm {obs}} = h_\ell \, a_{\ell m}^{E, \rm {sky}}$$ (17)and we have available $$a_{\ell m}^{T, \rm {rnd}}$$ and $$a_{\ell m}^{E, \rm {rnd}}$$ randomly generated following modelled statistics $$C_\ell ^{TT}$$, $$C_\ell ^{EE}$$ and $$C_\ell ^{TE}$$, which we assume match the statistics of real sky emission. 
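The lognormal prescription for the intensity small scales can be sketched as follows (plain Python with a stdlib random generator; the fluctuation amplitude `sigma` is an arbitrary illustrative value, and the handling of the fitted power-law spectrum is left schematic, while the real implementation works with harmonic coefficients):

```python
import math
import random

rng = random.Random(0)  # fixed seed for reproducibility

def add_lognormal_small_scales(i_large, sigma=0.3):
    """Multiply a large-scale intensity map by lognormal fluctuations.
    exp(g - sigma^2/2) with g ~ N(0, sigma^2) has unit mean, so the
    large-scale intensity is preserved on average, and the result is
    strictly positive (dust emission is never negative)."""
    out = []
    for i_pix in i_large:
        g = rng.gauss(0.0, sigma)
        out.append(i_pix * math.exp(g - 0.5 * sigma**2))
    return out

i_large = [1.0, 2.5, 0.3, 4.0, 0.05]   # toy large-scale intensities
i_small = add_lognormal_small_scales(i_large)
print(i_small)
```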
We complement the observed aℓm by forming $$a_{\ell m}^{T, \rm {sim}} = a_{\ell m}^{T, \rm {obs}} + \sqrt{1-b_\ell ^2} \, a_{\ell m}^{T, \rm {rnd}}$$ (18) and similarly $$a_{\ell m}^{E, \rm {sim}} = a_{\ell m}^{E, \rm {obs}} + \sqrt{1-h_\ell ^2} \, a_{\ell m}^{E, \rm {rnd}},$$ (19) i.e. we make the transition between large and small scales in the harmonic domain using smooth harmonic windows, corresponding to that of a Gaussian beam of 5 arcmin for all layers in intensity, 2.5° for polarization layers 1, 2, and 3 (emission mostly at high galactic latitude), and of 2° for polarization layers 4, 5, and 6 (emission mostly near the Galactic plane). These simulated sets of aℓm have the correct $$C_\ell ^{TT}$$ and $$C_\ell ^{EE}$$, but not the correct cross-spectrum $$C_\ell ^{TE}$$. Indeed, $$C_\ell ^{TE, {\rm sim}} = C_\ell ^{TE} \left[ b_\ell h_\ell + \sqrt{1-b_\ell ^2}\sqrt{1-h_\ell ^2} \right].$$ (20) To correct for this, we obtain the final simulated aℓm as $$a_{\ell m}^{\rm {final}} = \left[ C_\ell \right]^{1/2} [ C_\ell ^{\rm sim} ]^{-1/2} \, a_{\ell m}^{\rm {sim}},$$ (21) where, for each ℓ, Cℓ and $$C_\ell ^{\rm sim}$$ are 2 × 2 matrices corresponding to the terms of the multivariate (T, E) power spectra of the model and of the simulated maps with small scales added. Fig. 7 shows the maps of polarized emission after the various steps of our simulation process, summing up the contributions of all layers. Final maps of polarized intensity can be seen in Fig. 8. The percentage of pixels with a given polarization fraction decreases with increasing polarization fraction, as seen in Fig. 9. Figure 7. First row: full-sky Q and U maps given by the BSS model. Second row: Q and U GNILC maps. Third row: Q and U total simulated maps after matching the GNILC maps on large scales and adding random small-scale fluctuations. The BSS model provides only a crude approximation of the observed dust emission.
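The recolouring of equation (21) uses matrix square roots of the 2 × 2 (T, E) spectral matrices. A self-contained sketch (plain Python, symmetric 2 × 2 eigendecomposition; the input matrices are toy spectra, not fitted values): applying A = C^{1/2} [C^sim]^{−1/2} to coefficients with covariance C^sim yields covariance A C^sim A^T = C exactly, which is what restores the TE correlation.

```python
import math

def sym2_power(m, p):
    """Return M**p for a symmetric positive-definite 2x2 matrix
    M = [[a, b], [b, c]], via eigendecomposition."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    disc = math.sqrt((a - c)**2 + 4 * b**2)
    l1, l2 = (a + c + disc) / 2, (a + c - disc) / 2
    if disc == 0.0:                          # isotropic case
        return [[l1**p, 0.0], [0.0, l1**p]]
    if b != 0.0:
        v1 = (b, l1 - a)                     # eigenvector of l1
    else:
        v1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n1 = math.hypot(v1[0], v1[1])
    v1 = (v1[0] / n1, v1[1] / n1)
    v2 = (-v1[1], v1[0])                     # orthogonal eigenvector
    out = [[0.0, 0.0], [0.0, 0.0]]
    for lam, v in ((l1, v1), (l2, v2)):
        for i in range(2):
            for j in range(2):
                out[i][j] += lam**p * v[i] * v[j]
    return out

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Toy model spectrum C_l (with TE correlation) and a simulated spectrum
# whose TE term was suppressed by the blending of eq. (20).
c_model = [[4.0, 1.2], [1.2, 1.0]]
c_sim = [[4.0, 0.6], [0.6, 1.0]]

a_mat = matmul2(sym2_power(c_model, 0.5), sym2_power(c_sim, -0.5))
a_mat_t = [[a_mat[0][0], a_mat[1][0]], [a_mat[0][1], a_mat[1][1]]]
recoloured = matmul2(matmul2(a_mat, c_sim), a_mat_t)  # equals c_model
```

The identity holds because the symmetric square roots satisfy [C^sim]^{−1/2} C^sim [C^sim]^{−1/2} = I, leaving C^{1/2} C^{1/2} = C.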
Figure 8. Layers of polarized intensity ($$P = \sqrt{Q^2+U^2}$$), as modelled in our work.
Figure 9. Histograms of polarization fraction for each layer. We only use pixels where the polarization fraction is well defined, i.e. I(p) ≠ 0. This excludes high galactic latitude pixels for the most distant layers.
The power spectra of simulated maps in all the layers after this full process are shown in Fig. 10. The comparison of the power spectra of the original GNILC maps with those resulting from the individual sum of the simulations with small-scale fluctuations added in each layer is shown in Fig. 11: the missing power on small scales is complemented with fake, simulated small-scale fluctuations. We show full-sky maps of E and B at 353 GHz in Fig. 12. A detail at (l, b) = (0°, 50°) is shown in Fig. 13. E and B power spectra of the original GNILC maps and the simulations at 143 and 217 GHz are shown in Fig. 14.
Figure 10. T, E, B power spectra for each layer. The first three rows also display the power spectra for 75 per cent and 25 per cent of the sky.
Figure 11.
TT, EE, BB, TE power spectra of both GNILC maps and of simulated maps including small-scale fluctuations.
Figure 12. Modelled E and B modes maps at 353 GHz, after adding small-scale fluctuations, adding up six layers of emission (see the text).
Figure 13. Observed and modelled E and B modes maps at 353 GHz – detail around (l, b) = (0°, 50°). Top row: T, E, and B modes, observed with Planck after GNILC processing. Bottom row: Modelled T, E, and B modes at Nside = 512, after adding small-scale fluctuations, adding up six layers of emission.
Figure 14. E and B power spectra of both GNILC maps and GNILC maps + small-scale fluctuations at 143 and 217 GHz.
6 SCALING LAWS
We now need a prescription for scaling the 353 GHz polarized dust emission templates obtained above across the range of frequencies covered by Planck and future CMB experiments. We retain the empirical form of dust emission laws (for each layer, an MBB with pixel-dependent temperature and spectral index), but now we must define as many templates of T(p) and β(p) as there are layers in our model, i.e. six maps of T and six maps of β.
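A modified blackbody (MBB) scaling of a 353 GHz amplitude to another frequency can be sketched as follows, in brightness units and ignoring bandpass and unit-conversion details; the per-layer amplitudes, temperatures, and spectral indices below are hypothetical placeholders:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23   # Boltzmann constant [J/K]

def planck_bnu(nu_hz, t_k):
    """Planck function B_nu(T), up to the 2h/c^2 prefactor that cancels
    in the ratio taken below."""
    x = H * nu_hz / (K_B * t_k)
    return nu_hz**3 / np.expm1(x)

def mbb_scale(i_ref, nu_ghz, beta, t_k, nu_ref_ghz=353.0):
    """Scale a reference (353 GHz) dust amplitude to another frequency
    with a modified blackbody law I_nu ~ nu^beta B_nu(T)."""
    nu, nu0 = 1e9 * nu_ghz, 1e9 * nu_ref_ghz
    return i_ref * (nu / nu0) ** beta * planck_bnu(nu, t_k) / planck_bnu(nu0, t_k)

# the total emission of the multilayer model is the sum of per-layer MBBs
amp353 = np.array([1.0, 0.5])   # hypothetical per-layer amplitudes at 353 GHz
beta = np.array([1.6, 1.5])     # hypothetical spectral indices
temp = np.array([19.1, 19.0])   # hypothetical temperatures [K]
i143 = mbb_scale(amp353, 143.0, beta, temp).sum()
```

In the model each layer carries its own T(p) and β(p) maps, so the arrays above would be full sky maps rather than scalars per layer.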
A complete description of the temperature and the spectral index distribution in 3D would require observations of the intensity emission at different frequencies in each layer, which are not presently available. Nor can we use the same temperature and spectral index maps for each layer (otherwise there is no point in using several layers to model the total emission). Finally, the scaling law we use for all the layers must be such that the final dust emission across frequencies matches the observations, i.e.
(i) On average, the dust intensity scaled to other Planck frequencies (besides 353 GHz, at which matching the observations is enforced by construction) should be as close as possible to the actual Planck observed dust intensity.
(ii) Similarly, each of the dust Q and U polarization maps, scaled to frequencies other than 353 GHz, should match the observed polarization at those frequencies.
(iii) If we perform an MBB fit on our modelled dust intensity maps, the statistical distribution of temperature and spectral index should match those observed on the real sky: same angular power spectra and cross-spectra, similar non-stationary distribution of amplitudes of fluctuations across the sky, and a similar T–β scatter plot.
With only the 353 GHz polarization maps having good signal-to-noise ratio, we construct our model for frequency scaling on intensity alone. In a first step, we make use of the fraction of dust assigned to each layer to compute the weighted mean of the spectral index and temperature maps for each layer, using the overall maps obtained from the MBB fit made in Planck Collaboration XI (2014), which we assume to hold for all of the I, Q, and U Stokes parameters.
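The weighted per-layer means described here (formalized in equations 22 and 23 below) reduce to a weighted average over pixels; a sketch with hypothetical fitted maps and layer fractions:

```python
import numpy as np

def layer_averages(t_map, beta_map, layer_fractions):
    """Weighted per-layer means of the fitted dust temperature and spectral
    index, using the fraction of dust emission in each layer as weights."""
    t_avg = np.array([np.average(t_map, weights=f) for f in layer_fractions])
    beta_avg = np.array([np.average(beta_map, weights=f) for f in layer_fractions])
    return t_avg, beta_avg

rng = np.random.default_rng(2)
npix = 1000
t_map = 19.4 + 1.2 * rng.standard_normal(npix)     # hypothetical T_d(p) [K]
beta_map = 1.6 + 0.13 * rng.standard_normal(npix)  # hypothetical beta_d(p)
fractions = rng.random((6, npix))                  # hypothetical f_i(p)
fractions /= fractions.sum(axis=0)                 # fractions sum to 1 per pixel
t_avg, beta_avg = layer_averages(t_map, beta_map, fractions)
```

Weighting by f_i(p) gives most influence to the pixels where layer i actually dominates the emission, as the text motivates.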
We compute, for each layer i: \begin{eqnarray} T_{{\rm avg}}^i&=&\sum _{p} w^i_T(p) T_d(p), \nonumber\\ \beta _{{\rm avg}}^i&=&\sum _{p} w^i_\beta (p) \beta _d(p), \end{eqnarray} (22) where Td(p) and βd(p) are the best-fitting values of the overall MBB fit of Planck dust emission in each pixel, and where $$w^i_T(p)$$ and $$w^i_\beta (p)$$ are weights used for computing the average. We use the same weights for both the temperature $$T_{{\rm avg}}^i$$ and the spectral index $$\beta _{{\rm avg}}^i$$, $$w^i_T(p) = w^i_\beta (p) = f_i(p),$$ (23) i.e. we empirically weight the maps by the pixel-dependent fraction fi(p) of dust emission in layer i, to take into account the fact that we are mostly interested in the temperature and spectral index of the regions of sky where that layer contributes most to the total emission. The simplest way to scale to other frequencies is to assume that $$T_{{\rm avg}}^i$$ and $$\beta _{{\rm avg}}^i$$ are constant across the sky in a given layer. This, however, implements only a variability of the physical parameters T and β along the LOS, and no longer any variability across the sky. It provides a (uniform) prediction of the scaling law in each layer that is informed by the observed emission law, but which does not reproduce the observed variability across the pixels of the globally fitted T and β (even if a global fit might find fluctuations because of the varying proportions of the various layers in the total emission as a function of sky pixel). To generate fluctuations of the spectral index and temperature of dust emission in each layer, we first generate, for each layer, Gaussian random variations around $$T_{{\rm avg}}^i$$ and $$\beta _{{\rm avg}}^i$$ following the auto and cross-spectra of the MBB fit obtained on the observed Planck dust maps (Planck Collaboration XLVIII 2016e). To take into account the non-Gaussianity of the distribution of T and β, we then re-map the fluctuations to match the observed probability distribution function in pixel space.
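This re-mapping step is essentially histogram matching: each pixel of the Gaussian realization is replaced by the value of the target distribution with the same rank. A sketch, with a hypothetical gamma-distributed target standing in for the observed pixel distribution:

```python
import numpy as np

def match_pdf(gaussian_field, target_sample):
    """Rank-remap a Gaussian field so its one-point distribution matches
    that of target_sample, while preserving the spatial ordering of values."""
    order = np.argsort(np.argsort(gaussian_field))   # rank of each pixel
    quantiles = (order + 0.5) / gaussian_field.size
    return np.quantile(target_sample, quantiles)

rng = np.random.default_rng(3)
g = rng.standard_normal(10000)              # Gaussian realization
target = rng.gamma(shape=4.0, size=10000)   # hypothetical non-Gaussian PDF
remapped = match_pdf(g, target)
# the remapped field is monotonically related to the Gaussian one
assert np.all(np.diff(remapped[np.argsort(g)]) >= 0)
```

As noted in the text, this monotone transform perturbs the power spectrum, which is why a re-filtering pass follows.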
This slightly changes the map spectra. As a final step, we thus re-filter the maps to match the observed auto and cross-spectra of T and β. One such iteration yields simulated temperature and spectral index maps in good statistical agreement with the observations. In Fig. 15 we show a random realization of temperature and spectral index maps for the first layer, together with its power spectra and scatter plot.
Figure 15. Top left: Power spectra used to draw random realizations of temperature and spectral index maps (note the negative sign of the Tβ cross-spectrum). Bottom left: Scatter plot of T and β for a pair of random maps (right), showing an overall anticorrelation and the same general behaviour as observed by Planck Collaboration XI (2014) on Planck observations (see their fig. 16). Right: Maps of randomly generated temperature and spectral index for the first layer, with Tavg = 19.10, σT = 2.059, βavg = 1.627, σβ = 0.209.
We then model the total emission at 353, 545, 857 GHz and 100 microns using those scaling laws, summing up contributions from all six layers, and, in order to validate that the simulation is compatible with the observations, check with an MBB fit on the global map whether the distribution of the fitted parameters for the model is similar to that inferred from the real Planck observations. We find two problems.
First, the average temperature and spectral index fitted on the total emission turn out to be slightly larger and smaller, respectively, than observed on the real sky. This is not surprising: as the emission in each pixel is a sum, the layer with the largest temperature and the smallest spectral index tends to dominate the total emission both at ν = 3 THz, pulling the temperature towards higher values, and at low frequency, pulling the spectral index towards lower values. We find that the average MBB-fit temperature from the model matches the observations if we rescale the temperature in individual layers by a factor 0.982. Secondly, the standard deviations of the resulting fitted T and β are significantly smaller than those of the real sky, presumably because of averaging effects. We recover a global distribution of temperature and spectral index, as fitted on the total emission, if we rescale the amplitude of the temperature and spectral index fluctuations generated in each layer. We find that to match the observed T and β inhomogeneities of the MBB fit performed on GNILC Planck and IRAS maps, we need to multiply the amplitude of temperature fluctuations in each layer by 1.84 and the spectral index fluctuations by 1.94. With this rescaling, we find a good match between the simulated and the observed temperature and spectral index distributions in the global MBB fit. Table 1 shows the standard deviation and the average values for T and β in each layer for one single realization of the simulation, compared with those from the Planck MBB fit and the fit performed on this realization. The average values from several simulations are in good agreement with those of the Planck MBB fit. Table 1. Averages and standard deviation values of temperature and spectral index in each layer, for a simulation with 6.87 arcmin healpix pixels at Nside = 512.
The average and standard deviation of the resulting temperature and spectral index, as obtained from an MBB fit on the total intensity maps at 353, 545, 857, and 3000 GHz, are compared to what is obtained on Planck observations.

Layer   1      2      3      4      5      6
Tavg    19.10  18.96  18.98  19.35  19.23  20.05
σT      2.059  2.100  2.022  2.076  2.117  2.069
βavg    1.627  1.628  1.598  1.538  1.513  1.689
σβ      0.209  0.210  0.207  0.208  0.202  0.204

            Tavg(MBB)  σT(MBB)  βavg(MBB)  σβ(MBB)
Planck fit  19.396     1.247    1.598      0.126
Simul. fit  19.389     1.253    1.598      0.135

7 VALIDATION AND PREDICTIONS
We use our model to generate maps of polarized dust emission at 143 and 217 GHz, and compare them to Planck observations in polarization and intensity (Fig. 16). Even if our model is not specifically constrained to exactly match the observations at these other frequencies, we observe a reasonable overall agreement both for polarization and intensity. Naturally, the discrepancies between model and observation become larger as we move further away from the reference frequency: randomly drawn temperature and spectral index fluctuations are not expected to be those of the real microwave sky.
Figure 16. GNILC maps both in intensity and polarization are shown in the first and the fourth rows (subindex G), while maps obtained using our 3D model are shown in the second and the fifth rows (subindex m). The differences between them are also shown in the third and sixth rows. Note the different colour scales for the difference maps.
Fig. 17 shows the cross-correlation of E and B power spectra between the Planck observations and the modelled dust maps when we use uniform temperature and spectral index maps in each layer. Fig. 18 shows the cross-correlation between the various modelling options, showing that those models differ only at a subdominant level. These correlations are computed for maps smoothed to 2° angular resolution over 70 per cent of the sky. Each figure compares the correlation, as a function of angular scale, between the real-sky GNILC maps obtained from Planck data and the modelled emission. Three models are considered: (a) a 3D model in which the temperature and spectral index are constant in each layer, using the average values from Table 1; (b) a 3D model in which each layer has a different pixel-dependent map of T and β (the main model developed in this paper); (c) a 2D model in which the 353 GHz total maps of E and B are simply scaled using the temperature and spectral index from the fit on the intensity maps (from equation 1).
Figure 17. Cross-correlation between simulations and observations for T, E, and B power spectra at 143 and 217 GHz, over 70 per cent of the sky. We show in blue and green the correlation in intensity and polarization between maps generated with our model and the observations. While the blue curves are computed using one single value of temperature and spectral index per layer (model a), the green curves consider one template per layer, with fluctuations of the temperature and the spectral index (model b). Red curves show the correlation between the observed polarized sky maps and maps obtained from the 2D model (model c), i.e.
one single template for temperature and spectral index from the MBB fit obtained on the observed Planck dust maps (Planck Collaboration XLVIII 2016e).
Figure 18. Cross-correlations of T, E, and B between various modelling options. Differences between these models in polarization are at the level of a few per cent at most for ℓ ≤ 100.
We see an excellent correlation overall in all cases: more than 96 per cent at 143 GHz and more than 99 per cent at 217 GHz for polarization, and slightly worse for intensity, a difference that might be due to the presence of other foreground emission in the Planck foreground intensity maps – free–free, point sources, and/or CO line contamination. This shows that the large-scale polarization maps are in excellent agreement with the observations across the frequency channels with the best sensitivity to the CMB. The correlation decreases at higher ℓ.
This is probably due to a combination of non-vanishing noise in the GNILC maps, residuals of small-scale fluctuations in the template 353 GHz E and B maps that are used to model the total polarization, and the lack of small scales in the modelled scaling law of each layer. Because GNILC, in a way, ‘selects’ modes that are correlated between channels, it may also be that the correlation of the model with the GNILC data is artificially high. We postpone further investigation of this possible effect to future work. As expected, when random fluctuations of T and β are generated in each layer, the correlation with the real observations is reduced. We also compute the average intensity emission across frequencies (Fig. 19), and note that, as expected, our multilayer model has more power at low frequency than a 2D model with one single MBB per pixel. The same effect is observed both in intensity and in polarization.
Figure 19. Left: Average total sky emission in intensity for our 3D multi-MBB model as compared to a ‘2D’ model with one single MBB per pixel. Both have the same intensity at 353 GHz by construction. The 3D model has a flatter emission law at low frequency, an effect that originates from the increasing importance at low frequency of components with flatter spectral index that may be subdominant at higher frequency, where the emission is dominated by hotter components. Right: Ratio of the average emission law of the 3D model and the 2D model, for both intensity and polarized intensity.
Finally, we can compute the level of decorrelation between polarization maps at different frequencies as predicted by our model. Understanding this decorrelation is essential for future component separation work to detect CMB B modes with component separation methods that exploit correlations between foregrounds at different frequencies, such as variants of the ILC (Tegmark, de Oliveira-Costa & Hamilton 2003; Eriksen et al. 2004; Delabrouille et al. 2009), CCA (Bonaldi et al. 2006), or SMICA (Delabrouille, Cardoso & Patanchon 2003; Cardoso et al. 2008; Betoule et al. 2009). We generate maps with small scales and with random fluctuations of temperature and spectral index in each layer. We compute the correlation between polarization maps (both E and B) at 143 or 217 GHz and 353 GHz (see Fig. 20) for our 3D model. The correlations obtained in both cases range from 97 per cent on small scales to close to 100 per cent on large scales, which is larger than what is observed on real Planck maps (Planck Collaboration L 2017). This shows that even if our multilayer model adds a level of complexity to dust emission modelling, it cannot produce a decorrelation between frequencies as strong as originally claimed in the first analysis of Planck polarization maps (Planck Collaboration L 2017). Our model, however, is compatible with the lack of evidence for such decorrelation between 217 and 353 GHz at the 0.4 per cent level for 55 ≤ ℓ ≤ 90 claimed in Sheehy & Slosar (2017), and predicts increased decorrelation (of the order of 1–2 per cent) between 143 and 353 GHz over the same range of ℓ.
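The origin of such frequency decorrelation in a multilayer model can be illustrated with a toy calculation: two independent layers with different spectral indices make the maps at two frequencies imperfectly proportional. The sketch below uses simple power-law scalings and hypothetical parameters rather than the full MBB treatment:

```python
import numpy as np

def correlation(x, y):
    """Pearson correlation of two 'maps' (flattened pixel vectors)."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

rng = np.random.default_rng(4)
npix = 100000
# two statistically independent layers along the line of sight
layer1 = rng.lognormal(sigma=0.5, size=npix)
layer2 = rng.lognormal(sigma=0.5, size=npix)

def sky(nu_ghz, beta1=1.5, beta2=1.7, nu0=353.0):
    """Total map: sum of layers, each scaled with its own power law."""
    return layer1 * (nu_ghz / nu0) ** beta1 + layer2 * (nu_ghz / nu0) ** beta2

rho_217 = correlation(sky(217.0), sky(353.0))
rho_143 = correlation(sky(143.0), sky(353.0))
# decorrelation grows with spectral distance from the reference frequency
assert rho_143 < rho_217 < 1.0
```

With only two layers and a modest spread in β, the decorrelation is at the sub-per-cent level, consistent with the small effects quoted in the text.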
More multifrequency observations of polarized dust emission are necessary to better model dust polarization and refine these predictions. We also note that in our model, as shown in Fig. 21, the correlations do not significantly depend on the region of sky, as they remain similar for smaller sky fractions.
Figure 20. Correlation between maps at different frequencies obtained with our 3D model, computed over 70 per cent of the sky.
Figure 21. Correlation between maps at different frequencies obtained with our 3D model, computed over different sky fractions.
8 CONCLUSION
We have developed a 3D model of polarized Galactic dust emission that is consistent with the large-scale Planck HFI polarization observations at 143, 217, and 353 GHz. The model is composed of six layers of emission, loosely associated with different distance ranges from the Solar system as estimated from stellar extinction data. Each of these layers is assigned an integrated intensity and polarization emission at 353 GHz, adjusted so that the sum matches the Planck observations on large scales. Small-scale fluctuations are randomly generated to model the emission on scales that have not been observed with sufficient signal-to-noise ratio with Planck. For intensity, these random small scales extend the dust template beyond the Planck resolution of about 5 arcmin. For polarization, small-scale fluctuations of emission originating from the turbulence of the GMF are randomly generated on scales smaller than 2° or 2.5°, depending on the layer of emission considered.
The level and correlations of the randomly generated fluctuations are adjusted to extend the observed multivariate spectrum of the T, E, and B components of the observed dust emission, assuming a 30 per cent correlation of T and E. One of the primary motivations of this work is the recognition of the fact that if the parameters that define the scaling of dust emission between frequencies of observation vary across the sky, they must also vary along the LOS. We hence assign to each layer of emission a different, pixel-dependent scaling law in the form of an MBB emission characterized, for each pixel, by a temperature and an emissivity spectral index. Observational constraints to infer the real scaling law for each layer are lacking. We hence generate random scaling laws adjusted to match, on average, the observed global scaling, with fluctuations of temperature and spectral index compatible with the observed distribution of these two parameters as fitted on the Planck HFI data. The model developed here does not pretend to be exact. The lack of multifrequency, high signal-to-noise dust observations in polarization forbids such an ambition. None the less, the model provides a means to simulate a dust component that features some of the plausible complexity of the polarized dust component, while being compatible with the observed large-scale polarized emission at 353 GHz and with most of the observed statistical properties of dust (temperature and polarization power spectra, amplitude and correlation of the temperature and spectral index of the best-fitting MBB emission). However, this model fails to predict the strong decorrelation of dust polarization between frequency channels on small angular scales seen in Planck Collaboration L (2017), a limitation that must be addressed in the future if that decorrelation is confirmed.
In the meantime, we expect these simulated maps to be useful to investigate the component separation problem for future CMB polarization surveys such as CMB-S4, PIXIE, CORE, or LiteBIRD. Simulated maps at a set of observing frequencies can be made available by the authors upon request.
Acknowledgements
We thank François Boulanger, Jan Tauber, and Mathieu Remazeilles for useful discussions and valuable comments on the first draft of this paper. Extensive use of the healpix pixelization scheme (Górski et al. 2005), available from the healpix webpage,5 was made for this research project. We thank Douglas Finkbeiner for pointing out a mistake in the assignment of distances to the various layers in the first preprint of this paper, and an anonymous referee for many useful comments and suggestions.
Footnotes
1 Nor does the specific analytic form of the depolarization function.
2 After the GNILC process to de-noise the observations, these maps bring only limited additional information: considering their noise level, their dust component over most of the sky is obtained largely by GNILC from their correlation with the 353 GHz map locally in needlet space.
3 However, they can be used to get an initial estimate of the dust density on very large scales. We will make use of this in the next section for an initial guess of the polarization fraction of dust emission in each layer.
4 We postpone to Section 6 the discussion of temperature and spectral index maps. In equation (8), we use the average values given in Table 1.
5 http://healpix.sourceforge.net
REFERENCES
Abazajian K. N. et al., 2016, preprint (arXiv:1610.02743)
André P. et al., 2014, J. Cosmol. Astropart. Phys., 2, 006
Basak S., Delabrouille J., 2012, MNRAS, 419, 1163
Basak S., Delabrouille J., 2013, MNRAS, 435, 18
Bennett C. L. et al., 2013, ApJS, 208, 20
Benoit A. et al., 2002, Astropart. Phys., 17, 101
Betoule M., Pierpaoli E., Delabrouille J., Le Jeune M., Cardoso J.-F., 2009, A&A, 503, 691
BICEP2 Collaboration, 2014, Phys. Rev. Lett., 112, 241101
BICEP2/Keck & Planck Collaborations, 2015, Phys. Rev. Lett., 114, 101301
Bonaldi A., Ricciardi S., 2011, MNRAS, 414, 615
Bonaldi A., Bedini L., Salerno E., Baccigalupi C., de Zotti G., 2006, MNRAS, 373, 271
Bonaldi A., Ricciardi S., Brown M. L., 2014, MNRAS, 444, 1034
Cardoso J.-F., Le Jeune M., Delabrouille J., Betoule M., Patanchon G., 2008, IEEE J. Sel. Top. Signal Process., 2, 735
Challinor A. et al., 2017, preprint (arXiv:1707.02259)
CORE Collaboration, 2016, preprint (arXiv:1612.08270)
Das S. et al., 2014, J. Cosmol. Astropart. Phys., 4, 014
de Bernardis P. et al., 2000, Nature, 404, 955
Delabrouille J., Cardoso J.-F., 2009, in Martínez V. J., Saar E., Martínez-González E., Pons-Bordería M.-J., eds, Lecture Notes in Physics, Vol. 665, Data Analysis in Cosmology. Springer-Verlag, Berlin, p. 159
Delabrouille J., Cardoso J.-F., Patanchon G., 2003, MNRAS, 346, 1089
Delabrouille J., Cardoso J.-F., Le Jeune M., Betoule M., Fay G., Guilloux F., 2009, A&A, 493, 835
Delabrouille J. et al., 2013, A&A, 553, A96
Delabrouille J. et al., 2017, preprint (arXiv:1706.04516)
Dunkley J. et al., 2009, in Dodelson S. et al., eds, AIP Conf. Ser. Vol. 1141, CMB Polarization Workshop: Theory and Foregrounds: CMBPol Mission Concept Study. Am. Inst. Phys., New York, p. 222
Efstathiou G., Gratton S., Paci F., 2009, MNRAS, 397, 1355
Eriksen H. K., Banday A. J., Górski K. M., Lilje P. B., 2004, ApJ, 612, 633
Errard J., Stompor R., 2012, Phys. Rev. D, 85, 083006
Fauvet L. et al., 2011, A&A, 526, A145
Faÿ G., Guilloux F., Betoule M., Cardoso J.-F., Delabrouille J., Le Jeune M., 2008, Phys. Rev. D, 78, 083013
Ghosh T. et al., 2017, A&A, 601, A71
Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759
Green G. M. et al., 2015, ApJ, 810, 25
Han J. L., 2017, ARA&A, 55, 111
Han J. L., Qiao G. J., 1993, Acta Astrophys. Sin., 13, 385
Hanany S. et al., 2000, ApJ, 545, L5
Harari D., Mollerach S., Roulet E., 1999, J. High Energy Phys., 8, 022
Haverkorn M., 2015, in Lazarian A., de Gouveia Dal Pino E. M., Melioli C., eds, Astrophysics and Space Science Library, Vol. 407, Magnetic Fields in Diffuse Media. Springer-Verlag, Berlin, p. 483
Jaffe T. R., Leahy J. P., Banday A. J., Leach S. M., Lowe S. R., Wilkinson A., 2010, MNRAS, 401, 1013
Jones W. C. et al., 2006, ApJ, 647, 823
Kamionkowski M., Kovetz E. D., 2016, ARA&A, 54, 227
Kogut A. et al., 2011, J. Cosmol. Astropart. Phys., 7, 025
Kovac J. M., Leitch E. M., Pryke C., Carlstrom J. E., Halverson N. W., Holzapfel W. L., 2002, Nature, 420, 772
Lallement R., Snowden S., Kuntz K. D., Dame T. M., Koutroumpa D., Grenier I., Casandjian J. M., 2016, A&A, 595, A131
Leach S. M. et al., 2008, A&A, 491, 597
Lewis A., Challinor A., 2006, Phys. Rep., 429, 1
Marinucci D. et al., 2008, MNRAS, 383, 539
Matsumura T. et al., 2014, J. Low Temp. Phys., 176, 733
Narcowich F., Petrushev P., Ward J., 2006, SIAM J. Math. Anal., 38, 574
O'Dea D. T., Clark C. N., Contaldi C. R., MacTavish C. J., 2012, MNRAS, 419, 1795
Penzias A. A., Wilson R. W., 1965, ApJ, 142, 419
Planck Collaboration XI, 2014, A&A, 571, A11
Planck Collaboration XXX, 2016a, A&A, 586, A133
Planck Collaboration I, 2016b, A&A, 594, A1
Planck Collaboration XIII, 2016c, A&A, 594, A13
Planck Collaboration XLIV, 2016d, A&A, 596, A105
Planck Collaboration XLVIII, 2016e, A&A, 596, A109
Planck Collaboration L, 2017, A&A, 599, A51
Reichardt C. L. et al., 2009, ApJ, 694, 1200
Remazeilles M., Delabrouille J., Cardoso J.-F., 2011, MNRAS, 418, 467
Remazeilles M., Dickinson C., Eriksen H. K. K., Wehus I. K., 2016, MNRAS, 458, 2032
Remazeilles M. et al., 2017, preprint (arXiv:1704.04501)
Rezaei Kh. S., Bailer-Jones C. A. L., Hanson R. J., Fouesneau M., 2017, A&A, 598, A125
Sheehy C., Slosar A., 2017, preprint (arXiv:1709.09729)
Smoot G. F. et al., 1992, ApJ, 396, L1
Sofue Y., Fujimoto M., 1983, ApJ, 265, 722
Stanev T., 1997, ApJ, 479, 290
Stompor R., Errard J., Poletti D., 2016, Phys. Rev. D, 94, 083526
Story K. T. et al., 2013, ApJ, 779, 86
Tassis K., Pavlidou V., 2015, MNRAS, 451, L90
Tauber J. A. et al., 2010, A&A, 520, A1
Tegmark M., de Oliveira-Costa A., Hamilton A. J., 2003, Phys. Rev. D, 68, 123523
The COrE Collaboration, 2011, preprint (arXiv:1102.2181)
Tinyakov P. G., Tkachev I. I., 2002, Astropart. Phys., 18, 165
Tucci M., Martínez-González E., Vielva P., Delabrouille J., 2005, MNRAS, 360, 935
Vansyngel F. et al., 2017, A&A, 603, A62
© 2018 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society.
# A 3D model of polarized dust emission in the Milky Way
Volume 476 (1), May 1, 2018
21 pages
Publisher: The Royal Astronomical Society
ISSN: 0035-8711
eISSN: 1365-2966
DOI: 10.1093/mnras/sty204
### Abstract
We present a three-dimensional model of polarized galactic dust emission that takes into account the variation of the dust density, spectral index, and temperature along the line of sight, and contains randomly generated small-scale polarization fluctuations. The model is constrained to match observed dust emission on large scales and, on smaller scales, extrapolations of observed intensity and polarization power spectra. This model can be used to investigate the impact of plausible complexity of the polarized dust foreground emission on the analysis and interpretation of future cosmic microwave background polarization observations.

Key words: polarization, dust, extinction, cosmic background radiation, cosmology: observations, diffuse radiation, submillimetre: ISM

## 1 INTRODUCTION

Since the discovery of the cosmic microwave background (CMB) in 1965 (Penzias & Wilson 1965), significant efforts have been devoted to the precise characterization of its emission, and to understanding the cosmological implications of its tiny temperature and polarization anisotropies, first detected with COBE-DMR (Smoot et al. 1992) for temperature and with DASI for polarization (Kovac et al. 2002). Many experiments have gradually improved the measurement of the CMB temperature and polarization power spectra. Experiments on stratospheric balloons, notably Boomerang (de Bernardis et al. 2000; Jones et al. 2006), Maxima (Hanany et al. 2000), and Archeops (Benoit et al. 2002), detected with high significance the first acoustic peak in the CMB temperature power spectrum, and made the first measurements of the temperature power spectrum over a large range of angular scales. The WMAP satellite (Bennett et al. 2013) produced the first high signal-to-noise ratio full-sky CMB map and power spectrum from the largest scales to the third acoustic peak, opening the path to precision cosmology with the CMB.
These observations have been complemented by power spectrum measurements from many ground-based experiments, for instance ACBAR (Reichardt et al. 2009) and, more recently, ACT (Das et al. 2014) and SPT (Story et al. 2013), on scales smaller than those observed with the balloons and space missions. Planck, the latest space mission to date, launched by ESA in 2009 (Tauber et al. 2010), has mapped CMB anisotropies with extraordinary precision down to ≃5 arcmin angular scale, providing a wealth of information on the cosmological scenario. Planck Collaboration XIII (2016c) has shown that both the CMB temperature and E-mode polarization power spectra are remarkably consistent with a spatially flat cosmology specified by six parameters, the so-called Λ cold dark matter model, with cosmic structures seeded at very early times by quantum fluctuations of space–time during an epoch of cosmic inflation. The accurate measurement of CMB polarization, including inflationary and lensing B modes, is the next objective of CMB observations. Such a measurement offers a unique opportunity to confirm the inflationary scenario, through the detection of the imprint of primordial inflationary gravitational waves on CMB polarization B modes on large angular scales (see Kamionkowski & Kovetz 2016, for a review). CMB polarization also offers the opportunity to map the dark matter in the Universe that is responsible for slight distortions of the polarization patterns through gravitational lensing of the background CMB (Lewis & Challinor 2006; Challinor et al. 2017). In 2014, the BICEP2 collaboration claimed evidence for primordial CMB B modes with a tensor-to-scalar ratio r = 0.2 (BICEP2 Collaboration 2014). However, a joint analysis with Planck mission data (BICEP2/Keck & Planck Collaborations 2015) showed that the signal was mostly due to contamination of the observed map by polarized dust emission from the Milky Way rather than gravitational waves from inflation.
Future space missions such as COrE (The COrE Collaboration 2011) and its more recent version, CORE (with a capital ‘R’), proposed to ESA in 2016 October in answer to the ‘M5’ call for a medium-size mission (Delabrouille et al. 2017), PIXIE (Kogut et al. 2011), PRISM (André et al. 2014), LiteBIRD (Matsumura et al. 2014), and ground-based experiments such as CMB-S4 (Abazajian et al. 2016), plan to reach a sensitivity in r as low as r ∼ 0.001 (CORE Collaboration 2016). This requires subtracting at least 99 per cent of dust emission from the maps, or modelling the contribution of dust to the measured CMB B-mode angular power spectrum at a precision of 10^−4 or better. The feasibility of such dust-cleaning critically depends on the (unknown) complexity of dust emission down to that relative level, and on the number and central frequencies of the frequency channels used in the observation (to be optimized in the design phase of future CMB experiments). Investigations of the feasibility of measuring CMB B modes in the presence of foreground astrophysical emission have been pursued by a number of authors (Tucci et al. 2005; Betoule et al. 2009; Dunkley et al. 2009; Efstathiou, Gratton & Paci 2009; Bonaldi & Ricciardi 2011; Errard & Stompor 2012; Bonaldi, Ricciardi & Brown 2014; Remazeilles et al. 2016; Stompor, Errard & Poletti 2016; Remazeilles et al. 2017), using component separation methods mostly developed in the context of the analysis of WMAP and Planck intensity and polarization observations (see e.g. Leach et al. 2008; Delabrouille & Cardoso 2009, for reviews and comparisons of component separation methods). Conclusions on the achievable limit on r drastically depend on the assumed complexity of the foreground emission model (see Delabrouille et al. 2013, for a widely used sky modelling tool), on the number of components included, and on whether or not the component separation method that is used is perfectly matched to the model used in the simulations.
In this paper, we present a three-dimensional (3D) model of polarized dust emission, constrained by observations, that takes into account the spatial variation of the spectral index and of the temperature along the line of sight (LOS). It can help give insight into the feasibility and complexity of dust-cleaning in future CMB observations with a model of dust emission that is more complex and more realistic than what has been used in previous work. The objective is not an accurate 3D model of dust emission, which cannot be obtained without additional observations of the 3D dust distribution, but a plausible 3D model that is compatible with observed dust emission and its spatial variations, and that at the same time implements a complexity which, although not strictly necessary yet to fit current observations, is likely to be detectable in future sensitive CMB polarization surveys. This model can be used to infer properties such as decorrelation between frequencies and flattening of the spectral index at low frequencies, and also to test the possibility of separating CMB polarization from that of dust with future multifrequency observations of polarized emission at millimetre wavelengths. This paper is organized as follows. In Section 2, we justify the need for 3D modelling and discuss plausible consequences on the properties of dust maps across scales and frequencies. Section 3 presents the observations that are used in the construction of our dust model. In Section 4, we present the strategy that is used to make a 3D dust data cube in temperature and polarization using the (incomplete) observations at hand. As these available observations have limited angular resolution, we describe in Section 5 how to extend the model to smaller scales, in preparation for future high-resolution sensitive polarization experiments. Section 6 describes our prescription for scaling the dust emission across frequencies.
We compare simulated maps with existing observations and discuss implications of the 3D model in Section 7. We conclude in Section 8.

## 2 WHY A 3D MODEL?

Previous authors such as, e.g. Fauvet et al. (2011), O'Dea et al. (2012), and Vansyngel et al. (2017) have considered a 3D model of the dust distribution and of the Galactic magnetic field (GMF) to model the spatial structure of dust polarization. Ghosh et al. (2017) complement this with an analysis of correlations of the direction of the GMF with the orientation of dust filaments, as traced by H i data. However, all of these approaches produce single templates of dust emission at a specific frequency, and do not attempt at the same time to model the 3D dependence of the dust emission law. This misses one of the key aspects of dust emission that is crucial to disentangling its emission from that of CMB polarization (see Tassis & Pavlidou 2015). Dust is made of grains of different sizes and chemical compositions, absorbing and scattering light in the ultraviolet, optical, and near-infrared, and re-radiating it in the mid- to far-infrared. Being made of structured baryonic matter (atoms, molecules, grains), dust interacts with the radiation field through many different processes. Empirically, at millimetre and submillimetre wavelengths, the observed emission in broad frequency bands is dominated by thermal emission at a temperature T, well fit in the optically thin limit by a modified blackbody (MBB) of the form
$$I_{\nu }=\tau (\nu _0) \left(\frac{\nu }{\nu _0}\right)^{\beta } B_{\nu }(T),$$ (1)
where Iν is the specific intensity at frequency ν and Bν(T) is the Planck blackbody function for dust at temperature T. In the frequency range we are considering, the optical depth τ(ν) scales as (ν/ν0)^β, where β is a spectral index that depends on the chemical composition and structure of dust grains. Here, ν0 is a reference frequency at which a reference optical depth τ(ν0) is estimated (we use ν0 = 353 GHz throughout this paper).
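For reference, the MBB law of equation (1) is simple to evaluate numerically. The sketch below is ours, not the paper's code; it assumes SI units and the paper's ν0 = 353 GHz:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23   # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def planck_bnu(nu_hz, T):
    """Planck blackbody B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (K_B * T))

def mbb_intensity(nu_hz, tau0, beta, T, nu0_hz=353e9):
    """Modified blackbody of eq. (1): I_nu = tau(nu0) (nu/nu0)^beta B_nu(T)."""
    return tau0 * (nu_hz / nu0_hz) ** beta * planck_bnu(nu_hz, T)
```

At ν = ν0 the power-law factor is unity, so the intensity reduces to τ(ν0) Bν0(T), as expected.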
Using dust template observations in the Planck 353, 545, and 857 GHz channels and the IRAS 100 μm map, it is possible to fit for τ(ν0), T, and β in each pixel. This fit, performed by Planck Collaboration XI (2014), shows clear evidence for a variation across the sky of the best-fitting temperature and spectral index, with T mostly ranging from about 15 K to about 27 K and β ranging from about 1.2 to about 2.2. Such variations are expected because of variations of dust chemical composition and grain size, and of variations of the stellar radiation field, as a function of local physical and environmental conditions. In this paper, we propose to revisit this model to make it 3D. Indeed, if dust properties vary across the sky, they must also vary along the LOS. This means that even if one single MBB is (empirically) a good fit to the average emission coming from a given region of the 3D Milky Way as observed with the best current signal-to-noise ratio, the integrated emission in a given LOS must be a superposition of several such MBB emissions with varying T(r) and β(r) (in fact, a continuum, weighted by a local elementary optical depth dτ(r, ν0)):
$$I_{\nu } = \int _{0}^{\infty } \mathrm{d}r \, \frac{\mathrm{d}\tau (r,\nu _0)}{\mathrm{d}r} \left(\frac{\nu }{\nu _0}\right)^{\beta (r)} \, B_\nu (T(r)),$$ (2)
where r is the distance along the LOS and where, again, τ(r, ν0) is an optical depth at frequency ν0, T(r) is a temperature, and β(r) a spectral index, now all dependent on the distance r from the observer. As a sum of MBBs is not a MBB, this mixture of dust emissions is at best only approximately a MBB. For instance, regions along the LOS with lower β contribute relatively more at low frequency than at high frequency. This naturally generates a flattening of the observed dust spectral index at low ν, which prevents fits of dust emission performed at high frequency from remaining valid at lower frequencies.
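A two-component toy calculation illustrates this flattening. Assuming (for simplicity only) that both LOS components share the same temperature, the Planck function cancels in frequency ratios, and the apparent spectral index between a pair of bands depends only on the opacity mixture; it tends to the lower β at low frequencies and to the higher β at high frequencies. All names and values below are illustrative:

```python
import numpy as np

def mixture_opacity(nu, nu0, taus, betas):
    """Opacity term of a sum of MBB components: sum_j tau_j (nu/nu0)^beta_j."""
    return sum(t * (nu / nu0) ** b for t, b in zip(taus, betas))

def effective_beta(nu_a, nu_b, nu0, taus, betas):
    """Apparent spectral index between nu_a and nu_b for the mixture
    (valid when all components share one temperature, so B_nu(T) cancels)."""
    return (np.log(mixture_opacity(nu_a, nu0, taus, betas) /
                   mixture_opacity(nu_b, nu0, taus, betas)) /
            np.log(nu_a / nu_b))

# Equal-opacity mix of beta = 1.2 and beta = 2.2 components (nu0 = 353 GHz):
taus, betas = [1.0, 1.0], [1.2, 2.2]
beta_low = effective_beta(100e9, 143e9, 353e9, taus, betas)   # ~1.45
beta_high = effective_beta(545e9, 857e9, 353e9, taus, betas)  # ~1.86
```

The apparent index measured between 100 and 143 GHz is markedly lower than the one measured between 545 and 857 GHz, which is exactly the frequency-dependent bias that extrapolations of high-frequency fits would suffer from.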
To properly account for such LOS inhomogeneities, a 3D model of dust emission, with dust emission law variations both across and along the LOS, is needed. This 3D mixture of inhomogeneous emission also naturally impacts the polarized emission of galactic dust. The preferential alignment of elongated dust grains perpendicular to the local magnetic field $$\boldsymbol B$$ results in a net sky polarization that is, on the plane of the sky, orthogonal to the component $$\boldsymbol B_{\perp }$$ of $$\boldsymbol B$$ that is perpendicular to the LOS. The efficiency of grain alignment depends on the local physical properties of the interstellar medium (density, which impacts the collisions between grains; irradiation). Each region emits polarized emission proportional to an intrinsic local polarization fraction p(r). The linear polarization Stokes parameters Q and U can be written as
$$Q_{\nu } = \int _{0}^{\infty } \mathrm{d}r \, p(r) \frac{\mathrm{d}\tau }{\mathrm{d}r} B_\nu (T(r)) \left(\frac{\nu }{\nu _0}\right)^{\beta (r)} \cos 2\psi (r) \sin ^k \alpha (r)$$ (3)
and
$$U_{\nu } = \int _{0}^{\infty } \mathrm{d}r \, p(r) \frac{\mathrm{d}\tau }{\mathrm{d}r} B_\nu (T(r)) \left(\frac{\nu }{\nu _0}\right)^{\beta (r)} \sin 2\psi (r) \sin ^k \alpha (r),$$ (4)
where, in the HEALPix CMB polarization convention,
$$\cos 2\psi = \frac{B_{\theta }^2-B_{\varphi }^2}{B_{\perp }^2}, \quad \sin 2\psi = \frac{2B_\theta B_\varphi }{B_{\perp }^2}, \quad \sin \alpha = \frac{B_{\perp }}{B},$$ (5)
and where k is an exponent that takes into account depolarization and projection effects linked to the local geometry and the alignment of grains. In these equations, r is the distance to the observer, i.e. r, θ, and φ are spherical heliocentric coordinates.
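The geometric factors of equation (5) follow directly from the components of the field in the local spherical basis. A minimal sketch (the helper below is ours, with B_r denoting the LOS component):

```python
import numpy as np

def polarization_geometry(B_theta, B_phi, B_r):
    """Return (cos 2psi, sin 2psi, sin alpha) of eq. (5) from the magnetic
    field components in spherical heliocentric coordinates."""
    B_perp2 = B_theta**2 + B_phi**2          # |B_perp|^2, plane-of-sky part
    cos2psi = (B_theta**2 - B_phi**2) / B_perp2
    sin2psi = 2.0 * B_theta * B_phi / B_perp2
    sin_alpha = np.sqrt(B_perp2 / (B_perp2 + B_r**2))
    return cos2psi, sin2psi, sin_alpha
```

By construction cos²2ψ + sin²2ψ = 1, and sin α = 1 when the field lies entirely in the plane of the sky (no LOS component, hence no geometric depolarization).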
In equations (3) and (4), we recognize an overall intensity term (equal to the integrand in equation 2), multiplied by a polarization fraction p(r), an orientation term cos 2ψ(r) or sin 2ψ(r), and a geometrical term sin^k α(r) that depends on the direction of the magnetic field with respect to the LOS. In the absence of strong theoretical or observational constraints on the value of k, we follow Fauvet et al. (2011) and assume k = 3. This choice, although arguably somewhat arbitrary, does not much impact the rest of this work, as it does not change the polarization angle on the sky, while the polarization maps will ultimately be re-normalized to match the total observed dust polarization at 353 GHz. This re-normalization somewhat corrects for possible inadequacy or inaccuracy of the assumption made for the geometrical term. Since all parameters (p, τ, T, β, ψ, and α) vary along the LOS, the total polarized emission is a superposition of emissions with different polarization angles and different emission laws. As a consequence, the polarization fraction changes with frequency (i.e. intensity and polarization have different emission laws); in addition, the polarization angle rotates as a function of frequency, depending on the relative level of emission of various regions along the LOS. This polarization rotation effect also naturally generates decorrelation of the polarized emission between frequencies. Such an effect has been reported in Planck observations (Planck Collaboration L 2017), but is the object of debate following a subsequent analysis that does not confirm the statistical significance of the observed decorrelation (Sheehy & Slosar 2017).

## 3 OBSERVATIONS

Full-sky (or near-full-sky) dust emission is observed at submillimetre wavelengths by Planck and IRAS.
We process the Planck 2015 data release maps with the Generalized Needlet Internal Linear Combination (GNILC) method to separate dust emission from other astrophysical emissions and to reduce noise contamination. GNILC (Remazeilles, Delabrouille & Cardoso 2011) is a component separation method that extracts from noisy multifrequency observations a multiscale model of the significant emissions, based on the comparison of auto- and cross-spectra with the local level of noise in needlet space. Needlets (Narcowich, Petrushev & Ward 2006; Faÿ et al. 2008; Marinucci et al. 2008) are a tight frame of space-frequency functions (which serve as a redundant decomposition basis). The use of needlets for component separation by Needlet Internal Linear Combination (NILC) was introduced in the analysis of WMAP 5-yr temperature data (Delabrouille et al. 2009). They were further used on the 7-yr and 9-yr temperature and polarization maps (Basak & Delabrouille 2012, 2013). GNILC has been used by the Planck collaboration to separate dust emission from the Cosmic Infrared Background (CIB; Planck Collaboration XLVIII 2016e). We use the corresponding dust maps to constrain our model of dust emission in intensity. GNILC maps offer the advantage of a reduced noise level (for both intensity and polarization), and of reduced contamination by cosmic infrared background fluctuations (for intensity). However, different templates of dust emission in intensity and polarization could have been used instead, as long as those maps are not too noisy, and are not contaminated by systematic effects such that, for instance, the intensity map is negative in some pixels, or dust is a subdominant component in some pixels or at some angular scales (problems of that sort are usually present in maps that have not been processed specifically to avoid them).
From now on, the single greybody ‘2D’ model of the form of equation (1) uses Planck maps of τ(ν0) and β that are obtained from a fit of the GNILC dust maps between 353 and 3000 GHz, as described in Planck Collaboration XLVIII (2016e). For polarization, we apply GNILC independently to the Planck 30–353 GHz E and B polarization maps (Data Release 2; Planck Collaboration I 2016b) to obtain polarized galactic emission maps in the seven polarized Planck channels. These maps are specifically produced for the present analysis, and are not part of the Planck archive. Dust-dominated E and B polarization maps at ν = 143, 217, and 353 GHz are shown in Fig. 1. The polarization maps with the best dust signal-to-noise ratio are at ν = 353 GHz. The other polarization maps are not used further in our model. Here, the GNILC processing is mostly used as a pixel-dependent de-noising of the 353 GHz polarization map. A model that fully exploits the multifrequency information in the Planck data is postponed to future work. The needlet decomposition extends down to 5 arcmin angular resolution for intensity and 1° for polarization.

Figure 1. T, E, and B maps at 353, 217, and 143 GHz, obtained with a generalized needlet ILC analysis of Planck HFI public data products.

Three-dimensional maps of interstellar dust optical depth, as traced by starlight extinction, have been derived by Green et al. (2015) based on the reddening of 800 million stars detected by Pan-STARRS 1 and 2MASS, covering three-quarters of the sky. The maps are grouped in 31 bins of distance modulus from 4 to 19 (corresponding to distances from 63 pc to 63 kpc) and have a hybrid angular resolution, with most of the maps at an angular resolution of 3.4–13.7 arcmin.
These maps will be used to infer some information about the distribution of dust along the LOS, which will be used to generate our 3D model of polarized dust emission.

## 4 MULTILAYER MODELLING STRATEGY

We approximate the continuous integrals of equations (2)–(4) as discrete sums over independent layers of emission, indexed by i, so that we have, for the intensity,
$$I_{\nu }(p) \, = \, \sum _{i=1}^{N} I_{\nu }^i(p) \, = \, \sum _{i=1}^{N} \tau _i(\nu _0) \! \left(\frac{\nu }{\nu _0}\right)^{\beta _i(p)} \! B_\nu (T_i(p)).$$ (6)
Each layer is then characterized by maps of Stokes parameters $$I_\nu ^i(p)$$, $$Q_\nu ^i(p)$$, and $$U_\nu ^i(p)$$, with a frequency scaling, for each sky pixel p, in the form of a single MBB emission law with a temperature Ti(p) and a spectral index βi(p) (both assumed to be the same for all three Stokes parameters). We want to find a way to assign to each such layer plausible templates (full-sky pixelized maps) for I, Q, and U at some reference frequency ν0, as well as scaling parameter maps T and β, all such that the total emission matches the observed sky. By ‘layer’ we mean a component, loosely associated with different distances from us, but which could equally well be a component associated with a specific population of dust grains. The problem is clearly degenerate. Starting from only four dust-dominated maps of I (Planck and IRAS maps from 353 to 3000 GHz obtained after the GNILC analysis to remove CIB contamination), and one map each of Q and U (both at 353 GHz), for a total of six maps, we propose to model dust emission with 3N maps of Stokes parameters $$I_{\nu _0}^i(p)$$, $$Q_{\nu _0}^i(p)$$, and $$U_{\nu _0}^i(p)$$ and 2N maps of emission law parameters Ti(p) and βi(p), i.e. a total of 5N maps, where N is the number of layers used in the model. For any N ≥ 2, we need additional data or constraints. We thus use the 3D maps of dust extinction from Green et al.
(2015) to decompose the observed intensity map I at some reference frequency as a sum of intensity maps Ii coming from different layers i. We group the dust extinction maps into six ‘layers’ (shown in Fig. 2) by simple coaddition of the corresponding optical depths. Six layers are sufficient for our purpose and provide a better estimate of the optical thickness associated with each layer than if we tried to use more. Three of these layers map the dust emission at high galactic latitude, while three map most of the emission close to the galactic plane. We choose the smallest possible homogeneous pixel size, corresponding to HEALPix Nside = 64. These choices could be revisited in the future, in particular when more data become available.

Figure 2. Maps of starlight extinction tracing the interstellar dust optical depth in shells at different distances from the Sun (maps obtained from the 3D maps of Green et al. 2015). Grey areas correspond to regions that have not been observed.

We then further use a 3D model of the GMF to generate Q and U maps for each layer. Finally, the total emission from all layers is readjusted so that the sum matches the observed sky at the reference frequency. We detail each of these steps in the following subsections.

### 4.1 Intensity layers

Although the general shape and density distribution of the Galaxy is known, the exact 3D density distribution of dust grains in the Galaxy is not. Simple models consider a function of galactocentric radius and height:
$$n_d(R,z) = n_0 \exp (-R/h_R)\, {\rm sech}^2(z/h_z),$$ (7)
where (R, z) are cylindrical coordinates centred at the Galactic centre, and where hR = 3 kpc and hz = 0.1 kpc.
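For concreteness, equation (7) with the quoted scale lengths can be written as follows (a sketch; n0 is an arbitrary normalization and the function name is ours):

```python
import numpy as np

def dust_density(R_kpc, z_kpc, n0=1.0, hR_kpc=3.0, hz_kpc=0.1):
    """Smooth dust density of eq. (7): n_d = n0 exp(-R/hR) sech^2(z/hz),
    with hR = 3 kpc and hz = 0.1 kpc as in the text."""
    return n0 * np.exp(-R_kpc / hR_kpc) / np.cosh(z_kpc / hz_kpc) ** 2
```

The profile is even in z and decays exponentially with galactocentric radius, which is precisely why it carries no intermediate or small-scale structure.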
Such models cannot reproduce the observed intermediate and small-scale structure of dust emission. On the other hand, the maps of Green et al. (2015) trace the dust density distribution, and are directly proportional to the optical depth τ at visible wavelengths. We select six primary shells within distance moduli of 4, 7, 10, 13, 16, and 19 (corresponding to distances of 63, 251, 1000, 3881, 15849, and 63096 pc from the Sun), and use those maps to compute, in each pixel, an estimate of the fraction fi(p) of the total opacity associated with each layer (so that ∀p, ∑i fi(p) = 1). We then construct the opacity map for each layer as the product τi(ν0) = fi τ(ν0), where τ(ν0) is the opacity at 353 GHz obtained in the Planck MBB fit. For our 3D model, we must face the practical difficulty that the maps of Green et al. (2015) do not cover the full sky (Fig. 2). For a full-sky model, the missing sky regions must be filled in with a simulation or a best-guess estimate. We use the maps where they are defined to evaluate the relative fraction fi of dust in each shell i. For each pixel where the layers are not defined, we use symmetry arguments and copy the average fraction from regions centred on pixels at the same absolute Galactic latitude and longitude. This gives us a plausible dust fraction in the regions not covered in the decomposition of Green et al. (2015). We then use these fractions of emission to decompose the total map of optical depth τ(ν0) at 353 GHz and obtain the six maps of extinction shown in Fig. 3.

Figure 3. Full-sky optical depth layers at 353 GHz, scaled to match the total 353 GHz extinction map of Planck Collaboration XLVIII (2016e). The fraction of optical depth in each layer is obtained from the maps of Green et al. (2015), where missing sky pieces in the 3D model are filled in using symmetry arguments.

We then compute the corresponding brightness in a given layer by multiplying by the Planck function together with the spectral index correction (equation 8), using for this an average temperature and spectral index for each layer. We get, for each layer i, an initial estimate of the intensity
$$\widetilde{I}^i_{\nu } = f_i \, \tau (\nu _0) \left(\frac{\nu }{\nu _0}\right)^{\beta _i} \, B_\nu (T_i).$$ (8)
The sum $$\widetilde{I}_{\nu _0} = \sum _i \widetilde{I}^i_{\nu _0}$$ however does not exactly match the observed Planck map $$I_{\nu _0}$$ at ν0 = 353 GHz. We readjust the layers by redistributing the residual error among the various layers, with weights proportional to the fraction of dust in each layer, to get
$$I^i_{\nu _0} = \widetilde{I}^i_{\nu _0} + f_i(I_{\nu _0}- \widetilde{I}_{\nu _0}),$$ (9)
and by construction we now have $${I}_{\nu _0} = \sum _i {I}^i_{\nu _0}$$. The full model across frequencies is
$$I^i_{\nu } = I^i_{\nu _0} \left(\frac{\nu }{\nu _0}\right)^{\beta _i} \, \frac{B_\nu (T_i)}{B_{\nu _0}(T_i)},$$ (10)
with $$I^i_{\nu _0}$$ computed following equations (8) and (9). In this way, we have six different maps of dust intensity that add up to the observed Planck dust intensity emission at 353 GHz. We note that our model differs from that of Vansyngel et al. (2017), who instead make the simplifying assumption that the intensity template is the same in all the layers they use. The consequence of this approximation is that the fraction fi of emission in all the layers is constant over the sky.
This is not compatible with a truly 3D model: galactic structures cannot be expected to be spread over all layers of emission in a proportion that does not depend on the direction of observation. The decomposition we implement in our model is just one of many possible ways to separate the total map of dust optical depth into several contributions. A close look at what we obtain shows several potential inaccuracies. For instance, some compact structures are clearly visible in more than one map, while it is not very likely that they all happen to be precisely at the edge between layers, or elongated along the LOS so that they extend over more than one layer. This ‘finger of God’ effect is likely to be due to errors in the determination of the distance or of the extinction of stars, which, as a result, spread the estimated source of extinction over a large distance span. The north polar spur (extending from the galactic plane at l ≃ 30°, left of the Galactic centre, towards the north Galactic pole) is clearly visible in both of the first two maps. According to Lallement et al. (2016), it should indeed extend over both layers. On the other hand, structures associated with the Orion–Eridanus bubble (right of the maps, below the Galactic plane) can be seen in all of the first three maps, from less than 60 pc to more than 250 pc, while most of the emission associated with Orion is at a distance of 150–400 pc. As discussed by Rezaei et al. (2017), future analyses of the Gaia satellite data are likely to drastically improve the 3D reconstruction of Galactic dust. For this work, we use the maps of Fig. 3, noting that for our purpose what really matters is not the actual distance of any structure, but whether such a structure is likely to emit with more than one single MBB emission law.
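Stepping back to the construction itself, the fraction decomposition τi = fi τ(ν0) and the readjustment of equation (9) are simple vectorized operations. The sketch below (array names are ours, not the paper's code) shows that after the correction the layers sum exactly to the observed map, since the fractions fi sum to one in every pixel:

```python
import numpy as np

def layer_opacities(tau_353, extinction_shells):
    """Split the 353 GHz optical depth map into layers, tau_i = f_i tau(353),
    with f_i(p) the per-pixel fraction of extinction in shell i.
    extinction_shells: (N_layers, N_pix); tau_353: (N_pix,)."""
    f = extinction_shells / extinction_shells.sum(axis=0)
    return f, f * tau_353

def readjust_layers(I_tilde, f, I_obs):
    """Eq. (9): push the residual (I_obs - sum_i I_tilde_i) back into the
    layers with weights f_i, so that the layers sum to the observation."""
    return I_tilde + f * (I_obs - I_tilde.sum(axis=0))
```

Because ∑i fi = 1 pixel by pixel, the corrected layers reproduce the observed 353 GHz map by construction, whatever the initial per-layer estimates were.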
Certainly, a complex region such as Orion cannot be expected to be in thermal equilibrium and constituted of homogeneous populations of dust grains, and thus modelling its emission with more than one map is in fact preferable for our purpose. The same holds for distant objects such as the Large and Small Magellanic Clouds and associated tidal structures, wrongly associated with nearby layers of emission by the procedure we use to fill the missing sky regions. Hence, the ‘layers’ presented here should be understood as layers of emission with roughly one single MBB (per pixel), originating mostly from a given range of distances from the Earth (see also Planck Collaboration XLIV 2016d, for a discussion of emission layers and their connection to spatial shells or different phases of the ISM). While this decomposition is not exact, it matches the purposes of this work.

### 4.2 Polarization layers

We model polarization using equations (3) and (4). The geometric terms depending on ψ and α are computed using a simple large-scale model of the GMF. This regular magnetic field is assumed to roughly follow the spiral arms of the Milky Way. Several plausible configurations have been proposed, based on rotational symmetry around the Galactic Centre, and on mirror symmetry with respect to the Galactic plane. A widely used parametrization, known in the literature as the bisymmetric spiral (BSS; Sofue & Fujimoto 1983; Han & Qiao 1993; Stanev 1997; Harari, Mollerach & Roulet 1999; Tinyakov & Tkachev 2002), defines the radial and azimuthal field components (in Galactocentric cylindrical coordinates) as
$$B_r=B(r,\theta ,z)\sin q, \quad B_{\theta }=-B(r,\theta ,z)\cos q,$$ (11)
where q is the pitch angle of the logarithmic spiral, and where the function B(r, θ, z) is defined as
$$B(r,\theta ,z) = - B_0(r)\cos \left(\theta + \beta \log \frac{r}{r_0} \right) \exp (-|z|/z_0),$$ (12)
where β = 1/tan q.
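The BSS field of equations (11) and (12) can be sketched as follows. The amplitude B0(r) is taken constant here, r0 and z0 are illustrative values (the paper does not quote them), and the default pitch angle is the −33° best-fitting value found later in this subsection:

```python
import numpy as np

def bss_field(r_kpc, theta, z_kpc, B0=1.0, q_deg=-33.0, r0_kpc=10.0, z0_kpc=1.0):
    """BSS field of eqs. (11)-(12) in galactocentric cylindrical coordinates,
    with B_z = 0 and the field set to zero for r <= 1 kpc (see text).
    Returns (B_r, B_theta). B0, r0_kpc, z0_kpc are illustrative values."""
    if r_kpc <= 1.0:
        return 0.0, 0.0
    q = np.radians(q_deg)
    beta = 1.0 / np.tan(q)                        # beta = 1/tan q, eq. (12)
    B = (-B0 * np.cos(theta + beta * np.log(r_kpc / r0_kpc))
         * np.exp(-abs(z_kpc) / z0_kpc))
    return B * np.sin(q), -B * np.cos(q)          # eq. (11)
```

The field magnitude decays exponentially away from the Galactic plane and vanishes inside the 1 kpc cut-off, as described in the text below.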
We model the regular magnetic field using such a BSS parametrization, in which we take the z-component of the GMF to be zero. The model is restricted to r > 1 kpc to avoid divergence of the field at small radius (and is hence assumed to vanish for r ≤ 1 kpc). The value of the pitch angle of the spiral arms in the Milky Way is still a matter of debate in the community. Estimates of this angle range from −5° to −55° depending on the tracer used to determine it, with the most commonly cited value being around −11.5°. A possible explanation for the wide range of pitch angles determined from different data sets is that the pitch angle is not constant but varies with radius, meaning the spirals are not exactly logarithmic (e.g. slightly irregular). In our case, the model should reproduce as well as possible the polarized dust emission on large scales, and at high galactic latitude in particular. The simple large-scale density model of equation (7) together with the BSS large-scale magnetic field from equations (11) and (12) can be integrated following equations (2)–(4) to provide a first guess of the dust intensity and polarization distribution for each layer ($$I^i_m,Q^i_m,U^i_m$$). We initially assume that the intrinsic local polarization fraction p(r) in equations (3) and (4) is constant and equal to 20 per cent. Since we already have layers of intensity emission ($$I^i_{353}$$), the polarized emission in each layer i can be generated as
$$\widetilde{Q}^{i}_{353}=\left(\frac{Q^{i}_m}{I_m^i}\right)I^i_{353}, \quad \widetilde{U}^{i}_{353}=\left(\frac{U^{i}_m}{I_m^i}\right)I^i_{353}.$$ (13)
The best-fitting pitch angle q can be found by minimizing some function of the difference between the simple polarization model obtained from equation (13) and the observations.
We minimize the L1 norm of the difference in Q and U, summed over all the pixels at high galactic latitude: $$G(q)= \sum _{p} \left( \left|\widetilde{Q}^{{\rm model}}_{353}(q)- Q^{{\rm obs}}_{353}\right| + \left|\widetilde{U}^{{\rm model}}_{353}(q) -U^{{\rm obs}}_{353}\right| \right),$$ (14) where the dependence on the pitch angle q has been made explicit for clarity, and where the total modelled Q is the sum of the simple layer contributions from equation (13): $$\widetilde{Q}^{{\rm model}}_{353} = \sum _{i=1}^{N} \widetilde{Q}^{i}_{353}$$ (15) and similarly for U. We find that a pitch angle of −33° provides the best fit of the GNILC maps by the BSS model at galactic latitude |b| ≥ 15°, which is the region of the sky of most interest for CMB observations. Finally, to match the observations, we redistribute into the modelled layers of emission the residuals (observed emission minus modelled emission for q = −33°), weighted with pixel-dependent weights Fi: $$Q^{i}_{353}=\left(\frac{Q^{i}_m}{I_m^i}\right)I^i_{353}+F_i \left[Q^{{\rm obs}}_{353}-\sum _{j=1}^{N}\left(\frac{Q_{m}^j}{I_m^j}\right)I^j_{353}\right].$$ (16) This guarantees that the model matches the observations at 353 GHz on the angular scales that are observed with good signal-to-noise ratio by Planck. However, these weights Fi must be such that the polarization fraction after the redistribution of residuals does not exceed some maximum value pmax, which is a free parameter of our model, and which we pick to be 25 per cent. We fix the value of Fi as Fi = Pi/∑jPj, i.e. proportionally to the polarized dust emission fraction in each layer, unless the resulting polarization fraction exceeds pmax. When this happens, we redistribute the polarization excess into neighbouring layers. The first term on the right-hand side of equation (16) is the predicted polarization of layer i, based on a polarization fraction predicted by the BSS magnetic field applied to the intensity map of that layer.
The second term is the correction that is applied to force the sum of all layers' emissions to match the observed sky. The U Stokes parameter is modelled in a similar way. With this approach, we straightforwardly constrain the sum of emissions from all the layers to match the total observed emission for both Q and U. Fig. 4 shows the polarized layers $$Q_m^i$$ and $$U_m^i$$ given by the large-scale model of the magnetic field, while Fig. 5 shows the polarized layers after redistributing the residuals over the former layers. After adding the small-scale features (next section), we get the maps displayed in Fig. 6. A visual comparison with Fig. 4 shows that while the regular BSS field model does a reasonable job at predicting the very large scale polarization patterns (lowest modes of emission) at high galactic latitude (after picking the appropriate pitch angle), it fails at predicting most of the features of the observed polarized dust emission on intermediate scales. In addition, the amplitude of the modelled polarized emission at high galactic latitude is seen to be too strong as compared to the observations. It is thus important, for the modelled emission to be reasonably consistent with Planck data, to enforce that the model match the observations, as we do, and not just rely on a simple regular model of the magnetic field, which does not exactly capture the observed features of the real emission. Figure 4. U and Q dust emission layers obtained using a model of dust fraction in each layer based on a simple model of dust density distribution in the Galaxy, and a large-scale bisymmetric spiral model of the GMF to infer thermal dust polarization emission from dust intensity maps at 353 GHz.
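The redistribution of equation (16) can be sketched pixel-wise as follows. This is a simplified stand-in: the weights Fi are taken proportional to the modelled polarized emission of each layer, and the excess above pmax is simply clamped rather than redistributed to neighbouring layers; all array names are illustrative:

```python
import numpy as np

def redistribute_residuals(Qm, Im, I353, Q_obs, p_max=0.25):
    """Eq (16): add the residual (observed minus modelled) to each layer,
    weighted by F_i = P_i / sum_j P_j.
    Qm, Im, I353: arrays of shape (nlayers, npix); Q_obs: (npix,).
    Layers whose polarization fraction would exceed p_max are clamped
    (the redistribution of the excess to neighbours is omitted here)."""
    model = (Qm / Im) * I353                      # first term of eq (16)
    residual = Q_obs - model.sum(axis=0)          # bracketed term of eq (16)
    P = np.abs(model)                             # proxy for polarized emission
    F = P / P.sum(axis=0, keepdims=True)          # weights F_i, sum to 1 per pixel
    Q = model + F * residual
    # crude clamp of |Q|/I at p_max, standing in for neighbour redistribution
    return np.clip(Q, -p_max * I353, p_max * I353)
```

The U layers are treated identically. When no pixel hits the clamp, the layers sum exactly to the observed map, which is the property equation (16) is designed to enforce.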
Figure 5. U and Q dust emission layers after renormalization of the sum to match the observed sky (Planck HFI GNILC dust polarization maps at 353 GHz). Figure 6. U and Q layers after matching with the observed sky (as in Fig. 5), after adding random small-scale fluctuations at a level matching an extrapolation of the temperature and polarization angular power spectra and cross-spectra.

5 SMALL SCALES

The polarization maps we have generated are normalized to match the observed dust polarization in the GNILC 353 GHz maps obtained as described in Section 3. However, the polarization GNILC maps are produced at 1° angular resolution. In the galactic plane, where the polarized signal is strong, this is the actual resolution of the GNILC map. At high galactic latitude, however, the amount of polarized sky emission power is low compared to the noise even at intermediate scales. The GNILC processing then ‘filters’ the maps to subtract noise when no significant signal is locally detected.
Hence, there is a general lack of small scales in the template E and B maps used to model polarized emission so far: everywhere on the sky on scales smaller than 1° (because the GNILC maps are produced at 1° angular resolution), but also on larger scales at high galactic latitude (because of the GNILC filtering). We must then complement the maps with small scales in a range of harmonic modes that depends on the layer considered, the first three layers covering most of the high galactic latitude sky, and the last three dominating the emission in and close to the galactic plane, where GNILC filters less of the intermediate scales. Small angular scale polarized emission arises both from the small-scale distribution of matter in three dimensions, and from the fact that on small scales, the magnetic field becomes gradually more irregular, tangled and turbulent. Fully characterizing the strength, direction, and structure of the GMF in the entire Milky Way is a daunting task, involving measurements of very different observational tracers (see Han 2017 for a recent review). This field can be considered as a combination of a regular field as discussed above, complemented by a turbulent field that is caused by local phenomena such as supernova explosions and shock waves. The GMF is altered by gas dynamics, magnetic reconnection, and turbulence effects. Observations constrain only one component of the magnetic field (e.g. strength or direction, parallel or perpendicular to the LOS) in one particular tracer (ionized gas, dense cold gas, dense dust, diffuse dust, cosmic ray electrons, etc.). This provides us with only partial information, making it extremely difficult to build an accurate 3D picture. The small-scale magnetic field can be modelled with a combination of components that can be isotropic, or somewhat ordered with, e.g. a direction that does not vary on small scales while the sign of the B vector does, as illustrated in fig. 1 of Jaffe et al. (2010).
The amplitude of these small-scale fields depends on the turbulent energy density. In both the Milky Way and other spiral galaxies, the fields have been found to be more turbulent within the material spiral arms than in between them (Jaffe et al. 2010). Different strategies to constrain the strength of the random magnetic fields (whether or not both turbulent components are included) estimate an amplitude of the turbulent field of about the same order of magnitude as that of the regular part, ranging however from 0.7 to 4 μG for different estimates (Haverkorn 2015). In a typical model, the power spectrum of the random magnetic field is assumed to follow a Kolmogorov spectrum (with spectral index n = 5/3) with an outer scale of 100 pc. In our work, we do model the large-scale, regular magnetic field using the BSS model of equations (11) and (12) to get a first guess of the layer-dependent dust polarization, but we do not attempt to directly model the 3D turbulent magnetic field. Indeed, it is not possible to implement a description of the real field down to those small scales, for lack of observations. The alternative strategy of generating a random turbulent magnetic field, as in Fauvet et al. (2011), produces fluctuations with random phases and orientations, and hence dust polarization fluctuations that cannot be expected to match those observed in the real sky. Hence – as we detail next – we propose instead to rely on the observed polarized dust on scales where those observations are reliable, and extend the power spectra of our maps at high ℓ in polarization, independently for each layer, to empirically model the effect of a small-scale turbulent component of the GMF, on scales missing or noisy in the GNILC 353 GHz map. To do so, we add small-scale fluctuations independently in each layer of our model, both for intensity and for polarization.
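The intensity part of this procedure, detailed in the next paragraphs, amounts to fitting a power law Cℓ ∝ ℓ^α to the existing spectrum of each layer and then drawing lognormal pixel fluctuations at the corresponding level. A minimal sketch (function names and the fluctuation level σ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_power_law(ell, cl):
    """Fit C_ell = A * ell**alpha by linear regression in log-log space.
    Returns (A, alpha)."""
    alpha, logA = np.polyfit(np.log(ell), np.log(cl), 1)
    return np.exp(logA), alpha

def lognormal_fluctuations(large_scale, sigma=0.3):
    """Positive multiplicative fluctuations with unit mean, scaled by the
    large-scale intensity so the resulting dust map is never negative."""
    g = rng.normal(0.0, sigma, size=large_scale.shape)
    # exp(g - sigma^2/2) has expectation 1 for Gaussian g
    return large_scale * np.exp(g - 0.5 * sigma**2)
```

In practice the fluctuation amplitude would be globally adjusted so that the power spectrum of the added small scales matches the fitted extrapolation A ℓ^α.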
In the case of intensity, we simply fit the power spectrum of the original map in the multipole interval 30 ≤ ℓ ≤ 300, obtaining spectral indices in harmonic space ranging from −2.2 to −3.2 as a function of the layer (steeper for more distant layers). We use these fitted spectral indices to generate maps of fake intensity fluctuations, drawn with a lognormal distribution in pixel space (so that the dust emission is never negative), with an amplitude proportional to the large-scale intensity, and globally adjusted to match the level of the angular power spectrum. We use a similar prescription for E and B, except that, following the Planck results presented in Planck Collaboration XXX (2016a), we assume a power-law dependence for the EE and BB power spectra at high ℓ, of the form Cℓ = A(ℓ/ℓfit)α with α = −2.42 for both E and B. We use a Gaussian distribution, instead of lognormal, for the polarization fields. For each layer, we fix the amplitude A and ℓfit to match the power spectrum of the large-scale map for that layer in the range 30 ≤ ℓ ≤ 100. The amplitude of the small-scale fluctuations is scaled by the polarized intensity map in each layer. The randomly generated T and E harmonic coefficients are drawn with 30 per cent correlation between the two, while B is uncorrelated with both T and E. We then make combined maps which use large scales from the observations, and the smallest scales from the simulations, as follows. For each layer, we have an observed map, with a beam window function bℓ for temperature and hℓ for polarization, i.e. $$a_{\ell m}^{T, \rm {obs}} = b_\ell \, a_{\ell m}^{T, \rm {sky}}; \quad a_{\ell m}^{E, \rm {obs}} = h_\ell \, a_{\ell m}^{E, \rm {sky}},$$ (17) and we have available $$a_{\ell m}^{T, \rm {rnd}}$$ and $$a_{\ell m}^{E, \rm {rnd}}$$, randomly generated following the modelled statistics $$C_\ell ^{TT}$$, $$C_\ell ^{EE}$$ and $$C_\ell ^{TE}$$, which we assume match the statistics of real sky emission.
We complement the observed aℓm by forming $$a_{\ell m}^{T, \rm {sim}} = a_{\ell m}^{T, \rm {obs}} + \sqrt{1-b_\ell ^2} \, a_{\ell m}^{T, \rm {rnd}}$$ (18) and similarly $$a_{\ell m}^{E, \rm {sim}} = a_{\ell m}^{E, \rm {obs}} + \sqrt{1-h_\ell ^2} \, a_{\ell m}^{E, \rm {rnd}},$$ (19) i.e. we make the transition between large and small scales in the harmonic domain using smooth harmonic windows, corresponding to those of a Gaussian beam of 5 arcmin for all layers in intensity, of 2.5° for polarization layers 1, 2, and 3 (emission mostly at high galactic latitude), and of 2° for polarization layers 4, 5, and 6 (emission mostly near the Galactic plane). These simulated sets of aℓm have the correct $$C_\ell ^{TT}$$ and $$C_\ell ^{EE}$$, but not the correct cross-spectrum $$C_\ell ^{TE}$$. Indeed, $$C_\ell ^{TE, {\rm sim}} = C_\ell ^{TE} \left[ b_\ell h_\ell + \sqrt{1-b_\ell ^2}\sqrt{1-h_\ell ^2} \right].$$ (20) To correct for this, we obtain the final simulated aℓm as $$a_{\ell m}^{\rm {final}} = \left[ C_\ell \right]^{1/2} [ C_\ell ^{\rm sim} ]^{-1/2} \, a_{\ell m}^{\rm {sim}},$$ (21) where for each ℓ, Cℓ and $$C_\ell ^{\rm sim}$$ are 2 × 2 matrices corresponding to the terms of the multivariate (T, E) power spectra of the model and of the simulated maps with small scales added. Fig. 7 shows the maps of polarized emission after the various steps of our simulation process, summing up the contributions of all layers. Final maps of polarized intensity can be seen in Fig. 8. The percentage of polarized pixels with a given polarization fraction decreases with the polarization fraction, as seen in Fig. 9. Figure 7. First row: Full sky Q and U maps given by the BSS model. Second row: Q and U GNILC maps. Third row: Q and U total simulated maps after matching the GNILC maps on large scales and adding random small-scale fluctuations. The BSS model provides only a crude approximation of the observed dust emission.
Figure 8. Layers of polarized intensity ($$P = \sqrt{Q^2+U^2}$$), as modelled in our work. Figure 9. Histograms of polarization fraction for each layer. We only use pixels where the polarization fraction is well defined, i.e. I(p) ≠ 0. This excludes high galactic latitude pixels for the most distant layers. The power spectra of the simulated maps in all the layers after this full process are shown in Fig. 10. The power spectra of the original GNILC maps are compared in Fig. 11 with those resulting from the sum of the individual simulations with small-scale fluctuations added in each layer: the missing power on small scales is complemented with fake, simulated small-scale fluctuations. We show full-sky maps of E and B at 353 GHz in Fig. 12. A detail at (l, b) = (0°, 50°) is shown in Fig. 13. E and B power spectra of the original GNILC maps and the simulations at 143 and 217 GHz are shown in Fig. 14. Figure 10. T, E, B power spectra for each layer. The first three rows also display the power spectra for 75 per cent and 25 per cent of the sky. Figure 11.
TT, EE, BB, TE power spectra of both GNILC maps and of simulated maps including small-scale fluctuations. Figure 12. Modelled E and B mode maps at 353 GHz, after adding small-scale fluctuations, adding up six layers of emission (see the text). Figure 13. Observed and modelled E and B mode maps at 353 GHz – detail around (l, b) = (0°, 50°). Top row: T, E, and B modes, observed with Planck after GNILC processing; Bottom row: Modelled T, E, and B modes at Nside = 512, after adding small-scale fluctuations, adding up six layers of emission. Figure 14. E and B power spectra of both GNILC maps and GNILC maps + small-scale fluctuations at 143 and 217 GHz.

6 SCALING LAWS

We now need a prescription for scaling the 353 GHz polarized dust emission templates obtained above across the range of frequencies covered by Planck and future CMB experiments. We stick with the empirical form of dust emission laws (for each layer, an MBB with pixel-dependent temperature and spectral index), but now we must define as many templates of T(p) and β(p) as there are layers in our model, i.e. six maps of T and six maps of β.
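The per-layer scaling law itself is the standard modified blackbody: the 353 GHz template is multiplied by (ν/ν0)^β and by the ratio of Planck functions at temperature T. A sketch (brightness in units proportional to Bν(T); physical constants in SI):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_bnu(nu_ghz, T):
    """Planck function B_nu(T) up to a constant factor: x^3 / (e^x - 1)."""
    x = H * nu_ghz * 1e9 / (KB * T)
    return x**3 / np.expm1(x)

def mbb_scale(I353, nu_ghz, T, beta, nu0_ghz=353.0):
    """Modified-blackbody scaling of a 353 GHz dust template:
    I_nu = I_353 * (nu/nu0)**beta * B_nu(T) / B_nu0(T)."""
    return (I353 * (nu_ghz / nu0_ghz)**beta
            * planck_bnu(nu_ghz, T) / planck_bnu(nu0_ghz, T))
```

With six (T, β) map pairs, one call per layer followed by a sum over layers yields the total modelled emission at any frequency.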
A complete description of the temperature and spectral index distribution in 3D would require observations of the intensity emission at different frequencies in each layer, which are not presently available. Nor can we use the same temperature and spectral index maps for each layer (otherwise there would be no point in using several layers to model the total emission). Finally, the scaling law we use for all the layers must be such that the final dust emission across frequencies matches the observations, i.e.:

(i) On average, the dust intensity scaled to other Planck frequencies (besides 353 GHz, at which matching the observations is enforced by construction) should be as close as possible to the actual Planck observed dust intensity.

(ii) Similarly, each of the dust Q and U polarization maps, scaled to frequencies other than 353 GHz, should match the observed polarization at those frequencies.

(iii) If we perform an MBB fit on our modelled dust intensity maps, the statistical distribution of temperature and spectral index should match those observed on the real sky: same angular power spectra and cross-spectra, similar non-stationary distribution of amplitudes of fluctuations across the sky, similar T–β scatter plot.

Since only the 353 GHz polarization maps have good signal-to-noise ratio, we construct our model for frequency scaling on intensity alone. In a first step, we make use of the fraction of dust assigned to each layer to compute the weighted mean of the spectral index and temperature maps for each layer, using the overall maps obtained from the MBB fit made in Planck Collaboration XI (2014), which we assume to hold for all of the I, Q, and U Stokes parameters.
We compute, for each layer i: \begin{eqnarray} T_{{\rm avg}}^i&=&\sum _{p} w^i_T(p)\, T_d(p), \nonumber\\ \beta _{{\rm avg}}^i&=&\sum _{p} w^i_\beta (p)\, \beta _d(p), \end{eqnarray} (22) where Td(p) and βd(p) are the best-fitting values of the overall MBB fit of Planck dust emission in each pixel, and where $$w^i_T(p)$$ and $$w^i_\beta (p)$$ are weights used for computing the average. We use the same weights both for the temperature $$T_{{\rm avg}}^i$$ and the spectral index $$\beta _{{\rm avg}}^i$$, $$w^i_T(p) = w^i_\beta (p) = f_i(p),$$ (23) i.e. we empirically weight the maps by the pixel-dependent fraction fi(p) of dust emission in layer i, to take into account the fact that we are mostly interested in the temperature and spectral index of the regions of sky where that layer contributes most to the total emission. The simplest way to scale to other frequencies is to assume that $$T_{{\rm avg}}^i$$ and $$\beta _{{\rm avg}}^i$$ are constant across the sky in a given layer. This, however, implements only a variability of the physical parameters T and β along the LOS, and not across the sky any more. It provides a (uniform) prediction of the scaling law in each layer that is informed by the observed emission law, but which does not reproduce the observed variability across pixels of the globally fitted T and β (even if a global fit might find fluctuations because of the varying proportions of the various layers in the total emission as a function of sky pixel). To generate fluctuations of the spectral index and temperature of dust emission in each layer, we first generate, for each layer, Gaussian random variations around $$T_{{\rm avg}}^i$$ and $$\beta _{{\rm avg}}^i$$ following the auto- and cross-spectra of the MBB fit obtained on the observed Planck dust maps (Planck Collaboration XLVIII 2016e). To take into account the non-Gaussianity of the distribution of T and β, we then re-map the fluctuations to match the observed probability distribution function in pixel space.
This slightly changes the map spectra. As a final step, we thus re-filter the maps to match the observed auto- and cross-spectra of T and β. One such iteration yields simulated temperature and spectral index maps in good statistical agreement with the observations. In Fig. 15 we show a random realization of temperature and spectral index maps for the first layer, its power spectra and its scatter plot. Figure 15. Top left: Power spectra used to draw random realizations of temperature and spectral index maps (note the negative sign of the Tβ cross-spectrum). Bottom left: Scatter plot of T and β for a pair of random maps (right), showing an overall anticorrelation and the same general behaviour as observed by Planck Collaboration XI (2014) on Planck observations (see their fig. 16). Right: Maps of randomly generated temperature and spectral index for the first layer, with Tavg = 19.10, σT = 2.059, βavg = 1.627, σβ = 0.209. We then model the total emission at 353, 545, 857 GHz and 100 μm using those scaling laws, summing up contributions from all six layers, and, in order to validate that the simulation is compatible with the observations, check with an MBB fit on the global map whether the distribution of the fitted parameters for the model is similar to that inferred from the real Planck observations. We find two problems.
First, the average temperature and spectral index fitted on the total emission turn out to be slightly larger and smaller, respectively, than observed on the real sky. This is not surprising: as the emission in each pixel is a sum, the layer with the largest temperature and the smallest spectral index tends to dominate the total emission both at ν = 3 THz, pulling the temperature towards higher values, and at low frequency, pulling the spectral index towards lower values. We find that the average MBB fit temperature from the model matches the observations if we rescale the temperature in individual layers by a factor 0.982. Secondly, the standard deviations of the resulting fitted T and β are significantly smaller than those of the real sky, presumably because of averaging effects. We recover a global distribution of temperature and spectral index as fitted on the total emission if we rescale the amplitude of the temperature and spectral index fluctuations generated in each layer. We find that to match the observed T and β inhomogeneities of the MBB fit performed on GNILC Planck and IRAS maps, we need to multiply the amplitude of temperature fluctuations in each layer by 1.84 and the spectral index fluctuations by 1.94. With this rescaling, we find a good match between the simulated and the observed temperature and spectral index distributions in the global MBB fit. Table 1 shows the standard deviation and the average values of T and β in each layer for one single realization of the simulation, compared with those from the Planck MBB fit and the fit performed on this realization. The average values from several simulations are in good agreement with those of the Planck MBB fit. Table 1. Averages and standard deviation values of temperature and spectral index in each layer, for a simulation with 6.87 arcmin HEALPix pixels at Nside = 512.
The average and standard deviation of the resulting temperature and spectral index, as obtained from an MBB fit on the total intensity maps at 353, 545, 857, and 3000 GHz, are compared to what is obtained on Planck observations.

Layer     1       2       3       4       5       6
Tavg    19.10   18.96   18.98   19.35   19.23   20.05
σT       2.059   2.100   2.022   2.076   2.117   2.069
βavg     1.627   1.628   1.598   1.538   1.513   1.689
σβ       0.209   0.210   0.207   0.208   0.202   0.204

             $$T_{{\rm avg}}^{{\rm {MBB}}}$$   $$\sigma _{T}^{{\rm {MBB}}}$$   $$\beta _{{\rm avg}}^{{\rm {MBB}}}$$   $$\sigma _{\beta }^{{\rm {MBB}}}$$
Planck fit    19.396    1.247    1.598    0.126
Simul. fit    19.389    1.253    1.598    0.135

7 VALIDATION AND PREDICTIONS

We use our model to generate maps of polarized dust emission at 143 and 217 GHz, and compare them to Planck observations in polarization and intensity (Fig. 16). Even if our model is not specifically constrained to exactly match the observations at these other frequencies, we observe a reasonable overall agreement both for polarization and intensity. Naturally, the discrepancies between model and observation become larger as we move further away from the reference frequency. Randomly drawn temperature and spectral index fluctuations are not expected to be those of the real microwave sky. Figure 16. GNILC maps both in intensity and polarization are shown in the first and the fourth row (subindex G), while maps obtained using our 3D model are shown in the second and the fifth row (subindex m). The differences between them are also shown in the third and sixth rows. Note the different colour scales for difference maps.
Fig. 17 shows the cross-correlation of T, E, and B power spectra between the Planck observations and the modelled dust maps when we use a uniform temperature and spectral index in each layer. Fig. 18 shows the cross-correlation between various modelling options, showing that those models differ only at a subdominant level. These correlations are computed for maps smoothed to 2° angular resolution over 70 per cent of the sky. Each figure compares the correlation as a function of angular scale between real-sky GNILC maps, as obtained from Planck data, and modelled emission. Three models are considered:

(i) a 3D model in which the temperature and spectral index are constant in each layer, using the average values from Table 1;

(ii) a 2D model in which the 353 GHz total maps of E and B are simply scaled using the temperature and spectral index from the fit on the intensity maps (from equation 1);

(iii) a 3D model in which each layer has a different pixel-dependent map of T and β (the main model developed in this paper).

Figure 17. Cross-correlation between simulations and observations for T, E, and B power spectra at 143 and 217 GHz over 70 per cent of the sky. We show in blue and green the correlation in intensity and polarization between maps generated with our model and the observations. While blue curves are computed using one single value of temperature and spectral index per layer, green curves consider one template per layer, with fluctuations of the temperature and the spectral index (model b). Red curves show the correlation between the observed polarized sky maps and maps obtained from a 2D model, i.e.
one single template for temperature and spectral index from the MBB fit obtained on the observed Planck dust maps (Planck Collaboration XLVIII 2016e). Figure 18. Cross-correlations of T, E, and B between various modelling options. Differences between these models in polarization are at the level of a few per cent at most for ℓ ≤ 100. We see an excellent correlation overall in all cases, of more than 96 per cent at 143 GHz, and more than 99 per cent at 217 GHz for polarization, slightly worse for intensity, a difference that might be due to the presence of other foreground emission in the Planck foreground intensity maps – free–free, point sources, and/or CO line contamination. This shows that the large-scale polarization maps are in excellent agreement with the observations across the frequency channels with the best sensitivity to the CMB. The correlation decreases at higher ℓ.
This is probably due to a combination of non-vanishing noise in the GNILC maps, residuals of small-scale fluctuations in the template 353 GHz E and B maps that are used to model the total polarization, and the lack of small scales in the modelled scaling law of each layer. Because GNILC in a way ‘selects’ modes that are correlated between channels, it may also be that the correlation of the model with the GNILC data is artificially high. We postpone further investigation of this possible effect to future work. As expected, when random fluctuations of T and β are generated in each layer, the correlation with the real observations is reduced. We also compute the average intensity emission across frequencies (Fig. 19), and note that, as expected, our multilayer model has more power at low frequency than a 2D model with one single MBB per pixel. The same effect is observed both in intensity and in polarization. Figure 19. Left: Average total sky emission in intensity for our 3D multi-MBB model as compared to a ‘2D’ model with one single MBB per pixel. Both have the same intensity at 353 GHz by construction. The 3D model has a flatter emission law at low frequency, an effect that originates from the increasing importance at low frequency of components with flatter spectral index that may be subdominant at higher frequency, where the emission is dominated by hotter components. Right: Ratio of the average emission law of the 3D model to that of the 2D model, for both intensity and polarized intensity.
Finally, we can compute the level of decorrelation between polarization maps at different frequencies as predicted by our model. Understanding this decorrelation is essential for future component separation work to detect CMB B modes with component separation methods that exploit correlations between foregrounds at different frequencies, such as variants of the ILC (Tegmark, de Oliveira-Costa & Hamilton 2003; Eriksen et al. 2004; Delabrouille et al. 2009), CCA (Bonaldi et al. 2006), or SMICA (Delabrouille, Cardoso & Patanchon 2003; Cardoso et al. 2008; Betoule et al. 2009). We generate maps with small scales and with random fluctuations of temperature and spectral index in each layer. We compute the correlation between polarization maps (both E and B) at 143 or 217 GHz, and 353 GHz (see Fig. 20) for our 3D model. The correlations obtained in both cases range from 97 per cent on small scales to close to 100 per cent on large scales, which is larger than what is observed on real Planck maps (Planck Collaboration L 2017). This shows that even if our multilayer model adds a level of complexity to dust emission modelling, it cannot produce a decorrelation between frequencies as strong as originally claimed in the first analysis of Planck polarization maps (Planck Collaboration L 2017). Our model, however, is compatible with the lack of evidence for such decorrelation between 217 and 353 GHz at the 0.4 per cent level for 55 ≤ ℓ ≤ 90 claimed in Sheehy & Slosar (2017), and predicts increased decorrelation (of the order of 1–2 per cent) between 143 and 353 GHz over the same range of ℓ.
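The correlation statistic in question can be sketched with toy data (illustrative only, not the paper's pipeline): for two "maps" sharing a common component, the normalized cross-correlation falls below unity by roughly the fraction of uncorrelated power.

```python
import numpy as np

# Toy sketch of a frequency-decorrelation statistic,
# R = <m1 m2> / sqrt(<m1^2><m2^2>), for two "maps" built from a common
# dust-like component plus small independent parts. All numbers are
# illustrative; this is not the paper's analysis.
rng = np.random.default_rng(0)
npix = 10000
common = rng.standard_normal(npix)
m1 = common + 0.1 * rng.standard_normal(npix)   # e.g. the 217 GHz map
m2 = common + 0.1 * rng.standard_normal(npix)   # e.g. the 353 GHz map
r = (m1 @ m2) / np.sqrt((m1 @ m1) * (m2 @ m2))
print(0.95 < r < 1.0)  # True: roughly per-cent-level decorrelation
```

With 1 per cent of independent variance per map, R sits near 0.99, i.e. a ~1 per cent decorrelation, of the same order as the model's prediction quoted above.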
More multifrequency observations of polarized dust emission are necessary to better model dust polarization and refine these predictions. We also note that in our model, as shown in Fig. 21, the correlations do not significantly depend on the region of sky, as they remain similar for smaller sky fractions. Figure 20. Correlation between maps at different frequencies obtained with our 3D model, computed over 70 per cent of sky. Figure 21. Correlation between maps at different frequencies obtained with our 3D model, computed over different sky fractions.

8 CONCLUSION

We have developed a 3D model of polarized Galactic dust emission that is consistent with the large-scale Planck HFI polarization observations at 143, 217, and 353 GHz. The model is composed of six layers of emission, loosely associated with different distance ranges from the Solar system as estimated from stellar extinction data. Each of these layers is assigned an integrated intensity and polarization emission at 353 GHz, adjusted so that the sum matches the Planck observation on large scales. Small-scale fluctuations are randomly generated to model the emission on scales that have not been observed with sufficient signal-to-noise ratio with Planck. For intensity, these random small scales extend the dust template beyond the Planck resolution of about 5 arcmin. For polarization, small-scale fluctuations of emission originating from the turbulence of the GMF are randomly generated on scales smaller than 2° or 2.5°, depending on the layer of emission considered.
The level and correlations of randomly generated fluctuations are adjusted to extend the observed multivariate spectrum of the T, E, and B components of the observed dust emission, assuming a 30 per cent correlation of T and E. One of the primary motivations of this work is the recognition that if the parameters that define the scaling of dust emission between frequencies of observation vary across the sky, they must also vary along the LOS. We hence assign to each layer of emission a different, pixel-dependent scaling law in the form of an MBB emission characterized, for each pixel, by a temperature and an emissivity spectral index. Observational constraints to infer the real scaling law for each layer are lacking. We hence generate random scaling laws adjusted to match on average the observed global scaling, and with fluctuations of temperature and spectral index compatible with the observed distribution of these two parameters as fitted on the Planck HFI data. The model developed here does not pretend to be exact. The lack of multifrequency high signal-to-noise dust observations in polarization forbids such an ambition. None the less, the model provides a means to simulate a dust component that features some of the plausible complexity of the polarized dust component, while being compatible with the observed large-scale polarized emission at 353 GHz and with most of the observed statistical properties of dust (temperature and polarization power spectra, amplitude and correlation of temperature and spectral index of the best-fitting MBB emission). However, this model fails to predict the strong decorrelation of dust polarization between frequency channels on small angular scales seen in Planck Collaboration L (2017), a limitation that must be addressed in the future if that decorrelation is confirmed.
In the meantime, we expect these simulated maps to be useful to investigate the component separation problem for future CMB polarization surveys such as CMB-S4, PIXIE, CORE, or LiteBIRD. Simulated maps at a set of observing frequencies can be made available by the authors upon request.

Acknowledgements

We thank François Boulanger, Jan Tauber, and Mathieu Remazeilles for useful discussions and valuable comments on the first draft of this paper. Extensive use of the healpix pixelization scheme (Górski et al. 2005), available from the healpix webpage,5 was made for this research project. We thank Douglas Finkbeiner for pointing out a mistake in the assignment of distances to the various layers in the first preprint of this paper, and an anonymous referee for many useful comments and suggestions.

Footnotes

1 Nor does the specific analytic form of the depolarization function. 2 After the GNILC process to de-noise the observations, these maps bring only limited additional information: considering their noise level, their dust component over most of the sky is obtained largely by GNILC from their correlation with the 353 GHz map locally in needlet space. 3 However, they can be used to get an initial estimate of the dust density on very large scales. We will make use of this in the next section for an initial guess of the polarization fraction of dust emission in each layer. 4 We postpone to Section 6 the discussion of temperature and spectral index maps. In equation (8), we use average values given in Table 1. 5 http://healpix.sourceforge.net

REFERENCES

Abazajian K. N. et al., 2016, preprint (arXiv:1610.02743) André P. et al., 2014, J. Cosmol. Astropart. Phys., 2, 006 Basak S., Delabrouille J., 2012, MNRAS, 419, 1163 Basak S., Delabrouille J., 2013, MNRAS, 435, 18 Bennett C. L. et al., 2013, ApJS, 208, 20 Benoit A. et al., 2002, Astropart. Phys.
, 17, 101 Betoule M., Pierpaoli E., Delabrouille J., Le Jeune M., Cardoso J.-F., 2009, A&A, 503, 691 BICEP2 Collaboration, 2014, Phys. Rev. Lett., 112, 241101 BICEP2/Keck & Planck Collaborations, 2015, Phys. Rev. Lett., 114, 101301 Bonaldi A., Ricciardi S., 2011, MNRAS, 414, 615 Bonaldi A., Bedini L., Salerno E., Baccigalupi C., de Zotti G., 2006, MNRAS, 373, 271 Bonaldi A., Ricciardi S., Brown M. L., 2014, MNRAS, 444, 1034 Cardoso J.-F., Le Jeune M., Delabrouille J., Betoule M., Patanchon G., 2008, IEEE J. Sel. Top. Signal Process., 2, 735 Challinor A. et al., 2017, preprint (arXiv:1707.02259) CORE Collaboration, 2016, preprint (arXiv:1612.08270) Das S. et al., 2014, J. Cosmol. Astropart. Phys., 4, 014 de Bernardis P. et al., 2000, Nature, 404, 955 Delabrouille J., Cardoso J.-F., 2009, in Martínez V. J., Saar E., Martínez-González E., Pons-Bordería M.-J., eds, Lecture Notes in Physics, Vol. 665, Data Analysis in Cosmology. Springer-Verlag, Berlin, p. 159 Delabrouille J., Cardoso J.-F., Patanchon G., 2003, MNRAS, 346, 1089 Delabrouille J., Cardoso J.-F., Le Jeune M., Betoule M., Fay G., Guilloux F., 2009, A&A, 493, 835 Delabrouille J. et al., 2013, A&A, 553, A96 Delabrouille J. et al., 2017, preprint (arXiv:1706.04516) Dunkley J. et al., 2009, in Dodelson S. et al., eds, AIP Conf. Ser. Vol. 1141, CMB Polarization Workshop: Theory and Foregrounds: CMBPol Mission Concept Study. Am. Inst. Phys., New York, p. 222 Efstathiou G., Gratton S., Paci F., 2009, MNRAS, 397, 1355 Eriksen H. K., Banday A. J., Górski K. M., Lilje P. B., 2004, ApJ, 612, 633 Errard J., Stompor R., 2012, Phys. Rev. D, 85, 083006 Fauvet L.
et al., 2011, A&A, 526, A145 Faÿ G., Guilloux F., Betoule M., Cardoso J.-F., Delabrouille J., Le Jeune M., 2008, Phys. Rev. D, 78, 083013 Ghosh T. et al., 2017, A&A, 601, A71 Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759 Green G. M. et al., 2015, ApJ, 810, 25 Han J. L., 2017, ARA&A, 55, 111 Han J. L., Qiao G. J., 1993, Acta Astrophys. Sin., 13, 385 Hanany S. et al., 2000, ApJ, 545, L5 Harari D., Mollerach S., Roulet E., 1999, J. High Energy Phys., 8, 022 Haverkorn M., 2015, in Lazarian A., de Gouveia Dal Pino E. M., Melioli C., eds, Astrophysics and Space Science Library, Vol. 407, Magnetic Fields in Diffuse Media. Springer-Verlag, Berlin, p. 483 Jaffe T. R., Leahy J. P., Banday A. J., Leach S. M., Lowe S. R., Wilkinson A., 2010, MNRAS, 401, 1013 Jones W. C. et al., 2006, ApJ, 647, 823 Kamionkowski M., Kovetz E. D., 2016, ARA&A, 54, 227 Kogut A. et al., 2011, J. Cosmol. Astropart. Phys., 7, 025 Kovac J. M., Leitch E. M., Pryke C., Carlstrom J. E., Halverson N. W., Holzapfel W. L., 2002, Nature, 420, 772 Lallement R., Snowden S., Kuntz K. D., Dame T. M., Koutroumpa D., Grenier I., Casandjian J. M., 2016, A&A, 595, A131 Leach S. M. et al., 2008, A&A, 491, 597 Lewis A., Challinor A., 2006, Phys. Rep., 429, 1 Marinucci D. et al., 2008, MNRAS, 383, 539 Matsumura T. et al., 2014, J. Low Temp. Phys., 176, 733 Narcowich F., Petrushev P., Ward J., 2006, SIAM J. Math. Anal., 38, 574 O'Dea D. T., Clark C. N., Contaldi C. R., MacTavish C. J., 2012, MNRAS, 419, 1795 Penzias A.
A., Wilson R. W., 1965, ApJ, 142, 419 Planck Collaboration XI, 2014, A&A, 571, A11 Planck Collaboration XXX, 2016a, A&A, 586, A133 Planck Collaboration I, 2016b, A&A, 594, A1 Planck Collaboration XIII, 2016c, A&A, 594, A13 Planck Collaboration XLIV, 2016d, A&A, 596, A105 Planck Collaboration XLVIII, 2016e, A&A, 596, A109 Planck Collaboration L, 2017, A&A, 599, A51 Reichardt C. L. et al., 2009, ApJ, 694, 1200 Remazeilles M., Delabrouille J., Cardoso J.-F., 2011, MNRAS, 418, 467 Remazeilles M., Dickinson C., Eriksen H. K. K., Wehus I. K., 2016, MNRAS, 458, 2032 Remazeilles M. et al., 2017, preprint (arXiv:1704.04501) Rezaei Kh. S., Bailer-Jones C. A. L., Hanson R. J., Fouesneau M., 2017, A&A, 598, A125 Sheehy C., Slosar A., 2017, preprint (arXiv:1709.09729) Smoot G. F. et al., 1992, ApJ, 396, L1 Sofue Y., Fujimoto M., 1983, ApJ, 265, 722 Stanev T., 1997, ApJ, 479, 290 Stompor R., Errard J., Poletti D., 2016, Phys. Rev. D, 94, 083526 Story K. T. et al., 2013, ApJ, 779, 86 Tassis K., Pavlidou V., 2015, MNRAS, 451, L90 Tauber J. A. et al., 2010, A&A, 520, A1 Tegmark M., de Oliveira-Costa A., Hamilton A. J., 2003, Phys. Rev. D, 68, 123523 The COrE Collaboration, 2011, preprint (arXiv:1102.2181) Tinyakov P. G., Tkachev I. I., 2002, Astropart. Phys., 18, 165 Tucci M., Martínez-González E., Vielva P., Delabrouille J., 2005, MNRAS, 360, 935 Vansyngel F. et al., 2017, A&A, 603, A62 © 2018 The Author(s) Published by Oxford University Press on behalf of the Royal Astronomical Society
### Journal
Monthly Notices of the Royal Astronomical Society, Oxford University Press
Published: May 1, 2018
aivika-lattice-0.3: Nested discrete event simulation module for the Aivika library using lattice
Simulation.Aivika.Lattice.Estimate
Description
Tested with: GHC 8.0.1
The module defines the Estimate monad transformer, which is intended for estimating computations within lattice nodes. Such computations are separated from the Event computations. The idea is that the forward-traversing Event computations provide something that can be observed, while the Estimate computations estimate the received information and can be backward-traversing.
Synopsis
data Estimate m a Source #
A value in the Estimate monad transformer represents something that can be estimated within lattice nodes.
Instances
  MonadTrans Estimate (lift :: Monad m => m a -> Estimate m a)
  ParameterLift Estimate (liftParameter :: Parameter m a -> Estimate m a)
  CompLift Estimate (liftComp :: m a -> Estimate m a)
  EstimateLift Estimate (liftEstimate :: Estimate m a -> Estimate m a)
  Monad m => Monad (Estimate m)
  Functor m => Functor (Estimate m)
  MonadFix m => MonadFix (Estimate m)
  Applicative m => Applicative (Estimate m)
  MonadIO m => MonadIO (Estimate m)
class EstimateLift t m where Source #
A type class to lift the Estimate computations into other computations.
Minimal complete definition
liftEstimate
Methods
liftEstimate :: Estimate m a -> t m a Source #
Lift the specified Estimate computation into another computation.
Instances
  EstimateLift Estimate (liftEstimate :: Estimate m a -> Estimate m a)
runEstimateInStartTime :: MonadDES m => Estimate m a -> Simulation m a Source #
Run the Estimate computation in the start time and return the estimate.
Like time, it estimates the current modeling time. It is more efficient than latticeTime.
# Computations within Lattice
Arguments
:: (a -> a -> Estimate LIO a)       -- reduce in the intermediate nodes of the lattice
-> Estimate LIO a                   -- estimate the computation in the final time point and beyond it
-> Simulation LIO (Estimate LIO a)
Fold the estimation of the specified computation.
Arguments
:: (Estimate LIO a -> Estimate LIO a)   -- estimate in the intermediate time point of the lattice
-> Estimate LIO a                       -- estimate in the final time point of the lattice or beyond it
-> Simulation LIO (Estimate LIO a)
Estimate the computation in the lattice nodes.
Estimate the computation in the up side node of the lattice, where latticeTimeIndex is increased by 1 but latticeMemberIndex remains the same.
It is merely equivalent to the following definition:
estimateUpSide = shiftEstimate 1 0
Estimate the computation in the down side node of the lattice, where both latticeTimeIndex and latticeMemberIndex are increased by 1.
It is merely equivalent to the following definition:
estimateDownSide = shiftEstimate 1 1
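As an illustrative sketch (hypothetical code, not from the package's documentation): estimateUpSide and estimateDownSide allow a backward-induction step that averages the estimates in the two successor nodes of the binomial lattice. The name expectation below is an assumption.

```haskell
-- Hypothetical sketch: average the estimates of the two successor nodes,
-- a single backward-induction step over the binomial lattice.
expectation :: Estimate LIO Double -> Estimate LIO Double
expectation e = do
  up   <- estimateUpSide e    -- node (latticeTimeIndex + 1, latticeMemberIndex)
  down <- estimateDownSide e  -- node (latticeTimeIndex + 1, latticeMemberIndex + 1)
  return ((up + down) / 2)
```

Folding such a step from the final time point backward is exactly the pattern the reducing function above is designed for.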
Arguments
:: Int              -- a positive shift of the lattice time index
-> Int              -- a shift of the lattice member index
-> Estimate LIO a   -- the source computation
-> Estimate LIO a
Like shiftEstimate, but the first argument must be positive.
Arguments
:: Int              -- a shift of the lattice time index
-> Int              -- a shift of the lattice member index
-> Estimate LIO a   -- the source computation
-> Estimate LIO a
Estimate the computation in the shifted lattice node, where the first parameter specifies the latticeTimeIndex shift of any sign, and the second parameter specifies the latticeMemberIndex shift, also of any sign.
It allows looking into the future or past computations. The lattice is constructed in such a way that we can define the past Estimate computation in terms of the future Estimate computation. That is the point.
Regarding the Event computation, it is quite different. The future Event computation depends strongly on the past Event computations. But we can update Ref references within the corresponding discrete event simulation and then read them within the Estimate computation, because Ref is Observable.
Arguments
:: Int              -- the lattice time index
-> Int              -- the lattice member index
-> Estimate LIO a   -- the computation
-> Estimate LIO a
Estimate the computation at the specified latticeTimeIndex and latticeMemberIndex.
# Error Handling
catchEstimate :: (MonadException m, Exception e) => Estimate m a -> (e -> Estimate m a) -> Estimate m a Source #
Exception handling within Estimate computations.
finallyEstimate :: MonadException m => Estimate m a -> Estimate m b -> Estimate m a Source #
A computation with a finalization part, like the standard finally function.
throwEstimate :: (MonadException m, Exception e) => e -> Estimate m a Source #
Like the standard throw function.
# Debugging
Show the debug message with the current simulation time and lattice node indices.
## GATE CE 2001
## GATE CE
The determinant of the following matrix $$\left[ {\matrix{ 5 & 3 & 2 ...
The eigen values of the matrix $$\left[ {\matrix{ 5 & 3 \cr 2 & ...
The product $$\left[ P \right]\,\,{\left[ Q \right]^T}$$ of the following two ma...
Limit of the following series as $$x$$ approaches $${\pi \over 2}$$ is $$f...
The solution for the following differential equation with boundary conditions ...
The number of boundary conditions required to solve the differential equation ...
The inverse Laplace transform of $$1/\left( {{s^2} + 2s} \right)$$ is
A 15 cm length of steel rod with relative density of 7.4 is submerged in a two l...
Identify the FALSE statement from the following, pertaining to the design of con...
Consider the following two statements related to reinforced concrete design, and...
The effective spans for a simple one-way slab system, with an overhang are indic...
Identify the most efficient butt joint (with double cover plates) for a plate in...
Consider the following two statements related to structural steel design, and id...
The relevant cross-sectional details of a compound beam comprising a symmetric ...
The bending moment (in $$kN$$-$$m$$ units) at the mid-span location $$X$$ in the...
The degree of static indeterminacy, $${N_s}$$ and the degree of kinematic indete...
Identify, from the following, the correct value of the bending moment $${M_A}\,\...
Identify the FALSE statement from the following, pertaining to the effect...
The frame below shows three beam elements $$OA, OB$$ and $$OC$$, with identical ...
The two-span continuous beam shown below is subject to a clockwise rotational sl...
The design value of lateral friction coefficient on highway is
The second definition looks like the 'lax comma category' $C // T$, where a morphism $f \to f'$ is given by a 2-cell $f \to f' \circ \phi$. The defining universal property is the same as for comma objects, except that the 2-cells in the squares are lax natural transformations. Your first definition should be the oplax version. See Kelly, On clubs and doctrines, LNM 420, or Gray, Adjointness For 2-Categories, LNM 391, who calls these '2-comma categories'. In more detail, Gray's 2-comma categories come from (strict, I think) 2-functors $A \overset{F}{\rightarrow} K \overset{G}{\leftarrow} B$. An object is a 1-cell $FA \to GB$, a morphism is a square with a 2-cell in it, and a 2-cell is given by a pair of 2-cells in $K$ that fit into a commuting cylinder (it's pretty obvious if you draw a picture). In your example, (what I've called) $C // T$ has 2-cells $(\phi,\phi^\sharp) \Rightarrow (\psi,\psi^\sharp)$ given by 2-cells $\alpha \colon \phi \Rightarrow \psi$ such that $\psi^\sharp \circ f'\alpha = \phi^\sharp$. (Again, pictures make it much clearer!) So your slices are actually 2-categories, coming from $C \overset{1}{\rightarrow} C \overset{T}{\leftarrow} \bullet$.
# How to implement large rotations in total lagrangian formulation (nonlinear FEM)?
I have developed an Octave script to solve the nonlinear Euler-Bernoulli beam equations with linearized von Karman-strains, i.e. higher-order terms are dropped. The simulation results agree with analytical results in small and moderately large displacements in static analysis. However, when doing a dynamic simulation with large displacements I get incorrect results.
I used this site as a reference for assembling the stiffness matrix and the tangent stiffness matrix. I should note that I think the site has a typo for the stiffness coefficients $$K_{21}$$. I think the 0.5 should be dropped from the $$K_{21}$$ expression and also for the tangent stiffness matrix there should be $$T_{21}=K_{21}$$ instead of $$T_{21}=2K_{21}$$ (that is, if you don't symmetrize the matrices).
Now as the static linear analysis seems to work fine, I tried a dynamic nonlinear analysis. For this I used a simple pendulum with one beam element (length L=1m) starting from rest and a horizontal position. For the first few time steps everything seems fine until the beam element starts elongating (see figures below). At t=0.5 the length of the beam is almost 1.5m, which should not be possible. The internal force vector is initially pointing vertically upward but starts oscillating along with the inertial force vector and is never really aligned with the length of the beam as you would expect for a tension force. Also if I do a static analysis and rotate the beam 90 degrees with Dirichlet BCs, I get large moments acting on the beam even though we are simulating rigid body rotation.
So my question is, is it possible to simulate large rotations in the total Lagrangian description (assuming small strains) and using the von Karman strains? Or is there some other reason why the simulation is not working? Or is my time integration scheme not suitable for this problem?
For the dynamic analysis I used the Newmark method for time integration (trapezoid rule), with boundary conditions set to u(0)=0 and w(0)=0. The stiffness matrices are integrated with gauss quadrature with reduced integration for the nonlinear terms, i.e. $$K_{12}$$, $$K_{21}$$, $$T_{12}$$, $$T_{21}$$ and $$K_{22}^{NL}$$ and full integration for the linear terms.
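For reference, the Newmark scheme with the trapezoid-rule parameters (beta = 1/4, gamma = 1/2) can be sketched for a linear single-DOF system as below. This is a generic illustration, not the question's Octave script; all names and values are assumptions.

```python
import numpy as np

# Generic sketch of Newmark time integration (average acceleration /
# trapezoid rule: beta = 1/4, gamma = 1/2) for a linear SDOF system
# m*a + c*v + k*u = f(t). Illustrative only, not the question's code.
def newmark(m, c, k, f, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m        # consistent initial acceleration
    us = [u]
    keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)  # effective stiffness
    for n in range(1, nsteps + 1):
        feff = (f(n * dt)
                + m * (u / (beta * dt**2) + v / (beta * dt)
                       + (1.0 / (2 * beta) - 1.0) * a)
                + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = feff / keff
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) \
                - (1.0 / (2 * beta) - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return np.array(us)

# Undamped oscillator with period T = 1: exact solution u(t) = cos(2*pi*t).
us = newmark(m=1.0, c=0.0, k=(2 * np.pi)**2, f=lambda t: 0.0,
             u0=1.0, v0=0.0, dt=1e-3, nsteps=1000)
print(abs(us[-1] - 1.0) < 1e-3)  # True: after one full period u returns to ~1
```

For a nonlinear problem, u_new cannot be obtained by a single division; the effective system must be solved with Newton iterations using the tangent stiffness at each step, which is where an inconsistent tangent (like the suspected typo above) would show up.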
• I have implemented a version of the geometrically exact beam in MATLAB/Octave. This follows the formulation given in the FEM book by Zienkiewicz (vol2). You can check it at this Github repository Jan 9 at 16:43
• I have used this code for the examples presented in paper1. Jan 9 at 16:48
• Thanks, that's a great paper. Will have to look closer into it! – Tepa Jan 9 at 20:17
# Differentiable and analytic function
I have the following function and I am trying to find if it is analytic and differentiable. I use cauchy-riemann to prove it.
$$f(x) = x^2 -x+y+i(y^2-5y-x)$$
$$u(x,y) = x^2-x+y$$ $$v(x,y) = y^2-5y-x$$
$$u_x = 2x-1$$ $$u_y = 1$$ $$v_x= -1$$ $$v_y= 2y-5$$
As a result, $$u_y = -v_x \Rightarrow 1 = -(-1) \Rightarrow 1 = 1$$ holds everywhere, while $$u_x = v_y \Rightarrow 2x - 1 = 2y - 5$$ holds only on the line $$y = x+2$$.
I was wondering if we can say that there are some regions in which the function is differentiable or analytic.
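The partial-derivative computation above can be double-checked numerically; this is an illustrative sketch (not part of the original question) confirming that the Cauchy-Riemann equations hold only on the line y = x + 2.

```python
# Check the Cauchy-Riemann equations for f = u + iv with
# u = x^2 - x + y and v = y^2 - 5y - x (derivatives taken analytically).
def u_x(x, y): return 2 * x - 1
def u_y(x, y): return 1.0
def v_x(x, y): return -1.0
def v_y(x, y): return 2 * y - 5

def cauchy_riemann(x, y, tol=1e-12):
    """True iff u_x = v_y and u_y = -v_x at (x, y)."""
    return abs(u_x(x, y) - v_y(x, y)) < tol and abs(u_y(x, y) + v_x(x, y)) < tol

print(cauchy_riemann(1.0, 3.0))  # True: (1, 3) lies on y = x + 2
print(cauchy_riemann(1.0, 0.0))  # False: off the line, u_x != v_y
```

Since the line y = x + 2 contains no open set, there is no open region on which both equations hold, which is why the function is nowhere analytic.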
This function fails to satisfy the Cauchy-Riemann equations and is therefore not complex-differentiable.
I was wondering if we can say that it is differentiable in some specific region. So that means it is not analytic too. I guess I didn't understand what open region means. – primer Jan 26 '12 at 15:25
This is an example of a function that is differentiable as a map from the plane into itself as a real (vector) function. However it is not analytic – ncmathsadist Jan 26 '12 at 15:37
On a separate note, a subset $G\subseteq\mathbb{C}$ is open if for each $z\in G$ there is some $r > 0$ so that the open ball centered at $z$, $B_r(z) = \{w\in\mathbb{C}| |w - z| < r\}$ is contained in $G$. – ncmathsadist Jan 26 '12 at 15:39
$h(x,y)=U(x,y)+iV(x,y)$
If $\partial_{x}(U(x,y))=\partial_{y}(V(x,y))$ and $\partial_{y}(U(x,y))=-\partial_{x}(V(x,y))$, then the function can be expressed as $h(x,y)=U(x,y)+iV(x,y)=f(z)=f(x+iy)$.
For example
$h(x,y)=e^{x}\cos(y)+ie^{x}\sin(y)$ then
$U(x,y)=e^{x}\cos(y)$
$V(x,y)=e^{x}\sin(y)$
$\partial_{x}(U(x,y))=e^{x}\cos(y)$
$\partial_{y}(U(x,y))=-e^{x}\sin(y)$
$\partial_{x}(V(x,y))=e^{x}\sin(y)$
$\partial_{y}(V(x,y))=e^{x}\cos(y)$
$\partial_{x}(U(x,y))=\partial_{y}(V(x,y))$ and $\partial_{y}(U(x,y))=-\partial_{x}(V(x,y))$
Thus $h(x,y)$ can be expressed as $h(x,y)=f(z)=f(x+iy)$
Really if we check $h(x,y)=e^{x}\cos(y)+ie^{x}\sin(y)=e^{x}(\cos(y)+i\sin(y))=e^{x}e^{iy}=e^{x+iy}=e^{z}$
$h(x,y)=f(z)=e^{z}$
Mathlover, put a whack (\) in front of sines and cosines. I did this; notice how it improved the appearance of your calculation. – ncmathsadist Jan 27 '12 at 2:09
# List of abbreviations below the abstract and before the introduction in a 2 column research paper spanning over the whole page
My list of abbreviations is long, so I need to add a list of abbreviations with 4 columns before the introduction in a two-column research article, spanning the whole page. I have tried nomenclature and the like, but they appear in the 1st column. Using \begin{table*}[h] displays my table at the top of the 2nd page. The closest I have come is using only \begin{tabular}, but it is overwritten by the text of the 2nd column. I need something like this. But what I get is this.
The minimum working example is as follows:
\documentclass[a4paper,fleqn]{cas-dc}
\usepackage[utf8]{inputenc}
\usepackage[numbers]{natbib}
\begin{document}
\def\floatpagefraction{1}
\def\textfraction{.001}
\shorttitle{Article to be written}
\shortauthors{abc et al}
\title [mode = title]{An article to be written}
\author[1-]{Abc wyz}[ orcid=NA]
\fnmark[1]
\cormark[1]
\author[1]{efg}
\fnmark[2]
\cortext[cor1]{Principal corresponding author}
\begin{abstract}
A common anti-skeptical argument is that if one knows nothing, one cannot know that one knows nothing, and so cannot exclude the possibility that one knows something after all. However, such an argument is only effective against the complete denial of the possibility of knowledge. Sextus argued that claims to either know or to not know were both dogmatic, and as such, Pyrrhonists claimed neither. Instead, they claimed to be continuing to search for something that might be knowable.
\end{abstract}
\begin{keywords}
Home \sep
Home \sep
\end{keywords}
\maketitle
\begin{tabular}{|c|c|c|c|}
\hline
sdfsdsfsddf & sdfsdsfsddasdasdasdasdasfsd & sdfsd & sdfsfsdasdasdass\\
\hline
\end{tabular}
\section{Introduction}
The works of Sextus Empiricus (c. 200 CE) are the main surviving account of ancient Pyrrhonism. By Sextus' time, the Academy had ceased to be skeptical. Sextus' empiricism was limited to the "absolute minimum" already mentioned—that there seem to be appearances. Sextus compiled and further developed the Pyrrhonists' skeptical arguments, most of which were directed against the Stoics but included arguments against all of the schools of Hellenistic philosophy, including the Academic skeptics.
\\
A common anti-skeptical argument is that if one knows nothing, one cannot know that one knows nothing, and so cannot exclude the possibility that one knows something after all. However, such an argument is only effective against the complete denial of the possibility of knowledge. Sextus argued that claims to either know or to not know were both dogmatic, and as such, Pyrrhonists claimed neither. Instead, they claimed to be continuing to search for something that might be knowable.
\\
Sextus, as the most systematic author of the works by Hellenistic sceptics which have survived, noted that there are at least ten modes of skepticism. These modes may be broken down into three categories: one may be skeptical of the subjective perceiver, of the objective world, and the relation between perceiver and the world.[15] His arguments are as follows.
\\
Subjectively, both the powers of the senses and of reasoning may vary among different people. And since knowledge is a product of one or the other, and since neither are reliable, knowledge would seem to be in trouble. For instance, a color-blind person sees the world quite differently from everyone else. Moreover, one cannot even give preference on the basis of the power of reason, i.e., by treating the rational animal as a carrier of greater knowledge than the irrational animal, since the irrational animal is still adept at navigating their environment, which suggests the ability to "know" about some aspects of the environment.
\\
Secondly, the personality of the individual might also influence what they observe, since (it is argued) preferences are based on sense-impressions, differences in preferences can be attributed to differences in the way that people are affected by the object. (Empiricus:56)
\\
\end{document}
The other files needed for the code are available at http://mirrors.ctan.org/macros/latex/contrib/els-cas-templates.zip
# CSIR NET 2018 December Physical Science Solutions Part-B
Solutions to the CSIR NET exam held on December 16, 2018: the Indian Assistant Professor and PhD scholarship examination, prepared by me. Answers with detailed explanations are available for 18 out of 25 questions of Part-B, and fully explained answers to Part-A are also available ( this is entirely free stuff: help spread the word ).
( CSIR NET 2018 December: Part A ) See the detailed explanations and solutions to part A
This article is my best attempt at answering the recently concluded CSIR NET 2018 December paper. Detailed explanatory answers for the physical sciences section ( Part-B ) are available for 18 out of 25 questions at the moment; fully explained solutions to Part-A are linked above.
## CSIR NET 2018 December physical sciences
### Part B
Q - 21. Consider the decay A → B + C of a relativistic
spin 1/2 particle A. Which of the following statements
is true in the rest frame of the particle A?
1. The spin of both B and C may be 1/2.
2. The sum of the masses of B and C is greater than
the mass of A.
3. The energy of B is uniquely determined by the
masses of the particles.
4. The spin of both B and C may be integral.
click to see or hide answer to Q – 21
— the answer is: option 3.
Explanation: option 2 is immediately incorrect, since it violates energy conservation in relativistic kinematics: the rest masses of the decay products cannot add up to more than the rest mass of the parent particle. Option 3 is correct because, in a two-body decay, energy-momentum conservation in the parent rest frame fixes the daughter energies uniquely.
In a two-body relativistic decay, in the parent rest frame ( units with c = 1 ): $E_B = \frac{M_A^2 + M_B^2 - M_C^2}{2M_A}$
The other options concern the spins of the daughters; in fact angular momentum conservation rules out options 1 and 4, since two half-integer spins ( or two integer spins ) combine with integer orbital angular momentum to an integer total, which cannot equal the parent's spin 1/2.
CSIR NET 2018 December Physical Sciences: answer to question 21, the two body decay. The energy and momenta of the daughter particles are uniquely determined in parent rest frame. Photo Credit: mdashf.org
Hence the correct answer is option 3.
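The two-body formula can be cross-checked numerically. A minimal sketch ( with hypothetical masses MA = 10, MB = 3, MC = 2, in units where c = 1 ) verifying that the formula conserves energy and gives equal, back-to-back momenta:

```python
import math

def two_body_decay(MA, MB, MC):
    """Daughter energies and momentum in the rest frame of A (units with c = 1)."""
    EB = (MA**2 + MB**2 - MC**2) / (2 * MA)
    EC = (MA**2 + MC**2 - MB**2) / (2 * MA)
    p = math.sqrt(EB**2 - MB**2)  # |p_B| = |p_C| by momentum conservation
    return EB, EC, p

# hypothetical masses, chosen only for the check
EB, EC, p = two_body_decay(10.0, 3.0, 2.0)
assert abs(EB + EC - 10.0) < 1e-12                 # energy conservation
assert abs(math.sqrt(EC**2 - 2.0**2) - p) < 1e-12  # same momentum magnitude for C
```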
Q - 22. Two current-carrying circular loops, each
of radius R, are placed perpendicular to each other,
as shown in the figure below. the loop in the
xy-plane carries a current I0 while that in the
xz-plane carries a current 2I0. The resulting magnetic
field $\vec{B}$ at the origin is
CSIR NET 2018 December Physical Sciences: question 22. Photo Credit: mdashf.org
1. $\frac{\mu_0 I_0}{2R}[2\hat{j}+\hat{k}]\,$ 2. $\frac{\mu_0 I_0}{2R}[2\hat{j}-\hat{k}]\,$
3. $\frac{\mu_0 I_0}{2R}[-2\hat{j}+\hat{k}]\,$ 4. $\frac{\mu_0 I_0}{2R}[-2\hat{j}-\hat{k}]\,$
click to see or hide answer to Q – 22
— the answer is: option 3.
We only need to find the direction of the net magnetic field of the two circular loops, because all the options share the same magnitude of each contribution, $\frac{\mu_0 I_0}{2R}$. For the vertical loop carrying 2I0, curling the fingers of the right hand along the shown current direction points the thumb along the -y axis, giving a contribution $-2\hat{j}$. Similarly, for the horizontal loop the thumb points along the +z axis, giving $\hat{k}$. The total field of the two loops is therefore $\frac{\mu_0 I_0}{2R}[-2\hat{j}+\hat{k}]\,$.
Hence the correct answer is option 3.
Q - 23. An electric dipole of dipole moment $\vec{P}=qb\,\hat{i}$
is placed at the origin in the vicinity of
two charges +q and -q at (L, b) and (L, -b)
respectively, as shown in the figure below.
The electrostatic potential at the point
(L/2, 0) is
CSIR NET 2018 December Physical Sciences: question 23. Photo Credit: mdashf.org
1. $\frac{qb}{\pi \epsilon_0}\big(\frac{1}{L^2}+\frac{2}{L^2+4b^2}\big)$ 2. $\frac{4qbL}{\pi\epsilon_0[L^2+4b^2]^{\frac{3}{2}}}$
3. $\frac{qb}{\pi \epsilon_0 L^2}$ 4. $\frac{3qb}{\pi \epsilon_0 L^2}$
click to see or hide answer to Q – 23
— the answer is: option 3.
Explanation: the two fixed charges are equidistant from the field point ( L/2, 0 ): each lies at a distance $\sqrt{L^2/4+b^2}$, so the +q and -q contributions to the potential cancel exactly. Only the dipole at the origin contributes. The field point lies on the dipole axis, where $V = \frac{1}{4\pi\epsilon_0}\frac{p}{r^2}$ with p = qb and r = L/2, giving $V = \frac{qb}{\pi \epsilon_0 L^2}$.
Hence the correct answer is option 3.
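A numeric check ( with hypothetical values of q, b and L, and 1/4πε0 set to 1 ) that the two fixed charges cancel at ( L/2, 0 ), leaving only the on-axis dipole potential qb/( πε0L2 ):

```python
import math

k = 1.0                   # 1/(4*pi*eps0), set to 1 for the check (assumption)
q, b, L = 1.0, 0.1, 2.0   # hypothetical values

x, y = L / 2, 0.0
# ideal dipole p = q*b along x at the origin: V = k * p * cos(theta) / r^2
r = math.hypot(x, y)
V_dip = k * (q * b) * (x / r) / r**2
# the two fixed charges at (L, b) and (L, -b)
V_plus = k * q / math.hypot(x - L, y - b)
V_minus = -k * q / math.hypot(x - L, y + b)
V_total = V_dip + V_plus + V_minus

assert abs(V_plus + V_minus) < 1e-12   # equidistant charges cancel exactly
# option 3: q*b / (pi*eps0*L^2) = 4*k*q*b / L^2
assert abs(V_total - 4 * k * q * b / L**2) < 1e-12
```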
Q - 24. A monochromatic and linearly polarised
light is used in a Young's double slit
experiment. A linear polarizer, whose pass axis
is at an angle 450 to the polarisation of the
incident wave, is placed in front of one of the
slits. If Imax and Imin, respectively denote,
the maximum and minimum intensities of the
interference pattern on the screen, the visibility,
defined as the ratio (Imax - Imin)/(Imax + Imin), is
1. √2/3 2. 2/3
3. 2√2/3 4. √2/√3
click to see or hide answer to Q – 24
— the answer is: option 2.
Explanation: let the incident polarization define the x axis and take unit amplitude from the open slit. The polarizer at 45° passes amplitude cos 45° = 1/√2 along its axis, i.e., components ( 1/2, 1/2 ) along x and y. The x components of the two slits interfere, varying between $(1+\frac{1}{2})^2 = \frac{9}{4}$ and $(1-\frac{1}{2})^2 = \frac{1}{4}$, while the y component, $(\frac{1}{2})^2 = \frac{1}{4}$, comes from one slit only and adds as a constant background. So Imax = 9/4 + 1/4 = 5/2 and Imin = 1/4 + 1/4 = 1/2, giving visibility ( Imax - Imin ) / ( Imax + Imin ) = 2/3.
Hence the correct answer is option 2.
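The visibility 2/3 can be verified by scanning the interference pattern numerically: the x components of the two slits interfere, while the y component from the polarizer-covered slit adds a constant background ( a sketch with unit incident amplitude ):

```python
import cmath, math

E0 = 1.0
intensities = []
for n in range(1000):
    phi = 2 * math.pi * n / 1000
    Ex = E0 * cmath.exp(1j * phi) + E0 / 2  # x components interfere
    Ey = E0 / 2                             # y component from slit 2 only
    intensities.append(abs(Ex)**2 + abs(Ey)**2)

Imax, Imin = max(intensities), min(intensities)
vis = (Imax - Imin) / (Imax + Imin)
assert abs(vis - 2 / 3) < 1e-3
```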
Q - 25. An electromagnetic wave propagates in
a non-magnetic medium with relative permittivity
ε = 4. The magnetic field for this wave is
$\vec{H}(x,y)= \hat{k}H_0 \cos(\omega t-\alpha x - \alpha \sqrt{3}y)$ where H0 is a constant.
The corresponding electric field $\vec{E}(x,y)$ is
1. $\frac{1}{4}\mu_0 H_0 c(-\sqrt{3}\hat{i}+\hat{j})\cos(\omega t-\alpha x - \alpha \sqrt{3}y)$
2. $\frac{1}{4}\mu_0 H_0 c(\sqrt{3}\hat{i}+\hat{j})\cos(\omega t-\alpha x - \alpha \sqrt{3}y)$
3. $\frac{1}{4}\mu_0 H_0 c(-\sqrt{3}\hat{i}-\hat{j})\cos(\omega t-\alpha x - \alpha \sqrt{3}y)$
4. $\frac{1}{4}\mu_0 H_0 c(-\sqrt{3}\hat{i}-\hat{j})\cos(\omega t-\alpha x - \alpha \sqrt{3}y)$
click to see or hide answer to Q – 25
— the answer is: option 1.
Explanation: the idea is to determine the direction of the $\vec{E}$ field, since the magnitude is the same in all four options. First find the propagation direction: for a cos ( ωt - kx ) dependence the wave travels along +x, so here, with phase ωt - αx - α√3y, the wave travels along $(\hat{i}+\sqrt{3}\hat{j})$. The Poynting vector is $\vec{S}=\vec{E}\times \vec{H}$, with $\vec{H}$ along $\hat{k}$. Option 1 gives $(-\sqrt{3}\hat{i}+\hat{j}) \times \hat{k}=\hat{i}+\sqrt{3}\hat{j}$, which points along the propagation direction, as required.
Hence the correct answer is option 1.
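A quick cross-product check that the E direction of option 1 gives a Poynting vector E × H along the propagation direction ( î + √3ĵ ):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

E_dir = (-math.sqrt(3), 1.0, 0.0)  # E direction from option 1
H_dir = (0.0, 0.0, 1.0)            # H is along +z in the problem
S = cross(E_dir, H_dir)            # Poynting direction, E x H
k_prop = (1.0, math.sqrt(3), 0.0)  # from the phase (alpha x + alpha*sqrt(3) y)

# S must be parallel to k_prop, with positive sense
assert abs(S[0] * k_prop[1] - S[1] * k_prop[0]) < 1e-12 and S[0] > 0
```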
Q - 26. The ground state energy of an anisotropic
harmonic oscillator described by the potential:
V(x,y,z) = (1/2) mω2x2 + 2mω2y2 + 8mω2z2
(in units of $\hbar\omega$)
1. 5/2 2. 7/2
3. 3/2 4. 1/2
click to see or hide answer to Q – 26
— the answer is: option 2.
Explanation: the potential separates into three independent oscillators. Matching each term to $\frac{1}{2}m\omega_i^2 x_i^2$ gives $\omega_x = \omega$, $\omega_y = 2\omega$ ( from $\frac{1}{2}m\omega_y^2 = 2m\omega^2$ ) and $\omega_z = 4\omega$ ( from $\frac{1}{2}m\omega_z^2 = 8m\omega^2$ ). The ground state energy is $\frac{1}{2}\hbar(\omega_x+\omega_y+\omega_z) = \frac{7}{2}\hbar\omega$.
Hence the correct answer is option 2.
Q - 27. The product ΔxΔp of uncertainties in the
position and momentum of a simple harmonic oscillator
of mass m and angular frequency ω in the ground
state $|0\rangle$, is $\frac{\hbar}{2}$. The value of the product ΔxΔp in the
state $e^{-i\hat{p}l/\hbar}|0\rangle$
( where l is a constant and $\hat{p}$ is the momentum
operator ) is
1. $\frac{\hbar}{2}\sqrt{\frac{m\omega l^2}{\hbar}}$ 2. $\hbar$
3. $\frac{\hbar}{2}$ 4. $\frac{\hbar ^2}{m\omega l^2}$
click to see or hide answer to Q – 27
— the answer is: option 3.
Explanation: $e^{-i\hat{p}l/\hbar}$ is the translation operator: it rigidly shifts the state by l in position without changing its shape. A rigid translation changes $\langle x \rangle$ but leaves both Δx and Δp unchanged, so the translated ground state is still a minimum-uncertainty Gaussian with ΔxΔp = $\frac{\hbar}{2}$.
Hence the correct answer is option 3.
Q - 28. Let the wavefunction of the electron in
a hydrogen atom be $\psi(\vec{r}) = \frac{1}{\sqrt{6}}\phi_{200}(\vec{r})+\sqrt{\frac{2}{3}}\phi_{21-1}(\vec{r})-\frac{1}{\sqrt{6}}\phi_{100}(\vec{r})$,
where $\phi_{nlm}(\vec{r})$ are the eigenstates of the
Hamiltonian in the standard notation. The
expectation value of the energy in this state is
1. -10.8 eV 2. -6.2 eV
3. -9.5 eV 4. -5.1 eV
click to see or hide answer to Q – 28
— the answer is: option 4.
Explanation: the given state is already normalized ( the squared amplitudes add up to 1 ), so the expectation value is $\sum_i |c_i|^2 \times (-13.6\,\textrm{eV}/n_i^2)$. The pairs ( |ci|2, ni2 ) are ( 1/6, 4 ), ( 2/3, 4 ) and ( 1/6, 1 ), since φ200 and φ21-1 both have n = 2 while φ100 has n = 1. This gives -13.6 × ( 1/24 + 4/24 + 4/24 ) = -13.6 × 3/8 = -5.1 eV, the answer given in option 4.
Hence the correct answer is option 4.
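A two-line numeric check of the weighted average:

```python
# (|c|^2, n) for phi_200, phi_21-1 and phi_100 in the given superposition
terms = [(1/6, 2), (2/3, 2), (1/6, 1)]
assert abs(sum(c2 for c2, _ in terms) - 1.0) < 1e-12  # already normalized
E = sum(c2 * (-13.6 / n**2) for c2, n in terms)       # expectation value in eV
assert abs(E - (-5.1)) < 1e-9
```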
Q - 29. Three identical spin 1/2 particles of mass m
are confined to a one-dimensional box of length L, but
are otherwise free. Assuming that they are
non-interacting, the energy of the lowest two energy
eigenstates, in units of $\frac{\pi^2\hbar^2}{2mL^2}$, are
1. 3 and 6 2. 6 and 9
3. 6 and 11 4. 3 and 9
click to see or hide answer to Q – 29
— the answer is: option 2.
Explanation: the single-particle eigenstates of the 1-dimensional box are $\psi = \sqrt{\frac{2}{L}}\sin \frac{n\pi x}{L}$ with energies $n^2 \times \big(\frac{\pi^2\hbar^2}{2mL^2}\big)$. Because the particles are identical spin-1/2 fermions, the Pauli principle allows at most two of them ( spin up and spin down ) in each level. The lowest configuration is therefore ( 1, 1, 2 ), with energy $1^2+1^2+2^2 = 6$, and the next lowest is ( 1, 2, 2 ), with energy $1^2+2^2+2^2 = 9$ ( the alternative ( 1, 1, 3 ) gives 11, which is higher ), in units of $\frac{\pi^2\hbar^2}{2mL^2}$.
Hence the correct answer is option 2.
Q - 30. The heat capacity CV at constant volume of
a metal, as a function of temperature, is αT+βT3,
where α and β are constants. The temperature
dependence of the entropy at constant volume is
1. αT+(1/3)βT3 2. αT+βT3
3. (1/2)αT+(1/3)βT3 4. (1/2)αT+(1/4)βT3
click to see or hide answer to Q – 30
— the answer is: option 1.
Explanation: The specific heat at constant volume is given by the expression: $C_V = T\big(\frac{\partial S}{\partial T}\big)_{N,\,V}$. Integrating yields the temperature dependence of the entropy: $S = \big[\int \frac{C_V}{T}\, dT\big]_{N,V} = \int \alpha\, dT + \int \beta T^2\, dT = \alpha T + \frac{1}{3}\beta T^3$.
Hence the correct answer is option 1.
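A numeric sanity check ( with hypothetical α, β and T ) that integrating CV/T reproduces αT + βT3/3:

```python
def entropy(T, alpha, beta, n=100_000):
    """Midpoint-rule integral of C_V/T = alpha + beta*T^2 from 0 to T."""
    h = T / n
    return sum((alpha + beta * (h * (i + 0.5))**2) * h for i in range(n))

alpha, beta, T = 2.0, 0.5, 3.0  # hypothetical constants
S_num = entropy(T, alpha, beta)
S_formula = alpha * T + beta * T**3 / 3
assert abs(S_num - S_formula) < 1e-6
```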
Q - 31. The rotational energy levels of a molecule
are $E_l = \frac{\hbar^2}{2I_0}l(l+1)$ where l = 0, 1, 2, ... and I0 is its
moment of inertia. The contribution of the rotational
motion to the Helmholtz free energy per molecule, at
low temperatures in a dilute gas of these molecules,
is approximately
1. $-k_B T\big(1+\frac{\hbar^2}{I_0 k_B T}\big)$
2. $-k_B T e^{-\frac{\hbar^2}{I_0 k_B T}}$
3. -kBT
4. $-3k_B T e^{-\frac{\hbar^2}{I_0 k_B T}}$
click to see or hide answer to Q – 31
— the answer is: option 4.
Explanation: at low temperatures only the lowest levels matter. The l = 0 level has energy 0, and the l = 1 level has energy $\frac{\hbar^2}{I_0}$ with degeneracy 2l + 1 = 3. The rotational partition function is then $z \approx 1 + 3e^{-\frac{\hbar^2}{I_0 k_B T}}$, so the free energy per molecule is $F = -k_B T \ln z \approx -3k_B T e^{-\frac{\hbar^2}{I_0 k_B T}}$, using ln ( 1 + x ) ≈ x for small x.
Hence the correct answer is option 4.
Q - 32. The vibrational motions of a diatomic
molecule may be considered to be that of a
simple harmonic oscillator with angular
frequency ω. If a gas of these molecules is
at a temperature T, what is the probability
that a randomly picked molecule will be found
in its lowest vibrational state?
1. $1-e^{-\frac{\hbar \omega}{k_B T}}$
2. $e^{-\frac{\hbar \omega}{2k_B T}}$
3. $\tanh\big(\frac{\hbar \omega}{k_B T}\big)$
4. $\frac{1}{2}cosech\,\big(\frac{\hbar \omega}{2k_B T}\big)$
click to see or hide answer to Q – 32
— the answer is: option 1.
Explanation: in thermal equilibrium with a reservoir at temperature T, the probability of finding the system in the state with energy Er is given by the canonical distribution Pr = e-βEr / Z, with β = ( kBT )-1 and Z = ∑r e-βEr the partition function. The normalization 1/Z can not be dropped. For the vibrational levels $E_n = \big(n+\frac{1}{2}\big)\hbar \omega$ the partition function is a geometric series: $Z = \frac{e^{-\hbar\omega/2k_BT}}{1-e^{-\hbar\omega/k_BT}}$. The ground state ( n = 0 ) therefore has probability $P_0 = \frac{e^{-\hbar\omega/2k_BT}}{Z} = 1-e^{-\frac{\hbar \omega}{k_B T}}$.
Hence the correct answer is option 1.
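The canonical ground-state probability, with the partition function kept, can be checked by direct summation ( a sketch with a hypothetical value of ℏω/kBT ):

```python
import math

x = 1.3  # hbar*omega / (kB*T), hypothetical value
# Boltzmann weights for the levels E_n = (n + 1/2) * hbar * omega
weights = [math.exp(-(n + 0.5) * x) for n in range(200)]
Z = sum(weights)          # truncated partition function (remainder negligible)
P0 = weights[0] / Z       # ground-state probability

assert abs(P0 - (1 - math.exp(-x))) < 1e-12
```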
Q - 33. Consider an ideal Fermi gas in a grand
canonical ensemble at a constant chemical
potential. The variance of the occupation number
of the single particle energy level with mean
occupation number $\bar{n}$ is
1. $\bar{n}(1-\bar{n})$
2. $\sqrt{\bar{n}}$
3. $\bar{n}$
4. $\frac{1}{\sqrt{\bar{n}}}$
click to see or hide answer to Q – 33
— the answer is: option 1.
Explanation: for a fermion, the occupation number n of a single-particle level can only be 0 or 1, so n is a Bernoulli variable with mean $\bar{n}$. Then $n^2 = n$, and the variance is $\langle n^2 \rangle - \langle n \rangle^2 = \bar{n} - \bar{n}^2 = \bar{n}(1-\bar{n})$. ( This is the binomial formula Δ = Npq with N = 1, p = $\bar{n}$ and q = 1 - p. ) None of the other options follow from this.
Hence the correct answer is option 1.
Q - 34. Consider the following circuit, consisting of
an RS flip-flop and two AND gates.
CSIR NET 2018 December Physical Sciences: question 34. Photo Credit: mdashf.org
Which of the following connections will allow the
entire circuit to act as a JK flip-flop?
1. Connect Q to pin 1 and $\bar{Q}$ to pin 2
2. Connect Q to pin 2 and $\bar{Q}$ to pin 1
3. Connect Q to K input and $\bar{Q}$ to J input
4. Connect Q to J input and $\bar{Q}$ to K input
click to see or hide answer to Q – 34
— the answer is: option 2.
Explanation: let’s follow the figure. Also I have given the truth table for a JK flip-flop, for the enthused. ( One can verify the flip flop works correctly with the truth table. )
CSIR NET 2018 December Physical Sciences: converting a RS flip-flop into a JK flip-flop. Photo Credit: mdashf.org
| C ( clk ) | J | K | Qn+1 | Action |
|---|---|---|---|---|
| ↑ | 0 | 0 | Qn ( last ) | No change |
| ↑ | 0 | 1 | 0 | RESET |
| ↑ | 1 | 0 | 1 | SET |
| ↑ | 1 | 1 | $\bar{Q}_n$ ( toggle ) | Toggle |
Hence the correct answer is option 2.
Q - 35. The truth table below gives the value Y(A,B,C)
where A, B and C are binary variables.
| A | B | C | Y |
|---|---|---|---|
| 0 | 0 | 0 | 1 |
| 0 | 0 | 1 | 0 |
| 0 | 1 | 0 | 0 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 |
The output Y can be represented by
1. $\bar{A}\bar{B}C+\bar{A}B\bar{C}+A\bar{B}C+AB\bar{C}$
2. $\bar{A}\bar{B}\bar{C}+\bar{A}BC+A\bar{B}\bar{C}+ABC$
3. $\bar{A}\bar{B}C+\bar{A}BC+A\bar{B}C+ABC$
4. $\bar{A}\bar{B}\bar{C}+\bar{A}B\bar{C}+A\bar{B}\bar{C}+AB\bar{C}$
click to see or hide answer to Q – 35
— the answer is: option 2.
Explanation: only option 2 satisfies every row of the table. For the first row ( A = 0, B = 0, C = 0 ) the complements are all 1, so option 2 gives Y = 1 + 0 + 0 + 0 = 1, matching the table. Applying the same row to option 1 gives Y = 0, while the table says 1, so option 1 is out; options 3 and 4 fail similar checks. ( In fact Y = 1 exactly when B = C, independent of A. )
Hence the correct answer is option 2.
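Option 2 can be checked against all eight rows programmatically ( the table used here takes A = 1 in the lower four rows, consistent with the correct expression ):

```python
def Y_option2(A, B, C):
    """Option 2: A'B'C' + A'BC + AB'C' + ABC."""
    return (not A and not B and not C) or (not A and B and C) \
        or (A and not B and not C) or (A and B and C)

# truth table: Y = 1 exactly when B == C, independent of A
table = {(0, 0, 0): 1, (0, 0, 1): 0, (0, 1, 0): 0, (0, 1, 1): 1,
         (1, 0, 0): 1, (1, 0, 1): 0, (1, 1, 0): 0, (1, 1, 1): 1}
assert all(int(Y_option2(*row)) == y for row, y in table.items())
```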
Q - 36. A sinusoidal signal is an input to the
following circuit.
CSIR NET 2018 December Physical Sciences: Question 36 Photo Credit: mdashf.org
Which of the following graphs best describes the
output waveform?
CSIR NET 2018 December Physical Sciences: Question 36 options Photo Credit: mdashf.org
click to see or hide answer to Q – 36
— the answer is: option .
Explanation:
Hence the correct answer is option .
Q - 37. A sinusoidal voltage having a peak value of Vp
is an input to the following circuit, in which the DC
voltage is Vb.
CSIR NET 2018 December Physical Sciences: Question 37 Photo Credit: mdashf.org
assuming an ideal diode, which of the following best
describes the output waveform?
CSIR NET 2018 December Physical Sciences: Question 37 options Photo Credit: mdashf.org
click to see or hide answer to Q – 37
— the answer is: option .
Explanation:
Hence the correct answer is option .
Q - 38. One of the eigenvalues of the matrix eA is ea,
where $A = \begin{pmatrix}a & 0 & 0 \\ 0 & 0 & a \\ 0 & a & 0 \end{pmatrix}$. The product of the other two
eigenvalues of eA is
1. e2a 2. e-a
3. e-2a 4. 1
click to see or hide answer to Q – 38
— the answer is: option 4.
Explanation: we need two identities of the matrix exponential. One: det ( eA ) = eTr ( A ). Two: the product of the eigenvalues of eA equals det ( eA ). Since Tr ( A ) = a, the product of all three eigenvalues of eA is ea. Given that one eigenvalue is ea, the product of the other two is ea / ea = 1. ( Directly: the eigenvalues of A are a, a and -a, so eA has eigenvalues ea, ea and e-a, and ea × e-a = 1. )
Hence the correct answer is option 4.
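Both identities can be verified numerically by building eA from its power series ( a sketch with a hypothetical value a = 0.7, in pure Python so no linear-algebra library is assumed ):

```python
import math

a = 0.7  # hypothetical value of the parameter a
A = [[a, 0.0, 0.0],
     [0.0, 0.0, a],
     [0.0, a, 0.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# e^A from the truncated power series sum_n A^n / n!
expA = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
term = [row[:] for row in expA]
for n in range(1, 30):
    term = [[x / n for x in row] for row in matmul(term, A)]  # A^n / n!
    expA = [[expA[i][j] + term[i][j] for j in range(3)] for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# det(e^A) = e^{Tr A} = e^a; one eigenvalue is e^a, so the other two multiply to 1
assert abs(det3(expA) - math.exp(a)) < 1e-9
```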
Q - 39. The polynomial f(x) = 1 + 5x + 3x2 is written
as a linear combination of the Legendre polynomials
(P0(x) = 1, P1(x) = x, P2(x) = (1/2)(3x2 - 1))
as f(x) = Σn cnPn(x). The value of c0 is
1. 1/4 2. 1/2
3. 2 4. 0
click to see or hide answer to Q – 39
— the answer is: option 3.
Explanation: Let’s write f(x) = c0P0(x) + c1P1(x) + c2P2(x) = 1 + 5x + 3x2. That is, $c_0 + c_1 x + \frac{c_2}{2}(3x^2-1) = 1 + 5x + 3x^2$, so matching coefficients: c0 - c2/2 = 1, c1 = 5 and ( 3/2 ) c2 = 3. Solving gives c2 = 2 and hence c0 = 1 + c2/2 = 2.
Hence the correct answer is option 3.
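The coefficients can be cross-checked with NumPy's Legendre utilities, which convert ordinary polynomial coefficients to a Legendre series:

```python
import numpy as np

# f(x) = 1 + 5x + 3x^2, coefficients ordered low -> high degree
c = np.polynomial.legendre.poly2leg([1, 5, 3])
# expected expansion: f = 2*P0 + 5*P1 + 2*P2, so c0 = 2
assert np.allclose(c, [2, 5, 2])
```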
Q - 40. The value of the integral $\oint_C \frac{dz}{z}\frac{\tanh 2z}{\sin \pi z}$ where C is
a circle of radius π/2, traversed counter-clockwise,
with centre at z = 0, is
1. 4 2. 4i
3. 2i 4. 0
click to see or hide answer to Q – 40
— the answer is tentative: option 2.
Explanation: according to the residue theorem: $\oint_C f(z)\,dz=2\pi i \times \sum (residues\,of\,f(z)\,inside\,C)$. Here f(z) = tanh ( 2z ) / ( z sin πz ) has a simple pole at z = 0 with residue R(0) = lim z→0 [ zf(z) ] = lim z→0 [ tanh 2z / sin πz ] = 2/π by L’Hospital’s rule ( numerator and denominator both vanish linearly, so the limit is 2/π, not 0 ). This pole contributes 2πi × 2/π = 4i. Strictly, the circle |z| = π/2 ≈ 1.57 also encloses the poles of 1/sin πz at z = ±1 and of tanh 2z at z = ±iπ/4; summing all of them matches none of the given options, so the question presumably intends only the pole at the centre to count.
Hence the intended answer appears to be option 2.
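The residue at z = 0 can be checked numerically as the limit of z f(z) = tanh 2z / sin πz, which tends to 2/π rather than 0:

```python
import cmath, math

# z * f(z) for f(z) = tanh(2z) / (z * sin(pi*z))
f_times_z = lambda z: cmath.tanh(2 * z) / cmath.sin(math.pi * z)

# the limit as z -> 0 is the residue of f at the simple pole z = 0
for eps in (1e-3, 1e-5):
    assert abs(f_times_z(eps) - 2 / math.pi) < 10 * eps
```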
Q - 41. A particle of mass m, moving along the
x-direction, experiences a damping force -γv2,
where γ is a constant and v is its instantaneous
speed. If the speed at t = 0 is v0, the speed
at time t is
1. v0e-(γv0t)/m 2. v0/{1+ln[1+(γv0t)/m]}
3. mv0/(m+γv0t) 4. 2v0/[1+ e(γv0t)/m]
click to see or hide answer to Q – 41
— the answer is: option 3.
Explanation: from the given information, m ( dv/dt ) = -γv2, so -m dv / v2 = γ dt. Integrating with the initial condition v = v0 at t = 0 gives m ( 1/v - 1/v0 ) = γt, i.e., v = mv0 / ( m + γv0t ), which is option 3.
Hence the correct answer is option 3.
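A numeric integration of m dv/dt = -γv2 ( with hypothetical m, γ and v0 ) against the closed form mv0 / ( m + γv0t ):

```python
m, gamma, v0 = 2.0, 0.3, 5.0  # hypothetical values

def rk4_speed(t_end, steps=10_000):
    """RK4 integration of dv/dt = -(gamma/m) v^2 from v(0) = v0."""
    dvdt = lambda v: -gamma * v**2 / m
    v, h = v0, t_end / steps
    for _ in range(steps):
        k1 = dvdt(v)
        k2 = dvdt(v + h * k1 / 2)
        k3 = dvdt(v + h * k2 / 2)
        k4 = dvdt(v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return v

t = 4.0
v_exact = m * v0 / (m + gamma * v0 * t)  # option 3
assert abs(rk4_speed(t) - v_exact) < 1e-8
```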
Q - 42. The integral I = ∫c ezdz is evaluated
from the point (-1,0) to (1,0) along the
contour C, which is an arc of the parabola
y = x2 - 1, as shown in the figure.
CSIR NET 2018 December Physical Sciences: question 42. Photo Credit: mdashf.org
The value of I is
1. 0 2. 2 sinh 1
3. e2i sinh 1 4. e + e-1
click to see or hide answer to Q – 42
— the answer is: option 2.
Explanation: ez is an entire ( everywhere analytic ) function, so the integral is path-independent and depends only on the endpoints: I = ∫ ez dz = [ ez ] from z = -1 to z = 1 = e - e-1 = 2 sinh 1. The parabolic shape of the contour is a red herring.
Hence the correct answer is option 2.
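Because ez is entire, a brute-force line integral along the parabola should land on 2 sinh 1; a midpoint-rule sketch:

```python
import cmath, math

N = 20_000
total = 0j
for i in range(N):
    x0 = -1 + 2 * i / N
    x1 = -1 + 2 * (i + 1) / N
    z0 = complex(x0, x0**2 - 1)       # points on the contour y = x^2 - 1
    z1 = complex(x1, x1**2 - 1)
    total += cmath.exp((z0 + z1) / 2) * (z1 - z0)  # midpoint rule for e^z dz

assert abs(total - 2 * math.sinh(1)) < 1e-6
```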
Q - 43. In terms of arbitrary constants A and B,
the general solution to the differential equation
x2(d2y/dx2) + 5x(dy/dx) + 3y = 0 is
1. y = A/x + Bx3 2. y = Ax + B/x3
3. y = Ax + Bx3 4. y = A/x + B/x3
click to see or hide answer to Q – 43
— the answer is: option 4.
Explanation: this is a Cauchy-Euler equation, so try y = xr: substituting gives r ( r - 1 ) + 5r + 3 = r2 + 4r + 3 = ( r + 1 )( r + 3 ) = 0, so r = -1 and r = -3. The general solution is the linear combination y = A/x + B/x3. ( It is easily checked directly that 1/x and 1/x3 each satisfy the equation. )
Hence the correct answer is option 4.
Q - 44. In the attractive Kepler problem described
by the central potential V(r) = -k/r ( where k is a
positive constant ), a particle of mass m with a
non-zero angular momentum can never reach the center
due to the centrifugal barrier. If we modify the
potential to V(r) = -k/r - β/r3 one finds that there
is a critical value of the angular momentum lc
below which there is no centrifugal barrier. This
value of lc is
1. [12km2β]1/2 2. [12km2β]-1/2
3. [12km2β]1/4 4. [12km2β]-1/4
click to see or hide answer to Q – 44
— the answer is: option 3.
Explanation: the effective potential is $V_{eff}(r) = \frac{l^2}{2mr^2} - \frac{k}{r} - \frac{\beta}{r^3}$. A centrifugal barrier exists only if Veff has a stationary point: setting dVeff/dr = 0 and multiplying by r4 gives $kr^2 - \frac{l^2}{m}r + 3\beta = 0$. Real roots require a non-negative discriminant, $\frac{l^4}{m^2} \ge 12k\beta$, i.e., l ≥ [12km2β]1/4. Below lc = [12km2β]1/4 there are no stationary points and hence no barrier. ( Dimensional check: k has units of energy × length and β of energy × length3, so [km2β]1/4 indeed has units of angular momentum. )
Hence the correct answer is option 3.
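The threshold lc = ( 12km2β )1/4 can be checked by scanning the effective potential for a local maximum ( a sketch with hypothetical m = k = β = 1 ):

```python
m, k, beta = 1.0, 1.0, 1.0  # hypothetical values
l_c = (12 * k * m**2 * beta) ** 0.25

def has_barrier(l):
    """True if V_eff = l^2/(2 m r^2) - k/r - beta/r^3 has a local maximum."""
    rs = [0.01 * j for j in range(1, 2000)]
    V = [l**2 / (2 * m * r**2) - k / r - beta / r**3 for r in rs]
    return any(V[i] > V[i - 1] and V[i] > V[i + 1] for i in range(1, len(V) - 1))

assert not has_barrier(0.9 * l_c)  # below l_c: no centrifugal barrier
assert has_barrier(1.5 * l_c)      # above l_c: a barrier appears
```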
Q - 45. The time period of a particle of mass m,
undergoing small oscillations around x = 0, in the
potential V = V0 cosh (x/L), is
1. $\pi \sqrt{\frac{mL^2}{V_0}}$ 2. $2\pi \sqrt{\frac{mL^2}{2V_0}}$
3. $2\pi \sqrt{\frac{mL^2}{V_0}}$ 4. $2\pi \sqrt{\frac{2mL^2}{V_0}}$
click to see or hide answer to Q – 45
— the answer is: option 3.
Explanation: F = -dV/dx = -( V0/L ) sinh ( x/L ). For small oscillations sinh ( x/L ) ≈ x/L, so F = -( V0/L2 ) x = -kx with k = V0/L2. The time period is then $2\pi \sqrt{m/k}=2\pi \sqrt{\frac{mL^2}{V_0}}$.
Hence the correct answer is option 3.
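The period formula can be verified by integrating the full equation of motion m x'' = -( V0/L ) sinh ( x/L ) for a small initial displacement ( hypothetical m, V0 and L ):

```python
import math

m, V0, L = 1.0, 2.0, 1.5  # hypothetical values
T_formula = 2 * math.pi * math.sqrt(m * L**2 / V0)

def simulated_period(x0=1e-3, h=1e-4):
    """Velocity-Verlet integration; returns the time of one full oscillation."""
    acc = lambda x: -(V0 / (m * L)) * math.sinh(x / L)
    x, v, t, a = x0, 0.0, 0.0, acc(x0)
    flips, prev_sign = 0, -1  # v goes negative just after release from rest
    while flips < 2:          # v changes sign at T/2 and again at T
        x += v * h + 0.5 * a * h * h
        a_new = acc(x)
        v += 0.5 * (a + a_new) * h
        a = a_new
        t += h
        sign = 1 if v > 0 else -1
        if sign != prev_sign:
            flips += 1
            prev_sign = sign
    return t

assert abs(simulated_period() - T_formula) < 1e-2
```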
Categories: CSIR NET (Physics)
## Notes to God and Other Ultimates
1. These names are found in Hinduism, Daoism, Buddhism, monotheisms such as Judaism, Christianity, Islam and Sikhism, and Plotinus’ and Charles Peirce’s thought, respectively. For Peirce, see Kasser 2013.
2. See, e.g., Huxley 1945, Hartshorne and Reese (eds) 1953, Holm and Bowker (eds) 1994, Ward 1998, Neville and Wildman 2001, Diller and Kasher (eds) 2013, and Wildman 2017.
3. According to Hedges, particularists take each tradition to be a “network of terms and practices that make sense only in relation to itself” (2014: 205–206). If so, no concepts can be shared between them; a fortiori, no concept(s) of ultimacy either.
4. Sister Gayatriprana (2020) in particular identifies multiple chains of influence between models of God and Brahman running from East to West and West to East. One interesting case in point is the rise of more personalist views of Brahman in the Bhakti schools of Hinduism involving “a degree of deep interconnection between the human and the divine” occurring during the period of Muslim rule in India from the twelfth to the sixteenth centuries (2020: xxii; more coming on the Bhakti schools in Section 2 of this entry).
5. Wildman 2001: 269; 2020: 119, 127. Berthrong backs Neville’s and Wildman’s idea, saying that
anything less [than vague categories and metaphors]…would be trivial in light of the mass of data demanding to be addressed. There is an old and ironic Chinese curse that states, may you live in interesting times. For the comparative philosopher or theologian this can be transposed into: may you live in cultural and conceptually rich times. (2001: 255–256)
6. To demonstrate how Schellenberg’s account surfaces on the face of the vague categorial terms from the paragraph above: Tillich’s “object of ultimate concern” is soteriologically ultimate; John Hick’s “the Real” is metaphysically ultimate; and Neville and Wildman’s “that which is most important to religious life because of the nature of reality” implies both soteriological and metaphysical ultimacy.
7. Clooney 2001 follows Schellenberg’s conjuncts in his discussion of Indian religious ultimates:
Ultimate reality might be described as follows: that which cannot be surpassed [axiological ultimacy]; that from which all realities, persons, and things come, that on which they depend, and that into which they return upon dissolution [metaphysical ultimacy]; that by knowledge of which one knows everything else and reaches liberation [soteriological ultimacy]. (2001: 95, bracketed additions mine)
Similarly, Mary-Jane Rubenstein says God is “the source of all things, the life of all things, the end of all things”, a statement that agrees with Schellenberg if the “life” of a thing is its value (2019: 16:29).
8. E.g., one route is to take metaphysical ultimacy to entail axiological ultimacy which, with the existence of the cosmos, entails soteriological ultimacy for its creatures. For a creative reconstruction of, e.g., Aquinas’ argument along these lines, see Zagzebski 2007: 81–84.
9. See Section 2.2 for an explanation. The inheritance is visible in the fact that each of the three categories of being is maximized and in their conjunction. (In fact, axiological ultimacy by itself may capture Anselm’s formula). Elliott also reads Schellenberg as “taking a cue from Anselm (and frankly, the entire history of Perfect Being Theology)” (2017: 104).
10. “Chroniclers assumed that the ‘Great Spirit’, ‘Master of Life’, or ‘Grandfather God’ were somehow real Native American terms, instead of Euro-Christian interpolations. In fact, Natives have no high-God concept, let alone God concepts that mimic male-dominated hierarchies. Instead, we have Councils of Elder Spirits operating in replication of our participatory democracies….each [of whom] simply knows more about itself and its immediate purview than does anyone else….[and who] act in concert” and by accident and consensus to create, e.g., life on earth (Mann 2010: 33–34).
11. Schellenberg draws this distinction:
we have something distinctively religious here [in the triple formula], something that can clearly be distinguished from what a materialist might say, who nevertheless thinks there exists something metaphysically ultimate. (2009: 31)
Richard Dawkins says something similar when he calls the view that “God is nature….sexed-up atheism” and concludes that “Deliberately to confuse [it with theism and deism] is…an act of intellectual high treason” (2006: 18–19).
12. Specifically, using “M”, “A” and “S” for metaphysical, axiological and soteriological ultimacy respectively, the disjunction entails these disjuncts: M, A, S, $$(\textrm{M} \lor \textrm{A}),$$ $$(\textrm{M} \lor \textrm{S}),$$ $$(\textrm{A} \lor \textrm{S}),$$ or $$(\textrm{M} \lor \textrm{A} \lor \textrm{S}).$$ So this entry uses “metaphysically ultimate” for the disjunct M and “ultimate” and variants for two or more of these disjuncts. Thanks go to John Bishop for what is right in this paragraph.
13. In addition, to neologize another adjective: on a prototype theory of concepts and taking the full conjunction $$(\textrm{M} \lor \textrm{A} \lor \textrm{S})$$ as the prototype of ultimacy, perhaps there are degrees of ultimacy, ranging from triple ultimacy as paradigmatically ultimate to single ultimacy as “ultimate-ish”.
14. The model’s finding: in aggregate, it was in fact better overall for agents to cooperate instead of defect. The example is cited in Emily C. Parke and Anya Plutynski 2020: 65. Regarding definitions of “model”: Parke and Plutynski define “models” in a scientific context as “idealized interpreted structures representing target systems” and takes their structure to be “relevant to a scientist’s research agenda and aims” (2020: 61, 63). Ian Barbour similarly defines “model” “broadly speaking” as “a symbolic representation of selected aspects of…a complex system for particular purposes” (1974: 6).
15. On different aims: Barbour suggests that both scientific and religious models have cognitive functions of explaining and interpreting observations of the natural world or human experiences, respectively, but that religious models also have non-cognitive functions that “have no parallel in science”, such as the expression of attitudes and commitment to a form of morality and life (1974: 7, 27–8, 68–9). On similarities (1) and (2): Barbour suggests that models represent “aspects of the world which are not directly accessible to us” and “that only certain aspects of the world are brought into prominence by a model, while other aspects are neglected”, respectively (1974: 7, 47); Parke and Plutynski echo both points (2020: 55, 60).
16. Descartes’ Reply to the Third Set of Objections to the Meditations (by Hobbes), Fifth Objection, at AT VII 181 in Cottingham, Stoothoff and Murdoch 1988, vol. II, p. 127.
17. Swinburne 1993, which attempts to show that it is logically possible for God to be omnipresent, incorporeal, personal, free, creator of the universe, omnipotent, omniscient, perfectly good (impeccable), a source of moral obligation, eternal, immutable, and necessary, by a careful analysis of each. Regarding “concepts” vs. “conceptions”, see Bishop 2009: 422. Though his account there discusses “conceptions” as metaphysical fillers of role concepts, there are metaphysical fillers for higher-level concepts more broadly. For example, the concept of God as “that which is perfect” is not a role concept since its attributes are intrinsic instead of extrinsic, but it still has fillers in the different conceptions of what it takes to be perfect.
18. About the hesitancy: Neville’s indexical signs and other mere reference-fixers for ultimacy may be limiting cases of models since the representation is instrumental or perhaps “alienans”, as Vallicella explains it: the representation “shifts, alters, alienates, the sense of the noun that it modifies”, as “decoy” does in “decoy duck” since it implies the referent of “duck” is not a duck (Vallicella 2006 [2019: sec. 4.2]). See the ineffability objection in the next section for more forms of speech about ultimacy which also may be alienans. Regarding purposes modelers of ultimacy may have, see footnote 15.
19. Perry Schmidt-Leukel makes two interesting claims about Nagarjuna’s related distinction between conventional reality and ultimate reality:
1. (1) that this distinction is made only “from the perspective of conventional reality…where conceptual distinctions apply – in order to point towards ultimate reality where no conceptual distinctions whatsoever apply;” and
2. (2) “that ultimate truth/reality cannot be taught without recourse to the conventional” (2019: 477).
In other words, one has to slog through the kind of conceptual distinctions at play in this entry to understand what is ultimate in a way that will allow one to drop these distinctions.
20. Nota bene, Schmidt-Leukel writes:
At times proponents of this model oscillate between a conception that understands all three factors as distinct ultimates in their own right, and a conception that sees them as three dimensions of a single complex ultimate reality. (2019: 482)
21. They are also framed mereologically, notwithstanding Mullins 2016: 139ff.
22. The pantheistic-sounding passages include:
God is the only substance that exists or can be conceived…. [and]….Whatever exists is in God, and nothing can exist or be conceived without God. (1677, Ethics Part I.14, 15)
See Rubenstein 2018 for a variety of pantheisms from Bruno to animisms.
23. One might read these distinctions existentially by replacing the world with oneself, to understand at a fundamental level who one is and how one fits into the wider reality. So, e.g., monism becomes “what is ultimate and I are the same stuff;” dualism that “what is ultimate is different from me;” panentheism that “I am in (or part of) what is ultimate;” merotheism that “what is ultimate is in (or part of) me”. Thanks to David Perry for this thought.
24. Many Hindus refer to the tradition as “Sanātana Dharma” (the eternal dharma, or order, or way of life). This is an older, and an indigenous, term, but it, too, did not become widespread as a designator for the whole collection of these traditions until relatively recently.
25. The Vedanta school takes the Upanishads to be the end of the Vedas both literally as an appendix to the Vedas and figuratively as their whole point. The chronological order of the Vedas is the original hymns (e.g., the Rig Veda), the Brahmanas, the Aranyakas (a.k.a. the forest treatises), and then the Upanishads, though the oldest Upanishad is an Aranyaka.
26. Nicholson 2010. From Jeffery Long:
The six schools are “Sāṃkhya and Yoga [the relationship of these two could be seen as one of theory (Sāṃkhya) to practice (Yoga)]; Nyāya and Vaiśeṣika [the traditional Hindu systems of logic and cosmology, respectively, which eventually merged]; and Mīmāṃsā and Vedānta [systems of Vedic exegesis, with Mīmāṃsā being focused on the ritualistic, early Vedic texts and Vedānta being focused on ‘the end of the Veda’—the Upaniṣads and the philosophy taught therein]”. (Correspondence, 12 March 2020)
Interestingly, the Samkhya school has a view of God reminiscent of Aristotle’s Unmoved Mover or deism: God is a purusha (a soul) that unlike most souls never got bound to the cycle of rebirth and thus is ever-free and not engaged with the world.
27. The one, but important, exception is the Dvaita school, which identifies Brahman with God, i.e., Ishvara. For “theocosm”, see Long 2007: 81. Long was unaware of this at the time, but “theocosm” was used previously, e.g., in Reconciliation by Incarnation by David Worthington Simon (1898: 201) and in The God of Science (1928) by Arvid Reuterdahl where he developed a “Theocosmic Diagram.”
28. In process thought, there is no name for the theocosm but, e.g., in his gloss on process thought Wesley Wildman calls the theocosm “ultimate reality” and the parts “God” and “the world”:
Ultimate reality is the eternal symbiotic relationship between this natural God and the rest of natural reality, in which the two mutually influence and constitute each other. (2017: 22)
29. These associations make Vedantic philosophies like what Pierre Hadot takes Western ancient philosophies to be: they are a bios or way of life (1981 [1995]). For more on the relationship between text and experience in Vedanta, see Long 2020, section 5.4.
30. Interestingly, pace those who read him as a pantheist, Shankara supports the asymmetric claim of panentheism counterfactually: if the unique qualities of a material effect were destroyed, the essential qualities of its material cause would still remain, e.g., if I shatter the pot I made from the clay, I still have clay; if I smelt down a gold necklace, I still have gold. See Shankara’s commentary on the Brahmasutra 2.1.9, Swami Vireswarananda 1936: 166-167, and Rambachan 2006: 75.
31. Ramanuja’s synthesis grew out of his spiritual path, first as a serious student of Advaita, then as a convert to Vaishnavism. It is testimony to Shankara’s importance that it took 300 years for someone to dissent, and that Ramanuja had to survive a murder plot to make his conversion and critique (Tapasyananda 1990: 1–14, 32–33).
32. It is said that during these six months of direct perception of nirguna Brahman, Ramakrishna’s “perception of the world vanished entirely” (Long 2020: 172, quoting Swami Saradananda).
33. Jeffery D. Long conveys Ramakrishna’s idea of the deep complexity of Brahman by dubbing his system “Anekanta Vedanta”, where “anekanta” is the Jain concept “non-one-sided” (2020: 158).
34. It is clearer to say “reality-providing” cause vs. material cause to clarify that Brahman is not material (Rambachan 2006: 126, footnote 10). He offers a helpful analogy:
Like a spider projecting a web from itself, but unlike a bird building its nest, Brahman brings forth the world without the aid of anything extraneous. (2006: 70)
35. In support, he quotes: “Let me be many, let me be born”, Taittiriya Upanisad 2.6.1.
36. Davies’ framing implies that one thing produces another thing, but other perfect being theologians deny that God is a thing at all. The objectors are still generally dualists, though, taking God’s nature to be absolutely distinct from the cosmos’ because, e.g., God transcends the world.
37. The Greek philosophers each had different names and ideas of the ultimate, but most used nascent perfections. To offer just two examples from the pre-Socratics: Parmenides described his “One Being” as “unborn, imperishable, whole, unique, immovable and without end” (in Guthrie 1965, see especially pp. 26 and 31, verses 3–5 and 22–25 of fr. 8, see early mention of simplicity, immutability, and eternality), and Anaxagoras said “Mind”
is something infinite and independent, and is mixed with no thing….the finest and purest of all things, and has all judgment of everything and greatest power, and everything that has life, both greater and smaller, all these Mind controls. (in Guthrie 1965: 272–273, with proto-concepts of aseity, axiological perfection, impeccability, omniscience and omnipotence)
38. On dropping immutability and impassibility:
We clearly find in Scripture, it is argued, that God does experience emotional change—for example, really does rejoice and/or become sorrowful in response to our actions. This however, is not an essential change in God’s nature. God is essentially perfect in every way. And for God to be affected by (appropriately emotively dependent on) what happens to those with whom God is in relationship makes God a more complete and admirable being than one who is incapable of experiencing such change. (Basinger 2013: 266)
39. Both are directed to x, and thus can be unfulfilled when not x and fulfilled when x, etc. See Pfeifer 2016: 44–46.
40. In her treatment of Alexander’s model, Emily Thomas noted its merotheistic God-world relation, without giving it a name:
Alexander is sometimes taken to be a “panentheist”. If panentheism is taken to mean that the universe is “in” God, then this characterization is straightforwardly incorrect: in fact, Alexander holds that deity is strictly contained “in” the universe. (2016: 255)
41. For the contemporary revival of axiarchism, see Derek Parfit 1998, Nicholas Rescher 2010, John Leslie 1979, 1989, 2001 and 2016. Though Leslie in particular argues for an “extreme” form of axiarchism on which the purely ideal “goodness of [a] possible world is what makes it actual” (Tim Mulgan 2017: sec. 1.1), Bishop and Perszyk ratchet back to a milder axiarchism, on which something concretely real makes the universe actual, namely actual, concrete “full realizations of the supreme good” (2017: 613). Mulgan argues that axiarchism is not as implausible as it might sound: it is already at work in the ontological argument and the fine-tuning argument; it has a fit with the growing “non-naturalism among moral philosophers”, and more (2017: sec. 1.1 and 1.2). He also offers his own axiarchic view there in which reality comes to be because it has a purpose, but because the purpose is not about us, he calls this an “ananthropocentric purposivism”. Mulgan’s view will be referenced in Section 2.3 where its ananthropocentrism surfaces in a standard model of the Dao.
42. One still might ask though with Marilyn McCord Adams how “we eliminate the parallel hypothesis” that things seem as directed to evil as they do to Love (2016: 137)? Bishop and Perszyk respond that euteleological Christians anyway can answer a posteriori: Jesus’ death and resurrection show that the power of love is stronger than the power of evil, so Love will win in the end (2016: 124).
43. Even if deity is formally the next level the universe will realize, this does not entail that it is the universe’s telos. It is possible that a next stage of development can, far from drawing purpose out of what came before, actually obscure it, as a prolonged death without dignity can make “a life that has been well lived…ever after [be] seen through the smudged glass of its last few years” (Nuland 1995: 105).
44. There is a debate among scholars about whether to define the start of Daoism with the Daodejing or with the first definable Daoist community, generally taken to be when in 142 CE Zhang Ling became the first Celestial Master of what became Celestial Master Daoism, a movement which spread to all parts of China by the fourth century. For more see Kleeman 2016.
45. Yinyang is a binary “pattern embedded in the nature of all beings”, from being receptive/still/empty, waxing to being creative/energetic/full, then waning back again. See Wang 2012: 41ff.
46. One important competing interpretation of 0 and 1 is visible in both Schipper and Robson who separately suggest that 0 is primordial chaos which “holds within itself the whole universe but in a diffuse, undifferentiated and potential state” [not identified as the Dao] and 1 is Qi, pure energy-matter that emerges from the chaos, followed by yinyang, etc. (Schipper 1982 [1993: 35]; Robson 2015: 1483).
47. Moreover, even if we were to read the standard model as a monism as in, e.g., traditional Advaita Vedanta, it would still offer a fresh insight because being and non-being are reversed: in Advaita, Brahman is being and the cosmos is nonbeing, a mere appearance; in the common view of Daoism traced here, Dao is nonbeing and makes possible the world of beings.
48. “Dao is not really ‘infinite’ nor ‘transcendent’ but infinitesimal, the faintest, most imperceptible of breaths, the darkest shade of light, the smallest possible contrast that, in its infinite fractal-like recursions, multiplies to constitute the shocking wealth of cosmic power. This is the ultimate mystery of Dao: that subtle void and intangible formlessness should be the root of all becoming” (Kohn 2001: 18). In other words, as Stephen Yablo once said in an entirely different context: this is apparently meager input with torrential output.
49. The full quote from Kierkegaard is:
If one who lives in the midst of Christendom goes up to the house of God, the house of the true God, with the true conception of God in his knowledge, and prays, but prays in a false spirit; and one who lives in an idolatrous community prays with the entire passion of the infinite, although his eyes rest upon an idol: where is there [the] most truth? The one prays in truth to God though he worships an idol; the other prays falsely to the true God, and hence worships in fact an idol. (1846/1974, pp. 179–180)
50. As Elliott put it, the more general the model of the ultimate, the greater its “epistemic comfort”, its “honest chance of being true” (2017: 105 and footnote 28).
# Use external Executors
Normally, we have seen how Flow ties up Executors together, and how an Executor lives in the context of a Flow.
However, this is not always the case, and sometimes you may want to launch an Executor on its own, and perhaps have the same Executor be used by different Flows.
Where can external Executors run?
External Executors can run anywhere from the same environment as the Flow, to a Docker container, or even a remote environment, such as JCloud.
As the first step in this tutorial, you will learn how to add already running external Executors to your Flow. After that, you will see how to create and use an external Executor yourself.
If you want to add an external Executor to your Flow, all you really need to know is how to find it. You need:
• host, the host address of the Executor
• port, the port on which the Executor receives information
Then, adding the Executor is a simple call to add() with the external argument set to True. This tells the Flow that it does not need to start the Executor itself:
from jina import Flow

exec_host, exec_port = 'localhost', 12345
f = Flow().add(host=exec_host, port=exec_port, external=True)
After that, the external Executor will behave just like an internal one. And you can even add the same Executor to multiple Flows!
Note
If an external Executor needs multiple predecessors, reducing needs to be enabled. So setting disable_reduce=True is not allowed for these cases.
## Starting standalone Executors
The example above assumes that there already is an Executor running, and you just want to access it from your Flow.
You can, however, also start your own standalone Executors, which can then be accessed from anywhere. In the following sections we will describe how to run standalone Executors via the Jina command line interface (CLI). For more options to run your Executor, including in Kubernetes and Docker Compose, please read the Executor API section.
This tutorial walks through the basics of spawning a standalone (external) Executor. For more advanced options, refer to the CLI and Executor API sections.
## Using Jina Hub
The Jina CLI allows you to spawn executors straight from the Hub. In this example, we will use CLIPTextEncoder to create embeddings for our Documents.
First, we start the Executor from the terminal. All we need to decide is the port that will be used by the Executor. Here we pick 12345.
jina executor --uses jinahub+docker://CLIPTextEncoder --port 12345
Or, to run the Executor outside of Docker:
jina executor --uses jinahub://CLIPTextEncoder --port 12345
This might take a few seconds, but in the end you should be greeted with the following message:
[email protected] 1[L]: Executor CLIPTextEncoder started
And just like that, our Executor is up and running.
Next, let’s access it from a Flow and encode some Documents. You can do this from a different machine, as long as you know the first machine’s host address, or simply from the same machine in a different process using localhost.
So, if you are still working on the same machine, hop over to a new terminal or your code editor of choice, and define the following Flow in a Python file:
from jina import Flow

f = Flow().add(host='localhost', port=12345, external=True)
Now we can encode our Documents:
from docarray import Document, DocumentArray
docs = DocumentArray([Document(text='Embed me please!') for _ in range(5)])
def print_embedding(resp):
doc = resp.docs[0]
print(f'"{doc.text}" has been embedded to shape {doc.embedding.shape}')
with f:
f.index(inputs=docs, on_done=print_embedding)
"Embed me please!" has been embedded to shape (512,)
We obtain embeddings for our Documents, just like we would with a local Executor.
## Using a custom Executor
You can achieve the same while using your own, locally defined Executor. Let’s walk through it.
First, we create a file exec.py, and in it we define our custom Executor:
from jina import Executor, requests


class MyExecutor(Executor):
    @requests
    def foo(self, docs, **kwargs):
        for doc in docs:
            print(f'Received: "{doc.text}"')
Since we can’t rely on the Hub this time around, we need to tell Jina how to find the Executor that we just defined. We do this using a YAML file.
In a new file called my-exec.yml we type:
!MyExecutor
metas:
  py_modules:
    - exec.py
This simply points Jina to our file and Executor class.
Now we can run the CLI command again, this time using our custom Executor:
jina executor --uses my-exec.yml --port 12345
Now that your Executor is up and running, we can tap into it just like before, and even use it from two different Flows.
from jina import Flow, Document, DocumentArray

flow1 = Flow().add(host='localhost', port=12345, external=True)
flow2 = Flow().add(host='localhost', port=12345, external=True)  # the same external Executor, reused

with flow1:
    flow1.index(inputs=DocumentArray([Document(text='Greetings from Flow1')]))
Received: "Greetings from Flow1"
# What is the digit on the units place in the expanded value of 97^275 –
Manager
Joined: 08 Jan 2018
Posts: 98
Location: India
GPA: 4
WE: Information Technology (Computer Software)
02 Jul 2019, 08:29
Hi,
To find the units digit, we need to consider only the units digit of the base and the last two digits of the power (the last two digits suffice because 100 is a multiple of the cycle length 4).
Another rule is that units digits of powers work in a cycle, meaning that after some number of powers the same digits repeat.
Thus,
$${97^{275}}$$ has the same units digit as $${7^{275}}$$, and hence as $${7^{75}}$$ (275 and 75 leave the same remainder on division by 4)
using the cycle formula => the units digit of $${a^{4k+r}}$$ equals that of $${a^r}$$, where a is the base and k is some constant:
$${7^{75}}$$ => $${7^{3}}$$ => 3 (as 75 = 4*18 + 3)
For, $${2^{44}}$$ => $${2^4}$$ => 6 (as $${a^{4k}}$$ matches $${a^4}$$ and 44 = 4*11)
So, as per question :
13-6 = 7 (borrowing, since 97^275 > 32^44)
so unit digit is 7.
Please hit kudos if you like the solution.
Manager
Joined: 08 Apr 2019
Posts: 150
Location: India
GPA: 4
02 Jul 2019, 08:29
This question tests your knowledge of the concept of cyclicity.
Since we're only concerned with the unit's place, it's best to rewrite this as 7^275 - 2^44 (since the tens digit would not have any role to play in determining the units place)
Now, 275 = 4*68 + 3 can be written in the form 4k + 3, and similarly, 44 = 4*11 can be written as 4n
Now, we know that units digit cyclicity of both 7 and 2 is 4, i.e. they repeat their units digit after 4. Knowing this, we get the units digit for 7^(4k+3) to be 3 (eg. 7,49,343) and units digit for 2^(4n) to be 6 (2,4,8,16)
Now, subtracting 6 from 3, we get 7 as the units digit and that's our answer (D)
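Off the clock, the whole cyclicity argument can be sanity-checked with Python's three-argument pow, which computes modular powers directly:

```python
# Units digits via modular exponentiation: pow(base, exp, 10).
units_97 = pow(97, 275, 10)   # cycle of 7 with 275 = 4*68 + 3 -> 3
units_32 = pow(32, 44, 10)    # cycle of 2 with 44 = 4*11      -> 6
# Subtracting with a borrow is the same as working mod 10.
print(units_97, units_32, (units_97 - units_32) % 10)   # prints: 3 6 7
```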
Intern
Joined: 15 Jun 2019
Posts: 32
Location: Kazakhstan
Schools: Carey '21
GPA: 3.93
02 Jul 2019, 08:31
The last digit of 97^275 is the same as the last digit in 7^275
7^1=7
7^2=49
7^3=343
7^4=2401
7^5=16807
....
Since 275 = 4*68 + 3, for 7^275 the unit digit is 3 (third in the cycle 7, 9, 3, 1)
The last digit of 32^44 is the same as the last digit in 2^44
2^1=2
2^2=4
2^3=8
2^4=16
2^5=32
44=4*11 so the unit digit of 2^44 is 6
13-6=7, so the units digit is 7 (D)
Manager
Joined: 11 Feb 2013
Posts: 149
GMAT 1: 490 Q44 V15
GMAT 2: 690 Q47 V38
GPA: 3.05
WE: Analyst (Commercial Banking)
Updated on: 02 Jul 2019, 09:29
(1) All UNIT DIGITS FOLLOW "THE CYCLICITY OF 4" i.e. after every four powers, UNIT DIGIT REMAINS SAME.
So, Divide the all POWERs by 4 and work with the REMAINDER. (IF the REMAINDER is ZERO, take 4 as the remaining number because you have divided the numbers by 4)
A SHORTCUT regarding the DIVISION OF 4: JUST take LAST TWO DIGIT & DIVIDE them by 4.
For example, 275/4 is same as 75/4 (REMAINDER=3).
(2) When you are asked to find out UNIT DIGIT, work with UNIT DIGIT only (CROSS OUT TENS & HUNDREDS).
For example, UNIT DIGIT of (97^275) and UNIT DIGIT of (7^275) are the SAME.
considering the cyclicity of 4 & the unit digit only, the question {what is the unit digit of (97^275)-(32^44)?} becomes: what is the unit digit of (7^3)-(2^4)?
Here, UNIT DIGIT of 7^3=3 and
UNIT DIGIT of 2^4=6 [NOTE: After dividing 44 by 4, REMAINDER is ZERO, for UNIT DIGIT CYCLICITY PURPOSE we will take 4 as remaining number because remainder must be an integer between 1&4).
NOW, CHECK whether (97^275) IS GREATER THAN (32^44)?
Case 1: if (97^275) IS GREATER THAN (32^44), the value of (97^275)-(32^44) = *****************3 - ************6 = ****************7 (because we consider 3 as 13, since the first term is GREATER)
Case 2: if (97^275) IS LESS THAN (32^44), the value of (97^275)-(32^44) = *****************3 - ************6 = -**********3 (because it is simply 6 MINUS 3, since the SECOND term is GREATER)
CHECKING:
97>32 &
275>44.
SO, (97^275) IS DEFINITELY GREATER THAN(32^44).
SO, ONLY CASE 1 POSSIBLE.
so, UNIT DIGIT=7 (D is the ANSWER)
Originally posted by BelalHossain046 on 02 Jul 2019, 08:32.
Last edited by BelalHossain046 on 02 Jul 2019, 09:29, edited 1 time in total.
Manager
Joined: 21 Jan 2019
Posts: 100
02 Jul 2019, 08:32
Quote:
What is the digit on the units place in the expanded value of 97^275 – 32^44?
A. 1
B. 3
C. 5
D. 7
E. 9
in no. 97, cyclicity of unit's digit 7 is 7,9,3,1
and 275 /4 gives 3 as a remainder.
in no.32, cyclicity of unit's digit 2 is 2,4,8,6
and 44/4 gives 0 as remainder.
hence the equation is ...3-...6 which will end up to be 7 as the unit's digit.
hence option D
Manager
Joined: 23 Oct 2018
Posts: 50
02 Jul 2019, 08:34
What is the digit on the units place in the expanded value of 97^275–32^44?
Since we are only asked about the unit digit of the above expression, the cyclicity principle will help.
7 has a cyclicity of 4 (7, 9, 3, 1): divide 275 by 4 and we get remainder 3, so the unit digit of 97^275 will be 3.
Same way, 2 has a cyclicity of 4 (2, 4, 8, 6): divide 44 by 4 and there is no remainder, so the unit digit is 6.
Try some nos. 13-6 will have unit digit 7, 23-6 will have unit digit 7 and so on.
Manager
Joined: 01 Oct 2018
Posts: 112
02 Jul 2019, 08:37
(97^275) - 32^44
We are interested in only the last digit, so:
7^275 - 2^44
7^1 = 7
7^2 = 9
7^3 = 3
7^4 = 1
7^5 = 7
So this pattern repeats, which means:
272 is the last multiple of 4 before 275
Number-----273--274--275
LastDigit---7----9----3
Ok, last digit of 7^275 is 3
Than make the same analyze with 2^44
last digit is 6
So 3 - 6 = -3; this can't be, because (97^275) > 32^44, so
13 - 6 = 7
Answer: D
Intern
Joined: 15 Sep 2017
Posts: 44
Location: United States (IL)
GPA: 3.4
Updated on: 02 Jul 2019, 11:13
To find the unit digit of the given number we need to know the cyclicity of 7 and 2: it is 4, i.e., after 4 powers the units digits repeat.
7: 7-9-3-1
2 : 2-4-8-6
Therefore unit digit of 97^275 is 3 and unit digit of 32^44 is 6
13-6 = 7 (borrowing, since 97^275 > 32^44)
Originally posted by Tashin Azad on 02 Jul 2019, 08:41.
Last edited by Tashin Azad on 02 Jul 2019, 11:13, edited 1 time in total.
Intern
Joined: 21 Feb 2018
Posts: 12
02 Jul 2019, 08:44
97^275 - 32^44. Lets assume this to be A - B
To find the units digit of a A-B, we must first know the unit's digit of A and B respectively.
Unit's Digit of A - 97^275 depends on the unit's digit when obtained from 7^275. Since 7 has a cyclicity of 4(7,9,3,1) and 275 = 4(68) + 3 => the unit's digit of A will be 3
Similarly, Unit's Digit of B - 32^44 depends on the unit's digit when obtained from 2^44. Since 2 has a cyclicity of 4(2,4,8,6) and 44 = 4(11) => the unit's digit of B will be 6
Unit's digit of A-B = 3 - 6 = 13 (By Borrowing from the Ten's digit in A) - 6 = 7
Hence, The answer must be D
Manager
Joined: 12 Mar 2018
Posts: 83
Location: United States
02 Jul 2019, 08:45
The units digit of 97^275 will be 3 and the units digit of 2^44 will be 6. So the units digit of the difference then will be 7.
Manager
Joined: 24 Jun 2019
Posts: 108
02 Jul 2019, 08:48
97^275:
Use only last digit to get the units place:
Power1 - 7^1 = 7
Power2 - 7x7 = 49
Power3 - 9x7 = 63
Power4 - 3x7= 21
Power5 - 1x7 = 7 .... same as 1
so units digit will cycle through 7, 9, 3, 1... four unique digits repeating
275 divided by 4 gives quotient 68 (4*68 = 272) and remainder 3 - so units digit will be 3 (3rd in the cycle)
32^44:
Same logic as above
Power1 - 2^1 = 2
Power2 - 2x2 = 4
Power3 - 4x2 = 8
Power4 - 8x2 = 16
Power5 - 6x2 = 12 .... same units digit as 1
so the cycle here is 2, 4, 8, 6.... again cycle of 4 digits
44 is divisible by 4 - so 44th power of 2 will have units digit 6 (4th in cycle)
Units digit of difference will be 3 - 6 = 13-6 (Do manual subtraction on paper - 1 will be carried to 3 to make it 13) = 7
Ans is D - 7
Manager
Joined: 28 Feb 2014
Posts: 146
Location: India
GPA: 3.97
WE: Engineering (Education)
02 Jul 2019, 08:52
What is the digit on the units place in the expanded value of 97^275–32^44?
A. 1
B. 3
C. 5
D. 7
E. 9
This can be done with the help of cyclicity of 7 and 2 which is 4. Question can be rephrased as what is the unit digit of 7^275 - 2^44
on dividing 275 with 4 (cyclicity of 7) we get remainder as 3
and dividing 44 with 4 (cyclicity of 2) we get remainder 0, which we treat as a full cycle of 4
Unit digit of 7^3 is 3 and of 2^4 is 6, so 13 - 6 = 7 (at one's place)
Manager
Joined: 18 Apr 2019
Posts: 67
02 Jul 2019, 08:54
Concept tested:
This question is based on cyclicity.
The cyclicity of 7 is 7,9,3,1 and that of 2 is 2,4,8,6.
Soln:
Now, if we only see the units digit and the power. 7 is raised to 275.
275 when divided by 4 has quotient 68 and leaves remainder 3. What it means is 7 completes 68 cycles of 7,9,3,1 and then 3 steps are left. So taking the 3rd digit in the cyclicity, we have the units digit of 97^275 as 3.
Applying the same concept to 2, we come to the conclusion that it completes 11 cycles and leaves no remainder. Hence units digit is the 4th digit in the cyclicity - 6.
Now just consider the first 2 digit number that ends with 3 and subtract 6 from it. This gives you the answer 7. [D]
Note: we can't subtract 6 from 3 and say the answer is -3. So we take a number that is greater than 6 and ends with 3, i.e., 13.
Senior Manager
Joined: 31 May 2018
Posts: 302
Location: United States
Concentration: Finance, Marketing
Updated on: 02 Jul 2019, 08:57
we need to find the unit digit of
97^275 - 32^44
so we will consider unit digits of both
$$7^1$$=7
$$7^2$$=9 (unit digit)
$$7^3$$=3
$$7^4$$=1
$$7^5$$=7
$$7^6$$=9
$$7^7$$=3
$$7^8$$=1
from here we conclude that it follows a cyclic pattern $$7^4$$,$$7^8$$ = each unit digit = 1
we need to find unit digit of 7^275
so we will write this in terms of $$7^4$$ -- (7^4)^68 * $$7^3$$ = 1*$$7^3$$ = 3 (unit digit of 97^275)
now unit digit of 2^44
we need to find cyclic pattern
by performing the same operation above on 2 we find pattern $$2^4$$,$$2^8$$ = each unit digit = 6
so we will write 2^44 in terms of $$2^4$$ -- (2^4)^11, whose unit digit is that of 6^11 = 6 (unit digit of 32^44)
the difference of unit digit
97^275 - 32^44
(...........3) - (....6) = 7 (since 97^275 is a larger value than 32^44)
correct answer is 7 option D
Originally posted by shridhar786 on 02 Jul 2019, 08:55.
Last edited by shridhar786 on 02 Jul 2019, 08:57, edited 1 time in total.
Manager
Joined: 30 May 2019
Posts: 108
02 Jul 2019, 08:56
Here, the concept of cyclicity is tested. They gave us these huge, ugly-looking numbers to distract and make us panic. But we won't. For 97^275 it is enough to know the last two digits of 7^275. The cyclicity of 7 is 4, that is
7^1=07
7^2=49
7^3=343 (last two digits 43)
7^4=2401 (last two digits 01)
So, the last two digits when 7 is raised to the power of 275 are 43 (since 275 = 4*68 + 3).
Likewise, we need to know the units digit of 2^44. 2 also has a cyclicity of 4, that is
2^1=02
2^2=04
2^3=08
2^4=16,
So, the units digit when 2 is raised to the power of 44 is 6. So 43-6=37, and the units digit is 7 (D)
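The last-two-digits variant of the argument above can likewise be sanity-checked (outside the exam, of course) with modular exponentiation:

```python
last2_7 = pow(7, 275, 100)   # 07, 49, 43, 01 repeat mod 100; 275 = 4*68 + 3 -> 43
units_2 = pow(2, 44, 10)     # 2, 4, 8, 6 repeat; 44 = 4*11 -> 6
print(last2_7, units_2, (last2_7 - units_2) % 10)   # prints: 43 6 7
```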
Intern
Joined: 08 Nov 2016
Posts: 19
Location: India
GMAT 1: 610 Q48 V27
GPA: 3.99
WE: Web Development (Computer Software)
02 Jul 2019, 09:01
What is the digit on the units place in the expanded value of 97^275–32^44?
A. 1
B. 3
C. 5
D. 7
E. 9
We can get the series of unit places for all powers of 7 & 2 to see the series of repetition.
7^1 = unit digit 7
7^2 = unit digit 9
7^3 = unit digit 3
7^4 = unit digit 1
7^5 = unit digit 7 , So after 4 it gets repeated. Same with 2, after 4, unit digits get repeated.
Now if we calculate for power 275 for 7, 3 is remainder and the unit digit should be "3", and for power 44 for 2, remainder is 0, and unit digit will be "6".
Now unit digit(3) - unit digit(6) = 13-6 = 7.
Intern
Joined: 23 Jul 2017
Posts: 20
Location: India
Concentration: Technology, Entrepreneurship
GPA: 2.16
WE: Other (Other)
02 Jul 2019, 09:05
Any power of a number ending with 7 will have the units digit either 7,9,3 or 1 and any number ending with 2 will have the units digit either 2,4,8 or 6. Therefore 97^275 and 32^44 will be ending with 3 and 6, giving the answer as 7.
Intern
Joined: 09 Feb 2019
Posts: 20
02 Jul 2019, 09:07
The units digits of powers of 7 and 2 repeat after every 4th power
7^1=7
7^2=9
7^3=3
7^4=1
7^5=7
2^1=2
2^2=4
2^3=8
2^4=6
2^5=2
Since 275 = 4*68 + 3, the unit digit of 7^275 is 3; since 44 is a multiple of 4, the unit digit of 2^44 is 6; 13 - 6 = 7 (D)
Manager
Joined: 15 Nov 2015
Posts: 150
Location: India
GPA: 3.7
02 Jul 2019, 09:07
Both 7 and 2 have a power cycle of 4
Units digit of (...3) - (...6) = 7
Hence option D
Intern
Joined: 04 Feb 2019
Posts: 5
02 Jul 2019, 09:09
This can be simplified as units digit of 7^275 - 2^44
We know periodicity of 7 is 4 ie. (7^4)^n will return 1 in units digit for all n>=1. Now 275 = 4 x 68 + 3, so the units digit is determined by units digit of 7^3 ie. 3.
Again, we know the periodicity of 2 is 4, i.e., (2^4)^n will return 6 in the units digit for all n>=1. Now 44 = 4 x 11, so the units digit is always 6.
Units digit 3 - Units digit 6 = Units digit 7
# Sergey Karayev
Computer Science Department
University of California, Berkeley
Computer Vision group with Trevor Darrell.
Digital facets
Updated 06 Mar 2013
## Review of Kanan and Cottrell, Robust Classification of Objects, Faces, and Flowers Using Natural Image Statistics, CVPR 2010.
The paper’s approach has three parts. The first is using an ICA-based spatial pyramid feature; the second is computing a saliency map to sample interest points; and the third is in using Naive Bayes Nearest Neighbor (NBNN) for classification. The approach is evaluated on three single-object datasets: Caltech-101 and -256, the AR (Aleix and Robert) faces dataset of 120 individuals with 26 images each, and 102 Flowers (8200 images). The results are the best yet published for single-feature approaches on Caltech-101, and match the best multiple-feature performances; they are comparable to state-of-the-art on Caltech-256; match state-of-the-art on AR Faces; and beat the single previously published result on the Flowers dataset.
### ICA-based Local Features and Saliency
The images are first pre-processed by converting to a standard size, converting to the LMS color space (designed to match human color receptor distributions), normalizing, and then applying a nonlinear transform inspired by the luminance modulation that happens in photoreceptors (a logarithmic compression). [Note: It would be interesting to see the effects of not performing this mapping.]
ICA filters of size $b \times b$ ($b$ tuned on a Butterfly and Bird dataset to 24 pixels) are learned on about 5000 color image patches from the McGill color image dataset. To learn $d$ ICA features, the authors first run PCA on the patches, discard the first principal component, retain $d$ following principal components, and then learn the ICA decomposition. I’m not quite sure how this works—I guess ICA is then only able to learn $d$ non-garbage bases?
#### Saliency Map
The ICA bases are used to place a saliency map over the image following the Saliency Using Natural statistics (SUN) framework \cite{Zhang:2008:SUN}. The basic idea is that saliency of a point is the inverse $P(F)^{-1}$ of its probability under the ICA model $P(F=\mathbf{f})=\prod_i P(\mathbf{f}_i)$. Each unidimensional distribution is fit with a generalized Gaussian distribution:
$P(\mathbf{f}_i) = \frac{\theta_i}{2 \sigma_i \Gamma(\theta_i^{-1})} \exp\left(-\left|\frac{\mathbf{f}_i}{\sigma_i}\right|^{\theta_i}\right)$
Parameters are fit still using the McGill color database. A further strange nonlinear weighting of the dimensions of $\mathbf{f}$ is then done to weight rarer responses more heavily.
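As a concrete sketch of the saliency computation just described (the function names are mine, and the per-dimension parameters $\sigma_i$, $\theta_i$ are assumed to have already been fit):

```python
import math

def ggd_pdf(f, sigma, theta):
    # Generalized Gaussian density for one ICA response dimension f_i.
    return theta / (2.0 * sigma * math.gamma(1.0 / theta)) * math.exp(-abs(f / sigma) ** theta)

def sun_saliency(responses, sigmas, thetas):
    # SUN saliency of a point: inverse of the joint probability
    # P(F = f) = prod_i P(f_i), using the independence afforded by ICA.
    p = 1.0
    for f, s, t in zip(responses, sigmas, thetas):
        p *= ggd_pdf(f, s, t)
    return 1.0 / p
```

With $\theta_i = 2$ the density reduces to a Gaussian, and rarer (larger-magnitude) responses get higher saliency, as intended.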
### Fixations
The saliency map is normalized to a probability distribution, and “fixations” are sampled from it $T$ times. At each location $l_t$, an interesting fixation feature is extracted. It is a spatial pyramid over an area of $w=51$ pixels, using average pooling. So, the initial window of $w \times w \times d$ is represented by a vector of size $21d$, where $21 = 4 \times 4 + 2 \times 2 + 1 \times 1$ shows the structure of the spatial pyramid. Importantly, the normalized location $l_t$ of the fixation is also stored. To cast SIFT in this framework, we would set $w=17$, $d=8$, and the spatial aggregation would be a flat $4 \times 4$ grid.
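The pooling arithmetic can be sketched as follows (my own code, assuming the window side divides evenly into each grid, which the paper's $w = 51$ does not quite do at the 4x4 level):

```python
import numpy as np

def spatial_pyramid_pool(F):
    # Average-pool a (w, w, d) filter-response window over a
    # three-level pyramid of 4x4, 2x2, and 1x1 grids,
    # giving (16 + 4 + 1) * d = 21d values.
    w, _, d = F.shape
    pooled = []
    for g in (4, 2, 1):                     # grid size at each level
        step = w // g
        for i in range(g):
            for j in range(g):
                cell = F[i * step:(i + 1) * step, j * step:(j + 1) * step]
                pooled.append(cell.mean(axis=(0, 1)))
    return np.concatenate(pooled)           # shape: (21 * d,)
```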
After gathering $T$ fixations on every image in the training set, the unit-normalized SP vectors are then additionally processed by retaining only the first 500 PCA components and whitening them. The chain of re-normalizations in this paper is quite long and I would appreciate theoretical justifications for these decisions.
### Classification
The paper uses Kernel Density Estimation (KDE) to model $P(\mathbf{g}_t|C=k)$, where $\mathbf{g}_t$ is the vector of fixation features. A Naive Bayes assumption is made, such that each fixation contributes independently to the total probability. The posterior is estimated with Bayes rule, assuming uniform class priors. 1-nearest neighbor KDE is used, and the Euclidean distance between the fixation locations is considered in addition to the feature-to-exemplar distance. The final posterior probability is:
$P(\mathbf{g}_t | C=k) \propto \max_i \frac{1}{||\mathbf{w}_{k,i}-\mathbf{g}_t||^2_2 + \alpha ||\mathbf{v}_{k,i}-\ell_t||^2_2 + \epsilon}$
where $\mathbf{w}_{k,i}$ is a vector representing the $i$’th exemplar of a fixation from class $k$.
### Discussion
The authors attribute the strength of their approach largely to the exemplar-based classifier. Their approach does outperform the comparable single-descriptor version of the Boiman and Irani NBNN classifier \cite{Boiman:2008}; that could be due to a number of factors:
1. They also use location information in their comparison of fixations. EDIT: NBNN paper also appends location to the feature vector.
2. They sample features from a saliency map (vs. densely for NBNN).
3. They use their ICA feature instead of SIFT and other standard descriptors.
It would be excellent to see a controlled evaluation of each of these factors. The paper as it is presents a very specific and unorthodox approach, and does not justify many of its design decisions.
My questions:
1. What is the contribution of the saliency map? How would the performance change under a random sampling scheme? What about an interest-point sampling scheme?
2. Why is the saliency computed at a single scale? The only reason this could work well is that the dataset is single-object and roughly fixed-scale.
3. How would performance change if a standard feature, for example SIFT, was extracted instead of the ICA SP feature?
4. What is the contribution of the location feature? Why is it weighted at $\alpha=0.5$; what would cross-validation tune it to?
In my mind, the most important part of the approach is NN classification. It would be interesting to re-implement this framework with a different bottom-up saliency map, for example the multi-scale one used in \cite{Alexe:2010} and traditional SIFT features.
# Lagrangian Point in General Relativity
D.S.Beyer
TL;DR Summary
If gravity is a pseudo force, what is going on at the Lagrangian points in GR terms?
Is there a relationship between the Lagrangian ‘hill diagram’ and the spacetime curvature embedment graphs?
The Lagrangian map shows the effective potential, which includes the centrifugal term. As centrifugal force is a fictitious force (and gravity is as well), I would assume the underlying phenomena to be an aspect of curved spacetime.
Would a Lagrangian point, without any mass in it, have a slight spacetime embedment due to the planet / sun system?
My gut says no, but it would be cool if it did. Thoughts?
Side note : Can anyone point me to an embedment graph of a 2-body system? Ideally something real like the sun / Earth system, or Earth / moon system. I'd like to see curvature as it relates to actual astronomical distances, not just a cartoon of a rubber sheet with some balls.
Staff Emeritus
I think you'd want to consider at least two massive bodies to have an interesting Lagrangian problem.
If you have a massive body orbiting another massive body, I don't believe there is an analytical solution for the metric.
We can certainly do an approximate analysis of the orbit in this case, though - it's well known that the body inspirals, due to the existence of gravitational radiation. It's also a very tiny effect.
So, basically, the mathematical perfection of the problem as stable points is spoiled, but practically I'd expect there to be very little difference in the behavior in solar system three body problems, where Newtonian formulas are an excellent approximation. I'm not too sure what might happen in the more interesting strong field cases. I haven't seen or done any analysis.
D.S.Beyer
I think you'd want to consider at least two massive bodies to have an interesting Lagrangian problem.
If you have a massive body orbiting another massive body, I don't believe there is an analytical solution for the metric.
We can certainly do an approximate analysis of the orbit in this case, though - it's well known that the body inspirals, due to the existence of gravitational radiation. It's also a very tiny effect.
So, basically, the mathematical perfection of the problem as stable points is spoiled, but practically I'd expect there to be very little difference in the behavior in solar system three body problems, where Newtonian formulas are an excellent approximation. I'm not too sure what might happen in the more interesting strong field cases. I haven't seen or done any analysis.
Okay, maybe let's simplify this a little.
Can we rationalize the existence of an L1 'like' saddle point in spacetime curvature between two massive bodies?
Similar to a Roche Lobe, but in spacetime?
Mentor
Can we rationalize the existence of an L1 'like' saddle point in spacetime curvature between two massive bodies?
The "saddle point" is not in spacetime curvature. It is in the Newtonian potential. That's not the same thing.
Mentor
Summary:: If gravity is a pseudo force, what is going on at the Lagrangian points in GR terms?
If you use the same coordinates in both cases then there is not any important difference.
In the Newtonian approach you have the fictitious centrifugal and Coriolis forces and the real gravitational force. The biggest difference in GR is that the gravitational force is also fictitious.
Of course there is a minor difference in that the exact values of all of these fictitious forces are slightly different from the Newtonian versions.
D.S.Beyer
The "saddle point" is not in spacetime curvature. It is in the Newtonian potential. That's not the same thing.
I understand, (or think I understand) that the models of Roche Lobes, Lagrange Points, and Hill Spheres are of Newtonian potential, and that the embedment diagrams of spacetime (rubber sheet models) are representing the lengths of paths through spacetime, 2D slices of curved 3D space.
What I enjoy about the Newtonian potential models is how they show the interaction of multiple bodies, which creates the interesting topologies of saddles between objects. I'm not sure I have ever seen an embedment diagram that consists of more than a single, spherical, non-rotating body.
Maybe the first question is : Can an embedment diagram be made with more than one massive object?
Follow up : Would the topological features be similar to Newtonian potential models?
Mentor
the embedment diagrams of spacetime (rubber sheet models) are representing the lengths of paths through spacetime
No, they're not. They're representing space, not spacetime; and space only in a particular system of coordinates. They are not good tools to use if you want to understand spacetime.
D.S.Beyer
No, they're not. They're representing space, not spacetime; and space only in a particular system of coordinates. They are not good tools to use if you want to understand spacetime.
Sorry. I misspoke. You are correct, it's 'space' not 'spacetime'.
(as an aside : Do you think that answer (not mine) that I linked to on stackexchange does a good job of explaining it? I often refer people to it as a way to begin grappling with the rubber sheet analogy. I would love to know if you think it's a sound place to start.)
Mentor
Do you think that answer (not mine) that I linked to on stackexchange does a good job of explaining it?
It does a reasonable job of describing the "rubber sheet" as a visualization tool. It does, IMO, a terrible job of explaining the serious limitations of that visualization tool.
I would love to know if you think it's a sound place to start.)
I don't think the rubber sheet analogy is a good place to start at all. The "shape of space" in those particular coordinates is not a good thing to be focusing on.
If gravity is a pseudo force, what is going on at the Lagrangian points in GR terms?
Since you emphasize gravity being a pseudo force in GR: note that even in Newtonian physics, the computation of the Lagrangian points is based on the pseudo centrifugal force and its potential. The potential of which the LPs are saddles and maxima is the sum of the Newtonian gravitational potential and the centrifugal potential (a pseudo force in the rotating common rest frame of the two masses).
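To make this concrete, here is a small numerical sketch (round Sun-Earth constants of my own choosing): L1 found as a zero of the x-derivative of the combined gravitational + centrifugal potential along the Sun-Earth line, in the rotating frame.

```python
import math

# Round Sun-Earth values (SI), assumed for illustration
G, M_sun, M_earth, D = 6.674e-11, 1.989e30, 5.972e24, 1.496e11
mu = M_earth / (M_sun + M_earth)
omega2 = G * (M_sun + M_earth) / D ** 3      # squared rotation rate of the frame
x_sun, x_earth = -mu * D, (1 - mu) * D       # positions about the barycenter

def dU_eff(x):
    """x-derivative of gravitational + centrifugal potential on the Sun-Earth line."""
    t_sun = G * M_sun * (x - x_sun) / abs(x - x_sun) ** 3
    t_earth = G * M_earth * (x - x_earth) / abs(x - x_earth) ** 3
    return t_sun + t_earth - omega2 * x

# Bisect for the saddle point (L1) between the Earth and the Sun
lo, hi = x_earth - 5e9, x_earth - 1e8        # bracket: dU_eff(lo) > 0 > dU_eff(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if dU_eff(mid) > 0:
        lo = mid
    else:
        hi = mid
L1_dist = x_earth - 0.5 * (lo + hi)          # distance of L1 from Earth
```

The resulting distance of about 1.5 million km agrees with the usual Hill-radius estimate ##D (\mu/3)^{1/3}##.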
Sorry. I misspoke. You are correct, it's 'space' not 'spacetime'.
It's not your fault, but a very common misunderstanding based on the misleading rubber sheet analogy. I explained this here:
- Rolling balls on a rubber sheet can be used as a qualitative analogy for the gravity well (gravitational potential). That's why it gives the correct qualitative result. But that has nothing to do with explaining General Relativity and curved space-time, because it applies equally to Newtonian Gravity.
- The indented rubber sheet can be used as a qualitative visualization of the space (not space-time) distortion in General Relativity (Flamm's paraboloid). But that has nothing to do with explaining how masses attract each other in General Relativity, which requires including the time dimension (space-time distortion). Flamm's paraboloid represents a distortion of spatial distances between coordinates, and could just as well be shown with the funnel upwards, so the rolling balls would give a wrong result. Therefore rolling some balls on the curved surface representing Flamm's paraboloid makes no sense.
- The local intrinsic curvature of space-time you are asking about is primarily related to tidal effects, or the gravity gradient.
I often refer people to it as a way to begin grappling with the rubber sheet analogy.
You should not refer anyone to the rubber sheet analogy, as an explanation for gravitational attraction in GR. This is a better analogy:
Can anyone point me to an embedment graph of a 2-body system?
Not one that explains the gravitational attraction for orbiting bodies. Since you need the time dimension, you can only show one space dimension for curved spacetime (2D), because your non-curved embedding space is 3D.
You can do this for a radial fall:
https://www.physicsforums.com/threa...lly-at-rest-begin-to-fall.995946/post-6416452
Gold Member
2022 Award
Also in GR gravitation is not a "fictitious force" (I prefer the notion of "inertial force", but that's semantics). The distinction between "fictitious forces" and a gravitational field is that the former you can completely compensate by a change of the reference frame to a local inertial frame, which is not possible for the latter, where you always have tidal forces also in the local inertial frame.
Staff Emeritus
Also in GR gravitation is not a "fictitious force" (I prefer the notion of "inertial force", but that's semantics). The distinction between "fictitious forces" and a gravitational field is that the former you can completely compensate by a change of the reference frame to a local inertial frame, which is not possible for the latter, where you always have tidal forces also in the local inertial frame.
How do you regard the effect, usually called a force, that one feels in an accelerating elevator? The effect that you'd measure by standing on a scale and taking its reading while you are in the accelerating elevator?
It's quite common in the popular literature, I think, to call this effect a force at the lay-level. The next observation to make is to point out that it's not a real force. We seem to be arguing about what we want to call this effect now. But hopefully we can get everyone to agree that this effect is not an actual force, at least, even if we don't quite agree on exactly what we want to call it.
Eventually, we might also agree about how this effect, which happens in flat space-time, is related to gravity, which happens in curved space-time, too. After a lot more discussion :).
Going back to the problem of the Roche lobe, I'm not sure how one would set up the problem in coordinate-free terms to do a full GR analysis. Probably what one would do is pick some coordinates first, then make some further approximations and not use full GR. The PPN formalism comes to mind, it would both define some coordinates, and make some useful approximations. See for instance https://en.wikipedia.org/w/index.php?title=Parameterized_post-Newtonian_formalism&oldid=976721689 as a general guide to what PPN formalism is about.
The end result would be a whole lot of work, and I would expect that it wouldn't generate a lot of insight, probably not even any experimentally observable effects under most circumstances.
Mentor
Also in GR gravitation is not a "fictitious force" (I prefer the notion of "inertial force", but that's semantics).
I tend to use the word “gravitation” to include the entire set of all phenomena modeled by GR, including tidal effects, time dilation, deflection of light, frame dragging, etc.
So I agree that “gravitation” is not an inertial force.
I use the term “gravity” specifically to refer to the part of gravitation that shows up as an inertial force. That arises from the Christoffel symbols, just like any other inertial force, and has all of the other characteristics of an inertial force. This is also the quantity most associated with the word “gravity” in Newtonian physics. So saying gravity is an inertial force is both valid and common.
How do you regard the effect, usually called a force, that one feels in an accelerating elevator? The effect that you'd measure by standing on a scale and taking its reading while you are in the accelerating elevator?
I would call that the normal force. Inertial forces, including gravity, are not measurable.
D.S.Beyer
Thanks everyone for jumping into this, and getting into the weeds about visualizing spacetime.
Maybe we can approach this from another direction, with a more concrete example question.
If we put satellites at each of the Lagrangian points (of the sun/earth system), what adjustments must they make to their internal clocks to remain synchronous with clocks on earth?
There are already a few satellites at L1 (SOHO, ACE, WIND) and L2 (WMAP, Planck, Herschel). So this problem has, ostensibly, been solved. I am particularly interested in the temporal adjustments needed in the L4 and L5 spots.
This doesn’t exactly solve the visualization problem, but could offer some additional understanding to what is going on with the fabric of spacetime at those locations.
(Mods : Let me know if this should be a different thread.)
If we put satellites at each of the Lagrangian points (of the sun/earth system), what adjustments must they make to their internal clocks to remain synchronous with clocks on earth?
This is a good question. In the rotating common rest frame (of Sun, Earth and Satellite) there is no kinetic time dilation. So any difference in clock rates is due to gravitational time dilation, or the effective potential difference (gravitational + centrifugal potential).
There are already a few satellites at L1 (SOHO, ACE, WIND) and L2 (WMAP, Planck, Herschel). So this problem has, ostensibly, been solved. I am particularly interested in the temporal adjustments needed in the L4 and L5 spots.
I don't know the exact engineering solution used.
This doesn’t exactly solve the visualization problem, but could offer some additional understanding to what is going on with the fabric of spacetime at those locations.
The effective potential should have stationary points there, just like in the Newtonian treatment. But visualizing L4 and L5 is tricky, because you need 2 spatial dimensions.
For L1, L2, L3, it is simpler, because they are in line with both massive bodies, so you need only 1 spatial dimension (in the rotating frame).
Here is the space-propertime diagram and a geodesic freefall worldline for a single massive body in a non-rotating frame:
The red path is the geodesic world-line of a free falling object, that oscillates through a tunnel through a spherical mass. Note that the geodesic always deviates towards the "more stretched" proper time, or towards greater gravitational time dilation. Gravitational time dilation has an extreme point at the center of the mass (gradient is zero), so there is no gravity there (but the maximal gravitational time dilation).
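For scale, the Newtonian limit of that tunnel oscillation is simple harmonic motion; a back-of-envelope sketch with round Earth values (my own, for illustration):

```python
import math

# Inside a uniform-density sphere the Newtonian pull is linear in r,
# so a body dropped through a tunnel oscillates harmonically with
# period T = 2*pi*sqrt(R^3 / (G*M)).
G = 6.674e-11
M_earth, R_earth = 5.972e24, 6.371e6  # round values, assumed
T = 2 * math.pi * math.sqrt(R_earth ** 3 / (G * M_earth))  # about 84 minutes
```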
Here is a rough sketch of such a space-propertime diagram for two massive bodies in the rotating common rest frame (it's supposed to be a surface of revolution around the shown space axis, like above):
Note that since geodesics deviate towards the "fatter parts" (greater time dilation / lower potential), the shown L points are unstable in the radial direction. This is also true based on the Newtonian potential (below). You actually need the Coriolis force to explain why any of them are semi-stable:
https://www.math.arizona.edu/~gabitov/teaching/141/math_485/Final_Report/Lagrange_Final_Report.pdf
D.S.Beyer
Here is a rough sketch of such a space-propertime diagram for two massive bodies in the rotating common rest frame (it's supposed to be a surface of revolution around the shown space axis, like above):
Note that since geodesics deviate towards the "fatter parts" (greater time dilation / lower potential), the shown L points are unstable in the radial direction. This is also true based on the Newtonian potential (below). You actually need the Coriolis force to explain why any of them are semi-stable:
This is, quite possibly, the most instructive visual I've seen online in many years.
It wonderfully grounds these 'space proper-time diagrams', which are easily one of the most abstract visuals that come up in discussions like this.
Let me see if I'm reading this correctly.
Disregarding the time dilation from centrifugal potential for a minute... the L points have the fastest clocks, then the earth, then the sun. And, based on the Newtonian potential maps, I would guess that the L4 and L5 points would have even faster clocks than anything else in the system.
This is, quite possibly, the most instructive visual I've seen online in many years. It wonderfully grounds these 'space proper-time diagrams', which are easily one of the most abstract visuals that come up in discussions like this.
Thanks. An intuitive explanation of these diagrams is in the later chapters of this book:
Disregarding the time dilation from centrifugal potential for a minute... the L points have the fastest clocks, then the earth, then the sun. And, based on the Newtonian potential maps, I would guess that the L4 and L5 points would have even faster clocks than anything else in the system.
This is true for all clocks that are at rest relative to the Sun-Earth and the L-points (rotating with them).
Not sure why you want to disregard the centrifugal potential. The Newtonian potential map shown above also includes the centrifugal potential. That's why it falls off towards negative infinity as the distance from the barycenter goes to infinity. The pure mass-attraction potential approaches a constant value at infinite distance.
Also keep in mind that both diagrams show a rotating frame. For the space-propertime diagram above, the propertime "stretching" shows the clock rate of clocks at rest in the rotating frame, along the Sun-Earth line. In the non-rotating frame, clocks along this rotating line would approach light speed if far enough from the rotation center, and thus have infinite kinetic time dilation. In the rotating frame, where these clocks are at rest, this is accounted for by the centrifugal potential. That's why the propertime blows up to infinity at both ends (the proper time rate goes to zero, and the potential goes to negative infinity, just like in the Newtonian picture).
Mentor
I have calculated the time dilation factors in a Sun-centered inertial frame, and here are the results I get.
For the Earth and Sun, I am using the time dilation factor at the center of the object, assuming it to be a sphere of uniform density. Of course this is not really accurate, but it's a reasonable approximation for the time dilation factor. For a spherical object of radius ##R## and total mass ##M##, the time dilation factor at the center, according to GR, solely due to the object's own gravity, is:
$$\frac{3}{2} \sqrt{1 - \frac{2 G M}{c^2 R}} - \frac{1}{2}$$
where ##G## is Newton's gravitational constant and ##c## is the speed of light.
We then combine this (what "combine" means will be specified further below) with the time dilation factor due to the other object's gravity and the time dilation factor due to velocity (if any).
The time dilation factor due to the Earth's gravity, for objects other than the Earth itself, is
$$\sqrt{1 - \frac{2 G M_E}{c^2 r}}$$
where ##M_E## is the mass of the Earth and ##r## is the object's distance from the Earth's center. A similar equation applies for the time dilation factor due to the sun's gravity, with the sun's mass ##M_S## in place of the Earth's mass and the object's distance from the center of the Sun used in place of the distance from the center of the Earth.
The time dilation factor due to a velocity ##v## in the given inertial frame is
$$\sqrt{1 - \frac{v^2}{c^2}}$$
When multiple time dilation factors apply to the same object, we combine them by multiplying them together.
For the distances involved, we make use of the formulas from this Wikipedia article:
https://en.wikipedia.org/wiki/Lagrange_point
If we use ##R_H## for the radius of the Earth's Hill sphere, as given in the article, then for L1 and L2, we have ##r_E = R_H## and ##r_S = D_E \pm R_H## for the distances from the Earth and Sun, where ##D_E## is the Earth-Sun distance in meters (I used ##D_E = 1.49 \times 10^{11}##). For L3, we have ##r_E = 2 D_E + R_3## and ##r_S = D_E + R_3##, where ##R_3## is the distance ##r## given in the L3 section of the Wikipedia article. For L4 and L5, we have ##r_E = r_S = D_E##.
We also note that, since all of the Lagrange point objects have the same orbital period as the Earth about the Sun, their velocities are all given by ##v = \omega r_S##, where ##r_S## is the distance from the Sun and ##\omega = 2 \pi / Y##, where ##Y## is the length of the Earth's year in seconds (I used ##Y = 3.1 \times 10^7##).
Putting all the above together, I come up with the following time dilation factors:
$$V_S = 1 - 3.2002517723617174 \times 10^{-6}$$
$$V_E = 1 - 1.6082657650073884 \times 10^{-8}$$
$$V_1 = 1 - 1.5038067791017795 \times 10^{-8}$$
$$V_2 = 1 - 1.5041770162760315 \times 10^{-8}$$
$$V_3 = 1 - 1.5035442113564557 \times 10^{-8}$$
$$V_4 = V_5 = 1 - 1.5035456768508482 \times 10^{-8}$$
If we just take these raw numbers, we have for the clock rates Sun < Earth < L2 < L1 < L4,L5 < L3. However, these results are really only valid to two or three significant figures, so the Lagrange point values are really indistinguishable at this accuracy and we can really only say Sun < Earth < Lagrange points.
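The recipe above is easy to reproduce numerically. This sketch uses my own round values for the constants, so it reproduces the posted numbers only to two or three significant figures:

```python
import math

G, c = 6.674e-11, 2.998e8
M_S, R_S = 1.989e30, 6.957e8   # Sun mass, radius (round values, assumed)
M_E, R_E = 5.972e24, 6.371e6   # Earth mass, radius
D, Y = 1.49e11, 3.1e7          # Earth-Sun distance (m), year (s), as in the post
omega = 2 * math.pi / Y

def center_factor(M, R):
    """Clock rate at the center of a uniform sphere of mass M, radius R."""
    return 1.5 * math.sqrt(1 - 2 * G * M / (c ** 2 * R)) - 0.5

def grav_factor(M, r):
    """Clock rate at distance r from a mass M (exterior field)."""
    return math.sqrt(1 - 2 * G * M / (c ** 2 * r))

def vel_factor(v):
    return math.sqrt(1 - v ** 2 / c ** 2)

V_S = center_factor(M_S, R_S)                    # Sun: its own field only
V_E = center_factor(M_E, R_E) * grav_factor(M_S, D) * vel_factor(omega * D)
V_45 = grav_factor(M_E, D) * grav_factor(M_S, D) * vel_factor(omega * D)  # L4/L5
```

With these inputs the deficits come out near ##3.2 \times 10^{-6}## (Sun), ##1.6 \times 10^{-8}## (Earth) and ##1.5 \times 10^{-8}## (L4/L5), consistent with the table above, and the ordering Sun < Earth < Lagrange points holds.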
Gold Member
2022 Award
How do you regard the effect, usually called a force, that one feels in an accelerating elevator? The effect that you'd measure by standing on a scale and taking its reading while you are in the accelerating elevator?
You mean that the scale shows a larger/smaller weight when accelerating upwards (downwards)? Seen from my restframe it's an inertial force or equivalently part of the gravitational force (in GR it's the same, i.e., it cannot be distinguished whether you take it as inertial or gravitational force as far as local physics is concerned; that's the GR version of the equivalence principle).
My point of view is the following: I think it's not very important how you name these "forces" or rather "interactions". The only important thing to remember is that in each spacetime point there's a locally inertial frame of reference, which is realized by a Fermi-Walker transported (i.e., non-rotating) tetrad of a free-falling point-like observer. Whether you have "purely inertial" effects or "real gravity" is determined by the curvature tensor, i.e., if it is 0 there's no gravitational field present, and this is a frame-independent and thus physical definition.
Gold Member
2022 Award
I tend to use the word “gravitation” to include the entire set of all phenomena modeled by GR, including tidal effects, time dilation, deflection of light, frame dragging, etc.
So I agree that “gravitation” is not an inertial force.
I use the term “gravity” specifically to refer to the part of gravitation that shows up as an inertial force. That arises from the Christoffel symbols, just like any other inertial force, and has all of the other characteristics of an inertial force. This is also the quantity most associated with the word “gravity” in Newtonian physics. So saying gravity is an inertial force is both valid and common.
I would call that the normal force. Inertial forces, including gravity, are not measurable.
I think that's a good terminology, but one always has to define it, because as this discussion shows, it's not so common to be as accurate even in university textbooks (let alone in the original research literature or, even worse, the popular-science literature).
Mentor
You mean that the scale shows a larger/smaller weight when accelerating upwards (downwards)? Seen from my restframe it's an inertial force or equivalently part of the gravitational force (in GR it's the same, i.e., it cannot be distinguished whether you take it as inertial or gravitational force as far as local physics is concerned; that's the GR version of the equivalence principle).
Be careful. The reading on the scale is purely dependent on the real force (the normal force). The scale cannot detect inertial forces.
Gold Member
2022 Award
As I said, it's semantics, and I guess we are in danger of getting into endless (somewhat useless) debates, but what do you mean by "normal force" here?
Take a scale at rest on Earth. If I stand on it, the gravitational force due to the presence of the Earth acts on me and there's an equal and opposite force (of electromagnetic nature on the fundamental level) which compensates this gravitational force. The reaction of the spring in the scale to this interaction is that it is shortened somewhat, and that length difference is shown by the scale.
If now the elevator is accelerated upwards, this is due to some external force and this has to be compensated additionally by the scale's spring, and that's why it reads "more weight". From my point of view in the frame accelerated relative to the Earth's rest frame it's an additional inertial force, which within GR however is indistinguishable from a gravitational force.
Mentor
the gravitational force due to the presence of the Earth
There is no such force in relativity. The only force acting on you according to relativity is the force of the Earth's substance pushing up on you. That's the force that the scale is indicating. And if you get in a rocket with a thrust of more than 1 g, so it can propel you upward, the force of the rocket's engine is what the scale indicates. There is never any "gravitational force" at all in relativity.
Mentor
As I said, it's semantics, and I guess we are in danger of getting into endless (somewhat useless) debates, but what do you mean by "normal force" here?
It is not semantics. This is experimentally testable. Only real forces are measurable, inertial forces are not measurable.
The normal force is the contact force between the scale and the object being weighed.
Take a scale at rest on Earth. If I stand on it, the gravitational force due to the presence of the Earth acts on me and there's an equal and opposite force (of electromagnetic nature on the fundamental level) which compensates this gravitational force. The reaction of the spring in the scale to this interaction is that it is shortened somewhat, and that length difference is shown by the scale.
The only thing that the scale measures is the real contact force between your feet and the scale. It does not measure the inertial force. Consider the same measurement in the frame of a nearby inertial (free falling) observer. The inertial force is changed to zero, the real force is unchanged, and the scale reading is unchanged.
If now the elevator is accelerated upwards, this is due to some external force and this has to be compensated additionally by the scale's spring, and that's why it reads "more weight". From my point of view in the frame accelerated relative to the Earth's rest frame it's an additional inertial force, which within GR however is indistinguishable from a gravitational force.
Same as above. A nearby inertial observer will have no inertial force but the same real force and the same scale reading. Therefore the scale does not detect the inertial force, it detects the real force only.
We have discussed this before, do you not recall?
If now the elevator is accelerated upwards, this is due to some external force and this has to be compensated additionally by the scale's spring, and that's why it reads "more weight". From my point of view in the frame accelerated relative to the Earth's rest frame it's an additional inertial force, which within GR however is indistinguishable from a gravitational force.
The additional contact force by the scale, and the additional inertial force have opposite directions and act differently on the body (foot soles vs. whole body volume). They are not the same thing.
Staff Emeritus
Let me see if I'm reading this correctly.
Disregarding the time dilation from centrifugal potential for a minute... the L points have the fastest clocks, then the earth, then the sun. And, based on the Newtonian potential maps, I would guess that the L4 and L5 points would have even faster clocks than anything else in the system.
There's an interesting graphic in a Nasa domain webpage on the Lagrange points, https://map.gsfc.nasa.gov/ContentMedia/lagrange.pdf that graphs the generalized potential with a contour diagram, though it includes the centrifugal potential.
The graph doesn't / can't include the velocity dependent terms in the generalized potential, however.
It's a PDF file, so I can't easily just paste the graphic here; it's figure 2. The graphic shows that L4 and L5 are peaks in the effective potential, a little bit "higher" than L3. L1 and L2 are troughs, which are actually saddle points, in the potential.
The reference also has the analytic form of the generalized potential.
If it weren't for the velocity dependent terms in the generalized potential, L4 and L5 would be unstable, like a marble on the top of a hill.
The approach to determine stability, though, is more involved than looking at the graphic. From said website:
Nasa said:
Usually it is enough to look at the shape of the effective potential and see if the equilibrium points occur at hills, valleys, or saddles. However, this simple criterion fails when we have a velocity dependent potential. Instead, we must perform a linear stability analysis about each Lagrange point. This entails linearising the equations of motion about each equilibrium solution and solving for small departures from equilibrium.
When you add in the velocity-dependent terms, due to the coriolis force, L4 and L5 become stable for high enough mass ratios between the primary and secondary.
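For completeness, the outcome of that linear stability analysis is the classic Routh criterion, which is easy to check numerically (round masses of my own choosing):

```python
import math

# L4/L5 are linearly stable iff 27 * mu * (1 - mu) < 1,
# where mu = m2 / (m1 + m2) is the secondary's mass fraction.
mu_crit = 0.5 * (1 - math.sqrt(23.0 / 27.0))  # ~0.0385, i.e. m1/m2 > ~24.96

mu_sun_earth = 5.972e24 / (1.989e30 + 5.972e24)
l45_stable = 27 * mu_sun_earth * (1 - mu_sun_earth) < 1  # True for Sun-Earth
```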
The link between time dilation and the potential is very direct to Newtonian order. Modulo possible factors of G and c depending on one's unit conventions, the square of the time dilation factor is equal to ##|g_{00}|##, since ##d\tau^2 = g_{00} \, dt^2## for a body at rest. For a single central mass, ##|g_{00}| = 1 - 2U = 1 + 2 \Phi##, where ##U > 0## and ##\Phi < 0##, i.e. U and ##\Phi## are the same quantity with different signs.
If you include the usual factors of G and c, for a central body, ##U(r) = \frac{GM}{c^2 r}##. To Newtonian order, you can simply add the potentials from multiple bodies together.
Finding the approximate effects of GR would involve using a post-Newtonian approximation. But generally, I'd expect these corrections would be negligible.
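Since the Newtonian-order rule is just to sum ##GM_i/(c^2 r_i)## over the bodies, the clock-rate comparison is easy to sketch numerically. A minimal example comparing a clock on Earth's surface with one at the Sun-Earth L1 point, ignoring all velocity and centrifugal terms; the masses and distances are rounded illustrative values:

```python
# Newtonian-order clock rate: dtau/dt ~ 1 - U, with U = sum_i G*M_i/(c^2 r_i)
# added over the bodies. Larger U means deeper in the potential, slower clock.
G, c = 6.674e-11, 2.998e8          # SI units
M_sun, M_earth = 1.989e30, 5.972e24
AU, R_earth = 1.496e11, 6.371e6
d_L1 = 1.5e9                       # L1 sits ~1.5 million km sunward of Earth

def U(pairs):
    """Dimensionless potential sum G*M/(c^2 r) over (mass, distance) pairs."""
    return sum(G * M / (c**2 * r) for M, r in pairs)

U_surface = U([(M_sun, AU), (M_earth, R_earth)])   # clock on Earth's surface
U_L1 = U([(M_sun, AU - d_L1), (M_earth, d_L1)])    # clock at L1

print(f"U(Earth surface) = {U_surface:.3e}")
print(f"U(L1)            = {U_L1:.3e}")
```

The surface clock sits deeper in the combined potential than the L1 clock, consistent with the statement that the Lagrange points have faster clocks than the Earth.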
Staff Emeritus
Oh, something I wanted to add. Knowing the time dilation to Newtonian order, we can use the principle of maximal aging - or its big brother, the principle of extremal aging - to determine the equations of motion to Newtonian order.
E. F. Taylor's book "Exploring Black Holes" goes into this approach in much more detail. A second edition is available on the author's website for free nowadays. http://www.eftaylor.com/exploringblackholes/
Much of the introductory work on said principle is in the first chapter.
PeterDonis
If it weren't for the velocity dependent terms in the generalized potential, L4 and L5 would be unstable, like a marble on the top of a hill.
And for the Sun-Earth system, the saddle points L1-L3 are actually unstable, despite the velocity-dependent terms.
In short, Lagrange points are complicated enough in Newtonian physics. Expecting intuitive visualizations of them based on General Relativity is maybe asking too much.
Gold Member
2022 Award
There is no such force in relativity. The only force acting on you according to relativity is the force of the Earth's substance pushing up on you. That's the force that the scale is indicating. And if you get in a rocket with a thrust of more than 1 g, so it can propel you upward, the force of the rocket's engine is what the scale indicates. There is never any "gravitational force" at all in relativity.
That's one possible point of view. For me the gravitational interaction is an interaction like any other. From your point of view, all phenomena we call "gravitation" would be just inertial forces in non-inertial reference frames in Minkowski space. That's definitely not the case according to GR in the geometrical interpretation, where the presence of any energy-momentum-stress distribution leads to a spacetime with curvature. Gravitation is only equivalent to inertial forces in a local sense!
In the geometrical interpretation of the gravitational field, the gravitation of the Earth leads to a non-flat spacetime and the gravitational interaction between a test body and the Earth can only locally be compensated by changing to a free-falling reference frame. Of course, the "gravity I feel" is in fact the reaction force of the Earth on me, which is of electromagnetic nature.
As I said before, all this is just semantics and doesn't lead to much deeper understanding of GR. I think, however, it's important to stress that gravitation is not simply inertial forces, and that this equivalence is a local concept, as is the equivalence principle (in its various weak and strong forms).
Gold Member
2022 Award
The additional contact force by the scale, and the additional inertial force have opposite directions and act differently on the body (foot soles vs. whole body volume). They are not the same thing.
I've not said that they are the same thing.
Gold Member
2022 Award
It is not semantics. This is experimentally testable. Only real forces are measurable, inertial forces are not measurable.
The normal force is the contact force between the scale and the object being weighed.
The only thing that the scale measures is the real contact force between your feet and the scale. It does not measure the inertial force. Consider the same measurement in the frame of a nearby inertial (free falling) observer. The inertial force is changed to zero, the real force is unchanged, and the scale reading is unchanged.
Same as above. A nearby inertial observer will have no inertial force but the same real force and the same scale reading. Therefore the scale does not detect the inertial force, it detects the real force only.
We have discussed this before, do you not recall?
Yes, you can locally transform the gravitational force away due to the equivalence principle. The only point I want to make is that one has to stress the word "locally" and that you cannot globally transform away a "true" gravitational field.
Mentor
As I said before, all this is just semantics
Again, this is not just semantics. The equivalence principle has clear experimental consequences. A scale measures the real contact force, not inertial forces and not (local) gravity.
The only point I want to make is that one has to stress the word "locally" and that you cannot globally transform away a "true" gravitational field.
Agreed. So make that point. The “only semantics” point is incorrect.
Gold Member
2022 Award
But then you should admit that it's wrong to say gravitational forces (I prefer the term gravitational interaction, though, since forces are something I want to restrict to use in Newtonian physics only) are purely fictitious. This is indeed what's only local, as is the equivalence principle. A lot of unnecessary discussion has occurred in the literature only because the local meaning of the equivalence principle hasn't been considered, e.g., the question of whether a freely falling charged body radiates.
In this sense it's right that it's not only semantics, because the claim that gravitational interactions are only like fictitious forces is inaccurate at best if not plain wrong.
Mentor
But then you should admit that it's wrong to say gravitational forces (I prefer the term gravitational interaction though since forces are something I want to restrict to the use in Newtonian physics only) are purely fictitious. This is indeed what's only local as is the equivalence principle.
Fully agreed, I have no objection to admitting that the inertial force designation is only local.
Do you similarly admit that the local designation of gravity as an inertial force is not purely semantic but has clear experimental consequences as described above?
# High responsivity in MoS2 phototransistors based on charge trapping HfO2 dielectrics
## Abstract
2D Transition Metal Dichalcogenides hold promising potential for future optoelectronic applications due to their high photoresponsivity and tunable band structure for broadband photodetection. In imaging applications, the detection of weak light signals is crucial for creating a better contrast between bright and dark pixels in order to achieve high resolution images. The photogating effect has been previously shown to offer high light sensitivities; however, the key features required to make it the dominant photoresponse have yet to be discussed. Here, we report high responsivity and high photogain MoS2 phototransistors based on the dual function of HfO2 as a dielectric and charge trapping layer to enhance the photogating effect. As a result, these devices offered a very large responsivity of 1.1 × 106 A W−1, a photogain >109, and a detectivity of 5.6 × 1013 Jones under low light illumination. This work offers a CMOS compatible process and technique to develop highly photosensitive phototransistors for future low-powered imaging applications.
## Introduction
Transition metal dichalcogenides (TMDCs) have recently been studied with great interest due to their unique electronic and optoelectronic properties. Unlike graphene, these materials have an intrinsic bandgap that makes them a promising candidate for developing future electronic devices, including transistors1, integrated circuits2, and non-volatile memory devices3. Although MoS2 has been extensively studied, other TMDC materials such as molybdenum diselenide (MoSe2), tungsten diselenide (WSe2), tungsten disulfide (WS2), and palladium diselenide (PdSe2) have been investigated to explore interesting properties, such as interface charge transport mechanisms4, controlling doping carrier type by field emission5, developing high mobility transistors using two-dimensional (2D) hexagonal boron nitride (h-BN) dielectrics6, and understanding the influence of external stimuli on charge transport properties7.
For 2D photodetection applications, graphene as a photoactive material has been pursued for its broadband detection ability8 and fast time response speeds9,10; however, its short carrier lifetimes in the picoseconds range and small optical absorption (~2%) limit its light detection sensitivity. On the other hand, TMDCs such as MoS2 hold a promising role in future photodetectors, since they offer attractive features such as high photoresponsivities, low dark currents, and tunable bandgaps via layer thickness for wider optical absorption11,12. There have been many techniques proposed to explore its photodetection ability and to enhance its photoresponse, such as combining MoS2 to form hybrid materials13, heterostructures14, PN junctions15, intrinsic photogating16, and three-dimensional device structures17. Enhancing the photosensitivity with intrinsic MoS2 is highly attractive, because it can offer a simple fabrication process and complementary metal–oxide–semiconductor (CMOS) compatibility. However, simultaneously achieving high light sensitivity under low-power operation is challenging, yet highly desirable for use in future image sensors.
The ability to design a dominating photocurrent generation mechanism can enable the opportunity to develop application-specific performance for photodetectors. MoS2 has been known to display in visible light a combination of two photocurrent generation mechanisms: the photoconductive and photogating effect. Overall, the photogating effect can provide higher light sensitivity, since the built-in electric field from trapped photocarriers can induce more majority free carriers. Although there has been an exploration of different kinds of applications of photogating, such as the use of environmental gases to provide molecular gating18 and dual photogating with optical absorbing insulators19, there is still a lack of understanding of how to control this effect in TMDC materials. In addition, previously reported works using TMDCs as photoactive channels have claimed to observe a dominant photogating effect in the on-state (accumulation); however, in the off-state (depletion) the photoconductive effect dominates16,20,21. One important parameter to consider is the influence of the dielectric layer properties on the photoresponse. To enable a dominating photogating effect for all operation modes (on- and off-state), a dielectric layer with an intrinsic affinity for charge trapping would be required to generate very large photocurrents.
Here we report a low-powered highly photosensitive MoS2 phototransistor through employing high-k HfO2 dielectrics. In this device structure, HfO2 serves as both a dielectric and charge-trapping layer. The intrinsic charge-trapping property of HfO2 via oxygen vacancies helps to enhance the photoresponse by trapping the photogenerated hole carriers. As a result, the photogating effect is strongly enhanced, simultaneously providing a very large responsivity of 1.1 × 106 A W−1, detectivity of 5.6 × 1013 Jones, and photogain of 1.6 × 109 under weak-light detection and low-power operation.
## Results and discussion
### Multi-layered MoS2 and device characterization
The device schematic of the HfO2-based multi-layered MoS2 phototransistor can be seen in Fig. 1a and a close-up optical microscope image of the channel region in Fig. 1b. A back-gated device configuration was selected, since it allows for direct light illumination onto the MoS2 channel region for better optical absorption. Heavily doped n++ silicon was used as a back-gate where 10 nm of atomic layer deposition (ALD) HfO2 was deposited as the dielectric layer. Next, the multi-layered MoS2 flake was mechanically exfoliated from a bulk crystal and transferred onto HfO2. Finally, top contacts of Ti (5 nm)/Au (50 nm) with channel length of 5 μm were deposited by e-beam evaporation. Details of the device fabrication process can be found in the “Methods” section. To characterize the exfoliated multi-layered MoS2, Raman spectroscopy and atomic force microscopy (AFM) measurements were performed. The Raman spectrum in Fig. 1c shows the in-plane $$E_{2g}^1$$ and out-of-plane A1g vibrational modes from the Mo-S bond in MoS2 where the two peaks were located at 382.9 and 407.7 cm−1, respectively. The wavenumber difference between the peaks was 24.8 cm−1, which is close to the bulk MoS2 value of 25 cm−1. AFM height profile measurements were performed to obtain the film thickness of the multi-layered MoS2 flake as seen in Fig. 1d. Typical MoS2 flake thicknesses used in this study were in the range of 3–20 nm as a result of our exfoliation/transfer method and to allow evaluation at near-infrared (NIR) wavelengths. Additionally, an AFM surface topographic scan of the channel region can be found in Supplementary Fig. 1.
Next, a performance evaluation of the HfO2-based MoS2 phototransistor was analyzed under the dark condition (no light illumination). The dark transfer characteristics of the phototransistor under different drain–source voltages (VDS) of 150, 250, and 500 mV can be seen in Fig. 1e. Under VDS bias of 150 mV, the Ion/Ioff ratio was 2.92 × 107, the subthreshold swing (SS) was determined to be 142 mV/dec, the threshold voltage (VTH) was −0.71 V from the linear extrapolation method, and the field-effect mobility of 5.07 cm2 V−1 s−1 was extracted from the linear region of the IDVG plot using the equation $$\mu = \left[ {\frac{{{\rm{d}}I_{\rm{D}}}}{{{\rm{d}}V_{\rm{G}}}}} \right]\frac{L}{{W\times {V_{{\rm{DS}}} \times }C_{{\rm{ox}}}}}$$, where W is the width, L is the channel length, VDS is the drain–source voltage, and Cox is the oxide capacitance per unit area. One aspect to point out is the large SS and low field-effect mobility that was obtained. In this structure, as-deposited ALD HfO2 is used with no surface pretreatments or post-deposition annealing to preserve an amorphous defect-rich interface. Back-gated MoS2 field-effect transistors have been reported to show lower mobilities compared to top-gate structures22,23. Some of the reasons include the exposure of the channel region to environmental gaseous absorbates like O2 and H2O that deplete electrons from MoS2 via electron transferring24 and the reduction in gate capacitance density due to the contribution of non-gapless contact of transferred MoS2, which is known as the van der Waals gap25. However, an improved electrical performance for back-gated devices can be achieved by channel encapsulation26. Lastly, the dark output characteristics are seen in Fig. 1f. A close-up of the IDVD plot at lower drain voltages can be seen in Supplementary Fig. 2. The linear relationship between the drain current and drain voltage under lower voltages indicates that our metal contacts have ohmic-type behavior.
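As a quick numerical sketch of the mobility extraction formula above: the oxide permittivity (k ≈ 17 for ALD HfO2), the channel width, and the transconductance slope below are assumed values, chosen only so the result lands near the reported 5.07 cm2 V−1 s−1; they are not taken from the paper.

```python
# Field-effect mobility from the linear region of the ID-VG curve:
# mu = (dID/dVG) * L / (W * V_DS * C_ox)
EPS0 = 8.854e-14                 # F/cm, vacuum permittivity
C_ox = EPS0 * 17.0 / 10e-7       # F/cm^2, assuming k ~ 17 HfO2, 10 nm thick
L = 5e-4                         # channel length, cm (5 um, stated in text)
W = 6e-4                         # assumed channel width, cm
V_DS = 0.15                      # V, drain-source bias
gm = 1.37e-6                     # assumed dID/dVG slope, S

mu = gm * L / (W * V_DS * C_ox)  # cm^2 V^-1 s^-1
print(f"mu = {mu:.2f} cm^2/(V s)")
```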
### Photoresponse performance
The general light detection process starts with the absorption of incident light by the photoactive channel when the energy condition of Ephoton ≥ Eg,MoS2 is satisfied. Next, the photogenerated electron/hole pairs are separated by the applied electric field from VDS in the depletion region where the electrons and holes are collected at the electrodes. Generally, MoS2 has two dominant photocurrent generation mechanisms in visible light, which are the photoconductive and photogating effect27,28. Typically, when observing the photoresponse of MoS2 phototransistors, its transfer characteristics will display a combination of these effects. The photoconductive effect is the increase in conductivity of the semiconductor from illumination resulting in the generation of electron/hole pairs. These photogenerated carriers get collected by the electrodes and produce an increase in the current that adds to the dark current. The photogating effect is when one of the photogenerated carriers gets trapped and acts as a built-in local electric field. For n-type semiconductors, these trapped holes induce more majority carrier electrons and causes a horizontal shift in the IDVG curves.
An evaluation of the photoresponse with the HfO2-based MoS2 phototransistor under the illumination of blue light (λ = 460 nm) was studied. In addition, we compared its performance with the conventional SiO2-based MoS2 phototransistor with the same dielectric thickness. The effects of light illumination at different optical power densities can be seen in Fig. 2a, b for the HfO2 and SiO2 devices, respectively. As expected for both cases, there is an increase in ID as the optical power was increased. For the HfO2 device, the IDVG illumination curves are strongly shifted toward the left indicating a strong photogating effect. Since this device without illumination displays hysteresis, we found that its hysteresis window became larger with increasing optical powers (Supplementary Fig. 9). Its optical detection under red (λ = 630 nm) and NIR (λ = 850 nm) wavelengths were also measured and displayed the same photogating behavior (Supplementary Figs. 3 and 8). In addition, a measurement under vacuum conditions (2 × 10−3 Pa) was also performed with the HfO2 device and can be seen in Supplementary Figs. 10 and 11. We found that the multi-layered MoS2 channel region could not become fully depleted due to the poor gate control (back-gate structure) and absence of gaseous absorbates (O2 and H2O) that help to deplete the channel.
Also, its output characteristics under constant illumination in comparison to the dark condition can be found in Supplementary Fig. 4. On the other hand, the SiO2 device showed a stronger photoconductive effect where its IDVG illumination curves increased in the vertical direction. There has been a large variation in the reported responsivities29,30 and even in the dominant photoresponse behavior in SiO2-based MoS2 phototransistors. Some have reported observing a dominating photogating effect31 or photoconductive effect32. This discrepancy comes from the interface between MoS2/SiO2 where SiO2 is well known for dangling bonds, which can act as trap sites. Also, the presence of moisture and surface absorbates at the interface have been shown to cause variations in hysteresis due to polar molecules like water33, which can act to screen the electric field in the channel region. In addition, due to the dielectric scaling down to 10 nm, the gate bias sweep has been reduced for both devices to 2 V in comparison to other works that require much larger gate voltages between 20 and 40 V.
Next, the photocurrent (IPH = ILIGHT − IDARK) generation between the two devices was compared by plotting the photocurrent (IPH) versus gate voltage (VG) at VDS = 150 mV and Popt = 1.5 mW cm−2 in Fig. 2e. The HfO2 device displayed a much larger peak IPH of 2.1 μA compared to the SiO2 device whose peak IPH was 62.3 nA. Here the HfO2 device provided a 33 times higher photocurrent generation compared to the SiO2 device under the same biasing and illumination conditions. After the peak photogenerated current was reached, the SiO2 device showed a decline; however, the HfO2 device still displayed detection of photocurrent even into accumulation mode. A close-up of the illumination curves from Fig. 2a, b can be seen in Fig. 2c, d. For the HfO2 device, due to the strong IDVG parallel shifting under illumination, there is an increase in the on-state current in comparison to the dark state on-current. As a result, this device is still able to detect photocurrent in both depletion and accumulation mode operation. Conversely, the SiO2 device did not show this behavior as the illumination on-current is roughly the same as the dark condition. Lastly, the dependence of the photocurrent on the incident optical power density was plotted in a log scale in Fig. 2f. The gate voltages of −1.52 and −1 V were evaluated under depletion mode for the phototransistors and VDS was 150 mV. The photocurrent as a function of the optical power can be fitted using a power-law relationship: Iph ∝ $$P_{{\rm{opt}}}^\alpha$$, where the exponent α can range from 0 < α ≤ 1. A value of 1 represents a linear relationship where the increase in photocurrent is solely due to the photogenerated carriers (photoconductive effect)34. For the case of α < 1, it indicates a sub-linear relationship due to the presence of traps, defects, and other complicated photogeneration/recombination processes20,35. From the data fitting, the HfO2 device had α = 0.82 for VG = −1.52 V and α = 0.29 for VG = −1 V.
On the other hand, the SiO2 device had α = 0.94 for VG = −1.52 V and α = 1.39 for VG = −1 V. The HfO2 device maintained the expected sub-linear relationship; however, the SiO2 device showed a close to linear photocurrent relationship with increasing optical powers. As a result of the photogating effect, the HfO2 device is capable of producing 103–105 times larger photocurrents under depletion mode versus SiO2, whose photoresponse is photoconductive in this regime.
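The α extraction amounts to a straight-line fit in log-log space. The data points below are synthetic, generated with α = 0.82 (the value reported for the HfO2 device at VG = −1.52 V) and an arbitrary prefactor, so the fit simply recovers the exponent:

```python
# Power-law exponent extraction: fit log(I_ph) vs log(P_opt) with a line;
# the slope is alpha. Data are synthetic, not measured values.
import numpy as np

P_opt = np.array([0.01, 0.1, 1.0, 10.0])   # mW/cm^2 (synthetic)
I_ph = 2.0e-6 * P_opt**0.82                # A, follows I_ph ~ P_opt^alpha

alpha, log_prefactor = np.polyfit(np.log(P_opt), np.log(I_ph), 1)
print(f"fitted alpha = {alpha:.2f}")
```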
### High-temperature annealing and dielectric hole trapping
HfO2 is currently used in CMOS technology. In comparison to silicon dioxide, its use in TMDC-based transistors offers benefits, such as higher carrier densities, dielectric screening effects, and lower operating voltages. For silicon-based transistors, these types of metal oxide dielectrics have been found to have an inherent charge-trapping property36, which has been shown to have reliability issues such as degraded mobility from Coulomb and phonon scattering37,38 and threshold voltage shifts from charge injection into pre-existing traps in the high-k material39. One technique to improve the dielectric interface quality by reducing the interface trap density between the oxide and semiconductor layer is to perform a high-temperature anneal40. Here we explore the effects of thermal annealing on HfO2 and its impact on the photoresponse. After ALD deposition of HfO2, a rapid thermal annealing was performed at 1000 °C for 1 min before transferring MoS2. As-deposited HfO2 is amorphous; however, applying a high-temperature anneal can introduce some crystalline domains to produce a polycrystalline film. X-ray diffraction spectra of both the amorphous and 1000 °C annealed HfO2 films can be seen in Supplementary Fig. 5. The 1000 °C HfO2 displayed some monoclinic phase peaks in its spectrum in comparison to the no anneal HfO2, which had none. The photoresponse of the 1000 °C annealed HfO2 device under the same biasing (VDS = 150 mV) and illumination conditions from before can be seen in Fig. 3a. It also displays the photogating effect; however, its illumination curves did not display strong parallel shifting like the no-anneal device. The photocurrent generation of the 1000 °C HfO2 device was also measured and plotted in Fig. 3b. Under a constant illumination of 1.5 mW cm−2, it generated a lower peak IPH of 46.7 nA.
To test the intrinsic potential of hole trapping with HfO2, a “stress and sense” IV measurement41 was performed under the dark condition. In this measurement, a negative gate pulse of −2 V was applied under varying stress time durations of 100 ms, 1 s, 10 s, and 100 s. After each gate pulse stress, an IV sweep around the threshold voltage was measured and plotted in Fig. 3c. The threshold voltage shift was measured with respect to the before stress threshold voltage (VTHO). As the stress time was increased, the IDVG curves moved toward the left as a parallel shift. This negative threshold voltage shift indicates the presence of trapped hole charges. Figure 3d shows the threshold voltage shift and effective density of defects (ΔNeff) generated from the negative bias with respect to stress time. In negative bias temperature instability, ΔNeff is a term that contains the total contribution of fast and slow defect states that are generated from the applied stressing conditions42. It can be determined from $${\Delta} N_{{\rm{eff}}} = \frac{{{\Delta} V_{{\rm{TH}}}\times C_{{\rm{ox}}}}}{q},$$ where ΔVTH is the threshold voltage shift, Cox is the oxide capacitance, and q is the electronic charge. The same measurement was also performed with the SiO2 device where the negative bias stress IDVG plot can be found in Supplementary Fig. 6. Comparing the two devices, the HfO2 device showed a larger threshold voltage shift and a larger defect density in ~1012 cm−2 with increasing bias stress time. Overall, these results show that a longer negative bias stress time leads to more defects generated in the oxide layer leading to larger threshold voltage shifting.
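A rough numerical version of the ΔNeff estimate above, assuming k ≈ 17 for the 10 nm ALD HfO2 and an illustrative threshold shift of 0.2 V (the text only states that the density reaches ~1012 cm−2 at long stress times; the exact shift is not quoted here):

```python
# Effective defect density from the threshold-voltage shift:
# Delta_N_eff = Delta_V_TH * C_ox / q
EPS0 = 8.854e-14      # F/cm, vacuum permittivity
Q = 1.602e-19         # C, electronic charge
k_hfo2 = 17.0         # assumed relative permittivity of ALD HfO2
t_ox = 10e-7          # 10 nm oxide thickness, in cm

C_ox = EPS0 * k_hfo2 / t_ox      # F/cm^2
dVth = 0.2                       # V, assumed shift after long negative stress
dN_eff = dVth * C_ox / Q         # defects per cm^2
print(f"C_ox = {C_ox:.2e} F/cm^2, dN_eff = {dN_eff:.2e} cm^-2")
```

With these assumed inputs the density comes out in the ~1012 cm−2 range, consistent with the order of magnitude reported.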
### Mechanism of photogating with HfO2
As previously discussed, the photogating mechanism relies on the charge trapping of the photogenerated holes. One method to confirm the photogating effect is to look at the amount of threshold voltage shifting under increasing optical powers. The threshold voltage shift is defined as: $${\Delta} V_{{\rm{TH}}} = V_{{\rm{TH}},\,{\rm{LIGHT}}} - V_{{\rm{TH}},\,{\rm{DARK}}}$$. Figure 4a shows a comparison of the threshold voltage shift versus optical power density for all three devices: no anneal HfO2, SiO2, and 1000 °C HfO2. The negative sign in ΔVTH indicates the presence of trapped hole charges. The no anneal HfO2 device overall displayed a stronger threshold voltage shift compared to the other devices indicating its higher sensitivity to the photogating effect. Another method to confirm the presence of photogating is to look at the relationship between the photocurrent and transconductance. Since the photogating effect produces a shift in the threshold voltage resulting in an increase to the drain current, the photocurrent should have a proportional relationship with the transconductance based on the following approximation: $$I_{{\rm{PH}}} \approx g_m\times {\Delta} V_{{\rm{TH}}}$$28, where $$g_m = \frac{{{\rm{d}}I_{\rm{D}}}}{{{\rm{d}}V_{\rm{G}}}}$$. Based on the results from Fig. 2c, we plotted the photocurrent at 1.5 mW cm−2 and the device’s transconductance as a function of the gate voltage up until the peak photocurrent in Fig. 4b, c for the HfO2 and SiO2 devices, respectively. Both devices displayed a similar trend for the photocurrent and transconductance; however, the HfO2 device showed a closer proportional relationship, thus further confirming a stronger photogating effect.
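The IPH ≈ gm × ΔVTH approximation is easy to check numerically; the gm and ΔVTH values below are hypothetical, picked only so that the estimate lands near the 2.1 μA peak photocurrent reported earlier for the HfO2 device:

```python
# Photogating sanity check: I_PH ~ gm * |Delta_V_TH|.
# Both input values are hypothetical, for illustration only.
gm = 4.2e-6        # S, assumed transconductance near the operating point
dVth = -0.5        # V, assumed light-induced threshold shift (negative)

I_ph_est = gm * abs(dVth)          # estimated photocurrent, A
print(f"estimated I_PH = {I_ph_est:.2e} A")
```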
Light detection occurs in depletion mode where the bands near the surface of the channel bend upwards. A model of the charge-trapping process with the HfO2 dielectric can be found in Fig. 4f. The photogeneration process of free e/h+ occurs in the depletion region of MoS2 where the electric field from VDS assists in separating the charges. According to our proposed model, the photogenerated holes tunnel into HfO2 to occupy oxide trap levels near the valence band edge of MoS2. As a result of this hole accumulation process in HfO2, the trapped holes act as a local built-in electric field, which shifts the Fermi level in MoS2 to induce more electrons. Evidence of the presence of oxide traps can be seen from the strong horizontal IDVG shifting under light illumination. In order to suppress the photogenerated hole trapping in HfO2, a 3.4 nm layer of deposited SiO2 was inserted between the MoS2 channel and HfO2 to function as an insulating tunneling barrier layer (Fig. 4d). Next, the same blue light illumination measurement as before was performed in Fig. 4e. This device now showed a more dominant photoconductive behavior, thus indicating the successful separation of the oxide traps in HfO2 from the valence band edge of MoS2.
Next, the photoresponse of a different TMDC material, WSe2, was analyzed utilizing the same device structure and metal contacts. Although higher optical powers were needed to clearly observe its light detection, its photoresponse to blue light can be seen in Fig. 4g where its flake thickness was close to 5 nm. Interestingly, the WSe2 device with the same HfO2 dielectric showed a strong photoconductive behavior. To understand this discrepancy with MoS2, an energy band diagram of multi-layered MoS2, multi-layered WSe2, and HfO2 with respect to the vacuum level can be seen in Fig. 4h. The valence band maximum (VBM) of WSe2 lies at a higher energy level in comparison to MoS2 and their VBM difference was determined to be 0.4 eV. HfO2 is known to have intrinsic defects such as oxygen vacancies and interstitials43 located within the bandgap where they can serve as electron and hole traps. According to a simulation study44 with monoclinic HfO2, there is a distribution of oxygen vacancies of different charged states (positive V+, negative V−, and neutral Vo) located slightly below the mid bandgap region of HfO2. In particular, the oxygen vacancies of type V+ at 2.71 eV and Vo at 2.91 eV with respect to the valence band of HfO2 correspond to energy levels with respect to the vacuum level of −5.89 eV for V+ and −5.69 eV for Vo, which lie close in energy to the VBM of multi-layer MoS2 at −5.6 eV (monolayer MoS2 has its VBM at ~−5.8 eV). The phenomenon of charge tunneling relies on the potential of the barrier height and effective mass of the carrier. Although the effective hole mass for MoS2 (0.54 mo45) is heavier than WSe2 (0.36 mo45), the observed tunneling behavior for MoS2 most likely arises from the defect energy levels of the oxygen vacancies in HfO2 having good band alignment with the low-lying valence band edge of MoS2.
We found the photogating effect for MoS2 to be reproducible and present in all devices made (Supplementary Table 1; all devices show high responsivity); therefore, this contributing defect state in HfO2 must be an intrinsic defect. As for the photoconductive behavior observed with WSe2, its valence band offset with these oxygen vacancies provides a trap energy-level misalignment resulting in no hole trapping, and instead allows for the collection of the photogenerated hole carriers at the electrode.
### Photodetection metrics
Some of the figures of merit for photodetectors such as responsivity, detectivity, photogain, and time response were evaluated for the HfO2 phototransistor. The responsivity (R) represents the conversion efficiency of the incident photon flux (input signal) into photogenerated free carriers (output signal). It is defined as $$R = \frac{{I_{{\rm{ph}}}}}{{P_{{\rm{opt}}}\times A}}$$, where Iph is the photocurrent, Popt is the incident optical power density, and A is the area of the channel. Under a negative gate bias and VDS = 150 mV, the responsivity with respect to optical power density can be seen in Fig. 5a. The peak responsivity of 1.1 × 106 A W−1 was obtained under the lowest optical power of 0.33 pW. The detectivity describes the response to light (sensitivity) and the noise floor of a photodetector. The dark noise current was measured based on a previously reported technique46 where we obtained 10.6 pA Hz−1/2 at a frequency of 2 Hz, which was the lowest frequency we could experimentally obtain. Due to the low dark current of this device (avg Idark ~ 5 pA), the shot noise limit was determined to be 1.3 fA Hz−1/2 from $$I_{{\rm{shot}}} = \sqrt {2qI_{{\rm{dark}}}}$$, where q is electronic charge. Next, the noise equivalent power (NEP) was calculated from $${\rm{NEP}} = \frac{{I_{{\rm{noise}}}}}{R}$$, where R is the responsivity. The specific detectivity (D*) was obtained from $$D^ \ast = \frac{{\sqrt A }}{{{\rm{NEP}}}}$$, where A is the area of the channel region. Figure 5b shows the specific detectivity as a function of the optical power density at different VG biasing where the highest detectivity achieved was 5.62 × 1013 Jones. A more accurate measure of the detectivity of this phototransistor can be obtained by performing a dark noise current measurement at the intrinsic bandwidth of this detector at around 3 mHz (narrow bandwidth is due to its long carrier lifetime).
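Chaining the figure-of-merit definitions above reproduces the reported numbers to within rounding. The channel area below is an assumption (the 5 μm length is stated, the ~6 μm width is not given in this section):

```python
# Photodetector figures of merit: NEP = I_noise / R, D* = sqrt(A) / NEP,
# using the responsivity and dark-noise values quoted in the text.
import math

R = 1.1e6            # A/W, peak responsivity from the text
I_noise = 10.6e-12   # A/sqrt(Hz), measured dark-noise current at 2 Hz
A = 30e-8            # cm^2, assumed channel area (5 um x ~6 um)

NEP = I_noise / R                # noise-equivalent power, W/sqrt(Hz)
D_star = math.sqrt(A) / NEP      # specific detectivity, Jones

print(f"NEP = {NEP:.2e} W/Hz^0.5")
print(f"D*  = {D_star:.2e} Jones")
```

With the assumed ~30 μm2 area, D* comes out near the 5.6 × 1013 Jones value reported.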
The photoswitching behavior was investigated to determine its time response. The device was biased under depletion mode with VG at −1.5 V and VDS at 150 and 500 mV in Fig. 5c. The blue region indicates the on-state where the light source was turned on for a duration of 30 s. Within the first few seconds after the light source is turned on, there is a rapid increase in the current due to band-to-band transitions. Next, the current transitions into a slow increase, with the peak current value occurring at the moment when the light source is cut off. This slow current generation is due to the photogenerated electrons induced by the photogenerated hole-trapping process. After the light source is turned off, the current at first decays rapidly but then transitions into a slow decay, which is called the persistent photocurrent (PPC) effect [47,48]. The PPC effect is the sustained conductivity after illumination and is attributed to the presence of trapped charges at the interface between the semiconductor and the dielectric. The relaxation time constant can be extracted from the slowly decaying drain current (ID) by using a stretched exponential decay function: $$I_{{\rm{PPC}}}\left( t \right) = I_{\rm{o}}{\rm{e}}^{ - \left( {\frac{t}{\tau }} \right)^\beta }$$, where τ is the relaxation time constant and β is the decay exponent that ranges from 0 to 1. Figure 5d shows the PPC model fitted to the decaying ID after illumination. The fitting parameters τ and β were determined to be 312 s and 0.395 for VDS = 150 mV and 272 s and 0.326 for VDS = 500 mV, respectively. For both drain voltages, the time constants were large due to the slow de-trapping of the oxide-trapped charges. On the other hand, the SiO2 phototransistor displayed much faster photoswitching behavior, as seen in Supplementary Fig. 7. The rise and fall times were 408 and 682 ms for a VDS bias of 150 mV.
The faster switching speeds and stable illumination current of the SiO2 device indicate the absence of slow deep-level traps.
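The stretched-exponential model used for the fits above can be written down in a few lines. This sketch simply evaluates the function with the fitted parameters reported in the text; the normalized I0 = 1 is our own assumption for illustration.

```python
import math

def ppc_current(t, i0, tau, beta):
    """Stretched-exponential PPC decay: I(t) = I0 * exp(-(t / tau)**beta)."""
    return i0 * math.exp(-((t / tau) ** beta))

# Fitted values reported for VDS = 150 mV.
tau, beta = 312.0, 0.395
# At t = tau the current has decayed to exactly I0/e, independent of beta.
print(ppc_current(tau, 1.0, tau, beta))  # ~0.3679
```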
Photogain is the ratio between the photogenerated-carrier lifetime and the carrier transit time. For the case where μh ≪ μe, more electrons are collected, so the photogain can be determined by $$G = \frac{\tau _{{\rm{photocarriers}}}\times \mu \times V_{{\rm{DS}}}}{L^2}$$, where τ is the lifetime of the photogenerated carriers, μ is the carrier mobility, VDS is the drain–source voltage, and L is the channel length. For VDS = 150 and 500 mV, the photogain was determined to be 9.49 × 108 and 2.76 × 109, respectively. In general, there is a tradeoff between a fast time response and high photogain, since a higher photosensitivity relies on having longer carrier lifetimes. The large photogain obtained with this device can be attributed to the slow de-trapping of the trapped hole carriers. As previously mentioned, MoS2-based phototransistors have been demonstrated to offer very large responsivities. Responsivities of previously reported phototransistors based on monolayer and multilayer MoS2 are benchmarked in Fig. 5e [11,16,28,29,30,32,49]. This study offers the highest responsivity in visible light at blue and red wavelengths (1.3 × 104 A W−1). The responsivity for the NIR wavelength at 850 nm was 13.2 A W−1.
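For a rough order-of-magnitude check, the photogain expression can be evaluated numerically. The channel length L ~ 5 μm is taken from the Methods; the mobility below is an assumed, illustrative value (it is not quoted in this section), chosen near typical multilayer-MoS2 numbers.

```python
def photogain(tau_s, mu_m2_per_vs, v_ds, length_m):
    """Photogain G = tau * mu * V_DS / L**2 (carrier lifetime over transit time)."""
    return tau_s * mu_m2_per_vs * v_ds / length_m ** 2

# tau = 312 s and L = 5 um from the text; mu ~ 5 cm^2/Vs = 5e-4 m^2/Vs is assumed.
g = photogain(312.0, 5.0e-4, 0.15, 5e-6)
print(f"{g:.2e}")  # ~9.4e8, the same order as the reported 9.49e8
```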
## Conclusion
In summary, we developed a highly photosensitive MoS2 phototransistor using the high-k metal oxide HfO2 as the dielectric. The alignment of the MoS2 valence band edge with oxygen-vacancy levels in HfO2 enables hole trapping in HfO2, generating a stronger photogating effect. When a valence band offset with the oxygen vacancies is present instead, strong photoconductive behavior is observed, as in the WSe2 device. In addition, we found that a charge-tunneling blocking layer can help suppress hole tunneling into HfO2 for MoS2, and that SiO2 dielectrics likewise yield photoconductive behavior for MoS2. The MoS2/HfO2 device provided a very high responsivity of approximately 106 A W−1 and photogains of ~109. Scaling the dielectric thickness down to 10 nm enabled lower-power operation while retaining the ability to optically detect thin flakes. Overall, this enhancement in photosensitivity allows better detection of weak light signals under low-power operation.
## Methods
### Device fabrication
A bulk crystal of MoS2 was purchased from 2D Semiconductors and the bulk crystal of WSe2 was purchased from GrapheneHQ. Heavily doped n-type silicon was used as the substrate and was cleaned by RCA pre-cleaning followed by HF etching. For the dielectric layer, either 10 nm of ALD HfO2 (Picosun) was deposited at 250 °C from the precursors tetrakis(ethylmethylamino)hafnium and H2O, or 10 nm of SiO2 was grown by thermal oxidation (Koyo Thermo Systems Co., Ltd). Multi-layered MoS2 was obtained by exfoliating the bulk crystal using tape from Nitto Corporation. MoS2 was then transferred and patterned using a photolithography process with channel length dimensions of ~5 μm. Metal contacts of 5 nm Ti/50 nm Au were deposited by e-beam evaporation. Aluminum was deposited on the backside of the silicon to provide better electrical contact for the back-gate. The final step was lift-off.
### Device characterization
All measurements were performed at room temperature and under ambient conditions. For the vacuum measurements, a Nagase Techno Engineering Co., Ltd Grail-408-32-B was used as the probe station and a Keithley 4200 SCS was used for device measurements. Commercial blue (460 nm) and red (630 nm) 0.5 W Mid-Power Flux light-emitting diodes (LEDs) from LED Paradise (LP-5FCIHBCT) and an NIR (850 nm) LED from Optosupply (OSI3XNE3E1E) were used as light sources with an LED lens, where the distance between the LED and the sample was ~6.5 cm. Ambient-condition measurements were performed using a Cascade probe system (Form Factor) and an Agilent 4156C Precision Semiconductor Parameter Analyzer. In order to obtain a steady-state condition for the illumination measurements, the LED was turned on for 1 min before each measurement was taken and was turned off for 3–5 min before subsequent measurements were made. Time-response/noise measurements were made using an Agilent 33500B series waveform generator to provide the light pulse waveforms. An Ametek 7270 lock-in amplifier and a Femto variable-gain low-noise current amplifier (DLPCA-200) were used for the noise current measurement.
## Data availability
The data supporting the findings of this study are available from the corresponding author upon reasonable request.
## References
1. Radisavljevic, B., Radenovic, A., Brivio, J., Giacometti, V. & Kis, A. Single-layer MoS2 transistors. Nat. Nanotechnol. 6, 147–150 (2011).
2. Wang, H. et al. Integrated circuits based on bilayer MoS2 transistors. Nano Lett. 12, 4674–4680 (2012).
3. Bertolazzi, S., Krasnozhon, D. & Kis, A. Nonvolatile memory cells based on MoS2/graphene heterostructures. ACS Nano 7, 3246–3252 (2013).
4. Mouafo, L. D. N. et al. Tuning contact transport mechanisms in bilayer MoSe2 transistors up to Fowler-Nordheim regime. 2D Mater. 4, 015037 (2017).
5. Di Bartolomeo, A. et al. A WSe2 vertical field emission transistor. Nanoscale 11, 1538–1548 (2019).
6. Iqbal, M. W. et al. High-mobility and air-stable single-layer WS2 field-effect transistors sandwiched between chemical vapor deposition-grown hexagonal BN films. Sci. Rep. 5, 10699 (2015).
7. Di Bartolomeo, A. et al. Pressure-tunable ambipolar conduction and hysteresis in thin palladium diselenide field effect transistors. Adv. Funct. Mater. 29, 1902483 (2019).
8. Zhang, Y. et al. Broadband high photoresponse from pure monolayer graphene photodetector. Nat. Commun. 4, 1811 (2013).
9. Xia, F., Mueller, T., Lin, Y.-M., Valdes-Garcia, A. & Avouris, P. Ultrafast graphene photodetector. Nat. Nanotechnol. 4, 839–843 (2009).
10. Urich, A., Unterrainer, K. & Mueller, T. Intrinsic response time of graphene photodetectors. Nano Lett. 11, 2804–2808 (2011).
11. Choi, W. et al. High-detectivity multilayer MoS2 phototransistors with spectral response from ultraviolet to infrared. Adv. Mater. 24, 5832–5836 (2012).
12. Lee, H. S. et al. MoS2 nanosheet phototransistors with thickness-modulated optical energy gap. Nano Lett. 12, 3695–3700 (2012).
13. Wang, Y. et al. Solution-processed MoS2/organolead trihalide perovskite photodetectors. Adv. Mater. 29, 1603995 (2017).
14. Han, P. et al. Highly sensitive MoS2 photodetectors with graphene contacts. Nanotechnology 29, 20LT01 (2018).
15. Lembke, D. & Kis, A. Breakdown of high-performance monolayer MoS2 transistors. ACS Nano 6, 10070–10075 (2012).
16. Wu, J.-Y. et al. Broadband MoS2 field-effect phototransistors: ultrasensitive visible-light photoresponse and negative infrared photoresponse. Adv. Mater. 30, 1705880 (2018).
17. Xiao, P. et al. Solution-processed 3D RGO-MoS2/pyramid Si heterojunction for ultrahigh detectivity and ultra-broadband photodetection. Adv. Mater. 30, 1801729 (2018).
18. Miller, B. et al. Photogating of mono- and few-layer MoS2. Appl. Phys. Lett. 106, 122103 (2015).
19. Zhang, K. et al. A substrate-enhanced MoS2 photodetector through a dual-photogating effect. Mater. Horiz. 6, 826 (2019).
20. Island, J. O., Blanter, S. I., Buscema, M., van der Zant, H. S. J. & Castellanos-Gomez, A. Gate controlled photocurrent generation mechanisms in high-gain In2Se3 phototransistors. Nano Lett. 15, 7853–7858 (2015).
21. Yamamoto, M., Ueno, K. & Tsukagoshi, K. Pronounced photogating effect in atomically thin WSe2 with a self-limiting surface oxide layer. Appl. Phys. Lett. 112, 181902 (2018).
22. Xu, H. et al. High responsivity and gate tunable graphene-MoS2 hybrid phototransistor. Small 10, 2300–2306 (2014).
23. Huo, N. & Konstantatos, G. Ultrasensitive all-2D MoS2 phototransistors enabled by an out-of-plane MoS2 PN homojunction. Nat. Commun. 8, 572 (2017).
24. Tongay, S. et al. Broad-range modulation of light emission in two-dimensional semiconductors by molecular physisorption gating. Nano Lett. 13, 2831–2836 (2013).
25. Zou, X. et al. A comparative study on top-gated and bottom-gated multilayer MoS2 transistors with gate stacked dielectric of Al2O3/HfO2. Nanotechnology 29, 245201 (2018).
26. Kufer, D. & Konstantatos, G. Highly sensitive, encapsulated MoS2 photodetector with gate controllable gain and speed. Nano Lett. 15, 7307–7313 (2015).
27. Buscema, M. et al. Photocurrent generation with two-dimensional van der Waals semiconductors. Chem. Soc. Rev. 44, 3691–3718 (2015).
28. Furchi, M. M., Polyushkin, D. K., Pospischil, A. & Mueller, T. Mechanisms of photoconductivity in atomically thin MoS2. Nano Lett. 14, 6165–6170 (2014).
29. Perea-Lopez, N. et al. CVD-grown monolayered MoS2 as an effective photosensor operating at low-voltage. 2D Mater. 1, 011004 (2014).
30. Lopez-Sanchez, O., Lembke, D., Kayci, M., Radenovic, A. & Kis, A. Ultrasensitive photodetectors based on monolayer MoS2. Nat. Nanotechnol. 8, 497–501 (2013).
31. Tran, M. D. et al. Role of hole trap sites in MoS2 for inconsistency in optical and electrical phenomena. ACS Appl. Mater. Interfaces 10, 10580–10586 (2018).
32. Yin, Z. et al. Single-layer MoS2 phototransistors. ACS Nano 6, 74–80 (2012).
33. Late, D. J., Liu, B., Ramakrishna Matte, H. S. S., Dravid, V. P. & Rao, C. N. R. Hysteresis in single-layer MoS2 field effect transistors. ACS Nano 6, 5635–5641 (2012).
34. Sze, S. M. & Ng, K. K. Physics of Semiconductor Devices 3rd edn (Wiley & Sons, 2007).
35. Fang, H. & Hu, W. Photogating in low dimensional photodetectors. Adv. Sci. 4, 1700323 (2017).
36. Gusev, E. P., D'Emic, C. D., Zafar, S. & Kumar, A. Charge trapping and detrapping in HfO2 high-k gate stacks. Microelectron. Eng. 72, 273–277 (2004).
37. Oates, A. S. Reliability issues for high-k gate dielectrics. In IEEE International Electron Devices Meeting 2003, 38.2.1–38.2.4 (IEEE, 2003).
38. Zhu, W., Han, J.-P. & Ma, T. P. Mobility measurement and degradation mechanisms of MOSFETs made with ultrathin high-k dielectrics. IEEE Trans. Electron Devices 51, 98–105 (2004).
39. Ribes, G. et al. Review on high-k dielectrics reliability issues. IEEE Trans. Device Mater. Rel. 5, 5–19 (2005).
40. Zhu, W. J., Ma, T. P., Zafar, S. & Tamagawa, T. Charge trapping in ultrathin hafnium oxide. IEEE Electron Device Lett. 23, 597 (2002).
41. Zafar, S., Callegari, A., Gusev, E. & Fischetti, M. V. Charge trapping related threshold voltage instabilities in high permittivity gate dielectric stacks. J. Appl. Phys. 91, 9298–9303 (2003).
42. Fleetwood, D. M. & Schrimpf, R. D. Defects in Microelectronic Materials and Devices (CRC, 2008).
43. McIntyre, P. Bulk and interfacial oxygen defects in HfO2 gate dielectric stacks: a critical assessment. ECS Trans. 11, 235 (2007).
44. Gavartin, J. L. et al. Negative oxygen vacancies in HfO2 as charge traps in high-k stacks. IEEE Trans. Electron Devices 51, 98–105 (2004).
45. Wickramaratne, D., Zahid, F. & Lake, R. K. Electronic and thermoelectric properties of few-layer transition metal dichalcogenides. J. Chem. Phys. 140, 124710 (2014).
46. Adinolfi, V. & Sargent, E. H. Photovoltage field-effect transistors. Nature 542, 324–327 (2017).
47. Wu, Y.-C. et al. Extrinsic origin of persistent photoconductivity in monolayer MoS2 field effect transistors. Sci. Rep. 5, 11472 (2015).
48. Di Bartolomeo, A. et al. Electrical transport and persistent photoconductivity in monolayer MoS2 phototransistors. Nanotechnology 28, 214002 (2017).
49. Zhang, W. et al. High-gain phototransistors based on a CVD MoS2 monolayer. Adv. Mater. 25, 3456–3461 (2013).
## Acknowledgements
This work was partly commissioned by the New Energy and Industrial Technology Development Organization (NEDO) and partly supported by NIMS Joint Research Hub Program. This work was partly conducted at the Takeda Sentanchi Supercleanroom, The University of Tokyo, supported by “Nanotechnology Platform Program” of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, Grant Number JPMXP09F-20-UT-0021. R.N. is supported by the Japanese Government Monbukagakusho (MEXT) scholarship. The authors would like to thank Z. Zhao and H. Tang of the University of Tokyo for their technical assistance.
## Author information
### Contributions
R.N. conceived and designed the research, performed the fabrication, characterization, and measurements. K.To. assisted with the fabrication. T.T. assisted with the experiments. R.N. and S.T. analyzed the mechanism. R.N. wrote the manuscript with comments from all the authors. M.T., K.Te., and S.T. supervised the project.
### Corresponding author
Correspondence to Roda Nur.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
## Additional information
Peer review information Primary handling editor: John Plummer.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
## About this article
### Cite this article
Nur, R., Tsuchiya, T., Toprasertpong, K. et al. High responsivity in MoS2 phototransistors based on charge trapping HfO2 dielectrics. Commun Mater 1, 103 (2020). https://doi.org/10.1038/s43246-020-00103-0
# Integration with other libraries
Integrating other libraries, or just your own custom functions, typically involves `.call()`.
Let's take the shell-integration library sh as an example. This library adds a function-like interface to shell callouts, e.g. ifconfig(), sed('s/^/>> /', _in='foo\nbar\nbaz'). This is problematic, as function call chains want callables that take a single input argument: in our case stdin, i.e. the _in parameter. To support this library, you can manually curry what you need, or create a small adapter object that does this currying:
```python
#!/usr/bin/env python
import typing

import sh
import fluentpy as _

class SHWrapper(object):
    def __getattr__(self, command):
        def _prepare_stdin(stdin):
            if isinstance(stdin, (typing.Text, sh.RunningCommand)):
                return stdin  # use immediately
            elif isinstance(stdin, typing.Iterable):
                return _(stdin).map(str).join('\n')._
            else:
                return str(stdin)  # just assume the caller wants to process it as a string

        def command_wrapper(*args, **kwargs):
            def command_with_arguments_wrapper(stdin):
                return getattr(sh, command)(*args, **kwargs, _in=_prepare_stdin(stdin))
            return command_with_arguments_wrapper
        return command_wrapper

pipe = SHWrapper()

_(range(10)).call(pipe.sed('s/^/>> /')).call(pipe.sort('-r')).print()
```
The library is wrapped in the SHWrapper object, which a) adapts the way stdin is handled, coercing various input types into something that can serve as stdin, and b) adapts the interface to create simple callables in two steps via currying, instead of requiring stdin in the same call that defines the arguments.
With that, `.call()` can be used to insert sh callouts into call chains:

```python
_(range(10)).call(pipe.sed('s/^/>> /')).call(pipe.sort('-r')).print()
```
So to summarize: if you want to adapt your own libraries to serve inside of call chains:
• If the interface is already plain callables, you are in luck: just use them.
• If not, you might need to adapt the interface of the library to single-input functions.
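The same currying idea works without sh, using only the standard library. Here is a minimal sketch; the `indent` helper is our own illustrative function, not part of fluentpy or sh:

```python
from functools import partial

def indent(prefix, text):
    """Prepend prefix to every line of text."""
    return '\n'.join(prefix + line for line in text.split('\n'))

# Curry away everything except the single stdin-like argument,
# yielding a one-argument callable suitable for a call chain.
indent_quoted = partial(indent, '>> ')

print(indent_quoted('foo\nbar'))
```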
Title & Authors
Wang, Gendi; Zhang, Xiaohui; Chu, Yuming;
Abstract
In this paper, the authors study properties of the so-called exponent-quasiadditive functions, and an application to the generalized Grötzsch ring function of quasiconformal theory is given.
Keywords
exponent-quasiadditive; upper bound; lower bound; generalized Grötzsch ring function; quasiconformal theory
Language
English
References
1. J. Aczél, On Applications and Theory of Functional Equations, Birkhäuser Verlag, Basel, 1969.
2. G. D. Anderson, S.-L. Qiu, M. K. Vamanamurthy, and M. Vuorinen, Generalized elliptic integrals and modular equations, Pacific J. Math. 192 (2000), 1-37.
3. G. D. Anderson, M. K. Vamanamurthy, and M. Vuorinen, Conformal Invariants, Inequalities, and Quasiconformal Maps, John Wiley & Sons, New York, 1997.
4. J. M. Borwein and P. B. Borwein, Pi and the AGM, John Wiley & Sons, New York, 1987.
5. F. Bowman, Introduction to Elliptic Functions with Applications, Dover Publications, Inc., New York, 1961.
6. P. F. Byrd and M. D. Friedman, Handbook of Elliptic Integrals for Engineers and Physicists, 2nd ed., Die Grundlehren Math. Wiss. 67, Springer-Verlag, Berlin-Göttingen-Heidelberg-New York, 1971.
7. M. Kuczma, Functional Equations in a Single Variable, PWN, Warszawa, 1968.
8. M. Kuczma, On the Schröder equation, Rozprawy Politech. Poznan 34 (1963), 1-500.
9. M. Kuczma, An Introduction to the Theory of Functional Equations and Inequalities, Państwowe Wydawnictwo Naukowe, Warszawa-Kraków, 1985.
10. M. Kuczma, B. Choczewski, and R. Ger, Iterative Functional Equations, Encyclopedia of Mathematics and Its Applications 32, Cambridge University Press, 1990.
11. O. Lehto and K. I. Virtanen, Quasiconformal Mappings in the Plane, Springer-Verlag, New York, 1973.
12. S.-L. Qiu, Grötzsch ring and Ramanujan's modular equations, Acta Math. Sinica (Chinese) 43 (2000), 283-290.
13. S.-L. Qiu, Singular values, quasiconformal maps and the Schottky upper bound, Science in China 28 (1998), 1241-1247.
14. S.-L. Qiu and M. Vuorinen, Infinite products and normalized quotients of hypergeometric functions, SIAM J. Math. Anal. 30 (1999), 1057-1075.
Question about definition of Semi algebra
I am wondering if someone could help me with basic properties of semi algebra. We say that $S$ is a semi algebra of subsets of X if
1. $\emptyset \in S$
2. If $P_1$, $P_2 \in S$, then $P_1 \cap P_2 \in S$
3. If $P \in S$, then $X \backslash P$ can be written as a finite union of sets from $S$.
But I am finding that sometimes it is defined using the following 3' instead of 3.
3'. If $P \in S$, then $X \backslash P$ can be written as a disjoint finite union of sets from $S$.
My question is: are these definitions equivalent? If so, can someone please show me how to obtain 3' from the first three conditions?
Thank you.
I am used to seeing semi-algebra defined with 2,3', without the first condition. – Braindead Jun 29 '13 at 0:51
This thread was confusing because a seemingly incorrect answer has been accepted. Other answers don't seem fully confident that their answers are correct. I would like a correct answer. I have therefore reposted this question here math.stackexchange.com/questions/1135203/… Future readers or answers should look to this new thread if they find this one unhelpful. Hopefully this new thread can come to an accepted answer to this question that is correct this time. – Stan Shunpike Feb 6 at 0:03
Certainly $3'$ implies $3$, so we just need to show $3$ implies $3'$. You can write $X\backslash P$ as
$$X\backslash P=\cup_i^n A_i$$
Define $B_1=A_1$, $B_2=A_2\backslash A_1$, $B_3=A_3\backslash A_2\backslash A_1$, and so on. Note that, for example, $B_2=A_2\cap(X\backslash A_1)$. I've written it in this form to show it's in your semi-algebra; in particular, complements of sets are handled by taking them away from the whole space and writing them as a finite union of sets. Then
$$X\backslash P=\cup_i^n B_i,$$
where the $B_i$ are disjoint. This is a common trick to get disjoint unions from nondisjoint unions.
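As pure set manipulation (leaving aside whether the resulting sets stay in $S$, which is exactly the point contested below), the disjointification trick can be sketched as follows; the function name is ours:

```python
def disjointify(sets):
    """Turn a finite union into a disjoint union with the same total union:
    B_i = A_i minus (A_1 union ... union A_{i-1})."""
    seen = set()
    result = []
    for a in sets:
        result.append(set(a) - seen)
        seen |= set(a)
    return result

parts = disjointify([{1, 2}, {2, 3}, {3, 4}])
print(parts)  # [{1, 2}, {3}, {4}]: pairwise disjoint, same union
```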
Are you saying that, since the $A_i$'s are in $S$, so are the $B_i$'s? That seems to be needed to complete this argument, but I don't immediately see why it should be true. – Andreas Blass Dec 12 '12 at 0:20
@AndreasBlass: I've added some clarification. Thanks for your point. – Alex R. Dec 12 '12 at 1:14
Hi, I am just having trouble understanding why $B_2$ has to be in the semi algebra... Could you possibly explain it a bit more? – J Kasahara Dec 12 '12 at 17:30
@JKasahara: Take a look here for further detail: books.google.com/… – Alex R. Dec 12 '12 at 21:24
@Alex: I did the "accept the answer" because I'm sure your answer is correct. Thank you for the reference. I looked but I am still confused about one thing.. As you say we can write $X \backslash A_1$ as a finite union of sets in the semialgebra $S$, but $S$ is not closed under finite unions so I just can't see how intersecting it with $A_2 \in S$ gets us a set in $S$. Are we assuming extra condition here by any chance? I would greatly appreciate if you could possibly explain this minor detail. Thank you very much. – J Kasahara Dec 13 '12 at 16:14
My guess is that you cannot easily show this. Most good books I have seen that use the concept of a semi-algebra take care to use your 1, 2, and 3' (rather than 1, 2, and 3) as the definition.
Answer 1 to your question is (as you have spotted yourself, I think) basically wrong: Alex has missed the fact that, in his construction of the $B_i$ from the $A_i$, he relies on complements being members of ${\cal S}$, which he is not entitled to do.
I don't know whether 3' can be deduced by 1,2, and 3. It's an interesting question. I suppose a disproof would be to exhibit a class of subsets of some set that satisfies 1, 2, and 3 but contains a member whose complement is not a disjoint union of members.
I would be interested if someone here could answer your conundrum one way or another.
Here is a counterexample showing that 1,2, and 3 do not prove 3'.
Let $X$ be the nodes of an infinite complete binary tree. Then for $x\in X$, let $L(x)$ denote all nodes in the left subtree from $x$, and similarly let $R(x)$ denote all nodes in the right subtree from $x$. Then let
$S = \{\{x\}\mid x\in X\} \cup \{\{x\}\cup L(x)\mid x\in X\} \cup \{\{x\}\cup R(x)\mid x\in X\} \cup \{\emptyset\}$
In other words, $S$ is comprised of all singletons, all singletons with their left subtrees, and all singletons with their right subtrees. One can check that this is a semi-algebra in the sense of 1, 2, and 3. But we will never be able to write $X$ (the complement of the empty set) as a finite disjoint union of elements of $S$.
Past Records
June 15, 2015 (Mon)
Algebraic Geometry Seminar
15:30-17:00 Room 122, Graduate School of Mathematical Sciences Bldg. (Komaba)
Christopher Hacon (University of Utah/RIMS)
Boundedness of the KSBA functor of SLC models (English)
[ Abstract ]
Let $X$ be a canonically polarized smooth $n$-dimensional projective variety over $\mathbb C$ (so that $\omega _X$ is ample); then it is well known that a fixed multiple of the canonical line bundle defines an embedding of $X$ in projective space. It then follows easily that if we fix certain invariants of $X$, then $X$ belongs to finitely many deformation types. Since canonical models are rarely smooth, it is important to generalize this result to canonically polarized $n$-dimensional projective varieties with canonical singularities. Moreover, since these varieties specialize to non-normal varieties, it is also important to generalize this result to semi-log canonical pairs. In this talk we will explain a strong version of the above result that applies to semi-log canonical pairs. This is joint work with C. Xu and J. McKernan.
[ Reference URL ]
http://www.math.utah.edu/~hacon/
Complex Analytic Geometry Seminar
10:30-12:00 Room 126, Graduate School of Mathematical Sciences Bldg. (Komaba)
The Lyapunov-Schmidt reduction for the CR Yamabe equation on the Heisenberg group (Japanese)
[ Abstract ]
We will study the CR Yamabe equation for a CR structure on the Heisenberg group which is deformed from the standard structure. Using the Lyapunov-Schmidt reduction, it is shown that a perturbation of the standard CR Yamabe solution is a solution to the deformed CR Yamabe equation, under certain conditions on the deformation.
Tokyo Probability Seminar
16:50-18:20 Room 128, Graduate School of Mathematical Sciences Bldg. (Komaba)
On recurrence and transience of multidimensional diffusion processes in random media
(joint work with Yozo Tamura and Seiichiro Kusuoka)
Numerical Analysis Seminar
16:30-18:00 Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Parallel energy-preserving methods for Hamiltonian systems (Japanese)
[ Abstract ]
June 12, 2015 (Fri)
Geometry Colloquium
10:00-11:30 Room 126, Graduate School of Mathematical Sciences Bldg. (Komaba)
The nonuniqueness of tangent cone at infinity of Ricci-flat manifolds (Japanese)
[ Abstract ]
For a complete Riemannian manifold (M, g), the Gromov-Hausdorff limit of (M, r^2 g) as r tends to 0 is called the tangent cone at infinity. By Gromov's Compactness Theorem, a tangent cone at infinity exists for every complete Riemannian manifold with nonnegative Ricci curvature. Moreover, if the manifold is Ricci-flat with Euclidean volume growth and has at least one tangent cone at infinity with a smooth cross section, then the tangent cone is uniquely determined, by a result of Colding and Minicozzi. In this talk I will explain that the assumption on the volume growth is essential for their uniqueness theorem.
June 11, 2015 (Thu)
Applied Analysis Seminar
16:00-17:30 Room 128, Graduate School of Mathematical Sciences Bldg. (Komaba)
[ Abstract ]
$u_t = \Delta u^m - \nabla \cdot (u^{q-1} \nabla v)$,
$v_t = \Delta v - v + u$.
Here we assume $m \ge 1$ and $q \ge 2$. Regarding global-in-time weak solutions to this problem, the condition $q \le m$ was first given by Sugiyama-Kunii (2006); later, Ishida-Yokota (2012) proved existence under the condition $q < m + 2/N$ ($N$ the space dimension) via an approach based on the maximal regularity principle. In these works, however, the boundedness of solutions, which is important for understanding their global-in-time behavior, remained open. Note that the condition $q < m + 2/N$ is believed to be optimal for global existence of weak solutions without any smallness restriction on the initial data, in view of studies of the classical Keller-Segel system corresponding to $m = 1$, $q = 2$. For the Neumann problem on bounded domains, Tao-Winkler (2012) and Ishida-Seki-Yokota (2014) established not only global existence but also boundedness of solutions under similar conditions; their computations, however, are complicated because of repeated use of the Gagliardo-Nirenberg interpolation inequality, and the proofs are not very transparent. In this talk, following the method of Senba-Suzuki (2006) for a special case, we show that boundedness of solutions follows easily from a small modification of the maximal-regularity approach of Ishida-Yokota (2012).
June 10, 2015 (Wed)
Operator Algebra Seminar
16:45-18:15 Room 122, Graduate School of Mathematical Sciences Bldg. (Komaba)
David Kerr (Texas A&M Univ.)
Dynamics, dimension, and $C^*$-algebras
June 9, 2015 (Tue)
Tuesday Seminar on Topology
17:00-18:30 Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:30-17:00 Common Room
[ Abstract ]
In this talk, we give an inequality relating the displacement energy of exact Lagrangian immersions to the symplectic area of pseudo-holomorphic disks. The proof extends to the Floer homology of Lagrangian immersions the technique that Chekanov used to prove an inequality for the displacement energy of rational Lagrangian submanifolds. …
June 8, 2015 (Mon)
Complex Analytic Geometry Seminar
10:30-12:00 Room 126, Graduate School of Mathematical Sciences Bldg. (Komaba)
Mixed Hodge structures and Sullivan's minimal models of Sasakian manifolds (Japanese)
[ Abstract ]
By the result of Deligne, Griffiths, Morgan and Sullivan, the Malcev completion of the fundamental group of a compact Kahler manifold is quadratically presented. This fact has led to significant advances in the "Kahler group problem" (which groups can be the fundamental groups of compact Kahler manifolds?). In this talk, we consider the fundamental groups of compact Sasakian manifolds. We show that the Malcev Lie algebra of the fundamental group of a compact (2n+1)-dimensional Sasakian manifold with n >= 2 admits a quadratic presentation, by using Morgan's bigradings of Sullivan's minimal models of mixed-Hodge diagrams.
Tokyo Probability Seminar
16:50-18:20 Room 128, Graduate School of Mathematical Sciences Bldg. (Komaba)
On a stochastic Rayleigh-Plesset equation and a certain stochastic Navier-Stokes equation
June 5, 2015 (Fri)
Geometry Colloquium
10:00-11:30 Room 126, Graduate School of Mathematical Sciences Bldg. (Komaba)
Veech groups of Veech surfaces and periodic points (Japanese)
[ Abstract ]
Statistics Seminar
16:20-17:30 Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
A Note on Algorithmic Trading based on Some Personal Experience
[ Abstract ]
I give an overview of the brief history of HFT, based on my 14 years' personal experience in the algorithmic trading business at a Wall Street company. Starting with a description of the layers of the algo business, I discuss in some detail a stochastic index-arbitrage business that I ran. After reviewing some HFT-specific issues such as super-short-period alpha, I try to forecast what will happen with HFT in the near future.
June 3, 2015 (Wed)
Operator Algebra Seminar
16:45-18:15 Room 122, Graduate School of Mathematical Sciences Bldg. (Komaba)
The Furstenberg boundary and $C^*$-simplicity
Mathematical Demography and Mathematical Biology Seminar
14:55-16:40 Seminar Room 128, Graduate School of Mathematical Sciences Bldg. (Komaba)
[ Abstract ]
… species and pelagic sharks. Because of migration, fisheries resources are caught region by region … in many cases the regions where juvenile fish are caught differ from those where adult fish are caught. However, … We take as our basic model a population dynamics model that accounts for … Next, … obtained from the model …
June 1, 2015 (Mon)
Algebraic Geometry Seminar
15:30-17:00 Room 122, Graduate School of Mathematical Sciences Bldg. (Komaba)
Rank 2 weak Fano bundles on cubic 3-folds (Japanese)
[ Abstract ]
A vector bundle on a projective variety is called weak Fano if its projectivization is a weak Fano manifold. This is a generalization of Fano bundles. In this talk, we will obtain a classification of rank 2 weak Fano bundles on a nonsingular cubic hypersurface in a projective 4-space. Specifically, we will show that there exist rank 2 indecomposable weak Fano bundles on it.
Tokyo Probability Seminar
16:50-18:20 Room 128, Graduate School of Mathematical Sciences Bldg. (Komaba)
[ Abstract ]
May 28, 2015 (Thu)
Infinite Analysis Seminar Tokyo
17:00-18:30 Room 002, Graduate School of Mathematical Sciences Bldg. (Komaba)
Unitary spherical representations of Drinfeld doubles (JAPANESE)
[ Abstract ]
It is known that the Drinfeld double of the quantized enveloping algebra of a semisimple Lie algebra looks similar to the quantized enveloping algebra of the complexification of the Lie algebra. In this talk, we investigate the unitary representation theory of such a Drinfeld double via its analogy with that of the complex Lie group. We also discuss an application to operator algebras.
May 27, 2015 (Wed)
Operator Algebra Seminar
16:45-18:15 Room 122, Graduate School of Mathematical Sciences Bldg. (Komaba)
John F. R. Duncan (Case Western Reserve Univ.)
Vertex operator algebras in umbral Moonshine
Algebra Colloquium
17:00-18:00 Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
On a good reduction criterion for polycurves with sections (Japanese)
May 26, 2015 (Tue)
Lie Groups and Representation Theory Seminar
17:00-18:30 Room 122, Graduate School of Mathematical Sciences Bldg. (Komaba)
Local functional equations of Clifford quartic forms and homaloidal EKP-polynomials
[ Abstract ]
Tuesday Seminar on Topology
17:00-18:30 Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:30-17:00 Common Room
Introduction to formalization of topology using a proof assistant. (JAPANESE)
[ Abstract ]
Although the program of formalization goes back to David
Hilbert, it is only recently that we can actually formalize
substantial theorems in modern mathematics. It is made possible by the
development of certain type theory and a computer software called a
proof assistant. We begin this talk by showing our formalization of
some basic geometric topology using a proof assistant COQ. Then we
introduce homotopy type theory (HoTT) of Voevodsky et al., which
interprets type theory from abstract homotopy theoretic perspective.
HoTT proposes "univalent" foundation of mathematics which is
particularly suited for computer formalization.
May 25, 2015 (Mon)
Complex Analytic Geometry Seminar
10:30-12:00   Room 126, Graduate School of Mathematical Sciences Bldg. (Komaba)
On uniform K-stability (Japanese)
[ Abstract ]
This is joint work with Sébastien Boucksom and Mattias Jonsson. We first introduce functionals on the space of test configurations, as non-Archimedean analogues of classical functionals on the space of Kähler metrics. Then, uniform K-stability is defined as a counterpart of the coercivity condition for the K-energy. Finally, reproving and strengthening Y. Odaka's results, we study uniform K-stability of Kähler-Einstein manifolds.
Algebraic Geometry Seminar
15:30-17:00   Room 122, Graduate School of Mathematical Sciences Bldg. (Komaba)
Good reduction of K3 surfaces (Japanese or English)
[ Abstract ]
We consider degeneration of K3 surfaces over a 1-dimensional base scheme
of mixed characteristic (e.g. Spec of the p-adic integers).
Under the assumption of potential semistable reduction, we first prove
that a trivial monodromy action on the l-adic etale cohomology group
implies potential good reduction, where potential means that we allow a
finite base extension.
Moreover we show that a finite etale base change suffices.
The proof for the first part involves a mixed characteristic
3-dimensional MMP (Kawamata) and the classification of semistable
degeneration of K3 surfaces (Kulikov, Persson--Pinkham, Nakkajima).
For the second part, we consider flops and descent arguments. This is a joint work with Christian Liedtke.
[ Reference URL ]
https://www.ms.u-tokyo.ac.jp/~ymatsu/index_j.html
Tokyo Probability Seminar
16:50-18:20   Room 128, Graduate School of Mathematical Sciences Bldg. (Komaba)
A finite diameter theorem on RCD spaces
May 21, 2015 (Thu)
Lecture
16:00-17:00   Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 15:30-16:00 Common Room
Gunnar Carlsson (Stanford University, Ayasdi Inc.)
The Shape of Data
(ENGLISH)
[ Abstract ]
There is a tremendous amount of attention being paid to the notion of
"Big Data". In many situations, however, the problem is not so much the
size of the data but rather its complexity. This observation shows that
it is now important to find methods for representing complex data in a
compressed and understandable fashion. Representing data by shapes
turns out to be useful in many situations, and therefore topology, the
mathematical subdiscipline which studies shape, becomes quite
relevant. There is now a collection of methods based on topology for
analyzing complex data, and in this talk we will discuss these methods,
with numerous examples.
[ Reference URL ]
http://faculty.ms.u-tokyo.ac.jp/Carlsson.html
## Precalculus (6th Edition) Blitzer
$61.7\ mi$.
Step 1. Draw a diagram as shown in the figure. The first ship traveled from point C, $3(14)=42\ mi$, to point A. The second ship traveled from C, $3(10)=30\ mi$, to point B. The angles are given from the bearings.
Step 2. In triangle ABC, we have the angle $C=12^\circ+90^\circ+(90^\circ-75^\circ)=117^\circ$.
Step 3. Use the Law of Cosines. Letting $AB=c$, we have $c^2=42^2+30^2-2(42)(30)\cos(117^\circ)\approx3808$, which gives $c\approx61.7\ mi$. That is, the two ships are about 61.7 miles apart after 3 hours.
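The arithmetic in Step 3 can be checked numerically. Here is a quick sketch in Python (the variable names are just for illustration):

```python
import math

# Law of Cosines: c^2 = a^2 + b^2 - 2*a*b*cos(C)
a, b = 42, 30            # miles traveled by the two ships in 3 hours
C = math.radians(117)    # included angle at point C
c_squared = a**2 + b**2 - 2*a*b*math.cos(C)
c = math.sqrt(c_squared)
print(round(c_squared))  # 3808
print(round(c, 1))       # 61.7
```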
The web works by sending files to browsers. Some other things as well, but mainly sending files to browsers.
Different types of files have different data in them. When a browser gets a file, it looks at the type of file, and does what that type of file requires.
## Plain text files
Try it. Start your IDE. For this one time, you can use a simple editor, even Notepad.
Don't use Word, or another word processor. They add formatting characters to files. We want plain text.
In a new file, type some text. I typed:
Save the file somewhere on your computer. Call it dog.txt. Note the extension. That tells browsers what type of data is in the file. txt means plain text.
Now open that file in a browser. Try Ctrl+O to show the Open file dialog.
The browser loads the file, looks at the extension, and knows it should show the data with a plain font. Here's what I saw:
You can see exactly what data the browser read from the file. Try Ctrl+U. That's a shortcut for Show source. It shows what the browser read, before it displayed the data. BTW, rendered is another word for what a browser does when it displays data it has read.
What you did:
• You made a text file.
• You told the browser to open it.
• The browser read the file, and got the data it shows on the Show source page.
• The browser showed the contents of the file.
The browser doesn't do anything with data in a txt file. Just shows it. That's why the source and display pages are the same.
## HTML files
Start the editor again, with a new file. Copy-and-paste this:
<h1>Rosie</h1>
<p>Rosie is a good dog.</p>
<h1> is a heading. <p> is a paragraph. We'll talk about the tags later.
Save the file as dog.html. Notice the different extension. When a browser gets an HTML file, it knows it has to interpret the data before displaying it.
Open the file in a browser. You'll see something like:
Hit Ctrl+U again. Remember, that shows the data that the browser received, before displaying it.
The page source view shows the data that the browser received. Browsers interpret that data, depending on the file extension.
## Showing HTML as text
Let's trick the browser. You made a file called dog.html. The browser looked at the extension, and made an appropriate display for the user.
Now, make a copy of dog.html, and call it dog-html.txt. What's going to happen when you open the new file in a browser? Try to predict what's going to happen, before reading on.
OK, open dog-html.txt in a browser. Here's what I saw:
The browser didn't interpret the HTML tags. Why? Because the file extension was txt. Browsers just show the contents of txt files. They don't do any translation.
## An image file
Find an image file somewhere on your computer. I'm going to use the file rosie1.jpg:
Open the file in your browser, with Ctrl+O. The browser looks at the file extension, jpg. It says to itself, "Self, jpg means that the data in the file is an image." So that's what it shows.
Another common image extension is png. It's a different way of encoding the color data. jpg is usually used just for photos. png can show photos and drawings equally well.
How you name files matters. Most web servers run the operating system Linux. Linux file names are case-sensitive. So Dog.html and dog.html are different files.
It's annoying.
The URLs that access those files (URLs are covered later) are also different. So https://eligiblemonkeys.net/Dog.html and https://eligiblemonkeys.net/dog.html are different.
It's common to want to name a file with two words, like giant flea.html. That works… mostly. The space gets encoded as %20 in URLs, which causes problems sometimes.
Also annoying.
The solution? When you name files, use these two rules:
• Lowercase only
• To separate words, use dashes: -
So name your files like this:
• dog.html
• giant-flea.html
• evil-ant.jpg
It's good to start good file naming habits now. It will save you grief later.
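The two naming rules are easy to automate. Here's a minimal sketch in Python; the helper name `web_safe_name` is made up for this example:

```python
def web_safe_name(filename):
    # Rule 1: lowercase only. Rule 2: dashes between words.
    return filename.lower().replace(" ", "-").replace("_", "-")

print(web_safe_name("Giant Flea.html"))  # giant-flea.html
print(web_safe_name("Evil_Ant.JPG"))     # evil-ant.jpg
```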
## Summary
Browsers show files. The extension of a file tells the browser how to show the data in the file.
• txt – a text file. Show the data as-is.
• html – an HTML file. Interpret the data as HTML tags.
• jpg – an image file, stored using the JPEG format.
• png – an image file, stored using the PNG format.
For file names, lowercase, and dashes.
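Web servers make the same extension-to-type decision when they send files. Python's standard `mimetypes` module shows the mapping, as a quick sketch:

```python
import mimetypes

# Look up the content type a server would report for each extension.
for name in ["dog.txt", "dog.html", "rosie1.jpg", "evil-ant.png"]:
    print(name, "->", mimetypes.guess_type(name)[0])
# dog.txt -> text/plain
# dog.html -> text/html
# rosie1.jpg -> image/jpeg
# evil-ant.png -> image/png
```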