## Abstract
The prevalence of adolescent obesity has increased dramatically over the past three decades, and research has documented that the number of television shows viewed during childhood is associated with greater risk for obesity. In particular, considerable evidence suggests that exposure to food marketing promotes eating habits that contribute to obesity. The present study examines neural responses to dynamic food commercials in overweight and healthy-weight adolescents using functional magnetic resonance imaging (fMRI). Compared with non-food commercials, food commercials more strongly engaged regions involved in attention and saliency detection (occipital lobe, precuneus, superior temporal gyri, and right insula) and in processing rewards [left and right nucleus accumbens (NAcc) and left orbitofrontal cortex (OFC)]. Activity in the left OFC and right insula further correlated with subjects' percent body fat at the time of the scan. Interestingly, this reward-related activity to food commercials was accompanied by the additional recruitment of mouth-specific somatosensory-motor cortices. This finding suggests the intriguing possibility that higher-adiposity adolescents mentally simulate eating behaviors, and it offers a potential neural mechanism for the formation and reinforcement of unhealthy eating habits that may hamper an individual's ability to lose weight later in life.
## Introduction
Obesity is a key public health problem in the United States of America and has become progressively more prevalent over the past three decades (Ogden et al. 2014; National Center for Health Statistics, 2012). At the same time, many aspects of food consumption have changed, including greater use of prepared food products in the home and greater reliance on restaurants. Corporate fast food restaurants appeared in the 1950s, before the obesity epidemic, but have expanded greatly since then. Today, dozens of national chains compete intensively on food price and portion size. Nowhere is this competition better illustrated than in television advertising for these products, where children view up to 13 food ads per hour of programming (Dembek et al. 2013).
Prior behavioral work has demonstrated relationships between adolescents' receptivity to food commercials and body mass index (BMI) (McClure et al. 2013) and the amount of snacking following food ad viewing (Halford et al. 2004). Neuroimaging studies exploring the relationship between food cue-reactivity and obesity in adults have consistently identified a putative network of reward regions including the ventral striatum, insula, and regions of the orbitofrontal cortex (OFC) (Rothemund et al. 2007; Stoeckel et al. 2008; Bruce et al. 2010; Stice et al. 2011; Dimitropoulos et al. 2012; Wagner et al. 2013), as well as regions related to visual attention (McCaffery et al. 2009; Martin et al. 2010) and somatosensory processing (Stice et al. 2011). Moreover, food cue-reactivity in reward and attention regions has been linked to future weight gain (Demos et al. 2012; Yokum et al. 2012), trait impulsivity (Kerr et al. 2014), self-reported craving (Kober et al. 2010), giving in to cravings (Lopez et al. 2014), diet violations (Demos et al. 2011), diet failures (Murdaugh et al. 2012), as well as dieting status and weight loss strategies (Bruce et al. 2012; Bruce et al. 2014). Although the majority of these studies have been conducted in adults, adolescence is often characterized by heightened sensitivity to reward cues, potentially leading to increases in risky behaviors (Fareri et al. 2008; Casey 2014), thus providing further motivation for investigating this relationship in an adolescent population.
Although much of this work has been conducted using static pictures of appetizing foods, recent work by Gearhardt et al. (2013) has extended food cue-reactivity findings in adolescents to dynamic food commercials. Because food commercials are specifically designed to entice consumption of the advertised product, these cues may be particularly powerful motivators of eating behavior. Additionally, dynamic reward cues like food commercials may capitalize on and reinforce well-established, automatic habits through mimicry and observational learning. Behavioral mimicry studies in adults have shown that the behavior of a confederate can impact the viewer's own behaviors (Chartrand and Bargh 1999), and other work has more directly tied this phenomenon to eating behaviors (Johnston 2002).
Brain imaging research on action observation has shown that observing others perform goal-directed actions recruits a putative action–observation network, which includes the inferior frontal gyrus (IFG), motor, premotor, supplementary motor, and somatosensory regions, the inferior parietal lobule, the superior parietal lobule, and the intraparietal sulcus (IPS) (Caspers et al. 2010).
Much like eating, smoking is another highly reinforced and automatic behavior, and our prior work has shown that smokers activate both reward (left OFC) and action-observation regions (left IPS and IFG) more so than nonsmokers when viewing dynamic depictions of others smoking (Wagner et al. 2011). The present study sought to examine the neural responses to food commercials in order to better understand the relationship between real-world food advertising and adolescent obesity. Based on our previous work, we hypothesized that adolescents would demonstrate greater activity for food commercials in regions involved in reward and that recruitment would correlate with individual differences in adiposity. An open question was whether high-adiposity adolescents would additionally recruit brain regions that are commonly activated in studies of action observation, a finding that might suggest high-adiposity adolescents are more likely to simulate eating when observing others eat.
## Methods
### Subjects
Forty right-handed adolescents (20 female and 20 male) between the ages of 12 and 17 (mean age = 14.3 years) were recruited locally through the Children's Hospital at Dartmouth Hitchcock Medical Center, based on their BMI percentiles. We obtained permission from the IRB to conduct a limited search of the electronic medical records to identify adolescents in the pediatric practice whose BMI was ≥95th percentile (obese) or whose BMI was between the 40th and 59th percentile (healthy weight). The physicians in the practice sent an opt-out letter informing the parents of all of these adolescents of the study, after which we proactively called families to invite participation. Participants were matched for age and gender. Enrolled adolescents and their parents provided verbal consent, and participants were unaware that they had been recruited based on BMI. All procedures were approved by the Committee for the Protection of Human Subjects at Dartmouth College. Due to excessive movement in two subjects and technical problems with data collection in a third, 37 participants were included in the final analyses. For these subjects (20 female and 17 male), the mean age was 14.4 years (s.d. = 1.3 years; range = 12–16). Mean BMI was 33.2 (s.d. = 2.51) for adolescents recruited as obese (n = 18) and 20.15 (s.d. = 2.05) for adolescents recruited as healthy weight (n = 19) (Table 2). Across all subjects (n = 37), mean BMI was 26.49 (s.d. = 6.99; range = 16.5–37.7). One subject recruited as healthy weight met national criteria for being overweight (>85th percentile) on the day of the scan.
Although participants were recruited based on an obesity metric available to us prior to the scan session (BMI), percent body fat was collected on the day of the scan as an additional measure of individual adiposity. Mean percent body fat for adolescents recruited as obese was 42.54 (s.d. = 8.27), and mean percent body fat for adolescents recruited as healthy weight was 20.15 (s.d. = 9.83) (Table 2). Mean percent body fat across all subjects was 31.04 (s.d. = 14.47), and the range was 8.3–52.5%.
### Stimuli
Twelve food and 12 non-food high-resolution commercials were matched for length (mean food commercial = 28.4 s; mean control commercial = 28.9 s) (Table 1). Commercials were selected based on quality, relevance to the age group, and publication date. Non-food commercials were included as a comparison to account for low-level visual properties inherent to processing dynamic scenes. A separate cohort of 28 adolescents (mean age = 12.61 years; s.d. = 1.71 years) rated a randomized subset of the commercials (mean number of commercials viewed = 6.07, s.d. = 0.99) on interest (“how interesting do you think this commercial is?”) and excitement (“how exciting do you think this commercial is?”) on a sliding scale from 0 to 1. Ratings of interest did not significantly differ between the food commercials (mean = 0.395, s.d. = 0.202) and the neutral commercials (mean = 0.403, s.d. = 0.154) (t(27) = −0.548, P = 0.808). Similarly, ratings of excitement did not significantly differ between the food commercials (mean = 0.375, s.d. = 0.191) and the neutral commercials (mean = 0.390, s.d. = 0.134) (t(27) = −0.410, P = 0.690). During scanning, commercials were presented in a pseudo-randomized order so that no more than two commercials from the same condition, and no two commercials of the same brand, appeared in succession. These commercials were embedded as four “commercial breaks” into an episode of a popular age-appropriate television show, The Big Bang Theory (Fig. 1). Each commercial break consisted of six commercials, or ∼2.8 min (67 TRs) of commercial time.
Table 1
Food and non-food commercials used in scanning paradigm
| Brand | Product | Name/description | Duration (s) |
|---|---|---|---|
| *Food commercials* | | | |
| McDonald's | Quarter pounder | Made with 100% beef | 15 |
| McDonald's | Double quarter pounder | It adds character | 30 |
| McDonald's | Angus third pounder | Eyes on the road | 30 |
| McDonald's | McRib | McRib is back | 30 |
| McDonald's | Chicken Nuggets | Slams even dunkier | 30 |
| McDonald's | Chipotle BBQ bacon angus | Angus axiom #43 | 30 |
| Wendy's | 99-cent menu | My 99: Drive through | 30 |
| Wendy's | 99-cent menu | My 99: Skate park | 28 |
| Wendy's | Chicken sandwich | Slap in the face | 30 |
| Dunkin Donuts | Breakfast sandwiches | Adventure runs on Dunkin | 29 |
| KFC | $5 meal | Today is a KFC day | 30 |
| Pizza Hut | Big Italy pizza | Big Italy | 29 |
| *Non-food commercials* | | | |
| Lowe's | Store sale event | Great American fix-up | 15 |
| Gillette | Fusion ProGlide Styler | Masters of Style | 30 |
| Quicken Loans | Retail mortgage | Who do you think I am? | 30 |
| Tide | Tide laundry detergent | Hoodies & Cargo shorts | 31 |
| Chevrolet | Volt | Volt owners: gas stations | 31 |
| Toyota | Camry, Corolla, Prius | #1 for everyone sales event | 26 |
| Gain | Gain laundry detergent | Revolving door | 32 |
| Sprint | Cellular phone data plan | Truly unlimited data | 31 |
| Simple Green | All purpose cleaner | I got that | 30 |
| Verizon | 4G LTE | Bad idea | 30 |
| Johnson's | Head-to-toe wash | Nice work | 31 |
| Farmer's Insurance | | University of Farmers: Maze | 30 |
Figure 1.
Study design. Subjects viewed an episode of The Big Bang Theory with food and control (non-food) commercials embedded throughout as typical commercial breaks.
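The ordering constraint described above (no more than two commercials from the same condition, and no back-to-back commercials from the same brand, in succession) can be sketched with simple rejection sampling. The stimulus list and function names below are illustrative, not the authors' actual SuperLab implementation:

```python
import random

def valid(order):
    """Check ordering constraints: no more than two consecutive
    commercials from the same condition, and no two consecutive
    commercials from the same brand."""
    for a, b, c in zip(order, order[1:], order[2:]):
        if a[0] == b[0] == c[0]:  # three same-condition commercials in a row
            return False
    return all(a[1] != b[1] for a, b in zip(order, order[1:]))

def pseudo_randomize(commercials, seed=None):
    """Reshuffle (condition, brand) pairs until the constraints hold
    (simple rejection sampling)."""
    rng = random.Random(seed)
    order = list(commercials)
    while not valid(order):
        rng.shuffle(order)
    return order

# Hypothetical stimulus list: (condition, brand) pairs.
stims = ([("food", "McDonald's")] * 2
         + [("food", "Wendy's"), ("food", "KFC")]
         + [("nonfood", "Tide"), ("nonfood", "Sprint"),
            ("nonfood", "Verizon"), ("nonfood", "Gain")])
order = pseudo_randomize(stims, seed=1)
```

Rejection sampling is practical here because valid permutations are plentiful; a constraint-by-construction scheduler would only be needed for much tighter constraints.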
### Procedure
Subjects were naïve to the purpose of the experiment and were simply told that the study aimed to understand the brain's response to viewing television shows. Subjects were asked not to eat or to consume any caffeinated beverages for the 2 h prior to their study appointment. Before scanning, subjects were weighed using a Tanita scale (model TBF-300A; Arlington Heights, IL), which uses bioelectric impedance analysis to determine body composition and has been shown to be a reliable measure of body fat (Jebb et al. 2007). Consistent with our cover story, subjects were asked to report how many TV shows they watch per week on average.
During scanning, subjects watched a 19-min episode of The Big Bang Theory. Food and non-food commercials were pseudo-randomized and embedded into the natural commercial breaks of the episode. The TV show and commercials were presented with SuperLab 4.5 software (Cedrus Corporation). Participants were given no overt task instructions and were allowed to passively view the TV show and commercials. Echo-planar images (EPIs) were acquired during commercial presentations and reference scans, and structural images were acquired during the TV show presentation. In total, 12 food and 12 control (non-food) commercials were presented over four “commercial breaks.”
### Image Acquisition
All scanning was performed on a 3.0T Philips Achieva MRI fit with a 32-channel SENSE (Sensitivity Encoding) headcoil. Structural images were obtained using a T1-weighted MP-RAGE protocol (TR = 9.9 ms; TE = 4.6 ms; flip angle = 8°; 1 × 1 × 1 mm3 voxels). Functional images were acquired using a T2*-weighted EPI protocol (TR = 2500 ms; TE = 35 ms; flip angle = 90°; 3 × 3 × 3 mm3 voxels; sense factor of 2). Four functional runs were collected (67 TRs each) for each participant.
### Image Preprocessing
All imaging preprocessing and subsequent analyses were conducted in SPM8 (Wellcome Department of Cognitive Neurology) in conjunction with a suite of tools for preprocessing and analysis (https://github.com/ddwagner/SPM8w). Functional images were slice-time-corrected and realigned to account for temporal differences in slice acquisition and head motion, respectively. Resulting volumes were spatially normalized to the ICBM 152 template brain (Montreal Neurological Institute) and spatially smoothed using an 8-mm (FWHM) Gaussian kernel.
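The 8-mm FWHM smoothing step relies on a standard conversion from FWHM to the standard deviation that a Gaussian filter routine expects. A minimal sketch (not the SPM8 code itself), assuming isotropic 3-mm voxels as in the acquisition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_mm):
    """Smooth a volume with a Gaussian kernel specified by its FWHM,
    converting to the standard deviation the filter expects:
    sigma = FWHM / (2 * sqrt(2 * ln 2)), i.e. roughly FWHM / 2.355."""
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(volume, sigma=sigma_mm / voxel_mm)

# Smooth a unit impulse with an 8-mm kernel at 3-mm isotropic voxels.
vol = np.zeros((32, 32, 32))
vol[16, 16, 16] = 1.0
smoothed = smooth_fwhm(vol, fwhm_mm=8.0, voxel_mm=3.0)
```

Because the kernel weights are normalized, smoothing redistributes signal without changing its total, which is why an impulse becomes a small Gaussian blob summing to one.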
### Data Analysis
Task conditions and covariates of no interest were convolved with a canonical hemodynamic response function and included in a general linear model to determine neural responses to food and non-food commercials. Nuisance regressors included 6 motion parameters, the session mean, and a linear trend to account for low-frequency scanner drift. The resulting subject-level contrasts of FOOD > NON-FOOD commercials were entered into a second-level random-effects analysis. This produced a group-level statistical parametric map that represented the overall changes in neural activity for FOOD > NON-FOOD commercials across subjects. The group-level contrast map was thresholded at P < 0.005 and cluster corrected to account for multiple comparisons using 5000 Monte Carlo simulations. These simulations estimated a minimum cluster size of 173 voxels.
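The Monte Carlo cluster-extent correction can be approximated by simulating smoothed noise volumes, thresholding them at the voxelwise cutoff, and taking a high quantile of the maximum suprathreshold cluster size. This is a toy sketch of the general approach only; the paper ran 5000 simulations on real brain dimensions and smoothness, and every parameter below (grid size, smoothness, simulation count) is illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def cluster_threshold(shape, smooth_sigma, z_thresh, n_sims=100, alpha=0.05, seed=0):
    """Estimate the minimum cluster size (in voxels) that controls the
    family-wise error rate at `alpha`: simulate smoothed Gaussian noise
    volumes, threshold them, and take the (1 - alpha) quantile of the
    maximum suprathreshold cluster size across simulations."""
    rng = np.random.default_rng(seed)
    max_sizes = []
    for _ in range(n_sims):
        noise = gaussian_filter(rng.standard_normal(shape), smooth_sigma)
        noise /= noise.std()  # re-standardize after smoothing deflates variance
        clusters, n = label(noise > z_thresh)
        sizes = np.bincount(clusters.ravel())[1:]  # drop background label 0
        max_sizes.append(sizes.max() if n else 0)
    return int(np.quantile(max_sizes, 1 - alpha))

# z ~= 2.58 corresponds to a one-tailed voxelwise P < 0.005.
k = cluster_threshold((24, 24, 24), smooth_sigma=1.5, z_thresh=2.58)
```

Clusters surviving both the voxelwise cutoff and the extent `k` are then reported as corrected at P < alpha.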
Given our a priori hypothesis that reward-processing regions would correlate with adiposity, region-of-interest (ROI) analyses were performed on the nucleus accumbens (NAcc), the OFC, and the insula. Left and right NAcc ROIs were defined anatomically using the automatic segmentation tool (aseg) in FreeSurfer (Fischl 2004) to create a probabilistic mask from anatomical MPRAGE scans collected on all subjects. Voxels that were present in at least 75% of all subjects' segmented NAcc regions were included in the ROI. The FOOD > NON-FOOD commercials contrast (P < 0.05, corrected based on a cluster extent threshold of 913 voxels estimated with 5000 Monte Carlo simulations) was used to identify cortical reward ROIs. ROI selection in this case is unbiased with respect to body fat (Kriegeskorte et al. 2009; Vul et al. 2009), as ROIs were defined using an independent contrast that did not correlate signal change with body fat. Three cortical reward ROIs (10-mm spheres centered on the peak activation) were identified from this contrast, the left OFC (−6, 42, −12), right OFC (27, 36, −24), and right insula (39, −6, 3). All ROIs were interrogated for outliers (i.e., individuals whose activity was > two standard deviations from the mean activation of the ROI), and the resulting correlations with percent body fat, BMI, and TV viewing were conducted on each ROI after removal of outliers.
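The 75%-overlap probabilistic mask and the two-standard-deviation outlier rule can both be expressed compactly. The sketch below uses toy 2-D masks and hypothetical signal-change values rather than actual FreeSurfer output:

```python
import numpy as np

def probabilistic_roi(subject_masks, min_overlap=0.75):
    """Keep voxels present in at least `min_overlap` of the subjects'
    binary segmentation masks (75% overlap, as in the NAcc ROI)."""
    stack = np.stack(subject_masks).astype(float)
    return stack.mean(axis=0) >= min_overlap

def drop_outliers(values, n_sd=2.0):
    """Exclude subjects whose ROI signal change lies more than `n_sd`
    standard deviations from the group mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return values[np.abs(z) <= n_sd]

# Toy 3x3 "segmentations" for four subjects.
masks = [np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])] * 3 \
      + [np.array([[1, 0, 0], [1, 1, 0], [0, 0, 0]])]
roi = probabilistic_roi(masks)            # voxels present in >= 3 of 4 subjects
clean = drop_outliers([1.0] * 9 + [9.0])  # the extreme value is excluded
```

Thresholding the across-subject mean of binary masks is equivalent to counting how many subjects contribute each voxel, which is what the 75% criterion describes.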
In order to identify additional brain regions that were more active when viewing food commercials as a function of percent body fat, an exploratory whole-brain regression was performed. Each subject's FOOD > NON-FOOD contrast was entered into a regression analysis using individual body fat percentage as a covariate. Age and gender were included in this model to account for variance in body fat percentages for males and females of different ages (Rosner et al. 1998; Blaak 2001). Resulting statistical maps for the exploratory whole-brain analyses were thresholded using a more stringent threshold (P < 0.001) and were cluster corrected to a minimum extent of 74 voxels to account for whole-brain multiple comparison based on 5000 Monte Carlo simulations.
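The exploratory analysis amounts to a per-voxel ordinary least squares fit with percent body fat as the regressor of interest and age and gender as covariates. The following sketch on simulated data shows how a t-map for the body-fat coefficient could be computed; it is an illustration of the statistical model, not the SPM implementation:

```python
import numpy as np

def voxelwise_regression(contrast_maps, body_fat, age, gender):
    """Regress per-voxel FOOD > NON-FOOD contrast values on percent
    body fat with age and gender as covariates, returning the t-map
    for the body-fat coefficient."""
    n_sub = contrast_maps.shape[0]
    X = np.column_stack([np.ones(n_sub), body_fat, age, gender])
    beta, _, _, _ = np.linalg.lstsq(X, contrast_maps, rcond=None)
    resid = contrast_maps - X @ beta
    dof = n_sub - X.shape[1]
    mse = (resid ** 2).sum(axis=0) / dof
    # Variance of the body-fat coefficient: mse * [(X'X)^-1]_{1,1}
    xtx_inv = np.linalg.inv(X.T @ X)
    se = np.sqrt(mse * xtx_inv[1, 1])
    return beta[1] / se

# Simulated cohort: 37 subjects, 100 voxels, with a planted body-fat effect.
rng = np.random.default_rng(0)
n = 37
body_fat = rng.uniform(10, 50, n)
age = rng.uniform(12, 17, n)
gender = rng.integers(0, 2, n).astype(float)
maps = 0.05 * body_fat[:, None] + rng.standard_normal((n, 100))
t_map = voxelwise_regression(maps, body_fat, age, gender)
```

The resulting t-map would then be thresholded voxelwise and cluster-corrected as described above.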
### Data Visualization
All fMRI results were visualized in Connectome Workbench Version 0.85 (Marcus et al. 2010; Marcus et al. 2011) available from http://www.humanconnectome.org/connectome/connectome-workbench.html. Cortical surface results were mapped onto the Conte69 mid-thickness surfaces (Van Essen et al. 2012).
## Results
### Behavioral Results
Adolescents reported watching an average of 5 h of TV shows per week (s.d. = 3.05, range = 1–13 h). Reported TV viewing significantly correlated with subjects' BMI (r = 0.49, P < 0.005) and percent body fat (r = 0.41, P < 0.05). BMI was also correlated with percent body fat (r = 0.85, P < 0.0001).
### Imaging Results
#### Food Versus Non-Food Commercials
A random-effects analysis identified several regions that showed greater activation during food commercials compared with non-food commercials (Fig. 2). In particular, the left OFC, occipital lobe, bilateral regions of the superior and middle temporal gyri, and the posterior cingulate gyrus all demonstrated significantly greater activation in response to FOOD commercials than to NON-FOOD commercials (P < 0.05, corrected; Table 3).
Table 2
Demographic characteristics of participants. Means and standard deviations for BMI and body fat percentages within obese and healthy-weight recruited groups and across all subjects
| Group | BMI (mean) | BMI (s.d.) | Body fat % (mean) | Body fat % (s.d.) |
|---|---|---|---|---|
| Obese | 33.20 | 2.51 | 42.54 | 8.27 |
| Healthy weight | 20.15 | 2.05 | 20.15 | 9.83 |
| All subjects | 26.49 | 6.99 | 31.04 | 14.47 |
Figure 2.
Brain regions showing greater activity when viewing FOOD commercials than NON-FOOD commercials. Activations (P < 0.005, 173 contiguous voxels) are displayed on an inflated rendering of the cortical surface (Marcus et al. 2010; Marcus et al. 2011). Greater activation for FOOD commercials was observed in a number of occipital regions (A) extending from the occipital pole through the fusiform gyrus, the left superior and middle temporal gyrus (B), the precuneus (C and D), and the left orbital frontal cortex (E).
#### A priori ROI Analyses
Given our a priori hypothesis, we investigated the activation observed within anatomically defined left and right NAcc ROIs, and the left and right OFC and right insula ROIs identified in the FOOD > NON-FOOD contrast. Estimates of signal change for FOOD > NON-FOOD commercials within the left OFC and right insula correlated with participants' percent body fat (left OFC: r = 0.43, P < 0.01; right insula: r = 0.38, P < 0.05) (Fig. 3). Because body fat correlated with both BMI and TV viewing, we also correlated left OFC and right insula activity with these measures across subjects. Activity in the left OFC and right insula also correlated with BMI (left OFC: r = 0.38, P < 0.05; right insula: r = 0.41, P < 0.05) but did not correlate with the amount of TV viewing (left OFC: r = 0.04, P = 0.84; right insula: r = 0.25, P = 0.14). Activity in the right OFC did not correlate with body fat (r = 0.25, P = 0.16), BMI (r = 0.24, P = 0.17), or TV watching (r = 0.01, P = 0.93).
Figure 3.
Regions correlating with percent body fat. (A) The magnitude of response to food commercials in a region of the left orbitofrontal cortex, defined by the FOOD > NON-FOOD contrast, correlated with percent body fat (r = 0.43, P < 0.01) and BMI (r = 0.38, P < 0.05). (B) The magnitude of response to food commercials in a region of the right insula, defined by the FOOD > NON-FOOD contrast, correlated with percent body fat (r = 0.38, P < 0.05) and BMI (r = 0.41, P < 0.05).
Although activity in left and right NAcc was greater for FOOD than NON-FOOD commercials (left: t(35) = 3.9, P < 0.0005; right: t(35) = 3.2, P < 0.005), it did not correlate with body fat, BMI, or TV watching (P > 0.05). We did not observe significant activations in the left insula in the FOOD > NON-FOOD contrast, and so this region was not interrogated further.
#### Exploratory Whole-Brain Regression Analysis
To identify additional brain regions whose responses correlated with percent body fat, an exploratory whole-brain regression analysis was conducted by correlating FOOD > NON-FOOD signal change with body fat on a voxel-by-voxel basis. This analysis revealed regions in which the response to FOOD > NON-FOOD commercials significantly correlated with percent body fat (P < 0.05, corrected; Fig. 4 and Table 4): bilateral sensorimotor cortices along the pre- and post-central gyri and central sulci, and a region of the right insula/posterior operculum. Finally, a region in the posterior cerebellum demonstrated greater activation with increases in percent body fat.
Table 3
Regions that were significantly more active (P < 0.05, corrected) for FOOD > NON-FOOD commercials
| Region | X (MNI) | Y (MNI) | Z (MNI) | Volume (mm³) | Peak T |
|---|---|---|---|---|---|
| Occipital lobe | 12 | −102 | 12 | 52785 | 13.19 |
| R Middle temporal gyrus | 69 | −6 | −6 | 3339 | 5.67 |
| L Superior temporal gyrus | −69 | −18 | — | 2997 | 6.85 |
| L Precuneus | −6 | −48 | 15 | 1188 | 4.67 |
| L Orbitofrontal cortex | −12 | 54 | −27 | 747 | 4.29 |
Table 4
Regions that significantly correlated (P < 0.05, corrected) with increases in percent body fat in response to food commercials
| Region | X (MNI) | Y (MNI) | Z (MNI) | Volume (mm³) | Peak T |
|---|---|---|---|---|---|
| L Central sulcus | −54 | −6 | 33 | 2097 | 5.08 |
| R Central sulcus | 57 | −12 | 30 | 1593 | 5.20 |
| Cerebellum | −6 | −93 | −33 | 936 | 4.51 |
| R Insula | 39 | −12 | — | 891 | 5.15 |
Since BMI is a widely used metric for determining obesity status, whole-brain responses to food commercials were similarly regressed against BMI (again accounting for age and gender). Here, only the right sensorimotor region demonstrated this relationship at the threshold used for the percent body fat regression (P < 0.05, corrected), and its cluster extent was smaller (101 voxels for BMI vs. 184 voxels for percent body fat). No other regions were significantly correlated with BMI.
## Discussion
The present study contributes to our growing understanding of the influence of naturalistic, dynamic food commercials on neural activity and eating behavior in adolescents. The extension of such content to the study of appetitive behaviors here and elsewhere (Gearhardt et al. 2013) may help researchers better understand the full complement of activations associated with healthy and unhealthy eating habits. Across all subjects, food commercials more strongly activated the OFC, insula, and NAcc, regions consistently activated in reward processing and encoding valuation (Rothemund et al. 2007; Cloutier et al. 2008; Stoeckel et al. 2008; Bruce et al. 2010; Stice et al. 2011; Wagner et al. 2011; Demos et al. 2012; Dimitropoulos et al. 2012; Simmons, Rapuano, Ingeholm, et al. 2013). This finding supports our hypothesis that food commercials engage reward-related regions of the brain more strongly than non-food commercials and is consistent with previous studies (Gearhardt et al. 2013). Additionally, regions within the occipital lobe, the left and right superior and middle temporal gyri, and the posterior cingulate were all significantly more active for the food commercials compared with non-food commercials. The greater activation of these regions may reflect greater attention and saliency detection for the food commercials, which is also consistent with earlier work (Gearhardt et al. 2013).
Of particular interest, the greater left OFC and right insula activity to food commercials also correlated with adolescent adiposity and was accompanied by the additional recruitment of sensorimotor regions in high-adiposity adolescents. Although BMI is commonly used as a proxy for obesity classification, as in Gearhardt et al. (2013), the present study capitalized on an additional measure, percent body fat, to characterize obesity status, as this metric has been argued to provide a more accurate measure of physical health (Shah and Braverman 2012; Ahima and Lazar 2013), particularly in adolescents (Widhalm et al. 2001; Freedman et al. 2005). In the present study, the whole-brain regression revealed a more robust correlation between body fat and sensorimotor and insula activity than did BMI. No regions were significantly correlated with age and gender alone, suggesting the findings reported here are driven by percent body fat. Collectively, these findings suggest that correlating brain activity with body fat may offer a more complete picture of individual differences in neural activity and their relationship to obesity than BMI alone.
It is interesting to note that the peak voxels within the bilateral sensorimotor activations observed here lie within 6 mm of peaks previously reported in fMRI studies examining lip, tongue, and jaw movements (Funk et al. 2008; Grabski et al. 2012), mastication (Takahashi et al. 2007), and swallowing (Lowell et al. 2008). Further, a PET study has demonstrated greater resting metabolic activity in oral somatosensory cortex in obese subjects relative to healthy-weight subjects (Wang et al. 2002). Figure 5 shows the overlap between our sensorimotor activations and subregions of sensorimotor systems as defined by resting-state functional connectivity MRI (rs-fcMRI; Power et al. 2011) (Fig. 5A). When overlaid in this fashion, the body fat-correlated response to food commercials was localized to the “mouth” sensorimotor network with little crossover into the “hand/body” sensorimotor network (Fig. 5B).
Figure 4.
Whole-brain response to FOOD commercials covaried with percent body fat, accounting for age and gender. Activations are overlaid on an inflated representation of the cortical surface (Marcus et al. 2010; Marcus et al. 2011). Activations were observed in bilateral regions of sensorimotor cortices along the pre- and post-central gyri and central sulci, and in a region of the right insula/posterior operculum.
Figure 5.
(A) Functionally defined sensorimotor networks via resting-state connectivity (Power et al. 2011) provide evidence for separable mouth and hand/body subnetworks. (B) Body fat-correlated activity (Fig. 4) overlaid on the network boundaries from Power et al. (2011) demonstrates the specificity of this activity to the “mouth” sensorimotor network.
Additionally, the right insula demonstrated a similar correlation with body fat. This region spanned the mid-insula and extended into posterior insular cortex. Previous studies have reported the mid-insula to be involved in gustatory processing (Veldhuizen et al. 2011; Simmons, Rapuano, Kallman, et al. 2013). When considered in this context, the present findings may also suggest that higher-adiposity adolescents activate taste representations when viewing food commercials. Moreover, the posterior insula is considered to be directly associated with processing somatosensory information (Ostrowsky 2002; Craig 2003), and the functional and structural connectivity of these regions has more recently been characterized (Cauda et al. 2011; Jakab et al. 2012). The extension into posterior insula reported here suggests that this region may represent an integration of mouth somatosensory and gustatory information that is more strongly engaged when high-adiposity adolescents view food commercials.
Collectively, these findings suggest that higher-adiposity adolescents more strongly recruit oral somatomotor and gustatory regions pertinent to eating behaviors while viewing food commercials than do their lower-adiposity counterparts. Previous studies investigating action observation have located neurons responsive to both the observation and execution of goal-directed actions, commonly termed mirror neurons (Gallese 1998). Such neurons have been identified in primate motor-related cortical areas in response to performing actions or viewing others perform an action (Kohler et al. 2002). Further, ingestive mirror neurons have been identified in similar motor regions in monkeys exhibiting eating behaviors or watching other monkeys eat (Ferrari et al. 2003), and analogous responses have been identified in human somatosensory cortex to being touched or viewing others being touched (Keysers et al. 2004). The greater recruitment of sensorimotor and insula cortices associated with eating in high-adiposity adolescents suggests the intriguing possibility that these individuals mentally simulate eating behavior in response to viewing food commercials, which may then contribute to the enactment of the behavior itself. Although speculative, dynamic reward cues such as food commercials, in addition to being evaluated as more rewarding by high-adiposity adolescents, may reinforce well-established, automatic eating habits through mimicry and observational learning. To the extent that such recruitment serves to establish eating habits and patterns, the present results offer a potential neural mechanism that may interfere with an overweight or obese adolescent's future attempts to eat less and lose weight later in life. Perhaps more encouragingly, the present findings may also provide clues for intervention strategies aimed at promoting healthy, long-term eating habits.
### Limitations
The adolescents in this study reported watching an average of 5 h of TV per week. This figure is low compared with national survey data reporting up to 4 h of TV viewing per day (Rideout et al. 2010) and suggests that the present study may be underpowered for correlating reward cue-reactivity with TV watching (a correlation reported as non-significant herein). Future studies may aim to include participants who more closely represent the national average in media use and other possible confounding variables (e.g., socioeconomic status).
The present study also used percent body fat, determined via bioelectric impedance, as a measure of individual adiposity. The validity of this measure has previously been challenged (Talma et al. 2013), so results based on it should be interpreted with some caution. However, others have suggested that body fat measurements are superior to BMI when examining individual differences (Ode et al. 2007; Shah and Braverman 2012; Ramel et al. 2013). Given that our findings were largely consistent across both BMI and body fat metrics, we believe that a complete reporting of both measurements is worthwhile while the field resolves these assessment methodologies. In adolescents, it is possible that pubertal status influences percent body fat measurements, and this was not assessed in the current study. Future studies relating obesity metrics to neural responses may wish to consider alternative strategies for measuring adiposity and for accounting for individual variability within this measure.
## Funding
This work was supported by the UMass/Dartmouth/Vermont Cancer Centers Collaborative Research Program Grant Initiative, the National Institute on Drug Abuse (grant number R01DA022582), the National Science Foundation Graduate Research Fellowship (grant number DGE-1313911 to K.M.R.) and the William H. Neukom 1964 Institute for Computational Science at Dartmouth College (to J.F.H.).
## Notes
The authors thank Courtney Rogers for assistance with recruiting and scanning participants. Conflict of Interest: None declared.
## References
Ahima RS, Lazar MA. 2013. Physiology. The health risk of obesity--better metrics imperative. Science. 341:856–858.
Blaak E. 2001. Gender differences in fat metabolism. Curr Opin Clin Nutr Metab Care. 4:499–502.
Bruce AS, Bruce JM, Ness AR, Lepping RJ, Malley S, Hancock L, Powell J, Patrician TM, Breslin FJ, Martin LE, et al. 2014. A comparison of functional brain changes associated with surgical versus behavioral weight loss. Obesity (Silver Spring). 22:337–343.
Bruce AS, Holsen LM, Chambers RJ, Martin LE, Brooks WM, Zarcone JR, Butler MG, Savage CR. 2010. Obese children show hyperactivation to food pictures in brain networks linked to motivation, reward and cognitive control. Int J Obes. 34:1494–1500.
Bruce JM, Hancock L, Bruce A, Lepping RJ, Martin L, Lundgren JD, Malley S, Holsen LM, Savage CR. 2012. Changes in brain activation to food pictures after adjustable gastric banding. Surg Obes Relat Dis. 8:602–608.
Casey BJ. 2014. Beyond simple models of self-control to circuit-based accounts of adolescent behavior. Annu Rev Psychol. 66:295–319.
Caspers S, Zilles K, Laird AR, Eickhoff SB. 2010. ALE meta-analysis of action observation and imitation in the human brain. Neuroimage. 50:1148–1167.
Cauda F, D'Agata F, Sacco K, Duca S, Geminiani G, Vercelli A. 2011. Functional connectivity of the insula in the resting brain. Neuroimage. 55:8–23.
Chartrand TL, Bargh JA. 1999. The chameleon effect: the perception-behavior link and social interaction. J Pers Soc Psychol. 76:893–910.
Cloutier J, Heatherton TF, Whalen PJ, Kelley WM. 2008. Are attractive people rewarding? Sex differences in the neural substrates of facial attractiveness. J Cogn Neurosci. 20:941–951.
Craig. 2003. Interoception: the sense of the physiological condition of the body. Curr Opin Neurobiol. 13:500–505.
Demos KE, Heatherton TF, Kelley WM. 2012. Individual differences in nucleus accumbens activity to food and sexual images predict weight gain and sexual behavior. J Neurosci. 32:5549–5552.
Demos KE, Kelley WM, Heatherton TF. 2011. Dietary restraint violations influence reward responses in nucleus accumbens and amygdala. J Cogn Neurosci. 23:1952–1963.
Dembek CR, Harris JL, Schwartz MB. 2013. Rudd Report. Where children and adolescents view food and beverage ads on TV: Exposure by channel and program. Rudd Center for Food Policy & Obesity.
Dimitropoulos A, Tkach J, Ho A, Kennedy J. 2012. Greater corticolimbic activation to high-calorie food cues after eating in obese vs. normal-weight adults. Appetite. 58:303–312.
Fareri DS, Martin LN, MR. 2008. Reward-related processing in the human brain: developmental considerations. Dev Psychopathol. 20:1191–1211.
Ferrari PF, Gallese V, Rizzolatti G, Fogassi L. 2003. Mirror neurons responding to the observation of ingestive and communicative mouth actions in the monkey ventral premotor cortex. Eur J Neurosci. 17:1703–1714.
Fischl B. 2004. Automatically parcellating the human cerebral cortex. Cereb Cortex. 14:11–22.
Freedman DS, Wang J, Maynard LM, Thornton JC, Mei Z, Pierson RN, Dietz WH, Horlick M. 2005. Relation of BMI to fat and fat-free mass among children and adolescents. Int J Obes (Lond). 29:1–8.
Funk M, Lutz K, Hotz-Boendermaker S, Roos M, Summers P, Brugger P, Hepp-Reymond M-C, Kollias SS. 2008. Sensorimotor tongue representation in individuals with unilateral upper limb amelia. Neuroimage. 43:121–127.
Gallese V. 1998. Mirror neurons and the simulation theory of mind-reading. Trends Cogn Sci. 2:493–501.
Gearhardt AN, Yokum S, Stice E, Harris JL, Brownell KD. 2013. Relation of obesity to neural activation in response to food commercials. Soc Cogn Affect Neurosci. 9(7):932–938.
Grabski K, Lamalle L, Vilain C, Schwartz J-L, Vallée N, Tropres I, Baciu M, Le Bas J-F, Sato M. 2012. Functional MRI assessment of orofacial articulators: neural correlates of lip, jaw, larynx, and tongue movements. Hum Brain Mapp. 33:2306–2321.
Halford JC, Gillespie J, Brown V, Pontin EE, Dovey TM. 2004. Appetite. 42(2):221–225.
Jakab A, Molnár PP, Bogner P, Béres M, Berényi EL. 2012. Connectivity-based parcellation reveals interhemispheric differences in the insula. Brain Topogr. 25:264–271.
Jebb SA, Cole TJ, Doman D, Murgatroyd PR, Prentice AM. 2007. Evaluation of the novel Tanita body-fat analyser to measure body composition by comparison with a four-compartment model. Br J Nutr. 83:115–122.
Johnston L. 2002. Behavioral mimicry and stigmatization. Soc Cogn. 20:18–35.
Kerr KL, Avery JA, Barcalow JC, Moseman SE, Bodurka J, Bellgowan PSF, Simmons WK. 2014. Trait impulsivity is related to ventral ACC and amygdala activity during primary reward anticipation. Soc Cogn Affect Neurosci. 10(1):36–42.
Keysers C, Wicker B, Gazzola V, Anton J-L, Fogassi L, Gallese V. 2004. A touching sight. Neuron. 42:335–346.
Kober H, Mende-Siedlecki P, Kross EF, Weber J, Mischel W, Hart CL, Ochsner KN. 2010. Prefrontal-striatal pathway underlies cognitive regulation of craving. 107:14811–14816.
Kohler E, Keysers C, Umiltà MA, Fogassi L, Gallese V, Rizzolatti G. 2002. Hearing sounds, understanding actions: action representation in mirror neurons. Science. 297:846–848.
Kriegeskorte N, Simmons WK, Bellgowan PSF, Baker CI. 2009. Circular analysis in systems neuroscience: the dangers of double dipping. Nat Neurosci. 12:535–540.
Lopez RB, Hofmann W, Wagner DD, Kelley WM, Heatherton TF. 2014. Neural predictors of giving in to temptation in daily life. Psychol Sci. 25:1337–1344.
Lowell SY, Poletto CJ, Knorr-Chung BR, Reynolds RC, Simonyan K, Ludlow CL. 2008. Sensory stimulation activates both motor and sensory components of the swallowing system. Neuroimage. 42:285–295.
Marcus DS, Fotenos AF, Csernansky JG, Morris JC, Buckner RL. 2010. Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J Cogn Neurosci. 22:2677–2684.
Marcus DS, Harwell J, Olsen T, Hodge M, Glasser MF, Prior F, Jenkinson M, Laumann T, Curtiss SW, Van Essen DC. 2011. Informatics and data mining tools and strategies for the human connectome project. Front Neuroinform. 5:4.
Martin LE, Holsen LM, Chambers RJ, Bruce AS, Brooks WM, Zarcone JR, Butler MG, Savage CR. 2010. Neural mechanisms associated with food motivation in obese and healthy weight adults. Obesity (Silver Spring). 18:254–260.
McCaffery JM, Haley AP, Sweet LH, Phelan S, Raynor HA, Del Parigi A, Cohen R, Wing RR. 2009. Differential functional magnetic resonance imaging response to food pictures in successful weight-loss maintainers relative to normal-weight and obese controls. Am J Clin Nutr. 90:928–934.
McClure AC, Tanski SE, Gilbert-Diamond D, AM, Li Z, Li Z, Sargent JD. 2013. Receptivity to television fast-food restaurant marketing and obesity among U.S. youth. Am J Prev Med. 45:560–568.
Murdaugh DL, Cox JE, Cook EW, Weller RE. 2012. fMRI reactivity to high-calorie food pictures predicts short- and long-term outcome in a weight-loss program. Neuroimage. 59:2709–2721.
National Center for Health Statistics. 2012. Health, United States, 2011: with special features on socioeconomic status and health. Department of Health and Human Services.
Ode JJ, Pivarnik JM, Reeves MJ, Knous JL. 2007. Body mass index as a predictor of percent fat in college athletes and nonathletes. Med Sci Sports Exerc. 39:403–409.
Ogden CL, Carroll MD, Kit BK, Flegal KM. 2014. Prevalence of childhood and adult obesity in the United States, 2011-2012. JAMA. 311:806–814.
Ostrowsky K. 2002. Representation of pain and somatic sensation in the human insula: a study of responses to direct electrical cortical stimulation. Cereb Cortex. 12:376–385.
Power JD, Cohen AL, Nelson SM, Wig GS, Barnes KA, Church JA, Vogel AC, Laumann TO, Miezin FM, Schlaggar BL, et al. 2011. Functional network organization of the human brain. Neuron. 72:665–678.
Ramel A, TI, EA, Martinez JA, Kiely M, Bandarra NM, Thorsdottir I. 2013. Relationship between BMI and body fatness in three European countries. Eur J Clin Nutr. 67:254–258.
Rideout VJ, Foehr UG, Roberts DF. 2010. Generation M2: media in the lives of 8- to 18-year-olds. Henry J. Kaiser Family Foundation.
Rosner B, Prineas R, Loggie J, Daniels SR. 1998. Percentiles for body mass index in U.S. children 5 to 17 years of age. J Pediatr. 132:211–222.
Rothemund Y, Preuschhof C, Bohner G, Bauknecht H-C, Klingebiel R, Flor H, Klapp BF. 2007. Differential activation of the dorsal striatum by high-calorie visual food stimuli in obese individuals. Neuroimage. 37:410–421.
Shah NR, Braverman ER. 2012. Measuring adiposity in patients: the utility of body mass index (BMI), percent body fat, and leptin. PLoS One. 7:e33308.
Simmons WK, Rapuano KM, Ingeholm JE, Avery J, Kallman S, Hall KD, Martin A. 2013. The ventral pallidum and orbitofrontal cortex support food pleasantness inferences. Brain Struct Funct.
Simmons WK, Rapuano KM, Kallman SJ, Ingeholm JE, Miller B, Gotts SJ, Avery JA, Hall KD, Martin A. 2013. Category-specific integration of homeostatic signals in caudal but not rostral human insula. Nat Neurosci. 16:1551–1552.
Stice E, Yokum S, Burger KS, Epstein LH, Small DM. 2011. Youth at risk for obesity show greater activation of striatal and somatosensory regions to food. J Neurosci. 31:4360–4366.
Stoeckel LE, Weller RE, Cook EW, Twieg DB, Knowlton RC, Cox JE. 2008. Widespread reward-system activation in obese women in response to pictures of high-calorie foods. Neuroimage. 41:636–647.
Takahashi T, Miyamoto T, Terao A, Yokoyama A. 2007. Cerebral activation related to the control of mastication during changes in food hardness. Neuroscience. 145:791–794.
Talma H, Chinapaw MJM, Bakker B, HiraSing RA, Terwee CB, Altenburg TM. 2013. Bioelectrical impedance analysis to estimate body composition in children and adolescents: a systematic review and evidence appraisal of validity, responsiveness, reliability and measurement error. Obes Rev. 14:895–905.
Van Essen DC, Glasser MF, Dierker DL, Harwell J, Coalson T. 2012. Parcellations and hemispheric asymmetries of human cerebral cortex analyzed on surface-based atlases. Cereb Cortex. 22:2241–2262.
Veldhuizen MG, Albrecht J, Zelano C, Boesveldt S, Breslin P, Lundström JN. 2011. Identification of human gustatory cortex by activation likelihood estimation. Hum Brain Mapp. 32:2256–2266.
Vul E, Harris C, Winkielman P, Pashler H. 2009. Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspect Psychol Sci. 4:274–290.
Wagner DD, Altman M, Boswell RG, Kelley WM, Heatherton TF. 2013. Self-regulatory depletion enhances neural responses to rewards and impairs top-down control. Psychol Sci. 24:2262–2271.
Wagner DD, Dal Cin S, Sargent JD, Kelley WM, Heatherton TF. 2011. Spontaneous action representation in smokers when watching movie characters smoke. J Neurosci. 31:894–898.
Wang G-J, Volkow ND, Felder C, Fowler JS, Levy AV, Pappas NR, Wong CT, Zhu W, Netusil N. 2002. Enhanced resting activity of the oral somatosensory cortex in obese subjects. Neuroreport. 13:1151–1155.
Widhalm K, Schönegger K, Huemer C, Auterith A. 2001. Does the BMI reflect body fat in obese children and adolescents? A study using the TOBEC method. Int J Obes Relat Metab Disord. 25:279–285.
Yokum S, Ng J, Stice E. 2012. Relation of regional gray and white matter volumes to current BMI and future increases in BMI: a prospective MRI study. Int J Obes (Lond). 36:656–664.
|
|
# Sketch the region whose area is represented by the definite integral. Integral from -2 to 8 of...
## Question:
Sketch the region whose area is represented by the definite integral.
{eq}\int_{-2}^{8} \left | x-4 \right | \, \mathrm{d}x {/eq}
Use a geometric formula to evaluate the integral.
## Area:
Recall that the area under a positive curve {eq}y=f(x),\, a\leq x\leq b {/eq} is given by the definite integral {eq}\int_a^b f(x)\, dx {/eq}. This was our original motivation for defining the integral and it allows us to evaluate the area under curves without standard geometric formulas.
The region whose area is given by the integral {eq}\int_{-2}^8 \left| x-4\right| \, dx {/eq} is shown below
We also recall that the area of a...
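Geometrically, the graph of y = |x − 4| over [−2, 8] forms two triangles: one with base 6 and height 6 on [−2, 4] and one with base 4 and height 4 on [4, 8], so the integral equals (1/2)(6)(6) + (1/2)(4)(4) = 18 + 8 = 26. A short midpoint-rule sum (a numerical sanity check; the grid size is an arbitrary choice) confirms this value:

```python
import math

# Midpoint-rule approximation of the integral of |x - 4| over [-2, 8].
# The integrand is piecewise linear and x = 4 lands on a cell boundary,
# so the midpoint rule is exact up to floating-point error.
a, b, n = -2.0, 8.0, 10_000
dx = (b - a) / n
area = math.fsum(abs(a + (i + 0.5) * dx - 4) * dx for i in range(n))

print(round(area, 6))  # 26.0
```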
|
|
Question
# A slab of material of dielectric constant K has the same area as that of the plates of a parallel plate capacitor but has the thickness d/2, where d is the separation between the plates. Find out the expression for its capacitance when the slab is inserted between the plates of the capacitor .
Solution
Hint: Write the expression for the potential difference between the plates when the slab is inserted, then find the capacitance using C = Q/V.

Step 1: Write the expression for the capacitance before the slab is inserted. Initially, when there is vacuum between the two plates, the capacitance of the capacitor is C₀ = ε₀A/d, where A is the area of the parallel plates. Suppose the capacitor is connected to a battery; an electric field E₀ is produced between the plates.

Step 2: Write the expression for the potential difference after the slab is inserted between the plates of the capacitor. If we insert the dielectric slab of thickness t = d/2, the electric field inside the slab is reduced to E = E₀/K. The gap between the plates is now divided into two parts: over the thickness t = d/2 the field is E₀/K, and over the remaining distance (d − t) the field is E₀. If V is the potential difference between the plates of the capacitor, then V = E₀(d − t) + (E₀/K)t = (E₀d/2)(1 + 1/K).

Step 3: Calculate the capacitance after the dielectric slab is inserted. With the plate charge Q = ε₀AE₀, C = Q/V = ε₀A / [(d/2)(1 + 1/K)] = 2Kε₀A / [d(K + 1)].
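The resulting expression, C = 2Kε₀A / [d(K + 1)], can be cross-checked by modeling the arrangement as two capacitors in series: a vacuum layer and a dielectric layer, each of thickness d/2. The sketch below does this with exact rational arithmetic; the unit choice (ε₀A = 1) and the sample K values are arbitrary:

```python
from fractions import Fraction as F

def series(c1, c2):
    # Two capacitors in series: 1/C = 1/C1 + 1/C2
    return c1 * c2 / (c1 + c2)

eps0A = F(1)   # work in units where eps0 * A = 1
d = F(2)       # arbitrary plate separation; each layer has thickness d/2

for K in (F(2), F(3), F(7, 2)):
    c_vacuum = eps0A / (d / 2)       # vacuum layer of thickness d/2
    c_slab = K * eps0A / (d / 2)     # dielectric layer of thickness d/2
    derived = 2 * K * eps0A / (d * (K + 1))
    assert series(c_vacuum, c_slab) == derived

print("series model matches 2*K*eps0*A / (d*(K + 1))")
```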
|
|
# Lie algebra: why does it have to be the tangent space at the IDENTITY of a Lie group?
Why is the identity element so important in this construction. I looked up some books and notes but still do not see why. How could the construction started from tangent space of a element other than identity possibly fail to get a Lie algebra so that people can only get it from the tangent space of identity?
• Depending on the definition you are using, because the conjugation action of the group on itself is not guaranteed to preserve any points other than the identity. However, if you are using the definition in terms of left-invariant vector fields, you could use the tangent space at any point. – Aaron Mar 20 '14 at 22:57
• If you do differential geometry (cannot tell) you will learn that multiplication, say on the left, by a group element takes the identity to that element and the tangent space to the tangent space, in a canonical and reversible way. So, the identity is just most convenient... – Will Jagy Mar 20 '14 at 22:58
If $G$ is a Lie group and $g \in G$ then the map $L_g\colon G \to G$ defined by $x \mapsto gx$ (so, left multiplication by $g$) is an isomorphism (a topological isomorphism; it is not a group homomorphism). Its derivative gives an isomorphism between the tangent space $T_1G$ of $G$ at the identity and the tangent space $T_gG$ of $G$ at $g$.
So to answer your question, it's not special. The tangent spaces at all points of $G$ are isomorphic. So we just pick one to work with and the identity is the only element that every group is guaranteed to have, so we pick the identity.
• How do you define the tangent space of $G$ at the identity? $T_IG$? – kalmanIsAGameChanger Jun 6 '17 at 15:17
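The point of the accepted answer can be checked numerically on a concrete group. Taking $G = SO(2)$ with a rotation curve $\gamma(t)$ through the identity, the derivative of $x \mapsto gx$ should carry the tangent vector $\gamma'(0)$ at the identity to $g\,\gamma'(0)$ at $g$. The particular element $g$ and the finite-difference step below are arbitrary choices for the check:

```python
import math

def matmul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rot(t):
    # A curve in SO(2) with rot(0) = I and rot'(0) = X below
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

X = [[0.0, -1.0], [1.0, 0.0]]   # tangent vector at the identity (in so(2))
g = rot(0.7)                    # an arbitrary group element

# Central difference of t -> g * rot(t) at t = 0, i.e. (dL_g)(X)
h = 1e-6
plus, minus = matmul(g, rot(h)), matmul(g, rot(-h))
dLgX = [[(plus[i][j] - minus[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

gX = matmul(g, X)
err = max(abs(dLgX[i][j] - gX[i][j]) for i in range(2) for j in range(2))
print(err < 1e-8)  # True
```

So the differential of left translation identifies $T_1G$ with $T_gG$, which is why picking the identity loses nothing.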
|
|
# LCM builtin in Python / Numpy
I can write a function to find the LCM (lowest common multiple) of an array of integers, but I thought it must have been implemented in numpy or scipy and was expecting something like numpy.lcm() to do that. I'm surprised to find there is no such thing. Perhaps I'm searching in the wrong place. So, if you know of any other library where this simple function is defined, please oblige.
Why not define it yourself, as suggest in the Rosetta Code examples here: http://rosettacode.org/wiki/Least_common_multiple#Python
For example, you could build on this (math.gcd in Python 3; in Python 2 the same function lived in the fractions module):
import math
def lcm(a, b): return abs(a * b) // math.gcd(a, b) if a and b else 0
I am not surprised at all that it's not in Numpy. Numpy is focused on floating-point and array/matrix computation, not on number theoretic functions and operations on integers. I can understand it needing internally a gcd function for some exotic array/stride computation, but it's really not the main point of the library. Sage would be the first place I'd look, not numpy.
Indeed it has this function, and it looks very fast:
sage: %timeit lcm(range(1,1000))
100 loops, best of 3: 820 µs per loop
If you are doing number theoretical computations, I'd recommend you to move to Sage instead of pure Python. You'll find that generally it has more of the stuff you need already implemented.
In Numpy v1.17 (which is, as of writing, the non-release development version) there is an lcm function that can be used for two numbers with, e.g.:
import numpy as np
np.lcm(12, 20)
or for multiple numbers with, e.g.:
np.lcm.reduce([40, 12, 20])
There's also a gcd function.
If speed is an issue, you should look at this thread: https://stackoverflow.com/questions/15569429/numpy-gcd-function It explains how you can actually easely code a quicker version of gcd (than the gcd from the fractions module) in python and then of course a lcm function from it as in the answer by blochwave.
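For what it's worth, if NumPy ≥ 1.17 is not available, the same reduction can be done with only the standard library. A minimal sketch (integer division keeps the result exact):

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    # lcm(a, b) = |a * b| / gcd(a, b); // keeps everything in integers
    return abs(a * b) // gcd(a, b) if a and b else 0

def lcm_list(values):
    # Fold the pairwise lcm across the whole list
    return reduce(lcm, values, 1)

print(lcm_list([40, 12, 20]))  # 120
```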
|
|
# If $|A| > \frac{|G|}{2}$ then $AA = G$ [closed]
I've found this proposition.
If $G$ is a finite group, $A \subset G$ a subset and $|A| > \frac{|G|}{2}$, then $AA = G$.
Why is this true?
• Found it where? Presumably, we're talking finite groups. – Thomas Andrews Feb 23 '14 at 22:37
• We had this before on math.stackexchange, but I don't know how to fed the search function. – Martin Brandenburg Feb 23 '14 at 22:44
Let $A^{-1}=\{a^{-1}\mid a\in A\}$; notice that $|A^{-1}|=|A|$, and let $x\in G$.
Now $|xA^{-1}|=|A|$. Since $2|A|>|G|$, the sets $xA^{-1}$ and $A$ must intersect; otherwise $G$ would contain at least $|xA^{-1}|+|A|>|G|$ elements, a contradiction.
Thus we must have $xa_1^{-1}=a_2$ for some $a_1\in A$ and $a_2\in A$, which gives $x=a_2a_1\in AA$, and we are done.
• In your very last implication, you're using the fact that $A$ is closed under miltiplication. Nevertheless $A$ is only a subset, so... – Gabriel Romon Mar 6 '14 at 11:38
• $A$ is not closed under multiplication. – mesel Mar 6 '14 at 12:10
• Oh it's $AA$! I'm sorry, $A$ prints as a blank character on my phone so the whole time I was considering $AA$ as $A$... – Gabriel Romon Mar 6 '14 at 12:24
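The counting argument can also be sanity-checked by brute force on small concrete groups. The sketch below uses the cyclic groups $\mathbb{Z}_n$ (written additively, so $AA$ becomes $A+A$) and tries every subset $A$ with $|A| > n/2$:

```python
from itertools import combinations

def holds_for_Zn(n):
    # In the additive group Z_n, every subset A with |A| > n/2
    # should satisfy A + A = Z_n (the proposition, written additively)
    G = set(range(n))
    for size in range(n // 2 + 1, n + 1):
        for A in combinations(range(n), size):
            if {(a + b) % n for a in A for b in A} != G:
                return False
    return True

print(all(holds_for_Zn(n) for n in range(1, 9)))  # True
```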
|
|
# All Questions
105 views
### run length testing [closed]
Looking for guidance/references with/about calculations: I am designing a widget that creates N-bit binary sequences. My customers target spec for the sequence's minimum entropy is X bits/bit. I plan ...
261 views
### Simple proof of work example?
Can anyone show me a simple proof of work algorithm that I can use to stop spammers? I've looked at hashcat, but i think there's a bit too much specialized hardware for bitcoin mining. That is, the ...
92 views
### Composing and Inverting Symmetric Encryption Keys
Using a symmetric encryption algorithm $E$ on a message $M$, users usually send $E(M)$ to the recipient and then the recipient will compute $E^{-1}(E(M))=M$ to retrieve the message. For what symmetric ...
76 views
I'm trying to analyze a login protocol based on symmetric key cryptography. I'm just starting out, so this may very likely be a very bad idea, but nonetheless I'd like to hear your thoughts on it. ...
256 views
### Given a 'good' basis for a lattice, how can we solve the CVP?
I'm doing a little bit of reading about lattices. I read that if we can find a 'short' basis for our given lattice, we can solve CVP and SVP very efficiently. However, the paper didn't describe an ...
121 views
### Is this an acceptable All-or-Nothing Transform?
I was thinking about AONTs, and designed the one below, I call it CHANT for Chained-Hash All-or-Nothing Transform; it's my very first shot at something of the sort, and was hoping I could get your ...
747 views
### Rainbow table for DES with all-zero plaintext?
Consider the function $F$ from $\{0,1\}^{56}$ to $\{0,1\}^{64}$, mapping the operative bits of a DES key to the ciphertext for all-zero plaintext. How could we organize a rainbow table to invert that ...
355 views
I'm looking to do some research into how RSA is used in the wild. I've read through a few papers and it seems other researchers have had no trouble collecting millions of keys to perform analysis on. ...
254 views
### Encrypting or HMACing password digests
Assuming I'm using bcrypt to digest passwords, is any additional security gained by either encrypting or HMACing the resulting digests? By requiring a key to compare password hashes, I would expect ...
205 views
### What are good combinations of public key algorithms or primitives for long term security?
There has been talk in literature about doing multiple encryption with different block ciphers or stream ciphers several times before, and the benefits and risks of such efforts. There may be hidden ...
2k views
### How can rainbow tables be used for a dictionary attack?
I'm putting together a password policy for my company. I very much want to avoid requiring complex passwords, and would much rather require length. The maximum length I can enforce is 14 characters. ...
498 views
### Challenge-response based on public-key decryption, why send public key
Quoting the handbook of applied cryptography, chapter 10.3.3 (i): Identification based on PK decryption and witness. Consider the following protocol: $A \leftarrow B: h(r), B, P_A(r,B)$ ...
86 views
### Encrypted files in an encrypted partition or folder
Recently I started wondering about multiple encryption and found pages upon pages of posts and debates about the security of cascades. Now, don’t worry – I'm not looking for someone to discuss a ...
389 views
### Public key Issue - same key pair as existing one? [duplicate]
My question is a bit naive, but what if someone generates the same RSA key pair as someone else? This person would have the same private key and so would be able to decrypt messages not intended to ...
457 views
### Test Vectors for ciphers
While implementing ciphers (/hash functions, ...), I often face this problem: Where to find test vectors for it; so that I can guarantee my program is correct. It is generally a tedious job to find ...
112 views
### OTT service using FPE
Would it be possible to create an Over-The-Top communication utility that will encrypt voice using format preserving encryption (voice clear-text to audio encrypted stream) and send that over an ...
270 views
### PAK Diffie-Hellman vs. sharing high-entropy key
In order to setup an authenticated shared secret key between two clients, I am faced with the choice between two possibilities: Let users set a (low-entropy) shared password and then perform some ...
418 views
### What current authenticated key exchange standards exist?
If neither of the 'big two' of TLS Handshake and IKE are appropriate in a given situation, what alternative Authenticated Key Exchange (AKE) standards exist and are recommended? Many protocols have ...
537 views
### Is it secure to use Diffie-Hellman key agreement to generate a nonce?
I have a system, using AES, in one of the modes that uses a nonce and authentication. We have a pre-shared key, and to agree about initial nonce we could use Diffie-Hellman, using the resulting ...
6k views
### How does a rolling code work?
I have general questions regarding rolling codes. Basically there is a sender and a receiver. Both have a sequence generator. The receiver checks if the received sequence matches the newly generated. ...
200 views
|
|
# How do you add a figure to an APA paper?
## How do you add a figure to an APA paper?
Placement of Figures in a Paper There are two options for the placement of figures (and tables) in a paper. The first is to embed figures in the text after each is first mentioned (or called out); the second is to place each figure on a separate page after the reference list.
## Where do tables and figures go in APA?
Figures: Place each figure on a separate page at the end of your manuscript, after any tables (or after the reference list, if there are no tables).
What is the difference between a table and a figure in APA?
Any image or illustration in APA is treated as either a Table or a Figure. Tables are numerical values or text displayed in rows and columns. A Figure is any type of illustration (chart, graph, photograph, drawing, map, etc.) other than a table.
### How do you label a chart in APA?
APA does not require a title within the graph itself (except in research papers for classes). BUT all figures need be numbered and have a title in a caption below the graph. The text in a figure should be in a san serif font (such as Calibri, Helvetica, Arial, or Futura).
### How do you cite figures in APA 6th edition?
The note below the figure must include the following: Title of Work, by Author, date, retrieved from Date of Copyright by Copyright Holder. The figure number is as it would appear, numbered consecutively in your paper, not the figure number assigned to it in its original resource. All figures must be mentioned in the text.
|
|
# find the eigen function and eigen value of differential operator
I have an operator, defined in the cylindrical coordinate system with cylindrical symmetry, given by: $$\frac{\partial^2}{\partial r^2}+ \frac{\partial}{r\partial r}$$
I would like to find the eigenfunctions and their eigenvalues for this operator. I found that they will be Bessel functions of the first kind, but I would like to see the derivation.
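Not a derivation, but a quick numerical check of the claim: assuming an eigenvalue of the form $\lambda = -k^2$, the eigenfunction should be $f(r) = J_0(kr)$, the order-zero Bessel function of the first kind, so the residual $f'' + f'/r + k^2 f$ should vanish. Using the power series of $J_0$ so that no external libraries are needed (the values of $k$, $r$, and the step size are arbitrary choices):

```python
import math

def j0(x):
    # Power series of the Bessel function J0; converges quickly for small |x|
    return sum((-1) ** m * (x / 2) ** (2 * m) / math.factorial(m) ** 2
               for m in range(30))

k, r = 2.0, 1.3          # arbitrary wavenumber and sample point (r > 0)
f = lambda s: j0(k * s)

h = 1e-5                 # central-difference step
fpp = (f(r + h) - 2 * f(r) + f(r - h)) / h ** 2   # f''(r)
fp = (f(r + h) - f(r - h)) / (2 * h)              # f'(r)

residual = fpp + fp / r + k ** 2 * f(r)           # ~0 if L f = -k^2 f
print(abs(residual) < 1e-4)  # True
```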
|
|
2 edition of Kinetic Theory (Student Monographs in Physics) found in the catalog.
Kinetic Theory (Student Monographs in Physics)
J.M Pendlebury
# Kinetic Theory (Student Monographs in Physics)
## by J.M Pendlebury
Written in English
Subjects:
• Classical mechanics,
• Gas technology,
• Particle & high-energy physics,
• Science/Mathematics,
• Science,
• General,
• Science / Mathematical Physics,
• Mathematical Physics
• The Physical Object
FormatPaperback
Number of Pages64
ID Numbers
Open LibraryOL9663661M
ISBN 100852747969
ISBN 109780852747964
An ideal gas is a gas that exactly follows the statements of the kinetic theory. Unfortunately, real gases are not ideal. Many gases deviate slightly from agreeing perfectly with the kinetic theory of gases. However, most gases adhere to the statements so well that the kinetic theory of gases is well accepted by the scientific community. Useful for advanced undergraduates in physics and engineering who have some familiarity with calculus, this text is an edition of An Introduction to Thermodynamics, Kinetic Theory, and Statistical Mechanics, written by Francis Sears.
Basics of Kinetic Theory: it says that the molecules of a gas are in random motion and are continuously colliding with each other and with the walls of the container. All the collisions involved are elastic, so the total kinetic energy and the total momentum are both conserved. Additional Physical Format: Online version: Greenberg, W. (William), Boundary value problems in abstract kinetic theory. Basel; Boston: Birkhäuser Verlag.
This is "Kinetic Theory of Gases", a section from the book Beginning Chemistry (v. ). This book is licensed under a Creative Commons by-nc-sa license. Theory of Relativity: It explains the invariance in nature and also the motion of particles which travel with velocities close to that of light. 3. Thermodynamics: It is the theory of heat, temperature, and the conversion of heat into work and vice versa.
### Kinetic Theory (Student Monographs in Physics) by J.M Pendlebury
Kinetic Theory Of Gases. This book covers the following topics: Foundations Of The Hypothesis, Pressure Of Gases, Maxwell's Law, Ideal And Actual Gases, Molecular And Atomic Energy, Molecular Free Paths, Viscosity Of Gases, Diffusion Of Gases, and Conduction Of Heat.
Kinetic Theory Books. This section contains free e-books and guides on Kinetic Theory, some of the resources in this section can be viewed online and some of them can be downloaded.
Fundamental Kinetic Processes. This note will discuss the development of basic kinetic approaches to more complex and contemporary systems. Topics covered includes. This item: Kinetic Theory of Gases (Dover Books on Chemistry) by Walter Kauzmann Paperback $Only 1 left in stock (more on the way). Ships from and sold by Influence of Sea Power Upon History, (Illustrated) by Alfred Thayer Mahan Paperback$Cited by: This book should be of interest to graduate students and others undertaking research in kinetic theory. Show less Kinetic Theory, Volume 3: The Chapman-Enskog Solution of the Transport Equation for Moderately Dense Gases describes the Chapman-Enskog solution of the transport equation for moderately dense gases.
Kinetic Theory, Volume I: The Nature of Gases and of Heat covers the developments in area of kinetic theory, statistical mechanics, and thermodynamics. This book is. Purchase Kinetic Theory - 1st Edition. Print Book & E-Book. ISBNBook Edition: 1. Kinetic theory is the atomic description of gases as well as liquids and solids.
It models the properties of matter in terms of the random motion of molecules. The ideal gas law can be expressed in terms of the mass of the gas's molecules and \bar{v^2}, the average of the molecular speed squared, instead of the temperature.
The basic assumptions of kinetic theory are: a gas consists of particles, called molecules, which move randomly in all directions; and the volume of a molecule is very small in comparison with the volume occupied by the gas, i.e., the size of a molecule is infinitesimally small.
Because atomic theory was not fully embraced in the early 20th century, it was not until Albert Einstein published one of his seminal works describing Brownian motion (Einstein), in which he modeled matter using a kinetic theory of molecules, that the idea of an atomic (or molecular) picture really took hold in the scientific community.
The kinetic theory of gases supposes that a gaseous compound is stored in a closed container. The atoms of the gas strike the walls of the container, and these collisions are what give rise to the pressure the gas exerts. Kinetic theory also provides the essential material for an introductory course on plasma physics as well as the basis for advanced kinetic theory.
It offers wide-ranging coverage of the field. Plasma kinetics deals with the relationship between velocity and forces. Typical introductory topics include: introduction to temperature, kinetic theory, and the gas laws; temperature; thermal expansion of solids and liquids; the ideal gas law; kinetic theory as the atomic and molecular explanation of pressure and temperature; phase changes; humidity, evaporation, and boiling; a glossary; section summaries; and conceptual questions.
The presentation is clear, and the book includes tables and figures that make it more pleasant to read; it gives a standard presentation of kinetic theory, and one of its qualities is to provide clear calculations and a good overview of classical methods. A separate book on kinetic art divides its content by art form: pictures, sculptures, electronic systems, and finally a selection of discussions and articles on other subjects related to kinetic art, such as the use of sound or holography. It is highly recommended for readers who want to know more about that artistic movement.
The temperature of a gas depends on the average kinetic energy of its molecules, \langle \tfrac{1}{2} m v^2 \rangle = \tfrac{3}{2} k T. In other words, the energy of an ideal gas is entirely kinetic. The remarkable thing about the kinetic molecular theory is that it can be used to derive the ideal gas law. Kinetic theory attempts to explain the overall properties of gases, such as pressure, temperature, and volume, by considering their molecular composition and motion. The theory states that pressure is not caused by molecules pushing each other away, as earlier scientists thought; rather, pressure is caused by the molecules colliding with each other and with the walls of the container.
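The relation ⟨½mv²⟩ = (3/2)kT above fixes the root-mean-square molecular speed at a given temperature. A minimal numerical sketch, using the standard values of the Boltzmann constant and the mass of a nitrogen molecule (the choice of N2 at 300 K is illustrative, not from the text):

```python
import math

# Boltzmann constant (J/K) and the mass of one N2 molecule (kg),
# both standard physical constants, used here for illustration.
K_B = 1.380649e-23
M_N2 = 28.0 * 1.66053907e-27  # 28 atomic mass units in kg

def v_rms(temperature_k: float, molecule_mass_kg: float) -> float:
    """Root-mean-square speed from <1/2 m v^2> = (3/2) k T."""
    return math.sqrt(3.0 * K_B * temperature_k / molecule_mass_kg)

speed = v_rms(300.0, M_N2)  # nitrogen at room temperature
print(round(speed))         # roughly 517 m/s
```

The resulting speed, on the order of 500 m/s for air at room temperature, is what makes the pressure exerted by molecular collisions on the container walls so substantial.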
Kinetic theory of gases: the kinetic theory of gases describes a gas as a large number of small particles (atoms or molecules), all of which are in constant, random motion.
Newtonian mechanics: early classical mechanics as propounded by Isaac Newton, especially that based on his laws of motion and theory of gravity.
Prelude to the kinetic theory of gases: gases are literally all around us; the air that we breathe is a mixture of gases. Other gases include those that make breads and cakes soft, those that make drinks fizzy, and those that burn to heat many homes. Engines and refrigerators depend on the behaviors of gases, as we will see in later chapters.
Kinetic Theory of Matter, by Ron Kurtus (revised 29 December), states that matter is composed of a large number of small particles, individual atoms or molecules, that are in constant motion. This theory is also called the kinetic-molecular theory of matter and the kinetic theory of gases.
Worked Problems in Heat, Thermodynamics and Kinetic Theory for Physics Students is a complement to textbooks in physics. This book is a collection of exercise problems that have been part of tutorial classes in heat and thermodynamics at the University of London.
# Thermodynamic potential
Source: World Heritage Encyclopedia (Article Id: WHEBN0000255446, English).
A thermodynamic potential is a scalar quantity used to represent the thermodynamic state of a system. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces (that is why it is a potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for U.

In thermodynamics, certain forces, such as gravity, are typically disregarded when formulating expressions for potentials. For example, while all the working fluid in a steam engine may have higher energy due to gravity while sitting on top of Mount Everest than it would at the bottom of the Mariana Trench, the gravitational potential energy term in the formula for the internal energy would usually be ignored because changes in gravitational potential within the engine during operation would be negligible.
## Contents
• Description and interpretation
• Natural variables
• The fundamental equations
• The equations of state
• The Maxwell relations
• Euler integrals
• The Gibbs–Duhem relation
• Chemical reactions
• Notes
• References
• Further reading
## Description and interpretation
Five common thermodynamic potentials are:[1]
| Name | Symbol | Formula | Natural variables |
|------|--------|---------|-------------------|
| Internal energy | U | \int ( T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i ) | S, V, \{N_i\} |
| Helmholtz free energy | F | U - TS | T, V, \{N_i\} |
| Enthalpy | H | U + pV | S, p, \{N_i\} |
| Gibbs free energy | G | U + pV - TS | T, p, \{N_i\} |
| Landau potential (grand potential) | \Omega, \Phi_\text{G} | U - TS - \sum_i \mu_i N_i | T, V, \{\mu_i\} |
where T = temperature, S = entropy, p = pressure, V = volume. The Helmholtz free energy is often denoted by the symbol F, but the use of A is preferred by IUPAC,[2] ISO and IEC.[3] Ni is the number of particles of type i in the system and μi is the chemical potential for an i-type particle. For the sake of completeness, the set of all Ni are also included as natural variables, although they are sometimes ignored.
These five common potentials are all energy potentials, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials.
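The definitions in the table are simple algebraic combinations of the state variables, so they can be encoded directly. A minimal sketch; all the numerical state values below are made up purely for illustration:

```python
def potentials(U, T, S, p, V, mu_N=0.0):
    """Return the five common potentials for a single-species system.

    mu_N stands for the sum over species of mu_i * N_i; it defaults to
    zero, as for a closed system with fixed composition.
    """
    F = U - T * S                  # Helmholtz free energy
    H = U + p * V                  # enthalpy
    G = U + p * V - T * S          # Gibbs free energy
    Omega = U - T * S - mu_N       # Landau (grand) potential
    return {"U": U, "F": F, "H": H, "G": G, "Omega": Omega}

# Illustrative state: U in J, T in K, S in J/K, p in Pa, V in m^3.
vals = potentials(U=1500.0, T=300.0, S=2.0, p=1.0e5, V=1.0e-3)
# For this state: F = 900, H = 1600, G = 1000, Omega = 900
```

Note that with mu_N = 0 the grand potential coincides with the Helmholtz free energy, exactly as the formulas in the table imply.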
Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings. Internal energy (U) is the capacity to do work plus the capacity to release heat. Gibbs energy is the capacity to do non-mechanical work. Enthalpy is the capacity to do non-mechanical work plus the capacity to release heat. Helmholtz free energy is the capacity to do mechanical work (useful work). From these definitions we can say that ΔU is the energy added to the system, ΔF is the total work done on it, ΔG is the non-mechanical work done on it, and ΔH is the sum of non-mechanical work done on the system and the heat given to it.

Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. The chemical reactions usually take place under some simple constraints such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards lower values of potential and at equilibrium, under these constraints, the potential will take on an unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint.
In particular: (see principle of minimum energy for a derivation)[4]
• When the entropy (S ) and "external parameters" (e.g. volume) of a closed system are held constant, the internal energy (U ) decreases and reaches a minimum value at equilibrium. This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle.
• When the temperature (T ) and external parameters of a closed system are held constant, the Helmholtz free energy (F ) decreases and reaches a minimum value at equilibrium.
• When the pressure (p) and external parameters of a closed system are held constant, the enthalpy (H ) decreases and reaches a minimum value at equilibrium.
• When the temperature (T ), pressure (p) and external parameters of a closed system are held constant, the Gibbs free energy (G ) decreases and reaches a minimum value at equilibrium.
## Natural variables
The variables that are held constant in this process are termed the natural variables of that potential.[5] The natural variables are important not only for the above-mentioned reason, but also because if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables, and this is true for no other combination of variables. Conversely, if a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system.
Notice that the set of natural variables for the above four potentials are formed from every combination of the T-S and P-V variables, excluding any pairs of conjugate variables. There is no reason to ignore the N_i–\mu_i conjugate pairs, and in fact we may define four additional potentials for each species.[6] Using IUPAC notation in which the brackets contain the natural variables (other than the main four), we have:

| Formula | Natural variables |
|---------|-------------------|
| U[\mu_j] = U - \mu_j N_j | S, V, \{N_{i\ne j}\}, \mu_j |
| F[\mu_j] = U - TS - \mu_j N_j | T, V, \{N_{i\ne j}\}, \mu_j |
| H[\mu_j] = U + pV - \mu_j N_j | S, p, \{N_{i\ne j}\}, \mu_j |
| G[\mu_j] = U + pV - TS - \mu_j N_j | T, p, \{N_{i\ne j}\}, \mu_j |
If there is only one species, then we are done. But, if there are, say, two species, then there will be additional potentials such as U[\mu_1,\mu_2] = U-\mu_1 N_1-\mu_2 N_2 and so on. If there are D dimensions to the thermodynamic space, then there are 2^D unique thermodynamic potentials. For the most simple case, a single-phase ideal gas, there will be three dimensions, yielding eight thermodynamic potentials.
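The 2^D count comes from choosing, for each conjugate pair, whether to keep the extensive variable or Legendre-swap in its intensive partner. A small enumeration sketch for the single-species case D = 3 (the variable names are just labels for the demonstration):

```python
from itertools import combinations

# Conjugate pairs for a single-species system (D = 3): the extensive
# natural variable of U, and the intensive variable a Legendre
# transform swaps in: (S, T), (V, p), (N, mu).
pairs = [("S", "T"), ("V", "p"), ("N", "mu")]

def enumerate_potentials(pairs):
    """Natural-variable sets of all 2**D Legendre transforms of U."""
    result = []
    for r in range(len(pairs) + 1):
        for swap in combinations(range(len(pairs)), r):
            result.append(tuple(pairs[i][1] if i in swap else pairs[i][0]
                                for i in range(len(pairs))))
    return result

pots = enumerate_potentials(pairs)
# ("S", "V", "N") is U itself; ("T", "p", "mu") is the fully
# transformed potential; len(pots) == 8 == 2**3
```

The fully transformed entry, with natural variables (T, p, mu), is the one that vanishes identically by the Gibbs–Duhem relation discussed later in the article.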
## The fundamental equations
The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow.[7] (Actually they are all expressions of the same fundamental thermodynamic relation, but are expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy U of a system can be written as the sum of heat flowing into the system and work done by the system on the environment, along with any change due to the addition of new particles to the system:
\mathrm{d}U = \delta Q - \delta W+\sum_i \mu_i\,\mathrm{d}N_i
where δQ is the infinitesimal heat flow into the system, and δW is the infinitesimal work done by the system, μi is the chemical potential of particle type i and Ni is the number of type i particles. (Note that neither δQ nor δW are exact differentials. Small changes in these variables are, therefore, represented with δ rather than d.)
By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In case of reversible changes we have:
\delta Q = T\,\mathrm{d}S\,
\delta W = p\,\mathrm{d}V\,
where
T is temperature,
S is entropy,
p is pressure,
and V is volume, and the equality holds for reversible processes.
This leads to the standard differential form of the internal energy in case of a quasistatic reversible change:
\mathrm{d}U = T\mathrm{d}S - p\mathrm{d}V+\sum_i \mu_i\,\mathrm{d}N_i\,
Since U, S and V are thermodynamic functions of state, the above relation holds also for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:
dU = T \, dS - \sum_i X_i \, dx_{i} + \sum_j \mu_j \, dN_j\,
Here the Xi are the generalized forces corresponding to the external variables xi.
Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials:
\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i
\mathrm{d}F = -S\,\mathrm{d}T - p\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i
\mathrm{d}H = T\,\mathrm{d}S + V\,\mathrm{d}p + \sum_i \mu_i\,\mathrm{d}N_i
\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}p + \sum_i \mu_i\,\mathrm{d}N_i
Note that the infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of 2^D fundamental equations.
The differences between the four thermodynamic potentials can be summarized as follows:
\mathrm{d}(pV) = \mathrm{d}H - \mathrm{d}U = \mathrm{d}G - \mathrm{d}F
\mathrm{d}(TS) = \mathrm{d}U - \mathrm{d}F = \mathrm{d}H - \mathrm{d}G
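In finite form these differences say that pV is the gap between H and U (and equally between G and F), while TS is the gap between U and F (and equally between H and G). A quick numerical sanity check, with all state values made up for the demonstration:

```python
# Illustrative state values; the identities below hold for any state.
U, T, S, p, V = 1500.0, 300.0, 2.0, 1.0e5, 1.0e-3

F = U - T * S          # Helmholtz free energy
H = U + p * V          # enthalpy
G = U + p * V - T * S  # Gibbs free energy

# pV is the gap between H and U, and equally between G and F.
assert abs((H - U) - (G - F)) < 1e-9 and abs((H - U) - p * V) < 1e-9
# TS is the gap between U and F, and equally between H and G.
assert abs((U - F) - (H - G)) < 1e-9 and abs((U - F) - T * S) < 1e-9
```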
## The equations of state
We can use the above equations to derive some differential definitions of some thermodynamic parameters. If we define Φ to stand for any of the thermodynamic potentials, then the above equations are of the form:
\mathrm{d}\Phi=\sum_i x_i\,\mathrm{d}y_i\,
where xi and yi are conjugate pairs, and the yi are the natural variables of the potential Φ. From the chain rule it follows that:
x_j=\left(\frac{\partial \Phi}{\partial y_j}\right)_{\{y_{i\ne j}\}}
where y_{i\ne j} denotes the set of all natural variables of Φ except y_j. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state since they specify parameters of the thermodynamic state.[8] If we restrict ourselves to the potentials U, F, H and G, then we have:
+T=\left(\frac{\partial U}{\partial S}\right)_{V,\{N_i\}} =\left(\frac{\partial H}{\partial S}\right)_{p,\{N_i\}}
-p=\left(\frac{\partial U}{\partial V}\right)_{S,\{N_i\}} =\left(\frac{\partial F}{\partial V}\right)_{T,\{N_i\}}
+V=\left(\frac{\partial H}{\partial p}\right)_{S,\{N_i\}} =\left(\frac{\partial G}{\partial p}\right)_{T,\{N_i\}}
-S=\left(\frac{\partial G}{\partial T}\right)_{p,\{N_i\}} =\left(\frac{\partial F}{\partial T}\right)_{V,\{N_i\}}
~\mu_j= \left(\frac{\partial \phi}{\partial N_j}\right)_{X,Y,\{N_{i\ne j}\}}
where, in the last equation, ϕ is any of the thermodynamic potentials U, F, H, G, and {X, Y, \{N_{i\ne j}\}} is the set of natural variables for that potential, excluding N_j. If we use all potentials, then we will have more equations of state such as
-N_j=\left(\frac{\partial U[\mu_j]}{\partial \mu_j}\right)_{S,V,\{N_{i\ne j}\}}
and so on. In all, there will be D equations for each potential, resulting in a total of D·2^D equations of state. If the D equations of state for a particular potential are known, then the fundamental equation for that potential can be determined. This means that all thermodynamic information about the system will be known, and that the fundamental equations for any other potential can be found, along with the corresponding equations of state.
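One of these equations of state, -p = (∂F/∂V)_T, can be checked numerically for an ideal gas, for which F(T, V) = -nRT ln V plus terms depending on T only (those terms drop out of the V-derivative, so they are omitted). A sketch with illustrative values of n and T:

```python
import math

R = 8.314462618    # molar gas constant, J/(mol K)
n, T = 1.0, 300.0  # illustrative amount of gas and temperature

def F(V):
    """V-dependent part of the ideal-gas Helmholtz free energy."""
    return -n * R * T * math.log(V)

V = 0.01                                  # m^3
h = 1e-7                                  # finite-difference step
dF_dV = (F(V + h) - F(V - h)) / (2 * h)   # numerical (dF/dV)_T
p = n * R * T / V                          # ideal gas pressure

assert abs(dF_dV + p) / p < 1e-6           # (dF/dV)_T = -p
```

The same finite-difference pattern verifies any of the other equations of state, e.g. -S = (∂F/∂T)_V, once the corresponding potential is written as a function of its natural variables.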
## The Maxwell relations
Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of some potential Φ. We may take the "cross differentials" of the state equations, which obey the following relationship:
\left(\frac{\partial}{\partial y_j} \left(\frac{\partial \Phi}{\partial y_k} \right)_{\{y_{i\ne k}\}} \right)_{\{y_{i\ne j}\}} = \left(\frac{\partial}{\partial y_k} \left(\frac{\partial \Phi}{\partial y_j} \right)_{\{y_{i\ne j}\}} \right)_{\{y_{i\ne k}\}}
From these we get the Maxwell relations.[1][9] There will be D(D − 1)/2 of them for each potential. If we restrict ourselves to the potentials U, F, H and G, we get:
\left(\frac{\partial T}{\partial V}\right)_{S,\{N_i\}} = -\left(\frac{\partial p}{\partial S}\right)_{V,\{N_i\}}
\left(\frac{\partial T}{\partial p}\right)_{S,\{N_i\}} = +\left(\frac{\partial V}{\partial S}\right)_{p,\{N_i\}}
\left(\frac{\partial S}{\partial V}\right)_{T,\{N_i\}} = +\left(\frac{\partial p}{\partial T}\right)_{V,\{N_i\}}
\left(\frac{\partial S}{\partial p}\right)_{T,\{N_i\}} = -\left(\frac{\partial V}{\partial T}\right)_{p,\{N_i\}}
Using the equations of state involving the chemical potential we get equations such as:
\left(\frac{\partial T}{\partial N_j}\right)_{V,S,\{N_{i\ne j}\}} = \left(\frac{\partial \mu_j}{\partial S}\right)_{V,\{N_i\}}
and using the other potentials we can get equations such as:
\left(\frac{\partial N_j}{\partial V}\right)_{S,\mu_j,\{N_{i\ne j}\}} = -\left(\frac{\partial p}{\partial \mu_j}\right)_{S,V\{N_{i\ne j}\}}
\left(\frac{\partial N_j}{\partial N_k}\right)_{S,V,\mu_j,\{N_{i\ne j,k}\}} = -\left(\frac{\partial \mu_k}{\partial \mu_j}\right)_{S,V\{N_{i\ne j}\}}
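The third of the four main Maxwell relations, (∂S/∂V)_T = (∂p/∂T)_V, can be checked numerically for an ideal gas, for which S(T, V) = n c_v ln T + n R ln V plus a constant (the constant cancels in the derivatives, so it is omitted). The values of n, c_v, T and V below are illustrative:

```python
import math

R, cv, n = 8.314462618, 12.47, 1.0  # gas constant, cv of a monatomic gas

def S(T, V):
    """Ideal-gas entropy up to an additive constant."""
    return n * cv * math.log(T) + n * R * math.log(V)

def p(T, V):
    """Ideal-gas pressure."""
    return n * R * T / V

T0, V0, h = 300.0, 0.02, 1e-6
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)  # (dS/dV)_T
dp_dT = (p(T0 + h, V0) - p(T0 - h, V0)) / (2 * h)  # (dp/dT)_V

assert abs(dS_dV - dp_dT) / dp_dT < 1e-6  # both equal n*R/V
```

Both sides are second cross-derivatives of F, so their equality is just the symmetry of mixed partials applied to the equations of state of the previous section.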
## Euler integrals
Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of the internal energy. Since all of the natural variables of the internal energy U are extensive quantities
U(\{\alpha y_i\})=\alpha U(\{y_i\})\,
it follows from Euler's homogeneous function theorem that the internal energy can be written as:
U(\{y_i\})=\sum_j y_j\left(\frac{\partial U}{\partial y_j}\right)_{\{y_{i\ne j}\}}
From the equations of state, we then have:
U=TS-pV+\sum_i \mu_i N_i\,
Substituting into the expressions for the other main potentials we have:
F= -pV+\sum_i \mu_i N_i\,
H=TS +\sum_i \mu_i N_i\,
G= \sum_i \mu_i N_i\,
As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Note that the Euler integrals are sometimes also referred to as fundamental equations.
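The Euler forms above can be confirmed numerically: build U from its Euler integral for a chosen set of (illustrative) intensive and extensive values, then check that the Legendre transforms collapse to the stated expressions:

```python
# Illustrative single-species state values.
T, S, p, V, mu, N = 300.0, 2.0, 1.0e5, 1.0e-3, 5.0, 10.0

U = T * S - p * V + mu * N    # Euler integral for U
F = U - T * S                 # should equal -pV + mu*N
H = U + p * V                 # should equal  TS + mu*N
G = U + p * V - T * S         # should equal       mu*N

assert abs(F - (-p * V + mu * N)) < 1e-9
assert abs(H - (T * S + mu * N)) < 1e-9
assert abs(G - mu * N) < 1e-9
```

The last line, G = Σ μ_i N_i, is the fact used in the next section to derive the Gibbs–Duhem relation.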
## The Gibbs–Duhem relation
Deriving the Gibbs–Duhem equation from basic thermodynamic state equations is straightforward.[7][10][11] Equating any thermodynamic potential definition with its Euler integral expression yields:
U=TS-pV+\sum_i \mu_i N_i\,
Differentiating, and using the fundamental equation:
\mathrm{d}U=T\,\mathrm{d}S-p\,\mathrm{d}V+\sum_i\mu_i\,\mathrm{d}N_i
yields:
0=S\,\mathrm{d}T-V\,\mathrm{d}p+\sum_i N_i\,\mathrm{d}\mu_i\,
which is the Gibbs–Duhem relation. The Gibbs–Duhem relation is a relationship among the intensive parameters of the system. It follows that for a simple system with I components, there will be I + 1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and temperature. The law is named after Josiah Willard Gibbs and Pierre Duhem.
## Chemical reactions
Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions, as shown in the following table. Δ denotes the change in the potential and at equilibrium the change will be zero.
|            | Constant V | Constant p |
|------------|------------|------------|
| Constant S | ΔU         | ΔH         |
| Constant T | ΔF         | ΔG         |
Most commonly one considers reactions at constant p and T, so the Gibbs free energy is the most useful potential in studies of chemical reactions.
## Notes
1. ^ a b Alberty (2001) p. 1353
2. ^ Alberty (2001) p. 1376
3. ^ ISO/IEC 80000-5:2007, item 5-20.4
4. ^ Callen (1985) p. 153
5. ^ Alberty (2001) p. 1352
6. ^ Alberty (2001) p. 1355
7. ^ a b Alberty (2001) p. 1354
8. ^ Callen (1985) p. 37
9. ^ Callen (1985) p. 181
10. ^ Moran & Shapiro, p. 538
11. ^ Callen (1985) p. 60
## References
• Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (PDF). Pure Appl. Chem. 73 (8): 1349–1380.
• Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: John Wiley & Sons.
• Moran, Michael J.; Shapiro, Howard N. (1996). Fundamentals of Engineering Thermodynamics (3rd ed.). New York; Toronto: J. Wiley & Sons.
## Further reading
• McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, ISBN 0-07-051400-3
• Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, ISBN 9781420073683
• Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971, ISBN 0-356-03736-3
• Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974, ISBN 0-201-05229-6
• Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 9780471566588
# The number of hurricanes that will hit a certain house in the next 10 years is Poisson distributed with mean 4.
###### Question:
The number of hurricanes that will hit a certain house in the next 10 years is Poisson distributed with mean 4. Each hurricane results in a loss that is exponentially distributed with mean 100. Losses are mutually independent and independent of the number of hurricanes. Calculate the variance of the total loss due to hurricanes hitting this house in the next 10 years.
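This is a compound Poisson sum S = X_1 + … + X_N with N ~ Poisson(λ = 4), for which Var(S) = λ·E[X²]. With X exponential of mean 100, E[X²] = 2·100² = 20,000, so Var(S) = 4·20,000 = 80,000. A Monte Carlo sketch confirming the formula (the simulation details, such as the sample size, are arbitrary choices):

```python
import math
import random

LAM, MEAN = 4.0, 100.0
exact_var = LAM * 2.0 * MEAN ** 2   # lambda * E[X^2] = 80,000

def simulate_total_loss(rng):
    """One draw of the total loss: Poisson count, exponential losses."""
    # Poisson draw via the classic Knuth multiplication method.
    limit, k, prod = math.exp(-LAM), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            break
        k += 1
    return sum(rng.expovariate(1.0 / MEAN) for _ in range(k))

rng = random.Random(0)
samples = [simulate_total_loss(rng) for _ in range(100_000)]
m = sum(samples) / len(samples)
var = sum((s - m) ** 2 for s in samples) / (len(samples) - 1)
# var lands close to the exact answer of 80,000
```

The same λ·E[X²] shortcut works because, for a compound Poisson sum, the two usual terms λ·Var(X) + λ·E[X]² combine into a single second-moment term.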
What is the equation of the line that has a slope of m= \frac { 2} { 9} and goes through the point ( 5,2)?...
##### )Question 211 ptsTwo identical blocks are tied together by a massless rope. One block is on an incline that makes an angle of 30" with the horizontal. The other is hung from a massless, frictionless pulley at the top of the incline. The first block is pulled up the incline at constant velocity by the second block when this second block falls. If the incline has an angle of 309, what is the coefficient of friction between the first block and the incline?0.5808*00.680.240.37Question 22
) Question 21 1 pts Two identical blocks are tied together by a massless rope. One block is on an incline that makes an angle of 30" with the horizontal. The other is hung from a massless, frictionless pulley at the top of the incline. The first block is pulled up the incline at constant veloci...
##### Q3. If a solution ofhydrofluoric acid (HF; Ka = 6.8xl0-) has pH of 2.12, calculate the initial concentration of hydrofluoric acidHF (aq) = Htlag) + F-(ag)
Q3. If a solution ofhydrofluoric acid (HF; Ka = 6.8xl0-) has pH of 2.12, calculate the initial concentration of hydrofluoric acid HF (aq) = Htlag) + F-(ag)...
##### 2. Consider the reactions 1 and 2 shown below. Indicate how the reaction rate for each of the reactions will change...
2. Consider the reactions 1 and 2 shown below. Indicate how the reaction rate for each of the reactions will change your options are: increase, decrease, does NOT change) when the following modifications to the reactions and conditions are introduced: reaction 1 reaction 2 Br CH30 Na CH3OH CH OH as ...
##### Factor expression completely. If an expression is prime, so indicate.$-2 x^{3}+54$
Factor expression completely. If an expression is prime, so indicate. $-2 x^{3}+54$...
##### If f = {(4, 2), (6, 1), (8, 4), (10, 2), (12, 5)}, what is the range?
If f = {(4, 2), (6, 1), (8, 4), (10, 2), (12, 5)}, what is the range?...
##### Problem I: Use Laplace transforms to solve the following IVPs:y" + 9y = 9(t) y(0) = 1, Y(0) = 3sin 3tif t < } if & < tg(t)1 + sin( 3t
Problem I: Use Laplace transforms to solve the following IVPs: y" + 9y = 9(t) y(0) = 1, Y(0) = 3 sin 3t if t < } if & < t g(t) 1 + sin( 3t...
##### An amoeba is 0.325 cm away from the 0.301 cm focal length objective lens of a...
An amoeba is 0.325 cm away from the 0.301 cm focal length objective lens of a microscope. Where is the image formed by the objective lens? Equation: 1 = 1/3 + 1...
##### Given the following equations and Ho values given below, determine the heat of reaction at 298...
Given the following equations and Ho values given below, determine the heat of reaction at 298 K for the reaction: 2 N2(g) + 5 O2(g) 2 N2O5(g) 2 H2(g) + O2(g) 2 H2O(l) Ho/kJ = -571.6 N2O5(g) + H2O(l) 2 HNO3(l) Ho/kJ = -73.7 N2(g) + 3 O2(g) + H2(g) 2 HNO3(l) Ho/kJ ...
##### For each of the following separate situations, determine the associated cost of inflation. (1) shoe-leather costs;...
For each of the following separate situations, determine the associated cost of inflation. (1) shoe-leather costs; (2) money illusion; (3) menu costs; (4) future price level uncertainty; (5) wealth redistribution; (6) price confusion; or (7) tax distortions. (Explanations are not required) Wages of...
##### Calculate the pH for a 0.0183 M pyrophosphoric acid solution (HaPz0-). Pyrophosphoric acid has three dissociable protons with Ka1 = 3.00x10-2, Ka2 4.40x10-3,Ka3 = 2.10x10-7_0 2251.890 1.812.072.55
Calculate the pH for a 0.0183 M pyrophosphoric acid solution (HaPz0-). Pyrophosphoric acid has three dissociable protons with Ka1 = 3.00x10-2, Ka2 4.40x10-3,Ka3 = 2.10x10-7_ 0 225 1.89 0 1.81 2.07 2.55...
##### Let {Kz:n € N} be a collection of compact sets. Prove that 0Rz1 Kn is compact. Will UR=1 Kn always be compact?
Let {Kz:n € N} be a collection of compact sets. Prove that 0Rz1 Kn is compact. Will UR=1 Kn always be compact?...
##### What two failure modes are of concern for a beam supporting an instrument laboratory?
What two failure modes are of concern for a beam supporting an instrument laboratory?...
##### Find the general solution U(I;y) to the second order PDEUzy = 0
Find the general solution U(I;y) to the second order PDE Uzy = 0...
PLS ANSWER ASAP!! THANKS!! Jyntula AssignmentsessionLo... Dividing Partnership Income Tyler Hawes and Piper Albright formed a partnership, investing $62,500 and$187,500, respectively. Determine their participation in the year's net income of \$285,000 under each of the following independent a...
|
|
# The coupling of topology and inflation in noncommutative cosmology
Matilde Marcolli, Elena Pierpaoli, and Kevin Teh
Department of Mathematics, California Institute of Technology
Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089, USA
###### Abstract.
We show that, in a model of modified gravity based on the spectral action functional, there is a nontrivial coupling between cosmic topology and inflation, in the sense that the shape of the possible slow-roll inflation potentials obtained in the model from the nonperturbative form of the spectral action are sensitive not only to the geometry (flat or positively curved) of the universe, but also to the different possible non-simply connected topologies. We show this by explicitly computing the nonperturbative spectral action for some candidate flat cosmic topologies given by Bieberbach manifolds and showing that the resulting inflation potential differs from that of the flat torus by a multiplicative factor, similarly to what happens in the case of the spectral action of the spherical forms in relation to the case of the 3-sphere S3. We then show that, while the slow-roll parameters differ between the spherical and flat manifolds but do not distinguish different topologies within each class, the power spectra detect the different scalings of the slow-roll potential and therefore distinguish between the various topologies, both in the spherical and in the flat case.
## 1. Introduction
Noncommutative cosmology is a new and rapidly developing area of research, which aims at building cosmological models based on a “modified gravity” action functional which arises naturally in the context of noncommutative geometry, the spectral action functional of [4]. As we discuss in more detail in §2 below, this functional recovers the usual Einstein–Hilbert action, with additional terms, such as a conformal gravity term given by the Weyl curvature. It also has the advantage of allowing for interesting couplings of gravity to matter, when extended from manifolds to “almost commutative geometries” as in [7] and later models [6], [3]. Thus, this approach makes it possible to recover from the same spectral action functional, in addition to the gravitational terms, the full Lagrangian of various particle physics models, ranging from the Minimal Standard Model of [7], to the extension with right handed neutrinos and Majorana mass terms of [6], and to supersymmetric QCD as in [3]. The study of cosmological models derived from the spectral action gave rise to early universe models as in [15] and [11], which present various possible inflation scenarios, as well as effects on primordial black holes evaporation and gravitational wave propagation. Effects on gravitational waves, as well as inflation scenarios coming from the spectral action functional, were also recently studied in [17], [18], [19].
Our previous work [16] showed that, when one considers the nonperturbative form of the spectral action, as in [5], one obtains a slow-roll potential for inflation. We compared some of the more likely candidates for cosmic topologies (the quaternionic and dodecahedral cosmology, and the flat tori) and we showed that, in the spherical cases (quaternionic and dodecahedral), the nonperturbative spectral action is just a multiple of the spectral action of the sphere S3, and consequently the inflation potential only differs from the one of the sphere case by a constant scaling factor, which cancels out in the computation of the slow-roll parameters; these are therefore the same as in the case of a simply connected topology and do not distinguish the different cosmic topologies with the same spherical geometry.
This result for spherical space forms was further confirmed and extended in [25], where the nonperturbative spectral action is computed explicitly for all the spherical space forms and it is shown to be always a multiple of the spectral action of S3, with a proportionality factor that depends explicitly on the 3-manifold. Thus, different candidate cosmic topologies with the same positively curved geometry yield the same values of the slow-roll parameters and of the power-law indices and tensor-to-scalar ratio, which are computed from these parameters.
In [16], however, we showed that the inflation potential obtained from the nonperturbative spectral action is different in the case of the flat tori, and not just by a scalar dilation factor. Thus, we know already that the possible inflation scenarios in noncommutative cosmology depend on the underlying geometry (flat or positively curved) of the universe, and the slow-roll parameters are different for these two classes.
The slow-roll parameters alone only distinguish, in our model, between the flat and spherical geometries but not between different topologies within each class. However, in the present paper we show that, when one looks at the amplitudes for the power spectra for density perturbations and gravitational waves (scalar and tensor perturbations), these detect the different scaling factors in the slow-roll potentials we obtain for the different spherical and flat topologies, hence we obtain genuinely different inflation scenarios for different cosmic topologies.
We achieve this result by relying on the computations of the nonperturbative spectral action, which in the spherical cases are obtained in [16] and [25], and by deriving in this paper the analogous explicit computation of the nonperturbative spectral action for the flat Bieberbach manifolds. A similar computation of the spectral action for Bieberbach manifolds was simultaneously and independently obtained by Piotr Olczykowski and Andrzej Sitarz in [20].
Thus, the main conclusion of this paper is that a modified gravity model based on the spectral action functional predicts a coupling between cosmic topology and inflation potential, with different scalings in the power spectra that distinguish between different topologies, and slow-roll parameters that distinguish between the spherical and flat cases.
The paper is organized as follows. We first describe in §1.1 the broader context in which the problem we consider here falls, namely the cosmological results relating inflation, the geometry of the universe, and the background radiation, and the problem of cosmic topology. We then review briefly in §2 the use of the spectral action as a modified gravity functional and the important distinction between its asymptotic expansion at large energies and the nonperturbative form given in terms of Dirac spectra. In §4 we present the main mathematical result of this paper, which gives an explicit calculation of the nonperturbative spectral action for certain Bieberbach manifolds, using the Dirac spectra of [21] and a Poisson summation technique similar to that introduced in [5], and used in [16] and [25].
Finally, in §6 we compare the resulting slow-roll inflation potentials, power spectra for density perturbations and slow-roll parameters, for all the different possible cosmic topologies.
### 1.1. Inflation, geometry, and topology
It is well known that the mechanism of cosmic inflation, first proposed by Alan Guth and Andrei Linde, naturally leads to a flat or almost flat geometry of the universe (see for instance §1.7 of [14]). It was then shown in [10] that the geometry of the universe can be read in the cosmic microwave background radiation (CMB), by showing that the anisotropies of the CMB depend primarily upon the geometry of the universe (flat, positively or negatively curved) and that this information can be detected through the fact that the location of the first Doppler peak changes for different values of the curvature and is largely unaffected by other parameters. This theoretical result made it possible to devise an observational test that could confirm the inflationary theory and its prediction for a flat or nearly flat geometry. The experimental confirmation of the nearly flat geometry of the universe came in [2] through the Boomerang experiment. Thus, the geometry of the universe leaves a measurable trace in the CMB, and measurements confirmed the flat geometry predicted by inflationary models.
The cosmic topology problem instead concentrates not on the question about the curvature and the geometry of the universe, but on the possible existence, for a given geometry, of a non-simply connected topology, that is, of whether the spatial sections of spacetime can be compact 3-manifolds which are either quotients of the 3-sphere (spherical space forms) in the positively curved case, quotients of 3-dimensional Euclidean space (flat tori or Bieberbach manifolds) in the flat case, or quotients of the 3-dimensional hyperbolic space (hyperbolic 3-manifolds) in the negatively curved case. A general introduction to the problem of cosmic topology is given in [12].
Since the cosmological observations prefer a flat or nearly flat positively curved geometry to a nearly flat negatively curved geometry (see [2], [27]), most of the work in trying to identify the most likely candidates for a non-trivial cosmic topology concentrates on the flat spaces and the spherical space forms. Various methods have been devised to try to detect signatures of cosmic topology in the CMB, in particular through a detailed analysis of simulated CMB skies for various candidate cosmic topologies (see [22] for the flat cases). It is believed that perhaps some puzzling features of the CMB such as the very low quadrupole, the very planar octupole, and the quadrupole–octupole alignment may find an explanation in the possible presence of a non-simply connected topology, but no conclusive results to that effect have yet been obtained.
The recent results of [16] show that a modified gravity model based on the spectral action functional imposes constraints on the form of the possible inflation slow-roll potentials, which depend on the geometry and topology of the universe. While the resulting slow-roll parameters, spectral index, and tensor-to-scalar ratio distinguish even the very slightly positively curved case from the flat case, these parameters alone do not distinguish between the different spherical topologies, as shown in [25]. As we show in this paper, the situation is similar for the flat manifolds: these same parameters alone do not distinguish between the various Bieberbach manifolds (quotients of the flat torus), but they do distinguish these from the spherical quotients. However, if one considers, in addition to the slow-roll parameters, also the power spectra for the density fluctuations, one can see that, in our model based on the spectral action as a modified gravity functional, the resulting slow-roll potentials give different power spectra that distinguish between all the different topologies.
### 1.2. Slow-roll potential and power spectra of fluctuations
We first need to recall here some well known facts about slow-roll inflation potentials, slow-roll parameters, and the power spectra for density perturbations and gravitational waves. We refer the reader to [23] and to [13], [24], as well as to the survey of inflationary cosmology [1].
Consider an expanding universe, which is topologically a cylinder R×Y, for a 3-manifold Y, with a Lorentzian metric of the usual Friedmann form
(1.1) ds2=−dt2+a(t)2ds2Y
where ds2Y is the Riemannian metric on the 3-manifold Y.
In models of inflation based on a single scalar field ϕ with slow-roll potential V(ϕ), the dynamics of the scale factor a(t) in the Friedmann metric (1.1) is related to the scalar field dynamics through the acceleration equation
(1.2) ¨a/a=H2(1−ϵ),
where H=˙a/a is the Hubble parameter, which is related to the scalar field ϕ and the inflation potential V(ϕ) by
(1.3) H2=(1/3)((1/2)˙ϕ2+V(ϕ)), and ¨ϕ+3H˙ϕ+V′(ϕ)=0,
and ϵ is the slow-roll parameter, which depends on the potential V as described in (1.11) below; see [1] for more details.
It is customary to decompose perturbations of the metric of (1.1) into scalar and tensor perturbations, which correspond, respectively, to density fluctuations and gravitational waves. One typically neglects the remaining vector components of the perturbation, assuming that these are not generated by inflation and decay with the expansion of the universe, see §9.2 of [1]. Thus, one writes scalar and tensor perturbations in the form
(1.4) ds2=−(1+2Φ)dt2+2a(t)dBdt+a(t)2((1−2Ψ)gij+2ΔE+hij)dxidxj
with scalar functions Φ, Ψ, B, and E, and where the hij give the tensor part of the perturbation, satisfying hii=0 and ∂ihij=0. The tensor perturbations have two polarization modes, which correspond to the two polarizations of the gravitational waves.
One considers then the intrinsic curvature perturbation
(1.5) R=Ψ−(H/˙ϕ)δϕ,
which measures the spatial curvature of a comoving hypersurface, that is, a hypersurface of constant ϕ. After expanding in Fourier modes in the form
(1.6) R=∫(d3k/(2π)3/2) Rk eik·x,
one obtains the power spectrum for the density fluctuations (scalar perturbations of the metric) from the two-point correlation function,
(1.7) ⟨RkRk′⟩=(2π2/k3)Ps(k)δ3(k+k′).
In the case of a Gaussian distribution, the power spectrum describes the complete statistical information on the perturbations, while the higher order correlation functions contain the information on the possible presence of non-Gaussianity phenomena. The power spectrum for the tensor perturbations is similarly obtained by expanding the tensor fluctuations in Fourier modes and computing the two-point correlation function
(1.8) ⟨hkhk′⟩=(2π2/k3)Pt(k)δ3(k+k′).
See [1], §9.3, and [24] for more details.
In slow-roll inflation models, the power spectra Ps and Pt are related to the slow-roll potential V through the leading order expression (see [23])
(1.9) Ps(k)∼(1/M6Pl)(V3/(V′)2) and Pt(k)∼V/M4Pl,
up to a constant proportionality factor, and with MPl the Planck mass. Here the potential and its derivative are to be evaluated at the value of ϕ where the corresponding scale k leaves the horizon during inflation. These can be expressed as a power law as in [23]:
(1.10) Ps(k)≈Ps(k0)(k/k0)^(ns−1+(αs/2)ln(k/k0)) and Pt(k)≈Pt(k0)(k/k0)^(nt+(αt/2)ln(k/k0)),
where the spectral parameters ns, αs, nt, and αt depend on the slow-roll potential in the following way. In the slow-roll approximation, the slow-roll parameters are given by the expressions
(1.11) ϵ=(M2Pl/2)(V′/V)2, η=M2Pl(V″/V), ξ2=M4Pl(V′V‴/V2).
Notice that we follow here a different convention with respect to the one we used in [16] on the form of the slow-roll parameters. The spectral parameters are then obtained from these as
(1.12) ns=1−6ϵ+2η, nt=−2ϵ, αs=16ϵη−24ϵ2−2ξ2, αt=4ϵη−8ϵ2,
while the tensor-to-scalar ratio is given by
(1.13) r=Pt/Ps=16ϵ.
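As a concrete illustration of how the slow-roll parameters and the spectral parameters are computed from a potential, the following sketch evaluates them numerically for the quadratic potential V(ϕ)=½m²ϕ², a standard textbook example rather than one of the potentials derived from the spectral action. It assumes the usual potential-based conventions ϵ=(M²Pl/2)(V′/V)², η=M²Pl V″/V, ns=1−6ϵ+2η, r=16ϵ, with the reduced Planck mass set to 1.

```python
import math

# Slow-roll parameters in the standard potential-based convention
# (reduced Planck mass set to 1; this convention is an assumption here):
#   eps = (1/2) (V'/V)^2,   eta = V''/V
# Spectral parameters: n_s = 1 - 6 eps + 2 eta, n_t = -2 eps, r = 16 eps.

def slow_roll(V, phi, h=1e-4):
    """Finite-difference slow-roll parameters of the potential V at phi."""
    v = V(phi)
    dv = (V(phi + h) - V(phi - h)) / (2 * h)
    ddv = (V(phi + h) - 2 * v + V(phi - h)) / h**2
    return 0.5 * (dv / v) ** 2, ddv / v

def spectral_params(eps, eta):
    return 1 - 6 * eps + 2 * eta, -2 * eps, 16 * eps  # n_s, n_t, r

# Worked example: V(phi) = m^2 phi^2 / 2, for which analytically
# eps = eta = 2/phi^2, hence n_s = 1 - 8/phi^2 and r = 32/phi^2.
# The overall amplitude m^2 drops out of all four quantities.
m = 1.0
V = lambda phi: 0.5 * m**2 * phi**2
phi = 15.0
eps, eta = slow_roll(V, phi)
n_s, n_t, r = spectral_params(eps, eta)
```

For ϕ = 15 this gives ns ≈ 1 − 8/225 ≈ 0.964 and r ≈ 32/225 ≈ 0.142, matching the closed-form values.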
From the point of view of our model, the following observation will be useful when we compare the slow-roll potentials that we obtain for different cosmic topologies and how they affect the power spectra.
###### Lemma 1.1.
Given a slow-roll potential V, consider the corresponding power spectra Ps and Pt as in (1.9) and (1.10). If the potential is rescaled by a constant factor λ>0, V↦λV, then the power spectra Ps and Pt are also rescaled by the same factor λ, while in the power law (1.10) the exponents are unchanged.
###### Proof.
This is an immediate consequence of (1.9), (1.11), (1.12), and (1.10). In fact, from (1.9), we see that V↦λV maps Ps↦λPs, and also Pt↦λPt, since it transforms V3/(V′)2↦λV3/(V′)2. On the other hand, the ratios V′/V, V″/V, and V′V‴/V2 in the slow-roll parameters (1.11) are left unchanged by V↦λV, so that the slow-roll parameters and all the resulting spectral parameters of (1.12) are unchanged. Thus, the power law (1.10) only changes by a multiplicative factor λ in Ps(k0) and Pt(k0), with unchanged exponents. ∎
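Lemma 1.1 can also be checked numerically. The sketch below, assuming the proportionalities Ps ∝ V³/(V′)² and Pt ∝ V of (1.9) together with the standard slow-roll conventions (reduced Planck mass set to 1), rescales an arbitrary test potential by λ = 7 and verifies that the amplitudes scale by λ while ϵ and η do not change.

```python
import math

def derivs(V, phi, h=1e-5):
    dv = (V(phi + h) - V(phi - h)) / (2 * h)
    ddv = (V(phi + h) - 2 * V(phi) + V(phi - h)) / h**2
    return V(phi), dv, ddv

def observables(V, phi):
    v, dv, ddv = derivs(V, phi)
    P_s = v**3 / dv**2        # density power spectrum, up to a constant
    P_t = v                   # tensor power spectrum, up to a constant
    eps = 0.5 * (dv / v) ** 2
    eta = ddv / v
    return P_s, P_t, eps, eta

# An arbitrary smooth test potential (not one from the spectral action).
V = lambda phi: math.exp(-phi) * (1 + phi**2)
lam = 7.0
Vs = lambda phi: lam * V(phi)   # the rescaled potential of Lemma 1.1

phi0 = 2.0
P_s, P_t, eps, eta = observables(V, phi0)
Q_s, Q_t, eps2, eta2 = observables(Vs, phi0)
# Q_s/P_s and Q_t/P_t equal lam; (eps2, eta2) equal (eps, eta).
```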
## 2. Noncommutative cosmology
### 2.1. The spectral action as a modified gravity model
In its nonperturbative form, the spectral action is defined in terms of the spectrum of the Dirac operator D, on a spin manifold or more generally on a noncommutative space (a spectral triple), as the functional Tr(f(D/Λ)), where f is a smooth test function and Λ is an energy scale that makes D/Λ dimensionless.
The reason why this can be regarded as an action functional for gravity (or gravity coupled to matter in the noncommutative case) lies in the fact that, for large energies Λ it has an asymptotic expansion (see [4]) of the form
(2.1) Tr(f(D/Λ))∼∑k∈DimSp+fkΛk∫−|D|−k+f(0)ζD(0)+o(1),
with fk=∫∞0 f(v)v^(k−1) dv, and with the integrations
∫−|D|−k
given by residues of the zeta function ζD(s)=Tr(|D|−s) at the positive points of the dimension spectrum of the spectral triple, that is, the set of poles of the zeta functions. In the case of a 4-dimensional spin manifold, these, in turn, are expressed in terms of integrals of curvature terms. These include the usual Einstein–Hilbert action
12κ20∫R√gd4x
and a cosmological term
γ0∫√gd4x,
but it also contains some additional terms, like a non-dynamical topological term
τ0∫R∗R∗√gd4x,
where R∗R∗ denotes the form that represents the Pontrjagin class and integrates to a multiple of the Euler characteristic of the manifold, as well as a conformal gravity term
α0∫CμνρσCμνρσ√gd4x,
which is given in terms of the Weyl curvature tensor. We do not give any more details here and we refer the reader to Chapter 1 of [CoMa] for a more complete treatment.
The presence of conformal gravity terms along with the Einstein–Hilbert and cosmological terms gives then a modified gravity action functional. When one considers the nonperturbative form of the spectral action, rather than its asymptotic expansion at large energies, one can find additional nonperturbative correction terms. One of these was identified in [5], in the case of the 3-sphere, as a potential for a scalar field, which was interpreted in [16] as a potential for a cosmological slow-roll inflation scenario, and computed for other, non-simply connected cosmic topologies.
## 3. Geometry, topology and inflation: spherical forms
The nonperturbative spectral action for the spherical space forms was computed recently by one of the authors [25]. It turns out that, although the Dirac spectra can be significantly different for different spin structures, the spectral action itself is independent of the choice of the spin structure, and it is always equal to a constant multiple of the spectral action for the 3-sphere S3, where the multiple is just given by dividing by the order of the group Γ. This is exactly what one expects by looking at the asymptotic expansion of the spectral action for large energies Λ, and the only significant nonperturbative effect arises in the form of a slow-roll potential, as in [5], [16].
###### Theorem 3.1.
(Teh, [25]) For all the spherical space forms Y=S3/Γ with the round metric induced from S3, and for all choices of spin structure, the nonperturbative spectral action on Y is equal to
(3.1) Tr(f(DY/Λ))=(1/#Γ)(Λ3 f̂(2)(0)−(1/4)Λ f̂(0))=(1/#Γ)Tr(f(DS3/Λ)),
up to terms of order O(Λ−∞).
Correspondingly, as explained in §5 of [16], one obtains a slow-roll potential by considering the variation of the spectral action as in [5]. More precisely, one considers a Euclidean compactification of the 4-dimensional spacetime to a compact Riemannian manifold Y×S1, with the compactification circle S1 of size β. One then computes the spectral action on this compactification and its variation
(3.2) Tr(h((D2Y×S1+ϕ2)/Λ2))−Tr(h(D2Y×S1/Λ2))=VY(ϕ),
up to terms of order O(Λ−∞), where the potential VY(ϕ) is given by the following.
###### Proposition 3.2.
Let Y=S3/Γ be a spherical space form with the induced round metric. Let a be the radius of the sphere and β the size of the circle in the Euclidean compactification Y×S1. Then the slow-roll potential VY(ϕ) in (3.2) is of the form
(3.3) VY(ϕ)=πΛ4βa3 VY(ϕ2/Λ2)+(π/2)Λ2βa WY(ϕ2/Λ2),
where
(3.4) VY(x)=λYVS3(x) and WY(x)=λYWS3(x),
with
(3.5) λY=1/#Γ for Y=S3/Γ,
and with
(3.6) VS3(x)=∫∞0 u(h(u+x)−h(u)) du and WS3(x)=∫x0 h(u) du.
###### Proof.
The statement follows directly from the result of Theorem 7 of [5] and §5 of [16]. ∎
In particular, for the different spherical forms, the potential has the same form as that of the 3-sphere case, but it is scaled by the factor λY,
(3.7) VY(ϕ)=λY VS3(ϕ)=VS3(ϕ)/#Γ.
Notice, moreover, that in the potential VY(ϕ) one has an overall factor of Λ4βa3 that multiplies the V term and a factor of Λ2βa that multiplies the W term. As we observed already in [16], when one Wick rotates back to the Minkowskian model with the Friedmann metric, both the scale factor a(t) and the energy scale Λ(t) evolve with the expansion of the universe, but in such a way that Λ(t)∼1/a(t), so that the product Λ(t)a(t) remains constant. In [16] we did not need to analyze the behavior of the factor β, since we only looked at the slow-roll parameters (1.11), where that factor cancels out. In the spectral action model of cosmology, the choice of the scale β of the Euclidean compactification is an artifact of the model, which allows one to compute the spectral action in terms of the spectrum of the Dirac operator on the compact Riemannian 4-manifold Y×S1. Eventually, the physically significant quantities derived from the spectral action functional are Wick rotated back to the Minkowskian signature case. Since in its nonperturbative form the spectral action functional is supposed to give a modified gravity action functional that works at all scales, not just in the asymptotic expansion for large Λ, it seems therefore natural to set the choice of the length β in the model so that the product Λβ remains constant.
Another reason for making the assumption that Λβ is constant is the interpretation given in [5] of the parameter β of the Euclidean compactification as an inverse temperature. Then, up to a universal constant, β behaves like the inverse of an energy scale and, when rotating back to the Minkowskian signature, one knows that, in the expansion of the universe, the scale factor a(t) is inversely proportional to the temperature, so that the assumption is justified.
With this setting, the slow-roll potential one obtains in the case of the 3-sphere is of the form
(3.8) VS3(ϕ)=π∫∞0 u(h(u+x)−h(u)) du+(π/2)∫x0 h(u) du, with x=ϕ2/Λ2.
Then one has the following result for the power spectra for the various cosmic topology candidates given by spherical space forms.
###### Proposition 3.3.
Let Ps,Y and Pt,Y denote the power spectra for the density fluctuations and the gravitational waves, computed as in (1.9), for the slow-roll potential VY. Then they satisfy the power law
(3.9) Ps,Y(k)≈λY Ps,S3(k0)(k/k0)^(ns−1+(αs/2)ln(k/k0)) and Pt,Y(k)≈λY Pt,S3(k0)(k/k0)^(nt+(αt/2)ln(k/k0)),
where λY=1/#Γ for Y=S3/Γ, and the spectral parameters ns, αs, nt, αt are computed as in (1.12) from the slow-roll parameters (1.11), which satisfy ϵY=ϵS3, ηY=ηS3, ξY=ξS3.
To see explicitly the effect on the slow-roll potential of the scaling by λY, we consider the same test functions used in [5] to approximate smoothly a cutoff function. These are given by
hn(x)=∑nk=0 ((πx)k/k!) e−πx.
Figure 1 shows the graph of the test function hn for a fixed value of n. We use this test function to compute the slow-roll potential using the function VY, after setting the factors Λa and Λβ equal to 1, and up to an overall multiplicative factor of π. We then see in Figure 2 the different curves of the slow-roll potential for the three cases Y=S3/Γ with Γ the binary tetrahedral, binary octahedral, or binary icosahedral group, respectively given by the top, middle, and bottom curve.
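The scaling behavior discussed above can be made concrete with a small numerical sketch of the integrals (3.6) and the rescaling (3.4)–(3.5). The choice n = 4 for the test function, the truncation of the integral at u = 50, and the Simpson-rule quadrature are all our own choices for the illustration; the group orders #Γ = 24, 48, 120 are those of the binary tetrahedral, octahedral, and icosahedral groups.

```python
import math

def h(u, n=4):
    """Smooth cutoff approximation h_n(u) = sum_{k<=n} (pi u)^k/k! e^{-pi u}."""
    return math.exp(-math.pi * u) * sum((math.pi * u) ** k / math.factorial(k)
                                        for k in range(n + 1))

def simpson(f, a, b, steps=2000):
    """Composite Simpson rule on [a, b]; steps must be even."""
    s = f(a) + f(b)
    w = (b - a) / steps
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * w)
    return s * w / 3

def V_S3(x):
    # V(x) = int_0^inf u (h(u+x) - h(u)) du, truncated at u = 50 where the
    # integrand is negligible; negative for x > 0 since h_n is decreasing.
    return simpson(lambda u: u * (h(u + x) - h(u)), 0.0, 50.0)

def W_S3(x):
    # W(x) = int_0^x h(u) du; for h_4 the total mass int_0^inf h = 5/pi.
    return simpson(h, 0.0, x) if x > 0 else 0.0

def V_Y(x, order):
    """Potential integral for Y = S^3/Gamma: rescaling by 1/#Gamma, (3.4)."""
    return V_S3(x) / order

# The three curves of Figure 2 differ only by 1/24, 1/48, 1/120 rescalings.
curves = {order: V_Y(1.0, order) for order in (24, 48, 120)}
```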
## 4. The spectral action for Bieberbach manifolds
We now consider the case of candidate cosmic topologies that are flat 3-manifolds. The simplest case is the flat torus, which we have already discussed in [16]. There are then the Bieberbach manifolds, which are obtained as quotients of the torus by a finite group action. In this section we give an explicit computation of the nonperturbative spectral action for the Bieberbach manifolds (with the exception of G6, which requires a different technique and will be analyzed elsewhere), and in the next section we then derive the analog of Proposition 3.3 for the case of these flat geometries.
Calculations of the spectral action for Bieberbach manifolds were simultaneously and independently obtained in [20].
The Dirac spectrum of Bieberbach manifolds is computed in [21] for each of the six affine equivalence classes of three-dimensional orientable Bieberbach manifolds, and for each possible choice of spin structure and choice of flat metric. These classes are labeled G1 through G6, with G1 simply being the flat 3-torus.
In general, the Dirac spectrum for each space depends on the choice of spin structure. However, as in the case of the spherical manifolds, we show here that the nonperturbative spectral action is independent of the spin structure.
We follow the notation of [21], according to which the different possibilities for the Dirac spectra are indicated by a letter. Note that it is possible for several spin structures to yield the same Dirac spectrum.
The nonperturbative spectral action for G1 was computed in [16]. We recall here the result for that case and then we restrict our discussion to the spaces G2 through G5.
### 4.1. The structure of Dirac spectra of Bieberbach manifolds
The spectrum of the Bieberbach manifolds generally consists of a symmetric component and an asymmetric component, as computed in [21]. The symmetric components are parametrized by subsets I⊂Z3, such that the eigenvalues are given by some formula λ(v), for v∈I, and the multiplicity of each eigenvalue λ is some constant, depending on the manifold, times the number of v∈I such that λ(v)=λ.
The approach we use here to compute the spectral action nonperturbatively consists of using the symmetries of λ(v) as a function of v to almost cover all of the points in Z3 and then apply the Poisson summation formula as used in [5]. By “almost cover”, it is meant that it is perfectly acceptable if two-, one-, or zero-dimensional lattices through the origin are covered multiple times, or not at all.
The asymmetric component of the spectrum appears only for some choices of spin structure. For those cases where it appears, the eigenvalues in the asymmetric component consist of the set
B={2π1H(kμ+c)|μ∈Z},
where is a constant depending on the spin structure, and is given in the following table:
Bieberbach manifold k G2 2 G3 3 G4 4 G5 6
For no choice of spin structure does the torus G1 have an asymmetric component to its spectrum. Each of the eigenvalues in B has multiplicity 2. Using the Poisson summation formula as in [5], we see that the asymmetric component of the spectrum contributes to the spectral action

(4.1) (ΛH/(πk)) ∫_{R} f(u²) du.
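The one-dimensional Poisson summation estimate underlying (4.1) can be checked numerically. The sketch below is purely illustrative and not part of [5] or [21]: the Gaussian test function f(x) = e^{−x} and the values of HΛ, k, c are ours. It compares the lattice sum over the asymmetric eigenvalues with the corresponding integral, which for this f equals (HΛ/(2πk))√π:

```python
import math

# For a Schwartz test function f, Poisson summation gives, up to terms that
# vanish faster than any power of 1/Lambda,
#   sum_{mu in Z} f( (2*pi*(k*mu + c) / (H*Lambda))^2 )
#     ~ (H*Lambda / (2*pi*k)) * Integral_R f(u^2) du.
# We take f(x) = exp(-x), for which Integral_R exp(-u^2) du = sqrt(pi).

def asymmetric_sum(h_lambda, k, c, M=1000):
    return sum(math.exp(-(2 * math.pi * (k * mu + c) / h_lambda) ** 2)
               for mu in range(-M, M + 1))

def poisson_estimate(h_lambda, k):
    return h_lambda / (2 * math.pi * k) * math.sqrt(math.pi)

h_lambda, k, c = 100.0, 2, 0.5   # arbitrary illustrative values
assert abs(asymmetric_sum(h_lambda, k, c) - poisson_estimate(h_lambda, k)) < 1e-9
```

The discrepancy is superexponentially small in HΛ, which is the sense in which the error terms throughout this section are O(Λ^{−∞}).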
The approach described here is effective for computing the nonperturbative spectral action for the manifolds labeled in [21] as G1 through G5, but not for G6. Therefore, we do not consider the case of G6 in this paper: it will be discussed elsewhere.
### 4.2. Recalling the torus case
We gave in Theorem 8.1 of [16] the explicit computation of the non-perturbative spectral action for the torus. We recall here the statement for later use.
###### Theorem 4.1.
Let T³ be the flat torus R³/Z³ with an arbitrary choice of spin structure. The nonperturbative spectral action is of the form

(4.2) Tr(f(D²/Λ²)) = (Λ³/(4π³)) ∫_{R³} f(u² + v² + w²) du dv dw,

up to terms of order O(Λ^{−∞}).
### 4.3. The spectral action for G2
The Bieberbach manifold G2 is the one that is described as the “half-turn space” in the cosmic topology setting in [22], because the identification of the faces of the fundamental domain is achieved by introducing a rotation by an angle of π about the z-axis. It is obtained by considering a lattice with basis (0, 0, H), (L, 0, 0), and (T, S, 0), with H, L, S ∈ R₊ and T ≥ 0, and then taking the quotient of R³ by the group generated by the commuting translations t1, t2, t3 along these basis vectors and an additional generator α with relations

(4.3) α² = t1, α t2 α⁻¹ = t2⁻¹, α t3 α⁻¹ = t3⁻¹.
Like the torus G1, the Bieberbach manifold G2 has eight different spin structures, parameterized by three signs, see Theorem 3.3 of [21]. Correspondingly, as shown in Theorem 5.7 of [21], there are four different Dirac spectra, denoted G2(a), G2(b), G2(c), and G2(d), each associated to the corresponding choices of these signs as specified there.
We give the computation of the nonperturbative spectral action separately for each different spectrum and we will see that the result is independent of the spin structure and always a multiple of the spectral action of the torus.
#### 4.3.1. The case of G2(a)
In this first case, we go through the computation in full detail. The symmetric component of the spectrum is given by the data ([21])
I = {(k,l,m) | k,l,m ∈ Z, m ≥ 1} ∪ {(k,l,m) | k,l ∈ Z, l ≥ 1, m = 0}

λ±_{klm} = ±2π √( (1/H²)(k + 1/2)² + (1/L²) l² + (1/S²)(m − (T/L) l)² ).

We make the assumption that T = L. Set p = m − l. Then we have equivalently:

I = {(k,l,p) | k,l,p ∈ Z, p > −l} ∪ {(k,l,p) | k,l ∈ Z, l ≥ 1, p = −l} =: I1 ∪ I2

λ±_{klp} = ±2π √( (1/H²)(k + 1/2)² + (1/L²) l² + (1/S²) p² ).
###### Theorem 4.2.
Consider the Bieberbach manifold G2, with T = L and with a spin structure yielding the Dirac spectrum G2(a). The nonperturbative spectral action of the manifold is of the form

(4.4) Tr(f(D²/Λ²)) = HSL (Λ/(2π))³ ∫_{R³} f(u² + v² + w²) du dv dw,

up to terms of order O(Λ^{−∞}).
###### Proof.
We compute the contribution to the spectral action due to I1. Since λ_{klp} is invariant under the transformation l ↦ −l and p ↦ −p, we see that

∑_{Z³} f(λ²_{klp}/Λ²) = 2 ∑_{I1} f(λ²_{klp}/Λ²) + ∑_{p=−l} f(λ²_{klp}/Λ²).
The decomposition of Z³ used to compute this contribution to the spectral action is displayed in figure 4. Applying the Poisson summation formula we get a contribution to the spectral action of

HSL (Λ/(2π))³ ∫_{R³} f(u² + v² + w²) − (HLS/√(L² + S²)) (Λ/(2π))² ∫_{R²} f(u² + v²),

plus possible terms of order O(Λ^{−∞}).
As for I2, we again use the fact that the spectrum is invariant under the transformation l ↦ −l, to see that

∑_{Z²} f(λ²_{kl(−l)}/Λ²) = 2 ∑_{I2} f(λ²_{klp}/Λ²) + ∑_{p=l=0} f(λ²_{klp}/Λ²).
The decomposition for this contribution to the spectral action is displayed in figure 3. We get a contribution to the spectral action of

(HLS/√(L² + S²)) (Λ/(2π))² ∫_{R²} f(u² + v²) − H (Λ/(2π)) ∫_{R} f(u²),

plus possible terms of order O(Λ^{−∞}).
When we include the contribution (4.1) due to the asymmetric component, we see that the spectral action of the space G2(a) is equal to

Tr f(D²/Λ²) = HSL (Λ/(2π))³ ∫_{R³} f(u² + v² + w²) du dv dw,

again up to possible terms of order O(Λ^{−∞}). ∎
#### 4.3.2. The case of G2(b) and G2(d)
The spectra of G2(b) and G2(d) have no asymmetric component. The symmetric component is given by

I = {(k,l,m) | k,l,m ∈ Z, l ≥ 0}

λ±_{klm} = ±2π √( (1/H²)(k + 1/2)² + (1/L²)(l + 1/2)² + (1/S²)(m + c − (T/L)(l + 1/2))² ).
Let us once again assume that T = L.
###### Theorem 4.3.
Consider the Bieberbach manifold G2, with T = L and with a spin structure yielding the Dirac spectrum G2(b) or G2(d). The nonperturbative spectral action is in both cases again of the form

(4.5) Tr(f(D²/Λ²)) = HSL (Λ/(2π))³ ∫_{R³} f(u² + v² + w²) du dv dw,

up to terms of order O(Λ^{−∞}).
###### Proof.
With the assumption that T = L and letting p = m − l − 1, we can describe the spectrum equivalently by

I = {(k,l,p) | k,l,p ∈ Z, l ≥ 0}

λ±_{klp} = ±2π √( (1/H²)(k + 1/2)² + (1/L²)(l + 1/2)² + (1/S²)(p + c + 1/2)² ).
Using the symmetry

l ↦ −1 − l,

we cover Z³ exactly (see figure 5), and we obtain the spectral action

Tr(f(D²/Λ²)) = HSL (Λ/(2π))³ ∫_{R³} f(u² + v² + w²) du dv dw + O(Λ^{−∞}). ∎
#### 4.3.3. The case of G2(c)
In this case, the symmetric component of the spectrum is given by
I={(k,l,m)|k,l,m∈Z,m≥0}
λ±_{klm} = ±2π √( (1/H²)(k + 1/2)² + (1/L²) l² + (1/S²)((m + 1/2) − (T/L) l)² ).
Again, we assume T = L.
###### Theorem 4.4.
Consider the Bieberbach manifold G2, with T = L and with a spin structure yielding the Dirac spectrum G2(c). The nonperturbative spectral action of the manifold is again of the form

(4.6) Tr(f(D²/Λ²)) = HSL (Λ/(2π))³ ∫_{R³} f(u² + v² + w²) du dv dw,

up to terms of order O(Λ^{−∞}).
###### Proof.
If we substitute p = m − l, we see that we may equivalently express the symmetric component with

I = {(k,l,p) | k,l,p ∈ Z, p ≥ −l}

λ±_{klp} = ±2π √( (1/H²)(k + 1/2)² + (1/L²) l² + (1/S²)(p + 1/2)² ).

Using the symmetry

l ↦ −l, p ↦ −1 − p,

we cover Z³ exactly (see figure 6), and so the spectral action is again given by

Tr f(D²/Λ²) = HSL (Λ/(2π))³ ∫_{R³} f(u² + v² + w²) du dv dw + O(Λ^{−∞}). ∎
### 4.4. The spectral action for G3
The Bieberbach manifold G3 is the one that, in the cosmic topology setting of [22], is described as the “third-turn space”. One considers the hexagonal lattice generated by the vectors (0, 0, H), (L, 0, 0) and (−L/2, √3 L/2, 0), for H and L in R₊, and one then takes the quotient of R³ by the group generated by commuting translations t1, t2, t3 along the vectors and an additional generator α with relations

(4.7) α³ = t1, α t2 α⁻¹ = t3, α t3 α⁻¹ = t2⁻¹ t3⁻¹.

This has the effect of producing an identification of the faces of the fundamental domain with a turn by an angle of 2π/3 about the z-axis, hence the “third-turn space” terminology.
As shown in Theorem 3.3 of [21], the Bieberbach manifold G3 has two different spin structures, parameterized by a single sign. It is then shown in Theorem 5.7 of [21] that these two spin structures have different Dirac spectra, which are denoted as G3(a) and G3(b). We compute below the nonperturbative spectral action in both cases and we show that, despite the spectra being different, they give the same result for the nonperturbative spectral action, which is again a multiple of the action for the torus.
#### 4.4.1. The case of G3(a) and G3(b)
The symmetric component of the spectrum is given by
(4.8) I = {(k,l,m) | k,l,m ∈ Z, l ≥ 1, m = 0, …, l−1},

(4.9) λ±_{klm} = ±2π √( (1/H²)(k + c)² + (1/L²) l² + (1/(3L²))(l − 2m)² ),

with the constant c determined by the spin structure, one value for G3(a) and the other for G3(b), as specified in [21].

The manifold G3 is unusual in that the multiplicity of each eigenvalue is equal to twice the number of elements in I which map to it.
###### Theorem 4.5.
On the manifold G3 with an arbitrary choice of spin structure, the nonperturbative spectral action is given by

(4.10) Tr(f(D²/Λ²)) = (1/√3) (Λ/(2π))³ H L² ∫_{R³} f(u² + v² + t²) du dv dt

plus possible terms of order O(Λ^{−∞}).
###### Proof.
Notice that λ_{klm} is invariant under the linear transformations R, S, T of the (l, m) variables given by

R(l) = −l, R(m) = −m;
S(l) = m, S(m) = l;
T(l) = l − m, T(m) = −m.

Let Ĩ = I ∖ {m = 0} = {(k,l,m) | k,l,m ∈ Z, l ≥ 2, m = 1, …, l−1}.

Then we may decompose Z³ as (see figure 7)

(4.11) Z³ = I ⊔ R(I) ⊔ S(I) ⊔ RS(I) ⊔ T(Ĩ) ⊔ RT(Ĩ) ⊔ {l = m}.
Therefore, we have
∑_{Z³} f(λ²_{klm}/Λ²) = 4 ∑_{I} f(λ²_{klm}/Λ²) + 2( ∑_{I} f(λ²_{klm}/Λ²) − ∑_{m=0, l≥1} f(λ²_{klm}/Λ²) ) + ∑_{l=m} f(λ²_{klm}/Λ²)

= 6 ∑_{I} f(λ²_{klm}/Λ²) − ∑_{m=0} f(λ²_{klm}/Λ²) + ∑_{m=0, l=0} f(λ²_{klm}/Λ²) + ∑_{l=m} f(λ²_{klm}/Λ²),

so that

∑_{I} f(λ²_{klm}/Λ²) = (1/6)( ∑_{Z³} f(λ²_{klm}/Λ²) + ∑_{m=0} f(λ²_{klm}/Λ²) − ∑_{m=0, l=0} f(λ²_{klm}/Λ²) − ∑_{l=m} f(λ²_{klm}/Λ²) ).
Therefore the symmetric component of the spectrum contributes to the spectral action
(4/6) ( (Λ/(2π))³ H L² ∫_{R³} f(u² + v² + (1/3)(v − 2w)²) + (Λ/(2π))² H L ∫_{R²} f(u² + (4/3)v²) − (Λ/(2π)) H ∫_{R} f(u²) − (Λ/(2π))² H L ∫_{R²} f(u² + (4/3)v²) ) + O(Λ^{−∞}).
Combining this with the asymmetric contribution (4.1), we see that the spectral action of the spaces G3(a) and G3(b) is equal to

(2/3) (Λ/(2π))³ H L² ∫_{R³} f(u² + v² + (1/3)(v − 2w)²) du dv dw + O(Λ^{−∞}).
Now, if one makes the change of variables (u, v, w) ↦ (u, v, t), where

t = (2w − v)/√3,

then the spectral action becomes

(1/√3) (Λ/(2π))³ H L² ∫_{R³} f(u² + v² + t²) du dv dt + O(Λ^{−∞}). ∎
Notice that, a priori, one might have expected a possibly different result in this case, because the Bieberbach manifold G3 is obtained starting from a hexagonal lattice rather than a square lattice, but, up to a simple change of variables in the integral, this gives again the same result, up to a multiplicative constant, as in the case of the standard flat torus.
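The change of variables above, and in particular the resulting factor 1/√3 = (2/3)·(√3/2), can be sanity-checked numerically. The following sketch is not part of the proof: the Gaussian test function f(x) = e^{−x}, the cutoff R, and the grid size N are ours. It compares the planar part of the two integrals:

```python
import math

# With t = (2w - v)/sqrt(3) one has dw = (sqrt(3)/2) dt, so
#   Integral Integral f(v^2 + (v - 2w)^2/3) dv dw
#     = (sqrt(3)/2) * Integral Integral f(v^2 + t^2) dv dt.
# For the Gaussian test function f(x) = exp(-x), the right-hand side equals
# (sqrt(3)/2) * pi.  We approximate the left-hand side by a midpoint rule on
# [-R, R]^2; for Gaussians both truncation and discretization errors are tiny.

def lhs(R=8.0, N=400):
    h = 2 * R / N
    total = 0.0
    for i in range(N):
        v = -R + (i + 0.5) * h
        for j in range(N):
            w = -R + (j + 0.5) * h
            total += math.exp(-(v * v + (v - 2 * w) ** 2 / 3))
    return total * h * h

assert abs(lhs() - math.sqrt(3) / 2 * math.pi) < 1e-6
```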
### 4.5. The spectral action for G4
The Bieberbach manifold G4 is referred to in [22] as the “quarter-turn space”. It is obtained by considering a lattice generated by the vectors (0, 0, H), (L, 0, 0), and (0, L, 0), with H, L ∈ R₊, and taking the quotient of R³ by the group generated by the commuting translations t1, t2, t3 along the vectors and an additional generator α with the relations

(4.12) α⁴ = t1, α t2 α⁻¹ = t3, α t3 α⁻¹ = t2⁻¹.

This produces an identification of the sides of a fundamental domain with a rotation by an angle of π/2 about the z-axis. Theorem 3.3 of [21] shows that the manifold G4 has four different spin structures parameterized by two signs. There are correspondingly two different forms of the Dirac spectrum, as shown in Theorem 5.7 of [21], one for each value of one of the two signs, denoted by G4(a) and G4(b).
Again the nonperturbative spectral action is independent of the spin structure and equal in both cases to the same multiple of the spectral action for the torus.
#### 4.5.1. The case of G4(a)
###### Theorem 4.6.
On the manifold G4 with a spin structure yielding the Dirac spectrum G4(a), the nonperturbative spectral action is given by

(4.13) Tr(f(D²/Λ²)) = (1/2) (Λ/(2π))³ H L² ∫_{R³} f(u² + v² + w²) du dv dw

plus possible terms of order O(Λ^{−∞}).
###### Proof.
The symmetric component of the spectrum is given by
I = {(k,l,m) | k,l,m ∈ Z, l ≥ 1, m = 0, …, 2l−1}

λ±_{klm} = ±2π √( (1/H²)(k + 1/2)² + (1/L²)(l² + (m − l)²) ).
First, we make the change of variables p = m − l. Then we use the symmetries

(l, p) ↦ (−l, −p), (l, p) ↦ (p, l), (l, p) ↦ (p, −l)

to cover all of Z³ except for the one-dimensional lattice {l = p = 0}. This decomposition is depicted in figure 8. In the figure one sees that certain boundary points are covered twice, while others are not covered at all, but via the transformation (l, p) ↦ (p, l) this is the same as covering each of those points once. Observations like this will be suppressed in the sequel. Then we see that the contribution from the symmetric component of the spectrum to the spectral action is
(4.14) (1/2) (Λ/(2π))³ H L² ∫_{R³}
# Photon energy

Photon energy is the energy carried by a single photon. The amount of energy is directly proportional to the photon's electromagnetic frequency and thus, equivalently, is inversely proportional to the wavelength. The higher the photon's frequency, the higher its energy. Equivalently, the longer the photon's wavelength, the lower its energy.

Photon energy can be expressed using any unit of energy. Among the units commonly used to denote photon energy are the electronvolt (eV) and the joule (as well as its multiples, such as the microjoule). As one joule equals 6.24 × 10^18 eV, the larger units may be more useful in denoting the energy of photons with higher frequency and higher energy, such as gamma rays, as opposed to lower-energy photons as in the optical and radio-frequency regions of the electromagnetic spectrum.
# Formulas
## Physics
Photon energy is directly proportional to frequency:

$E = hf$

where

* $E$ is energy,
* $h$ is the Planck constant,
* $f$ is frequency.

This equation is known as the Planck–Einstein relation. Additionally,

$E = \frac{hc}{\lambda}$

where

* $E$ is photon energy,
* $\lambda$ is the photon's wavelength,
* $c$ is the speed of light in vacuum,
* $h$ is the Planck constant.

The photon energy at 1 Hz is equal to 6.62607015 × 10^−34 J, which is equal to 4.135667697 × 10^−15 eV.
## Electronvolt
Energy is often measured in electronvolts. To find the photon energy in electronvolts using the wavelength in micrometres, the equation is approximately

$E\,(\text{eV}) \approx \frac{1.2398}{\lambda\,(\text{μm})}$

This equation only holds if the wavelength is measured in micrometres. The photon energy at 1 μm wavelength, the wavelength of near-infrared radiation, is approximately 1.2398 eV.
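The conversion is a direct consequence of $E = hc/\lambda$. The following sketch (the helper name and chosen wavelengths are ours; the constants are the exact SI values) checks the rule of thumb against the exact formula:

```python
# Photon energy from wavelength, E = h*c/lambda, expressed in electronvolts.
H = 6.62607015e-34      # Planck constant, J*s (exact, SI 2019)
C = 299_792_458         # speed of light, m/s (exact)
EV = 1.602176634e-19    # joules per electronvolt (exact)

def photon_energy_ev(wavelength_um: float) -> float:
    """Photon energy in eV for a wavelength given in micrometres."""
    return H * C / (wavelength_um * 1e-6) / EV

# At 1 um (near infrared) the energy is ~1.2398 eV, so the rule of thumb
# E(eV) ~ 1.2398 / lambda(um) agrees with the exact formula.
assert abs(photon_energy_ev(1.0) - 1.2398) < 1e-3
assert abs(photon_energy_ev(0.5) - 1.2398 / 0.5) < 2e-3
```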
## In chemistry, quantum physics and optical engineering

Photon energy is often written as

$E = h\nu$

where

* $E$ is photon energy (joules),
* $h$ is the Planck constant,
* the Greek letter $\nu$ (nu) is the photon's frequency.
# Examples
An FM radio station transmitting at 100 MHz emits photons with an energy of about 4.1357 × 10^−7 eV. This minuscule amount of energy is approximately 8 × 10^−13 times the electron's rest energy (via mass–energy equivalence).

Very-high-energy gamma rays have photon energies of 100 GeV to over 1 PeV (10^11 to 10^15 electronvolts), or 16 nanojoules to 160 microjoules. This corresponds to frequencies of 2.42 × 10^25 to 2.42 × 10^29 Hz.

During photosynthesis, specific chlorophyll molecules absorb red-light photons at a wavelength of 700 nm in photosystem I, corresponding to an energy of ≈ 2 eV ≈ 3 × 10^−19 J ≈ 75 k_B T per photon, where k_B T denotes the thermal energy. A minimum of 48 photons is needed for the synthesis of a single glucose molecule from CO2 and water (chemical potential difference 5 × 10^−18 J), with a maximal energy conversion efficiency of 35%.
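The first example can be reproduced from the Planck–Einstein relation $E = hf$; a brief sketch (the helper and constant names are ours):

```python
H = 6.62607015e-34             # Planck constant, J*s (exact, SI 2019)
EV = 1.602176634e-19           # joules per electronvolt (exact)
ELECTRON_REST_EV = 510_998.95  # electron rest energy, ~511 keV, in eV

def photon_energy_ev_from_freq(freq_hz: float) -> float:
    """Photon energy in eV from frequency in Hz (E = h*f)."""
    return H * freq_hz / EV

e_fm = photon_energy_ev_from_freq(100e6)   # photon from a 100 MHz FM station
assert abs(e_fm - 4.1357e-7) < 1e-10       # ~4.1357e-7 eV, as stated above
# ...which is indeed ~8e-13 of the electron's rest energy:
assert abs(e_fm / ELECTRON_REST_EV - 8e-13) < 1e-13
```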
# See also

* Photon
* Electromagnetic radiation
* Electromagnetic spectrum
* Planck constant
* Planck–Einstein relation
* Soft photon
# What is the physical interpretation of “internal pressure”?
$$\require{begingroup} \begingroup \newcommand{\md}[0]{\mathrm{d}} \md U = \pi_T\,\md V + C_V \,\md T$$ where $\pi_T$ is the internal pressure and is given by $\displaystyle {\left(\partial U \over \partial V\right)_T}$.
• What does the internal pressure actually physically mean? I can see it is the slope of the $U$-$V$ graph at constant temperature but that does not clear things up.
• Also why is this quantity called the internal pressure and not something else? $\endgroup$
• I'd assume that the phrase is trying to acknowledge the wall effect. Pressure at the wall of the container isn't the same as pressure away from the wall. – MaxW Feb 11 '17 at 18:45
• @MaxW Outside the container or inside the conatiner ? – A---B Feb 11 '17 at 19:08
• Inside of course. what would the pressure outside of the container have to do with anything? – MaxW Feb 11 '17 at 19:12
• I also thought that but I did not understand where that pressure will be acting ? on other molecules ? – A---B Feb 11 '17 at 19:14
• @MaxW Yes sorry, my mistake I thought the force field is inside the sphere of gas, whereas it is the exact opposite. – A---B Feb 11 '17 at 19:47
See orthocresol's derivation of an internal pressure $\pi_T$ relation $(1)$ here.
$$\pi_T = T\left(\frac{\partial P}{\partial T}\right)_V - P\tag{1}$$
Internal pressure $\pi_T$ measures what you would expect from its definition:
$$\pi_T \equiv \left(\frac{\partial U}{\partial V}\right)_T.$$
It shows how internal energy changes when volume is changed and temperature is constant. For ideal gases the change is zero, something that also follows from the equipartition theorem.
$$\mathrm{d}U_{\text{ideal gas}}= \frac{\nu}{2}nR\mathrm{d}T \implies \left(\frac{\partial U}{\partial V}\right)_T = 0 \ \ \text{for an ideal gas}$$
In other words, the total energy an ideal gas has is not dependent on volume. There is no repulsion nor attraction amongst ideal gas particles.
As a first approximation to model real gases, the van der Waals equation $(2)$ is used.
$$\left(P+a\frac{n^2}{V^2}\right)\left(V-nb\right)=nRT\tag{2}$$
Hence,
$$P = \frac{nRT}{V-nb} - a\frac{n^2}{V^2}\tag{3}.$$
Using relations $(1)$ and $(3)$
$$\left(\frac{\partial P}{\partial T}\right)_V = \frac{nR}{V-nb} \overset{(3)}{=} \frac{1}{T}\left(P + a\frac{n^2}{V^2}\right).$$
Again via formula $(1)$
$$\pi_T = T \cdot \overbrace{\frac{1}{T}\left(P + a\frac{n^2}{V^2}\right)}^{(\partial P/\partial T)_V} - P = a\frac{n^2}{V^2}\tag{4}.$$
Result $(4)$ $\pi_T = a\frac{n^2}{V^2}$ demonstrates that for real gases internal energy will change when volume is decreased or increased. Real particles have interparticular forces that vary with distance. A hint as to why $\pi_T$ is called internal pressure will be given when $(4)$ is substituted into $(2)$:
$$\left(P+\pi_T\right)\left(V-nb\right)=nRT,$$
or even better, $(3)$:
$$P = \frac{nRT}{V-nb} - \pi_T.$$
• If $\pi_T > 0,$ then $P$ will be smaller. Therefore, attractive forces are dominant.
• If $\pi_T < 0,$ then $P$ will be bigger. Therefore, repulsive forces are dominant.
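The algebra leading to $(4)$ can be verified numerically. Below is a minimal sketch (parameter values are arbitrary illustrative choices, roughly of the order of CO2's van der Waals constants; the function names are ours) that evaluates $\pi_T = T(\partial P/\partial T)_V - P$ with a central finite difference:

```python
def vdw_pressure(V, T, n=1.0, a=0.364, b=4.27e-5, R=8.314):
    """van der Waals pressure, Eq. (3): P = nRT/(V - nb) - a*n^2/V^2 (SI units)."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

def internal_pressure(V, T, n=1.0, a=0.364, h=1e-3):
    """pi_T = T*(dP/dT)_V - P, Eq. (1), with dP/dT by central difference.
    P is linear in T, so the difference quotient is exact up to roundoff."""
    dPdT = (vdw_pressure(V, T + h, n=n, a=a) - vdw_pressure(V, T - h, n=n, a=a)) / (2 * h)
    return T * dPdT - vdw_pressure(V, T, n=n, a=a)

V, T, n, a = 1e-3, 300.0, 1.0, 0.364   # 1 mol in 1 L at 300 K
expected = a * n**2 / V**2             # Eq. (4): pi_T = a*n^2/V^2
assert abs(internal_pressure(V, T) - expected) / expected < 1e-8
```

For these values $\pi_T \approx 3.6 \times 10^5\ \mathrm{Pa} > 0$: attractive interactions dominate, and the measured pressure is lower than the ideal-gas value. – Linear Christmas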
• I don't think this is what the OP is asking about. I think question is more about why "internal pressure" vs. just "pressure." In other words what does "internal" add? – MaxW Feb 11 '17 at 19:19
• @MaxW I did always think about it as 'a correction to pressure that derives from intermolecular forces'. As opposed to traditional (ideal gas) pressure which can be modelled as collisions with container walls. To recap, real gases occupy a finite volume, and if one decreases the volume of the system (via an external pressure), the molecules are forced more close together. How these particles respond is what internal pressure measures. This is, in turn, to be differentiated from the traditional pressure (of which internal pressure is a component). – Linear Christmas Feb 11 '17 at 19:35
• @MaxW Whether this is a good way of thinking about it, remains to be seen. :=) (Feel free to downvote accordingly if this is wrong.) – Linear Christmas Feb 11 '17 at 19:37
• @LinearChristmas Wow, this is easily one of the most complete answers I have ever got. Crystal clear. Thanks for the answer. – A---B Feb 11 '17 at 19:47
• That is pretty much the significance of $\pi_T$, it is a measure of intermolecular (IM) forces. As V changes, the average IM distance will change, and then the difference in the IM potential energy will affect U. For an ideal gas, there are no IM forces; hence there's no effect on U and $\pi_T = 0$. (And I realise I am just repeating what you said... whoops) – orthocresol Feb 11 '17 at 20:13
Assume that the gas is enclosed in a cylinder with a movable piston, but the piston is not moving now. If the piston-wall interface is frictionless, then the internal pressure (the one you are asking about) is the same as the external pressure (the one that can be controlled by mechanical means, e.g., by pushing or pulling on the piston). If there is no friction and the process is reversible, then the internal pressure is equal to the external pressure (the control variable) and is a state variable, one that can be described by some other equilibrium parameters, such as $U$ or $T$ or $V$. If there is friction between the piston and the wall, then depending on whether one pushes the piston in or pulls it out, the internal pressure may be larger or smaller than the external one, and there is no such functional relationship among these parameters.
# Related Rates calculus problem
For some reason I keep getting this question wrong.
Suppose a 6 ft tall man is walking away from a 15 ft tall lamp post at 5 ft/s. What is the rate at which the tip of the man's shadow is moving when he is 40 ft from the lamp post?
Here is what I did: $\dfrac{dx}{dt} = 5$ The man's speed at which he is walking. We want $\dfrac{ds}{dt}$. And we can use the chain rule to get: $\dfrac{dx}{dt} = \dfrac{dx}{ds}\cdot \dfrac{ds}{dt}$ Equation (1)
Let $s$ be the position of the tip of the man's shadow. Then $x$ and $s$ are related by similar triangles:
$\dfrac{15}{x+s} = \dfrac{6}{s} \iff 15s = 6x + 6s \iff \frac{3}{2}s = x$
Now we have: $\dfrac{dx}{dt}$ and $x$ in terms of $s$. $\dfrac{dx}{ds} =\dfrac{3}{2}$
Evaluating Equation (1): $5 = \dfrac{3}{2}\cdot \dfrac{ds}{dt}$ and hence $\dfrac{10}{3} = \dfrac{ds}{dt}$
The answer is supposed to be $\dfrac{25}{3}$ where did I go wrong?
-
At any given time the tip of the shadow is $x+s$ ft away from the lamp post. Therefore the speed value you are looking for is $$v=\frac{d(x+s)}{dt}=\frac{dx}{dt}+\frac{ds}{dt}=\frac{dx}{dt}+\frac{ds}{dx}\frac{dx}{dt}=\frac{dx}{dt}+\frac{\alpha}{1-\alpha}\frac{dx}{dt}=\frac{1}{1-\alpha}\frac{dx}{dt}$$ where $\alpha = \frac{h}{H}$ is the ratio of the heights of the man and the lamp post. Hence $$v=\frac{1}{1-\frac{2}{5}}5=\frac{25}{3}$$
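The value $\frac{25}{3}$ can be confirmed with a quick numerical check (a sketch; the function names are ours), using the similar-triangles relation $\frac{15}{x+s} = \frac{6}{s} \iff s = \frac{2}{3}x$ for the shadow length $s$:

```python
def man_position(t: float) -> float:
    """Distance x (ft) of the man from the lamp post at time t, with x' = 5 ft/s."""
    return 5.0 * t

def shadow_tip(t: float) -> float:
    """Position of the shadow's tip, x + s, where 15/(x+s) = 6/s gives s = (2/3)x."""
    x = man_position(t)
    s = 2.0 * x / 3.0          # shadow length from similar triangles
    return x + s

# Velocity of the tip when the man is 40 ft from the post (t = 8 s),
# via a central difference; the tip moves at a constant 25/3 ft/s.
h = 1e-6
v = (shadow_tip(8 + h) - shadow_tip(8 - h)) / (2 * h)
assert abs(v - 25 / 3) < 1e-6
```

Since the tip's speed is constant, the "40 ft from the post" condition (reached at $t = 8$ s) does not actually affect the answer.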
$\dfrac{dx}{ds}$ is the rate of change in $x$ with respect to $s$ correct? – CodeKingPlusPlus Oct 29 '12 at 3:14
# People
## Employment & Education
• Associate Professor, Department of Mathematical Sciences, UNIST (2018/09 -)
• Assistant Professor, Department of Mathematical Sciences, UNIST (2014/08 - 2018/08)
• Krener Assistant Professor, Department of Mathematics, UC Davis (2012/07 – 2014/07)
• Research Associate, Center for Scientific Computation and Mathematical Modeling (CSCAMM), University of Maryland (2009/08 – 2012/06)
• Courant Institute of Mathematical Sciences, New York University, Ph.D. (2004/09 - 2009/05)
• Seoul National University, B.S. (1998/03 - 2004/08)
## Research Area: Partial Differential Equations (PDEs)
• Well-posedness and Regularity of Viscous Fluid Equations
• Transport Equations with Nonlocal Velocity
• Coupled system with Fluid Equations
• Free boundary problem
## Papers
1. H. Bae. Global well-posedness of the dissipative quasi-geostrophic equations in critical spaces, Proc. Amer. Math. Soc. 136 (2008), 257-261.
2. H. Bae. Solvability of the free boundary problem of the Navier-Stokes equations with surface tension, Discrete Contin. Dyn. Syst. 29 (2011), 769-801.
3. H. Bae. Global well-posedness for the critical quasi-geostrophic equations in L^{\infty}, Nonlinear Anal. 75 (2011), 1995-2002.
4. H. Bae. Global well-posedness for the Keller-Segel system of equations in critical spaces, Adv. in Differential Equations and Control Processes 7 (2011), no. 2, 93-112.
5. H. Bae, A. Biswas, E. Tadmor. Analyticity of the Navier-Stokes equations in critical Besov spaces, Arch. Ration. Mech. Anal. 205 (2012), no.3, 963-991.
6. D. Wei, E. Tadmor, H. Bae. Critical threshold in multi-dimensional Euler-Poisson equations with radial symmetry, Commun. Math. Sci. 10 (2012), no.1, 75-86.
7. H. Bae, K. Trivisa. On the Doi model for the suspensions of rod-like molecules in compressible fluids, Mathematical Models and Methods in Applied Sciences 22 (2012), no. 10, 39pp.
8. H. Bae, K. Trivisa. On the Doi model for the suspensions of rod-like molecules: global-in-time existence, Commun. Math. Sci. 11 (2013), no.3, 831-850.
9. H. Bae, R. Granero-Belinchon. Global existence for some transport equations with nonlocal velocity, Adv. Math. 269 (2015), 197-219.
10. H. Bae. Existence and Analyticity of Lei-Lin Solution to the Navier-Stokes Equations, Proc. Amer. Math. Soc. 143 (2015), no. 7, 2887–2892.
11. H. Bae, A. Biswas. Gevrey regularity for a class of dissipative equations with analytic nonlinearity, Methods and Applications of Analysis 20 (2015), No. 4, 377-408.
12. H. Bae, M. Cannone. Log-Lipschitz regularity of the Navier-Stokes equations, Nonlinear Analysis 135 (2016), 223-235.
13. H. Bae, S. Ulusoy. Global well-posedness for nonlinear nonlocal Cauchy problems arising in elasticity, Electron. J. Differential Equations, Vol. 2017 (2017), No. 55, 1-7.
14. H. Bae, D. Chae, H. Okamoto. On the well-posedness of various one-dimensional model equations for fluid motion. Nonlinear Analysis 169 (2017), 25-43.
15. H. Bae, K. Kang, S. Kim. Uniqueness of solutions for Keller-Segel system of porous medium type coupled to fluid equations. Journal of Differential Equations 265 (2018) 5360-5387.
16. H. Bae, R. Granero-Belinchon, O. Lazar. Global existence of weak solutions to dissipative transport equations with nonlocal velocity. Nonlinearity 31 (2018) 1484-1515.
17. H. Bae. Analyticity of the inhomogeneous incompressible Navier-Stokes equations. Appl. Math. Lett. 83 (2018), 200–206.
18. H. Bae, K. Kang. Regularity condition of the incompressible Navier-Stokes equations in terms of one velocity component. Appl. Math. Lett. 94 (2019), 120-125.
19. H. Bae, R. Granero-Belinchon, O. Lazar. On the local and global existence of solutions to 1D transport equations with nonlocal velocity. Netw. Heterog. Media. 14 (2019), No. 3, 471-487.
20. H. Bae, W. Lee, J. Shin. A blow-up criterion for the inhomogeneous incompressible Euler equations. Nonlinear Anal. 196 (2020), 111774
21. H. Bae. Analyticity of solutions to the barotropic compressible Navier-Stokes Equations. Journal of Differential Equations 269 (2020) 1718-1743.
22. H. Bae, R. Granero-Belinchon. Global existence and exponential decay to equilibrium for DLSS-type equations. Journal of Dynamics and Differential Equations (2020).
23. H. Bae. Global existence of solutions to some equations modeling phase separation of self-propelled particles. SN Partial Differ. Equ. Appl. 1, 47 (2020).
24. H. Bae, J. Kelliher. Propagation of regularity of level sets for a class of active transport equations. J. Math. Anal. Appl. 497 (2021), no. 1, 124823.
25. H. Bae, K. Kang. On the existence of unique global-in-time solutions and temporal decay rates of solutions to some non-Newtonian incompressible fluids, Zeitschrift für angewandte Mathematik und Physik, 72, Article number: 55 (2021).
26. H. Bae. Blow-up conditions of the incompressible Navier-Stokes equations in terms of sequentially defined Besov spaces. Proc. Amer. Math. Soc. 149 (2021), 4379-4385.
27. H. Bae, W. Lee, J. Shin. Gevrey regularity and finite time singularities for the Kakutani-Matsuuchi model. Nonlinear Anal. Real World Appl. 63 (2022), 103415.
28. H. Bae, W. Lee. Existence, Gevrey regularity, and decay properties of solutions to Active models in critical spaces. J. Math. Anal. Appl. 506 (2022) 125700.
29. H. Bae, R. Granero-Belinchon. Singularity formation for the Serre-Green-Naghdi equations and applications to abcd-Boussinesq systems. Accepted to Monatshefte für Mathematik.
30. H. Bae, K. Kang. Local and Global existence of solutions of a Keller-Segel model coupled to the incompressible fluid equations, Submitted.
31. H. Bae, W. Lee, J. Shin. Global existence and decay rates of solutions to the viscous water-waves system. Submitted.
32. H. Bae, K. Kang. On the local and global existence, asymptotic behaviors, and decay rates of solutions of the $2\frac{1}{2}$ dimensional Hall equations. Submitted.
33. H. Bae, J. Shin. Weak-strong uniqueness for the incompressible Navier-Stokes equations in Fourier-Besov spaces. Submitted.
34. H. Bae. On the local and global existence of the Hall equations with fractional Laplacian and related equations. Submitted.
## Conference Proceedings
1. H. Bae, K. Trivisa. On the Doi model for the suspensions of rod-like molecules in compressible fluids. Hyperbolic problems: theory, numerics, applications, 285–292, AIMS Ser. Appl. Math., 8, Am. Inst. Math. Sci. (AIMS), 2014.
# A poor man's backup system
Date 2014-12-07
It has been made clear that digital data is not safe. I once had a backup system, but the one time I needed a backup, my backup harddrive failed and I lost all my data.
At university, I learned about RAID systems, but they are too expensive and too cumbersome for my purposes. That's why I made my own little cheap backup system.
I understood that you need to save data in at least two different locations for them to be at least somewhat safe. I bought two external hard drives with which I built a makeshift RAID1 system. Here's what I did:
#### A filesystem
First things first. I plugged in my harddrives via USB, but they're not being mounted automatically, so I need to locate them.
```
$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
...
sdc      8:48  0 931.5G  0 disk
└─sdc1   8:49  0 931.5G  0 part
sdd      8:64  0 931.5G  0 disk
└─sdd1   8:65  0 931.5G  0 part
```

Oh, right, I'm going to need sudo rights for this. Let's just get those rights right now, so I don't have to type sudo everywhere.

```
$ sudo su
```
I don't like Windows' filesystems, but they're on most harddrives by default, so I'm wiping those off my drives.
```
$ wipefs --all /dev/sdc
$ wipefs --all /dev/sdd
```
Now that I have two clean disks, I need to partition them again, and add a new filesystem. I'm just going to make two single-partition drives. There's no need here to add any more partitions.
```
$ fdisk /dev/sdc

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1953458175, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953458175, default 1953458175):

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

$ fdisk /dev/sdd
...
```
Next, I need a fun file system. My friend recommended BTRFS, for multiple reasons.
• It's open source (GPL).
• It focusses on fault tolerance and repair.
Honestly, I just wanted to try something other than ext4 for a change.
```
$ mkfs.btrfs /dev/sdc1
...
$ mkfs.btrfs /dev/sdd1

Btrfs v3.17.1
Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
fs created label (null) on /dev/sdd1
	nodesize 16384 leafsize 16384 sectorsize 4096 size 931.48GiB
```
Finally, I could mount my system for the first time. I chose a spot in the root directory, but that doesn't really matter.
```
$ mkdir /backup
$ mkdir /backup/hdd0
$ mkdir /backup/hdd1
$ mount /dev/sdc1 /backup/hdd0
$ mount /dev/sdd1 /backup/hdd1
```
There, now I have two identical empty 1TB harddrives.
At this point, it's very easy to use these as a backup system. I just write to the first disk, and copy everything over to the second with rsync.
```
$ rsync --recursive /backup/hdd0/ /backup/hdd1
```
I could have the drives mounted all the time, and put this command into a cronjob, but I don't use my backup system that much, so I just mount the drives whenever I'm making a backup, or restoring one.
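For completeness, the cronjob variant would look roughly like this (a sketch I haven't wired up; it assumes the drives stay mounted at the paths above):

```shell
# /etc/cron.d/backup-mirror (sketch): mirror hdd0 onto hdd1 nightly at 03:00.
# --archive preserves permissions and timestamps, --delete keeps the mirror
# exact, and the trailing slash on the source copies its contents rather than
# the directory itself.
0 3 * * * root rsync --archive --delete /backup/hdd0/ /backup/hdd1/
```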
If you liked this blog post, please consider becoming a supporter.
# 10.6 Lattice structures in crystalline solids
In a simple cubic lattice, the unit cell that repeats in all directions is a cube defined by the centers of eight atoms, as shown in [link] . Atoms at adjacent corners of this unit cell contact each other, so the edge length of this cell is equal to two atomic radii, or one atomic diameter. A cubic unit cell contains only the parts of these atoms that are within it. Since an atom at a corner of a simple cubic unit cell is shared among a total of eight unit cells, only one-eighth of that atom is within a specific unit cell. And since each simple cubic unit cell has one atom at each of its eight “corners,” there is $8 \times \frac{1}{8} = 1$ atom within one simple cubic unit cell.
## Calculation of atomic radius and density for metals, part 1
The edge length of the unit cell of alpha polonium is 336 pm.
(a) Determine the radius of a polonium atom.
(b) Determine the density of alpha polonium.
## Solution
Alpha polonium crystallizes in a simple cubic unit cell:
(a) Two adjacent Po atoms contact each other, so the edge length of this cell is equal to two Po atomic radii: $l = 2r$. Therefore, the radius of Po is $r = \frac{l}{2} = \frac{336\ \text{pm}}{2} = 168\ \text{pm}.$
(b) Density is given by $\text{density} = \frac{\text{mass}}{\text{volume}}.$ The density of polonium can be found by determining the density of its unit cell (the mass contained within a unit cell divided by the volume of the unit cell). Since a Po unit cell contains one-eighth of a Po atom at each of its eight corners, a unit cell contains one Po atom.
The mass of a Po unit cell can be found by:
$1\ \text{Po unit cell} \times \frac{1\ \text{Po atom}}{1\ \text{Po unit cell}} \times \frac{1\ \text{mol Po}}{6.022 \times 10^{23}\ \text{Po atoms}} \times \frac{208.998\ \text{g}}{1\ \text{mol Po}} = 3.47 \times 10^{-22}\ \text{g}$
The volume of a Po unit cell can be found by:
$V = l^3 = \left(336 \times 10^{-10}\ \text{cm}\right)^3 = 3.79 \times 10^{-23}\ \text{cm}^3$
(Note that the edge length was converted from pm to cm to get the usual volume units for density.)
Therefore, the density of $\text{Po} = \frac{3.47 \times 10^{-22}\ \text{g}}{3.79 \times 10^{-23}\ \text{cm}^3} = 9.16\ \text{g/cm}^3$
The edge length of the unit cell for nickel is 0.3524 nm. The density of Ni is 8.90 g/cm³. Does nickel crystallize in a simple cubic structure? Explain.
No. If Ni were simple cubic, its density would be given by:
$1\ \text{Ni atom} \times \frac{1\ \text{mol Ni}}{6.022 \times 10^{23}\ \text{Ni atoms}} \times \frac{58.693\ \text{g}}{1\ \text{mol Ni}} = 9.746 \times 10^{-23}\ \text{g}$
$V = l^3 = \left(3.524 \times 10^{-8}\ \text{cm}\right)^3 = 4.376 \times 10^{-23}\ \text{cm}^3$
Then the density of Ni would be $= \frac{9.746 \times 10^{-23}\ \text{g}}{4.376 \times 10^{-23}\ \text{cm}^3} = 2.23\ \text{g/cm}^3$
Since the actual density of Ni is not close to this, Ni does not form a simple cubic structure.
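As a quick cross-check, both one-atom-per-cell calculations above can be scripted. This is a sketch using the constants quoted in the text; it prints 9.15 rather than 9.16 for Po because it skips the intermediate rounding:

```python
AVOGADRO = 6.022e23  # atoms per mole

def simple_cubic_density(edge_pm, molar_mass_g):
    """Density in g/cm^3 of a simple cubic unit cell containing one atom."""
    edge_cm = edge_pm * 1e-10       # 1 pm = 1e-10 cm
    volume = edge_cm ** 3           # unit-cell volume in cm^3
    mass = molar_mass_g / AVOGADRO  # mass of one atom (= one unit cell) in g
    return mass / volume

print(round(simple_cubic_density(336.0, 208.998), 2))  # prints 9.15 (Po)
print(round(simple_cubic_density(352.4, 58.693), 2))   # prints 2.23 (hypothetical simple cubic Ni)
```

Since 2.23 g/cm³ is far from nickel's measured 8.90 g/cm³, the script reaches the same conclusion as the worked answer.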
Most metal crystals are one of the four major types of unit cells. For now, we will focus on the three cubic unit cells: simple cubic (which we have already seen), body-centered cubic unit cell , and face-centered cubic unit cell —all of which are illustrated in [link] . (Note that there are actually seven different lattice systems, some of which have more than one type of lattice, for a total of 14 different types of unit cells. We leave the more complicated geometries for later in this module.)
what are the raw materials used during contact process
what is Enzymes
special proteins that serve a function as a catalyst for chemical and biological reactions
Sch
Does Ethers has hydrogen bonding with water?
ethers lack the hydroxyl groups of alcohols. an ether is like a water molecule in which both hydrogen atoms have been replaced by alkyl or aryl groups. because ether lacks the O-H bond ether molecules cannot engage in hydrogen bonding with each other.
Jallal
however ethers do have nonbonding electron pairs on their oxygen atoms which allows them to form hydrogen bonds with other molecules such as alcohol raise or a amines that have O-H or N-H bonds.
Jallal
such as alcohols or amines* I always have. typo problem. I need to proofread before I send
Jallal
please what is the a difference between a compound and molecular formula
A compound is just a combination of different elements, shown by a molecular formula like H2O... A molecular formula is a formula showing the number of atoms of each element in a compound
Its
a compound is made up of different atoms held together by ionic bonds in a fixed ratio. example formula is: NaCl or salt. a molecular compound consists of two or more atoms held together by covalent bonds. example formula is: H2, O2. the atoms can be the same or different such as CO2 or PCl5
Jallal
Yes, exactly what Its said is correct
Jallal
explain why the number of electron in the shells of elements making up the period of the periodic table following the pattern 2,8,18,32,50
these shells of electrons emanating from the core outward can hold only a certain number of electrons. the Pauli exclusion principle states that no two electrons can occupy the same state. an atom will always try to fill all its shells to maximum, with the outermost shell called the valence shell.
Jallal
the first shell closest to the nucleus can hold only two electrons. the second shell can hold 8 electrons. the third shell can hold 18 electrons. the fourth shell can hold 32 electrons. the fifth shell can hold 50 electrons. at each point the atom will try to fully complete its outer shells
Jallal
this cannot always be done though. the final outermost shell is called the valence shell and that is what determines its reactivity. Hence the reason they're placed on the periodic table in that order
Jallal
Noble elements on the furthest right hand side of the periodic table are also called inert because they don't react with other elements, since they have all their shells already filled
Jallal
the valence shell also determines the chemical properties of the atom.
Jallal
The general formula is that the (n)th shell can in principle hold up to 2(n^2) electrons.
Jallal
The ISBN is going through
the ISBN is either a 10 digit number or 13 digit number. all textbooks have both those numbers. it's not a big deal anyway
Jallal
the ISBN didn't work. try looking for the 13 digit number instead
Jallal
thanks jallal
Faith
but pls am having problem with solubility curve
Faith
could you please detail what you're having trouble on exactly regarding solubility?
Jallal
how to find oxidation number of an element
Faith
for instance if you are given a graph to plot and answer the question below is kind of confusing
Faith
if there supposed to be a graph?
Jallal
write out the question to the problem
Jallal
yes
Faith
it's not showing.
Jallal
just write out the exact question
Jallal
u know wat just explain oxidation number seems like solubility curve is meant to be self explanatory
Faith
ok first the basic rules for oxidation
Jallal
1.) atoms in their elemental state have an oxidation number of zero. 2.) atoms in monatomic (single atom) ions have an oxidation number equal to their charge
Jallal
3.) in compounds: fluorine is assigned a -1 oxidation number, oxygen is usually assigned a -2 oxidation number (except in peroxide compounds where it is -1, and in binary compounds with fluorine where it is positive); and hydrogen is usually assigned a +1 oxidation number except when it exists
Jallal
okay
Faith
as the hydride ion, H^-,
Jallal
4.) in compounds, all other atoms are assigned an oxidation number so that the sum of the oxidation numbers on all the atoms in the species equals a charge on the species
Jallal
now let's try to determine oxidation state of H2 and H2O
Jallal
using rule 1 H2 is already in its elemental state so it's oxidation number is 0
Jallal
for H2O we must get the sum of all the oxidation numbers using the formula (oxidation # of O × # of O atoms) + (oxidation # of H × # of H atoms)
Jallal
already knowing the oxidation numbers of oxygen and hydrogen, the formula will be (−2×1) + (+1×2) = 0
Jallal
0 is exactly what we would expect for a neutral molecule and water is neutral.
Jallal
would you like another example?
Jallal
in dis kind of question that says that what is the oxidation number of z in k3 zcl6 how am I to do it
Faith
k has an oxidation number of +1 and Cl is -1. this is because K has one unpaired electron and its outermost shell and is electropositive being a metal while Cl has one unpaired electron in its outermost shell and is electronegative being nonmetal.
Jallal
so to solve it, for K3ZCl6: 3×1 + z + (−1)×6 = 0, so 3 − 6 + z = 0, so −3 + z = 0 and z = 3
Jallal
every element in the periodic table has an oxidation number unless it's ionized
Jallal
that is how you find those values and work from there
Jallal
thanks alot
Faith
excuse me I meant to write z=+3
Jallal
the oxidation number has to be positive or negative so you must indicate that
Jallal
I hope I was clear enough. also I want to help you with the solubility problem. if you can tell me the question of the problem I can find a solution and help explain it
Jallal
remember to always use the equation I wrote out before to get the sum of oxidation numbers when dealing with compounds
Jallal
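The bookkeeping described above is mechanical enough to script. A small sketch (the helper name is mine, not from the thread):

```python
def unknown_oxidation_number(known, charge=0):
    """Solve the sum rule: the oxidation numbers of all atoms in a species
    add up to its charge. `known` lists (oxidation number, atom count)
    pairs for every atom except the single unknown one."""
    return charge - sum(ox * count for ox, count in known)

# K3ZCl6: three K at +1 and six Cl at -1, neutral overall, so z = +3.
print(unknown_oxidation_number([(+1, 3), (-1, 6)]))  # prints 3

# H2O with oxygen unknown: two H at +1, neutral, so oxygen must be -2.
print(unknown_oxidation_number([(+1, 2)]))  # prints -2
```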
What is collision theory
why would you not make potassium chloride from potassium and hydrochloric acid
If HCl and K were to react to form KCl, they would liberate hydrogen gas; since K is a very electropositive metal, that reaction would be dangerously violent, which is why KCl is not made this way
Rahim
what is the hardest substance
with what scale one would be 💎
coland
diamond a 10
coland
1-10
coland
there are actually other materials that can cut diamonds and also withstand greater heat and temperature than diamonds.
Jallal
What ate the elements?, Jallal
Rahim
Sorry "ate" should be "are".Typing error.
Rahim
Pls Jallal ,can you name those elements because it is only diamond that I know to be the hardest substance for now.
hamidat
I thought I replied to this already. My apologies for the delay. Diamonds while very hard are not the hardest material on Earth. There is boron nitride but out of the 4 forms it can take it is the Wurtzite form (w-BN) that is harder than diamond.
Jallal
Next is Lonsdaleite which is also called hexagonal diamond because of its crystal structure but it is an allotrope of carbon with a hexagonal lattice. the lattice structure is packed even more closely than that of diamonds making Lonsdaleite 58% harder than diamond
Jallal
Then there's fullerene which is also an allotrope of carbon whose molecule consists of carbon atoms connected by single and double bonds so as to form a closed or partially closed mesh of fused rings of 5 to 7 atoms
Jallal
the molecule may be a hollow sphere, an ellipsoid, a tube, or many other shapes and sizes. graphene, which is a flat mesh of regular hexagonal rings, can be seen as an extreme member of the family. because of their typical soccer-ball-like shape they are more commonly referred to as buckyballs.
Jallal
fullerenes, which were discovered by accident, greatly expanded the known allotropes of carbon. this discovery gave rise to carbon nanotubes which are even harder than diamonds. each nanotube is between 2 and 4 nanometers wide, yet it is incredibly strong and tough, weighing only 10% of the weight of steel
Jallal
but has hundreds of times the strength of steel
Jallal
then there is graphene which is a hexagonal carbon lattice that's only a single atom thick, which is arguably the most revolutionary material to be developed and utilized in the 21st century. graphene is the basic structural element of carbon nanotubes. in proportion to its thickness it is the strongest
Jallal
material known
Jallal
then there is beta carbon nitride which is a super hard material predicted to to have a hardness equal or above diamond.
Jallal
Thanks,but I have to go and research more to know better.
hamidat
lastly there is linear acetylenic carbon also called carbyne which is also an allotrope of carbon that has a chemical structure as a repeating chain with alternating single and triple bonds. it would thus be the ultimate member of the polyyne family.
Jallal
this polymeric carbyne is of considerable interest in nanotechnology as its Young's modulus is 32.7 TPa, which is 40 times that of diamond.
Jallal
I thought amongst the allotropes of carbon, diamond is octahedral and graphite is hexagonal?
hamidat
carbyne chains have been claimed to be the strongest material known for density. calculations indicate that carbyne's specific tensile strength beats graphene, carbon nanotubes, and diamond.
Jallal
There you have it Rahim
Jallal
So far so good, you have mentioned about 6 to 7 other compounds that are harder than diamond, are these real?
hamidat
yes they are. I would never make up lies just to sound smart. please check them yourself as it's never good to just take someone's word without doing your own research
Jallal
there are other materials too but they are disputable and upon further testing on them they proved to fall short
Jallal
No that's not what I mean.
hamidat
among the allotropes of carbon, diamond is a tetrahedral lattice and graphite is bonded together in sheets of a hexagonal lattice.
Jallal
I'm happy to help anytime I can. also what did you mean that's not what you meant? I'd like to fully answer your questions to your content
Jallal
Now I no better, thanks.
hamidat
you're welcome and anytime
Jallal
Sorry it was not meant for you,I was answering another question .
hamidat
okay Sir
hamidat
I can't reply through the other side,it filled up
hamidat
you can just create a new thread
Jallal
Diamond
Its
Diamond, while still one of the hardest substances, has lost its top spot thanks to new discoveries
Jallal
I wish someone had more questions for me to answer. I like helping others and I love knowledge. Please everyone ask away no matter how trivial. Any question is always a good question. it's much better to be inquisitive than silent.
And I like asking people questions, so wait and collect yours.
hamidat
How can a mixture of petroleum be separated from kerosene?, which process is called the "Haber process", How is methane prepared laboratoryly.Thank u
hamidat
An organic compound decolorized acidified KMnO4 solution but failed to react with ammonical silver nitrate solution. the organic compound is likely to be?
Faith
hamidat. mixtures of two miscible liquids having a difference in their boiling points more than 25°C can be separated by the method of distillation.
Jallal
a mixture of kerosene in petroleum is taken in the distillation flask with a thermometer fitted in. you also need a beaker, a water condenser and a Bunsen burner. once you have the apparatus set up then start to heat the mixture slowly. the thermometer should be watched simultaneously.
Jallal
kerosene will vaporize and condense in the water condenser. the condensed kerosene is collected from the condenser outlet whereas the petroleum is left behind in the distillation flask.
Jallal
methane is prepared in the laboratory by the protonation of methyllithium and methylmagnesium iodide
Jallal
I'm not sure what you meant when you said which process is called the Haber process. but I can say that the Haber-Bosch process is an artificial nitrogen fixation process and is the main industrial procedure for the production of ammonia today.
Jallal
the process converts atmospheric nitrogen (N2) to ammonia (NH3) by a reaction with hydrogen (H2) using a metal catalyst under high temperatures and pressures.
Jallal
methane* is produced by protonation of methyllithium and methylmagnesium iodide. please excuse the typos
Jallal
ok I wanted to confirm because am having contradiction between fractional distillation and distillation itself.
hamidat
ok well I'll always be happy to assist
Jallal
The reason why I ask for the Haber process is because some one told me is a process where hydrogen and nitrogen react to give ammonia and I was not sure.
hamidat
For methane, it is the laboratory preparation that am asking for.
hamidat
I see I gave you the laboratory synthesis but I think what you're looking for really is the industrial route synthesis. am I correct?
Jallal
You see,for distillation it has to do with two immiscible liquids so I normally misuse it for fractional distillation.
hamidat
The Haber process equation is : N2 + 3H2 ---> 2NH3 ∆H°=-91.8 kJ/mol
Jallal
Oh the one you gave above is the laboratory preparation/synthesis?
hamidat
yes, the one I gave was the laboratory synthesis not the industrial process I think that's what you we're looking for. I'll explain it right now
Jallal
Yes I have seen something like that: N2 + 3H2 → 2NH3
hamidat
Sorry for the misunderstanding
hamidat
I only wish all the symbols could be displayed in the equation to avoid any confusion
Jallal
no worries. I like kitchen concepts
Jallal
Like seriously even I used to have that problem of symbols
hamidat
Methane (CH4) is one carbon and four atoms of hydrogen. it is a group 14 hydride and the simplest alkane and is the main constituent of natural gas. through its relative abundance on Earth it is an attractive fuel, however trying to capture and store it poses challenges due to its gaseous state
Jallal
under normal conditions of temperature and pressure.
Jallal
methane is a tetrahedral molecule with four equivalent C-H bonds. its electronic structure is described by four bonding molecular orbitals resulting from the overlap of the valence orbitals on C and H.
Jallal
it is also a component of petroleum gas right?
hamidat
at room temperature and standard pressure methane is a colorless odorless gas. The familiar smell of natural gas as used in homes is achieved with the addition of an odorant, usually blends containing tert-butylthiol, as a safety measure.
Jallal
no it is a component of natural gas not petroleum gas
Jallal
OK
hamidat
methane has a boiling point of -164 °C (-257.8 °F) at a pressure of 1 atmosphere. as a gas it is flammable over a range of concentrations (5.4-17%) in air at standard pressure.
Jallal
solid methane exists currently in 9 known modifications. cooling methane at normal pressure results in the formation of methane I. this substance crystallizes in the cubic system (space group Fm3m). the positions of the hydrogen atoms are not fixed in methane I, i.e. methane molecules may rotate freely.
Jallal
therefore it is a plastic Crystal
Jallal
the primary chemical reactions of methane are combustion, steam reforming to syngas, and halogenation. in general methane reactions are difficult to control.
Jallal
OK am with you
hamidat
yes halogenation is a chemical reaction that involves the addition of one or more halogens to a compound or material
Jallal
okay
hamidat
I hope you're right because I don't feel blessed but it's my nature to help in any way I can
Jallal
there are different types of halogenation of organic compounds by reaction type so you must be mindful of that. there are free radical halogenation, ketone halogenation, electrophilic halogenation, and halogen addition reaction. the structure of the substrate is one factor that determine the pathway
Jallal
okay now that we have all that information out of the way we are going to now move on to the industrial synthesis
Jallal
You don't talk like that because the Lord has so many ways of blessing people, you don't know what you have now until you lose it.So always feel blessed okay
hamidat
okay
hamidat
all I have is my knowledge and my brain. I think about it everyday about what I have and while not much at all things could be magnitudes worse so you're right I should be happy with what I currently have. maybe fate will smile down on me one day. thanks for the pep talk
Jallal
So are you still in secondary school preparing to write WAEC?
hamidat
no I'm finished with my university studies where I majored in the Sciences. I just love to keep expanding my knowledge
Jallal
No Wonder you sound different from normal school student
hamidat
now let's begin the industrial process. there is little incentive to produce methane industrially. methane is produced by hydrogenating carbon dioxide through the Sabatier process
Jallal
Alright
hamidat
the Sabatier process involves the reaction of hydrogen and carbon dioxide at elevated temperatures (optimally 300-400 °C) and pressures in the presence of a nickel catalyst to produce methane and water.
Jallal
okay
hamidat
Hello are you still there
hamidat
alternatively, ruthenium on alumina (aluminum oxide) makes a more efficient catalyst. this process is described by the following exothermic reaction: CO2 + 4H2 ----> CH4 + 2H2O ∆H= -165.0 kJ/mol. above the arrow symbol there should be 400 °C and below it should say pressure
Jallal
sorry I had to talk to my family over something. don't worry I'll see this to the end and make sure you understand my explanation
Jallal
No problem sir,thanks for your time
hamidat
whether the CO2 methanation occurs by first associatively adsorbing an adatom hydrogen and forming oxygen intermediates before hydrogenation or dissociating and forming a carbonyl before being hydrogenated we are then given the next equation: CO + 3H2 ---> CH4 + H2O. ∆H= -206 kJ/mol
Jallal
CO methanation is believed to occur through a dissociative mechanism where the carbon oxygen bond is broken before hydrogenation with an associative mechanism only being observed at a high H2 concentrations
Jallal
methane is also a side product of the hydrogenation of carbon monoxide in the Fischer-Tropsch process, which is practiced on a large scale to produce longer-chain molecules than methane
Jallal
the Fischer-Tropsch process involves a series of chemical reactions that produce a variety of hydrocarbons, ideally having the formula (CnH2n+2). the more useful reactions produce alkanes as follows: (2n + 1) H2 + n CO ---> CnH2n+2 + n H2O
Jallal
okay but what do you mean by "The Fischer-Tropish" process
hamidat
where n is typically 10-20. the formation of methane (n=1) is unwanted. most of the alkanes produced tend to be straight-chain, suitable as diesel fuel. in addition to alkane formation, competing reactions give small amounts of alkenes, as well as alcohols and other oxygenated hydrocarbons.
Jallal
what I mean is that there are two ways to produce methane on an industrial scale. one is by hydrogenating carbon dioxide through the Sabatier process and the other as a side product of the Fischer-Tropsch process.
Jallal
as a side product of the hydrogenation of carbon monoxide in the Fischer-Tropsch process
Jallal
now I understand
hamidat
an example of large-scale coal-to-methane gasification is the Great Plains Synfuels plant in North Dakota, built as a way to develop abundant local resources of low-grade lignite, a resource that is otherwise difficult to transport for its weight, ash content, low calorific value and propensity
Jallal
for spontaneous combustion during storage and transport.
Jallal
power to methane is a technology that uses electrical power to produce hydrogen from water by electrolysis and uses the Sabatier reaction to combine hydrogen with carbon dioxide to produce methane.
Jallal
you see all these things are not in this secondary school textbooks that am using
hamidat
it depends on the books and authors of the books because I've seen academic books tremendously different from each other with completely different concepts
Jallal
also the edition and year of publication. my experience is from University. are yours from high school?
Jallal
it's through
hamidat
through?
Jallal
I'm almost done with my explanation. only about 5 sentences left to wrap it all up.
Jallal
Sorry what I meant is"true"
hamidat
okay sir
hamidat
also, university books compared to secondary school books offer much more information. secondary school books deliberately limit the amount of knowledge you're exposed to, so as you progress you learn new, previously unknown concepts
Jallal
as of 2016 this is mostly under development not in large-scale use. theoretically, the process could be used as a buffer for excess and off-peak power generated by highly fluctuating wind generators and solar arrays.
Jallal
To be sincere, all what you have been teaching me is not there
hamidat
however, as currently very large amounts of natural gas are used in power plants to produce electric energy, the losses and efficiency are not acceptable.
Jallal
this ends the lesson
Jallal
you mean in your text book?
Jallal
Yes
hamidat
Faith. the answer is "an alkene"
Jallal
could you please tell me the name of the book, author, and edition? if you can find the ISBN that would help me find it faster
Jallal
I answered. both your questions and Faith's. I went back to everything I wrote I apologise for all the typos
Jallal
Thanks so much Sir ,I have learnt a lot today. I really appreciate.
hamidat
you're always welcome. I enjoy teaching. if you ever have any other questions or need help please reach out anytime. although I'm not sure how one does that in this app
Jallal
Jallal
Essential chemistry, Odesina I .A.
hamidat
ISBN: 978-8089-44-5. For edition he wrote first, second, third and fourth edition here.
hamidat
thanks Sir.
hamidat
my pleasure
Jallal
sir hope the ISBN appeared
hamidat
the ISBN didn't appear and it's proving difficult to find that book and author.
Jallal
Jallal
I tried sending it but it won't go through
hamidat
Jallal
Gibbs free energy continued
when ∆G>0, the process is endergonic and not spontaneous in the forward direction. instead it will proceed spontaneously in the reverse direction to make more starting materials
Jallal
when ∆G=0, the system is in equilibrium and the concentrations of the products and reactants will remain constant
Jallal
when the system is in equilibrium that means the forward reaction and the reverse reaction are occurring at the same rate
Jallal
Although ∆G is temperature dependent, it's generally okay to assume that the ∆H and ∆S values are independent of temperature as long as the reaction does not involve a phase change. that means that if we know ∆H and ∆S we can use those values to calculate ∆G at any temperature.
Jallal
calculating ∆H and ∆S can be done using tables of standard values among other methods
Jallal
when the process occurs under standard conditions (all gases at 1 bar pressure, all concentrations are 1 M, and T=25°C), we can also calculate ∆G using the standard free energy of formation, ∆fG°
Jallal
be sure to pay close attention to units when calculating ∆G from ∆H and ∆S, because ∆H is given in kJ/mol-reaction while ∆S is given in J/(mol-reaction • K), which is a difference of a factor of 1000
Jallal
when is ∆G negative? the equation for ∆Gsystem depends on 3 values. using ∆Gsystem=∆Hsystem-T∆Ssystem, the temperature in this equation is always positive or zero because it has units of K. therefore the second term in our equation, T∆Ssystem, will always have the same sign as ∆Ssystem
Jallal
now we can make the following conclusions about when processes will have a negative ∆Gsystem
Jallal
when the process is exothermic (∆Hsystem < 0), and the entropy of the system increases (∆Ssystem > 0), the sign of ∆Gsystem is negative at all temperatures. thus the process is always spontaneous
Jallal
when the process is endothermic, ∆Hsystem > 0, and the entropy of the system decreases, ∆Ssystem < 0, the sign of ∆G is positive at all temperatures. thus the process is never spontaneous
Jallal
for other combinations of ∆Hsystem and ∆Ssystem, the spontaneity of a process depends on the temperature
Jallal
exothermic reactions (∆Hsystem < 0) that decrease the entropy of the system (∆Ssystem < 0) are spontaneous at low temperatures
Jallal
endothermic reactions (∆Hsystem > 0) that increase the entropy of the system (∆Ssystem > 0) are spontaneous at high temperatures
Jallal
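The four sign cases above can be sketched in a few lines of Python (purely illustrative; the function names and sample values are my own, and ∆H and ∆S are assumed temperature-independent as the lesson states):

```python
# Sign rules for ∆G = ∆H - T∆S, assuming ∆H and ∆S don't vary with T.
# Units: ∆H in kJ/mol, ∆S in J/(mol·K) -- note the factor of 1000.

def delta_g(dh_kj: float, ds_j: float, t_kelvin: float) -> float:
    """∆G = ∆H - T∆S, returned in kJ/mol."""
    return dh_kj - t_kelvin * (ds_j / 1000.0)

def spontaneous(dh_kj: float, ds_j: float, t_kelvin: float) -> bool:
    """A process is spontaneous when ∆G < 0."""
    return delta_g(dh_kj, ds_j, t_kelvin) < 0

# Exothermic and entropy-increasing: spontaneous at all temperatures.
assert spontaneous(-100.0, 50.0, 298.15)
# Endothermic and entropy-decreasing: never spontaneous.
assert not spontaneous(100.0, -50.0, 298.15)
# Endothermic but entropy-increasing: spontaneous only at high T.
assert not spontaneous(100.0, 200.0, 298.15)  # 100 - 298.15*0.2 ≈ +40
assert spontaneous(100.0, 200.0, 600.0)       # 100 - 600*0.2 ≈ -20
```

The last two assertions show the temperature-dependent case: the same ∆H and ∆S give a positive ∆G at room temperature but a negative ∆G at 600 K.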
thermodynamics is also connected to concepts in other areas of chemistry for example, in chemical equilibrium we can relate ∆G with the equilibrium constant, K
Jallal
in electrochemistry, ∆G is related to the cell voltage, Ecell
Jallal
lastly depending on the signs of ∆H and ∆S, the spontaneity of a process can change at different temperatures
Jallal
if any part of this explanation of the concept of Gibbs free energy is not clear please let me know so I may clarify
Jallal
That is good my brother, continue.
hamidat
Jallal, I like the way you answer questions, you always break it down to the simplest form. I wish I could get somebody like you to teach me.
hamidat
I could have broken it down further by giving detailed sample problems but stopped short assuming everyone understood as I wasn't receiving feedback. if you'd like I'd be happy to teach you whenever you like or need. we could set up email correspondence or video chats
Jallal
OK whenever am ready I will let you know
hamidat
know what?
Jallal
I can skip to the very crucial process instead of giving a complete breakdown of how the process works
Jallal
You see I am preparing for WAEC and I am a little nervous
hamidat
just keep studying and reviewing what you're not 100% on and go in feeling completely confident
Jallal
sorry for the typos
Jallal
let's continue
Jallal
Thank you so much, God will continue to bless you
hamidat
Why's pH not good for consumption?
A given amount of gas occupies 10.0 dm³ at 4 atm and 273°C. The number of moles of the gas present is? [Molar volume of gas at s.t.p. = 22.4 dm³]
you must use the Ideal Gas Law equation: pV=nRT, where p is the pressure, V is the volume, n is the number of moles, R is the gas constant, and T is the temperature. You have everything you need to calculate the answer you just need to do some algebra first to rearrange the equation to find n
Jallal
Jallal
for the future always remember that the pressure must be in pascals, the volume must be converted to cubic metres, the gas constant R is 8.31441 J K^-1 mol^-1, and the temperature must always be in kelvin, so if you are given degrees Celsius make sure to add 273 to it
Jallal
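As an illustration of those unit rules, here is a sketch in Python of the dm³/atm question asked above (variable names are my own; only the conversions stated in the lesson are used):

```python
# Worked solution following the unit rules: pressure in pascals,
# volume in cubic metres, temperature in kelvin.

R = 8.31441            # gas constant, J K^-1 mol^-1
PA_PER_ATM = 101325    # pascals per atmosphere

p = 4 * PA_PER_ATM     # 4 atm -> Pa
V = 10.0 / 1000        # 10.0 dm^3 -> m^3 (1 m^3 = 1000 dm^3)
T = 273 + 273          # 273 °C -> K

n = p * V / (R * T)    # ideal gas law pV = nRT, rearranged for n
print(f"n = {n:.3f} mol")   # n ≈ 0.893 mol
```

Rearranging the equation first, as suggested above, keeps the algebra separate from the unit conversions and makes slips much easier to spot.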
solubility
Shehu
solubility
Abubakkar
yes tell me more on solubility
Shehu
solubility is the property of a solid, liquid, or a gaseous chemical substance called solute to dissolve in a solid, liquid or gaseous solvent. the solubility of a substance fundamentally depends on the physical and chemical properties of the solute and solvent as well as temperature, pressure and
Jallal
presence of other chemicals (including changes in pH) of the solution.
Jallal
the extent of the solubility of a substance in a specific solvent is measured as the saturation concentration, where adding more solute does not increase the concentration of the solution and the excess solute begins to precipitate.
Jallal
the precipitate of the excess solute collects at the bottom in solid form
Jallal
this is because the solvent is completely saturated and cannot dissolve any more solute
Jallal
the solvent most commonly used is water, which is why it is also called the universal solvent, as it can dissolve more substances than almost any other solvent
Jallal
excuse the few typos
Jallal
hi
Affum
welcome affum
Shehu
But Jallal, sometimes the pressures are given in atm, mmHg, or newtons per square metre. How will you know the right unit to use?
hamidat
that's why conversion formulas are for
Jallal
that's what*
Jallal
if pressure is given in atm and you want to convert atm to pascals, all you have to do is multiply the number of atm, let's say 1, by 101325
Jallal
this one 101325 is it constant?
hamidat
yes, 1 atm is always equal to exactly 101325 pascals (historically defined as the mean atmospheric pressure at sea level at 15°C)
Jallal
Alright, Good morning
hamidat
But what if you're given the pressure in bars? the same concept still applies, you always want to get pressure in pascals. 1 bar = 100000 pascals
Jallal
okay
hamidat
now let's assume that the pressure is given in mmHg. 1 mmHg is equal to 133.3224 pascals
Jallal
find me if we are given pressure and Newton per square meter square meter is exactly equal to one pascal.
Jallal
let me rephrase that last one. if we are given pressure in newtons per square metre then no conversion is needed, as one newton per square metre is exactly equal to 1 pascal
Jallal
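The conversion factors discussed above can be collected into one illustrative Python snippet (the dictionary and function names are my own):

```python
# Pascals per unit, from the factors given in the discussion above.
PA_PER = {
    "atm":  101325,      # 1 atm = 101325 Pa (exact)
    "bar":  100000,      # 1 bar = 100000 Pa (exact)
    "mmHg": 133.3224,    # 1 mmHg ≈ 133.3224 Pa
    "N/m2": 1,           # 1 N/m^2 is exactly 1 Pa
}

def to_pascals(value: float, unit: str) -> float:
    """Convert a pressure to pascals using the table above."""
    return value * PA_PER[unit]

assert to_pascals(1, "atm") == 101325
assert to_pascals(2.5, "bar") == 250000
assert to_pascals(1, "N/m2") == 1
```

Whatever unit a problem gives, converting to pascals first (as advised above) means the same ideal-gas formula always applies.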
how do u derive this fundamental constants
am not sure how
Shehu
do you have an idea
Shehu
no
Mugala
Bertram
could you elaborate a little more on your question?
Jallal
or reword it perhaps. I think I understand what you're trying to ask but the wording of the question makes it confusing
Jallal
what fundamental constants?
Ruth
you see Bertran. if you can do a little more explaining on what you are trying to have answered there wouldn't be so much confusion and you'd most likely get your answer your searching for
Jallal
Bertram*
Jallal
this value would be a little hard to understand so I thought you should know how it is derived
Bertram
I need a deeper explanation to this value
Bertram
you're still being very ambiguous. please tell me what fundamental constant you are referring to.
Jallal
am asking this question under fundamental physical constant
Bertram
ok how do u derive the value of Avogadro's number without cramming
Bertram
fundamental physical constants cannot be derived from theory; they have to be measured.
Jallal
dimensionless quantities are obtained as ratios of quantities that are not dimensionless
Jallal
if you want to understand better then I suggest you read up on "dimensional analysis", "dimensionless quantity", and "dimensionless physical constant"
Jallal
Fundamental constants are stationary They can't be created they are just there like current... Always in ampere and others Jallal I like ur knowledge can we be friends?
Its
isn't there some debate as to whether or not the constants do actually change?
Dale
of course we can be friends. I like to have intellectual conversations and people alike.
Jallal
there's always a debate about everything as not everyone can always agree on something. this is very common among scientists. however, for the sake of your question since the universe is always heading in the direction of entropy let's assume that some constants may change given a million years
Jallal
of course this is unfounded and I don't believe that but I'm just trying to indulge you
Jallal
what is an atom
An atom is the smallest indivisible particle of an element that is capable of independent existence
akinboboye
That's conventional thinking since the Greeks and our current teaching, but it's been discovered that atoms are made up of even smaller subatomic particles, and that protons and neutrons are themselves made of "quarks". For introductory chemistry let's stick with the working definition that an atom is the smallest unit of an element
Jallal
definition of ions
Salman
an ion is an atom or molecule that has a net electrical charge. the charge of the electron is negative and is equal and opposite to that of the proton, which is positive. the net charge of an ion is non-zero.
Jallal
this is due to the total number of electrons being unequal to the total number of protons. a cation is a positively charged ion with fewer electrons than protons while an anion is negatively charged with more electrons than protons.
Jallal
because of their opposite electric charges cations and anions attract each other and readily form ionic compounds.
Jallal
thanks
Salman
so please who can throw more light on acids and base for me
Shehu
an acid is a molecule or ion capable of donating a proton (hydrogen ion, H+). it is a substance that increases the concentration of hydronium ions (H3O+) when added to water, or decreases the hydroxide concentration in water.
Jallal
how separating acid to gas
Abubakkar
a base is a proton acceptor and it's a substance that dissociates in water to form hydroxide ions (OH-). thus the base decreases the aqueous hydronium concentration in water and increases aqueous hydroxide concentration in water.
Jallal
also a reaction between an acid with a base is called a neutralization reaction and the products of the reaction are salt and water and not an acid or base
Jallal
is the smallest particle of elements
Abubakkar
for separating acid from gas there is a process called amine gas treating, also called acid gas removal. since many different amines can be used, each process is slightly different depending on which amine is used.
Jallal
can I get some example of concentrated acid
Shehu
hydrochloric acid (HCl). it can come in diluted forms which you could even apply on your hands (obviously not recommended), but that's to show its concentration. then there's highly concentrated HCl which will instantly burn your skin
Jallal
If you want a notoriously dangerous acid then that's hydrofluoric acid (HF). now that is some very nasty stuff. even with proper equipment it can still be extremely dangerous to handle
Jallal
thank you jallal
Shehu
anytime
Jallal
jallal am going to be writing an exam on chemistry can you tell me some important stuff in chemistry
Shehu
I need a little more information than that. you see there's general chemistry, organic chemistry, inorganic chemistry, analytical chemistry, biochemistry. which subject is it and what exactly will be your topic?
Jallal
I'm not really sure what you mean by important stuff so I'll give general ideas and principles, ranging from basic fundamentals to more in-depth knowledge
Jallal
chemistry is a branch of science that studies matter and change. first chemistry deals with the study of the composition and the properties of matter then chemistry deals with change or how these substances evolve when submitted to certain conditions or how one substance changes or reacts while
Jallal
interacting with a different substance
Jallal
everything is chemistry, from the chemicals in our foods to the air we breathe, even to the medicine we use. all of everyday modern life involves chemistry
Jallal
chemistry is also known as the central science without chemistry you wouldn't have the physical sciences or life sciences such as biology or applied sciences like engineering. in essence all sciences are glued together by chemistry
Jallal
an atom is defined as the basic unit of a chemical element however we now know that atoms are made of smaller particles called subatomic particles known as protons electrons and neutrons. protons and neutrons are not fundamental particles they're made of quarks. electrons are fundamental particles
Jallal
and they are not made of anything smaller
Jallal
a molecule is a group of atoms bound together which is the next step of chemical complexity. molecules represent the basic unit of a chemical compound
Jallal
there are three basic types of chemical compounds which have different bonding properties the difference is the force that holds together the atoms
Jallal
neutral molecules or compounds in nature are held together by covalent bonds. covalent bonds generally occur between two nonmetal atoms which share pairs of electrons, or bonding pairs
Jallal
then there are ionic compounds where atoms are in ionic form which is to say charged and are held together by ionic forces which give rise to large networks of oppositely charged ions. ionic bonds occur between metals and nonmetals. one ionic compound example is sodium chloride also known as salt
Jallal
then there are extended networks of atoms formed between one or more types of metal atoms which are called metallic bonds
Jallal
chemistry studies changes in matter thus a chemical reaction is a process in which one set of chemical compounds are transformed into another. this occurs when there is an interaction between the compounds in which some initial bonds are broken and some new bonds are formed.
Jallal
this happens because the energy released in forming the new bonds is greater than the energy that held the initial bonds together. this is what is known as a thermodynamically favored process. favorable thermodynamics is the most fundamental factor that leads compounds to react with each other
Jallal
another important factor that allows compounds to react with each other is known as reaction kinetics
Jallal
there is also something known as chirality, which is a geometric property of certain molecules. a molecule is said to be chiral when its mirror image is not superimposable on the molecule itself.
Jallal
essentially a chiral molecule and its mirror image are non-superimposable, like left and right hands; the pair are called enantiomers. the origin of chirality in nature is still unknown and debated, but chiral molecules serve important biological functions and many medicines rely on chirality
Jallal
then there are acids and bases. an acid is a compound such as hydrochloric acid (HCl) that is able to release a hydrogen cation or proton (H+). bases such as sodium hydroxide (NaOH) can accept protons in water, giving rise to hydroxide anions.
Jallal
in the Lewis picture, an acid is a substance that accepts a lone pair of electrons and a base is a substance that donates a lone pair of electrons. the relative acidity or basicity of solutions or mixtures is measured using a logarithmic scale called the pH scale, which goes from 0 to 14, with 7 considered neutral
Jallal
anything below 7 is considered acidic with 0 being the most acidic. anything above 7 is considered basic with 14 being the most basic.
Jallal
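The pH ranges just described can be sketched as a small Python function (purely illustrative; the function name is my own):

```python
# Classify a pH value on the 0-14 scale described above (7 is neutral).
def classify_ph(ph: float) -> str:
    if not 0 <= ph <= 14:
        raise ValueError("the pH scale runs from 0 to 14")
    if ph < 7:
        return "acidic"
    if ph > 7:
        return "basic"
    return "neutral"

assert classify_ph(0) == "acidic"    # most acidic
assert classify_ph(7) == "neutral"
assert classify_ph(14) == "basic"    # most basic
```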
stoichiometry is a way of measuring or determining the amount of each substance that is involved in a reaction (the reactants), and the amount of products that are generated. basically this is just making both sides of the reaction, the reactants and the products, equal to each other so the reaction can
Jallal
proceed
Jallal
next comes oxidation and reduction. redox processes are a type of chemical reaction in which one of the reacting compounds gets oxidized and the other gets reduced. a redox reaction involves the transfer of electrons. when a compound or atom loses electrons it is oxidized.
Jallal
when a compound or atom gains electrons it is reduced
Jallal
the most common example of a redox process is the rusting of iron. when oxygen reacts with iron it produces iron oxide. the new oxidation state of iron is +3; iron has lost three electrons, therefore it is oxidized.
Jallal
on the other hand the new oxidation state of oxygen is -2 therefore each oxygen atom has gained two electrons becoming reduced
Jallal
then there is also radioactive decay, or radioactivity. this is the process in which an unstable nucleus loses energy by the emission of radiation in the form of a particle. not all atoms that exist are stable. when one of these unstable atoms decays it releases energy in the form of particles
Jallal
this is what we call radiation. when this process takes place a new nucleus is formed and therefore also a new atom. the new atom can also be unstable and it can keep releasing radiation until it turns into a stable atom which no longer emits energy as radiation.
Jallal
if you would like me to discuss and explain other areas of chemistry please let me know I will be more than happy to
Jallal
thank you jallal
Shehu
any more important things to know
Shehu
there are lots of important things. I can write them up for hours and still have more to say which I don't mind. how much more or detail do you require? in the meantime I'll just keep typing up other important areas of chemistry.
Jallal
balancing equation,Gibbs free energy
Shehu
sure thing. I just need to type protocol development on a clinical trial and send it to my employer. please give me a few moments to finish the study design and guidelines
Jallal
in what other form do we extract zinc apart from blende?
miriam
a chemical equation is the symbolic representation of a chemical reaction. the reactant entities are on the left hand side while the product entities are on the right hand side.
Jallal
the law of conservation of mass dictates that the quantity of each element does not change in a chemical reaction, and the law of conservation of charge states that charge is conserved in a chemical reaction. therefore each side of the chemical equation must represent the same
Jallal
quantity of any particular element, and the same charge must be present on both sides of the balanced equation. this is where stoichiometry comes into play, which is the calculation of reactants and products in chemical reactions
Jallal
balancing a chemical formula for a simple chemical reaction can be easily done by trial-and-error however more complex chemical equations can be solved using a system of linear equations.
Jallal
when balancing equations one should use the smallest whole-number coefficient. if a fractional coefficient exists then multiply every coefficient with the smallest number required to make them whole which is typically the denominator of the fractional coefficient
Jallal
an example of one balanced equation is: 2 HCl + 2 Na --> 2 NaCl + H2. the 2 in H2 is supposed to be a subscript but I don't have that symbol on my phone
Jallal
if there is no coefficient in front of the chemical formula then the coefficient is automatically 1
Jallal
another example of a balanced equation is CH4 + 2O2 --> CO2 + 2H2O
Jallal
the 2 after the first oxygen on the left side is supposed to be a subscript; also the 2 after the oxygen on the right side is a subscript, and the same goes for the 2 in H2O
Jallal
the stoichiometric amount of a reagent is the optimum amount: all of the reagent is consumed, there is no deficiency of the reagent, and there is no excess of the reagent
Jallal
regardless of whether or not all the atoms are actually involved in a reaction both sides of the formula, the reactant side and the product side must be equal
Jallal
also, different elements have different atomic masses, so as collections of single atoms, molecules have a definite molar mass (the number of particles in a mole being Avogadro's constant). thus to calculate the stoichiometry by mass, the number of molecules required for each reactant is expressed
Jallal
in moles and multiplied by the molar mass of each to give the mass of each reactant per mole of reaction. the mass ratios can be calculated by dividing each by the total mass of the whole reaction.
Jallal
stoichiometry is not only used to balance equations but also in conversions, such as converting from grams to moles using molar mass as a conversion factor, or from grams to milliliters using density. for example, to find the amount of NaCl (sodium chloride) in 2.00 g, you would do the following.
Jallal
2.00 g NaCl / 58.44 g mol^-1 = 0.034 mol NaCl
Jallal
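The same gram-to-mole conversion, as a short Python sketch (the constant and function names are my own):

```python
# Grams to moles, using molar mass as the conversion factor.
MOLAR_MASS_NACL = 58.44   # g/mol (≈ 22.99 for Na + 35.45 for Cl)

def grams_to_moles(grams: float, molar_mass: float) -> float:
    return grams / molar_mass

n = grams_to_moles(2.00, MOLAR_MASS_NACL)
print(f"{n:.3f} mol")   # ≈ 0.034 mol, matching the calculation above
```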
stoichiometry is also used to balance chemical equations, known as reaction stoichiometry, to determine molar proportions. for example two diatomic gases, hydrogen and oxygen, can combine to form a liquid, water, in an exothermic reaction shown by the following: 2 H2 + O2 --> 2 H2O
Jallal
stoichiometry can also be used to find the quantity of a product yielded by a reaction. if a piece of solid copper (Cu) is added to an aqueous solution of silver nitrate (AgNO3), the silver (Ag) would be replaced in a single displacement reaction forming aqueous copper (II) nitrate (Cu(NO3)2) and
Jallal
solid silver. how much silver is produced if 16.00 grams of Cu is added to the solution of excess silver nitrate?
Jallal
the following steps would be used: 1. write and balance the equation. 2. Mass to moles: convert grams of Cu to moles of Cu. 3. Mole ratio: convert moles of Cu to moles of Ag produced. 4. Mole to mass: convert moles of Ag to grams of Ag produced
Jallal
the complete balanced equation would be: Cu + 2 AgNO3 ---> Cu(NO3)2 + 2 Ag
Jallal
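The four steps can be sketched in Python, using standard molar masses for Cu and Ag (a worked illustration with my own variable names, not part of the original lesson):

```python
# Mass -> moles -> mole ratio -> mass, for Cu + 2 AgNO3 -> Cu(NO3)2 + 2 Ag
M_CU, M_AG = 63.55, 107.87           # molar masses, g/mol

grams_cu = 16.00
moles_cu = grams_cu / M_CU           # step 2: mass to moles
moles_ag = moles_cu * 2              # step 3: mole ratio, 2 Ag per Cu
grams_ag = moles_ag * M_AG           # step 4: moles to mass
print(f"{grams_ag:.1f} g of Ag")     # ≈ 54.3 g
```

Each line corresponds to one of the numbered steps above, which is why writing and balancing the equation first (step 1) matters: it supplies the 2:1 mole ratio.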
I think I went into more depth than you wanted. let me give a simplified version of balancing equations
Jallal
chemical equations are used to describe the rearrangement of atoms and electrons during a chemical reaction.
Jallal
the atoms can be neither created nor destroyed and always react in fixed proportions, so each side of the equation must contain the same number of each kind of atom. this is called a balanced equation
Jallal
example: for the reaction of hydrogen and oxygen to give water, H2 + O2 ---> H2O
Jallal
this equation is unbalanced. there are two oxygens on the reactant side, only one on the product side. to balance it, the number of product water molecules must be increased to 2, and then the number of reactant H2 molecules must also be increased to 2.
Jallal
this provides the balanced equation: 2H2 + O2 ---> 2H2O
Jallal
you can also visually draw these two equations to show you the unbalanced equation and the balanced equation.
Jallal
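One way to check a balanced equation in code is to count atoms on each side, as a complement to drawing the molecules out. A minimal Python sketch (the representation and names are my own):

```python
# Atom-count check for the hydrogen/oxygen equations above. Each formula
# is written as an {element: count} dict instead of parsing subscripts.
from collections import Counter

def side_total(side):
    """Sum atom counts over (coefficient, formula-dict) pairs."""
    total = Counter()
    for coeff, formula in side:
        for element, count in formula.items():
            total[element] += coeff * count
    return total

H2, O2, H2O = {"H": 2}, {"O": 2}, {"H": 2, "O": 1}

# Unbalanced: H2 + O2 -> H2O (two O on the left, one on the right)
assert side_total([(1, H2), (1, O2)]) != side_total([(1, H2O)])
# Balanced: 2 H2 + O2 -> 2 H2O (4 H and 2 O on each side)
assert side_total([(2, H2), (1, O2)]) == side_total([(2, H2O)])
```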
that concludes balanced equations. now on to Gibbs free energy
Jallal
Gibbs free energy is part of thermodynamics. it is also known as free enthalpy, and is a thermodynamic potential that can be used to calculate the maximum reversible work that may be performed by a thermodynamic system at a constant temperature and pressure.
Jallal
the Gibbs free energy equation is: ∆G° = ∆H° - T∆S°, and is measured in joules
Jallal
∆G° is the standard Gibbs free energy change. ∆H° is the change in enthalpy. ∆S° is the change in entropy. T is the temperature, always in kelvin (298.15 K under standard conditions). the second law of thermodynamics helps us determine whether a process will be spontaneous, and we use changes in Gibbs free energy to predict whether a reaction will be
Jallal
spontaneous in the forward or reverse direction or whether it is at equilibrium
Jallal
the second law of thermodynamics says that entropy of the universe always increases for a spontaneous process: ∆Suniverse=∆Ssystem + ∆Ssurroundings > 0
Jallal
at constant temperature and pressure the change in Gibbs free energy is defined as ∆G=∆H-T∆S
Jallal
when ∆G is negative a process will proceed spontaneously and is referred to as exergonic
Jallal
the spontaneity of a process can depend on the temperature
Jallal
in chemistry a spontaneous process is one that occurs without the addition of external energy. a spontaneous process may take place slowly or quickly, because spontaneity is not related to kinetics or reaction rate.
Jallal
a spontaneous process can be exothermic or endothermic, which is to say that spontaneity is not necessarily related to the enthalpy change of a process, ∆H
Jallal
we can determine if a process will occur spontaneously by using the second law of thermodynamics which states any spontaneous process must increase the entropy in the universe. this can be expressed mathematically as follows: ∆Suniverse=∆Ssystem + ∆Ssurroundings > 0 for a spontaneous process
Jallal
to determine spontaneity we use the Gibbs free energy equation shown before. ∆G typically has units of kJ/mol-rxn, also known as kilojoules per mole of reaction
Jallal
when using Gibbs free energy to determine the spontaneity of a process we are only concerned with the changes in G rather than its absolute value. the change in Gibbs free energy for a process is thus written as ∆G, which is the difference between Gfinal, the Gibbs free energy of the products
Jallal
and Ginitial, the Gibbs free energy of the reactants. ∆G=Gfinal-Ginitial
Jallal
the equation ∆G=∆H-T∆S allows us to determine the change in Gibbs free energy using the enthalpy change ∆H and the entropy change ∆S. we can use ∆G to determine whether a reaction is spontaneous in the forward direction or the backward direction, or if the reaction is at equilibrium
Jallal
when ∆G<0, the process is exergonic and will proceed spontaneously in the forward direction to form more products
Jallal
# Ethanol
Ethanol is a plant fermentation by-product. Ethanol is naturally produced by the fermentation of sugars by yeasts. It can also be produced through petrochemical processes such as the hydration of ethylene. Ethanol has medical applications as a disinfectant and antiseptic. It is useful as a chemical solvent and in the synthesis of organic compounds as well as an alternative fuel source.
## Ethanol Formula
It is also called ethyl alcohol, grain alcohol, drinking alcohol, spirits, or simply alcohol. It is an organic chemical compound. Ethanol is an alcohol with the chemical formula $$C_2H_6O$$. The chemical formula of ethanol can also be written as $$CH_3-CH_2-OH$$ or $$C_2H_5OH$$: an ethyl group linked to a hydroxyl group.
The formula is often abbreviated as EtOH. Ethanol is a volatile, flammable, colorless liquid with a slight characteristic odour. Ethanol is a psychoactive substance, a recreational drug, and the active ingredient in alcoholic drinks.
### Natural Occurrence of Ethanol
Ethanol is a by-product of the metabolic processes of yeast. As such, ethanol is present in any yeast habitat and also in overripe fruit. Ethanol produced by symbiotic yeast can be found in bertam palm blossoms. It is also produced during the germination of many plants as a result of natural anaerobiosis. Ethanol has been detected in outer space, forming an icy coating around dust grains in interstellar clouds. Minute amounts of endogenous ethanol and acetaldehyde have also been found in the exhaled breath of healthy volunteers.
### Properties of Ethanol
At room temperature, ethanol is a liquid. Ethanol has a melting point of 159 K and a boiling point of 351 K. Ethanol is the active ingredient of all alcoholic drinks. It is also used in making different medicines such as cough syrups, tonics, and tincture of iodine, as it is a very good solvent. Ethanol is soluble in water in all proportions.
Furthermore, the consumption of even a small quantity of pure ethanol is dangerous, and consumption of alcohol over a long period may cause diseases and adverse health effects. Ethanol, when oxidised with nascent (monatomic) oxygen, yields ethanoic acid. Ethanol is a combustible material: it produces carbon dioxide, water, heat, and light when burnt in the presence of oxygen.
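The combustion and oxidation reactions described above can be written as balanced equations (the second uses nascent oxygen, denoted [O]):

$$C_2H_5OH + 3O_2 \rightarrow 2CO_2 + 3H_2O$$

$$CH_3CH_2OH + 2[O] \rightarrow CH_3COOH + H_2O$$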
### Solvent Properties
Ethanol is a versatile solvent and is miscible with water. It is also miscible with many organic solvents, including acetic acid, acetone, benzene, carbon tetrachloride, chloroform, diethyl ether, ethylene glycol, glycerol, nitromethane, pyridine, and toluene. Ethanol’s main use as a solvent is in making tincture of iodine, cough syrups, etc. Ethanol is also miscible with light aliphatic hydrocarbons, such as pentane and hexane, and with aliphatic chlorides such as trichloroethane and tetrachloroethylene.
Ethanol’s miscibility with water contrasts with the immiscibility of longer-chain alcohols, those of five or more carbon atoms, whose water miscibility decreases sharply as the number of carbons increases. Mixtures with dodecane and higher alkanes show a miscibility gap below a certain temperature, about 13°C for dodecane. The miscibility gap tends to get wider with higher alkanes, and the temperature for complete miscibility increases.
### Uses of Ethanol
In the medical field, alcohol is used in various forms as an antiseptic, disinfectant, and antidote. Alcohol applied to the skin is used to disinfect it before a needle stick and before surgery. It may be used both to disinfect the skin of the patient and the hands of the healthcare providers. It can also be used to clean other areas and in mouthwashes. Ethanol is used to treat methanol or ethylene glycol toxicity when fomepizole is not available.
It is useful as medical wipes and, most commonly, in antibacterial hand sanitizer gels as an antiseptic for its bactericidal and anti-fungal effects. Ethanol kills microorganisms by dissolving their membrane lipid bilayer and denaturing their proteins, and is effective against most bacteria, fungi, and many viruses.
In the cosmetics and beauty products industry, ethanol is a common ingredient in lotions, where it acts as a preservative. Ethanol is used in paints because it is an effective solvent, and in cleansing products to prevent the growth of organisms. It is used as a colour additive and for enhancing flavour. It is used in gasoline to prevent engine knocking and to maintain drivability.
### Ethanol Intoxicates Human Beings
Ethanol is an intoxicating agent. When people drink ethanol, the liver cannot filter all of it at once, so ethanol flows to different parts of the body, including the brain. As ethanol reaches the brain, the compound disrupts the signalling across the gaps between neurons. As the neurons cannot function properly, people's reactions become slower.
Their speech becomes slurred and their bodies fail to maintain proper neural control. Moreover, the reward centre of the brain gets triggered upon drinking alcohol and dopamine is released. This tricks the brain into thinking that ethanol is something good because it makes people feel happy and energetic, so people keep on drinking ethanol.
Drinking ethanol can cause a lot of undesired effects on the body. In addition to that, because of the impairment of the proper functions of the body and brain, drinking ethanol leads to accidents. A few hours of happiness can give you whole life’s pain.
## FAQs about Ethanol
Question 1: What are the solvent properties of ethanol?
Answer: It is a versatile solvent and is fully miscible with water. It is also miscible with many organic solvents, including acetone, carbon tetrachloride, ethylene glycol, chloroform, benzene, diethyl ether, glycerol, pyridine, nitromethane, and toluene. Light aliphatic hydrocarbons, such as pentane and hexane, and aliphatic chlorides such as tetrachloroethylene and trichloroethane are also miscible with this compound.
Question 2: What are the Properties of Ethanol?
Answer: The important properties of ethanol are:
• Has no colour
• Remains in the liquid form at room temperature
• Soluble in water
• Combustible material: when burnt in the presence of oxygen, ethanol produces carbon dioxide, water vapour, heat, and light
• The oxidation of ethanol with Nascent Oxygen results in ethanoic acid
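The combustion and oxidation reactions listed above can be written as balanced chemical equations (standard textbook forms):

```latex
% Complete combustion of ethanol in oxygen (heat and light are released)
\mathrm{C_2H_5OH + 3\,O_2 \longrightarrow 2\,CO_2 + 3\,H_2O}

% Oxidation of ethanol with nascent oxygen to ethanoic acid
\mathrm{CH_3CH_2OH + 2\,[O] \longrightarrow CH_3COOH + H_2O}
```

In each equation, the atoms balance on both sides: two carbons, six hydrogens, and (counting the oxygen supplied) the same number of oxygens before and after the reaction.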
Question 3: Is ethanol dangerous?
Answer: Though ethanol is widely used, it is a dangerous chemical. Ethanol is highly flammable, so its flash point is very important to know when using it. Although ethanol is used in alcoholic drinks, the ingestion of pure ethanol alone can cause coma and death.
Question 4: What are the uses of ethanol?
Answer: Ethanol is a very important industrial chemical that is used as a solvent in the synthesis of other organic chemicals. It is also used as an additive to gasoline in the automotive industry, forming a mixture called gasohol. Ethanol is the key component in almost all alcoholic drinks such as beer, wine, and spirits. It is also used in the cosmetics and beauty products industry, in paints, and as a colour additive, antidote, and antiseptic.
# FAQ
### General
Q: I'd like to meet with a fellowships advisor. How can I do that?
Q: May I send questions to the fellowships advisors via email?
A: Yes. Email is good for brief, factual questions. Don't expect an immediate reply; 2-3 days is typical during term-time, and a week (or sometimes longer) is typical during the summer. If you want to discuss your plans in general, or you want feedback on an application, come see us during office hours. We will give essay feedback via email only during the summer and only if you are unavailable to meet in person or on the phone. Questions regarding eligibility or other details of particular grants are best directed to the grant body (in many cases OCS).
Q: What is the Guide to Grants?
A: The Guide to Grants and the online supplement are lists of fellowships published by the Office of Career Services. See the List of Fellowships page for a more detailed description of each.
Q: What things should I do before coming to meet with a fellowships tutor?
A: Before you come speak with us, think through some of your ideas about what you might like to do, and have a look through the List of Fellowships on this website to get some ideas what fellowships you might apply for. Bring a list of questions with you when you come to meet with us.
Q: What kinds of fellowships are out there? How do I find out about them? Which ones should I apply for?
A: There are fellowships for underclassmen and for graduating seniors. There are fellowships for graduate study, for independent projects in a foreign country, for public service, and for a year as a teacher in a foreign country. There are fellowships that fund projects during the summer, other fellowships that fund a year of work, and still others that fund up to four years of study. Some fellowships are for U.S. citizens; some are specific to Harvard students; some have unusual requirements, such as having a Scottish ancestor. Some fellowships require very strong academic records; others don't depend at all on your grades.
In short, there are fellowships for just about everything. Once you have an idea, browse the List of Fellowships on this website to find out about them.
We can't and won't tell you which ones to apply for. But after you've looked at the different fellowship options, we can help you strategize which ones will be the ones that best fit you and your goals, and then we can advise you as you prepare your applications.
Q: I missed a fellowships meeting or infosession. Is there a make-up session, or could I meet with you to get the information I missed?
A: None of the meetings and infosessions have make-up sessions, and we aren't able to meet with each student who is unable to attend these sessions (nor is the OCS fellowships staff). A few infosessions are offered multiple times. But for the most part, you should arrange to have a friend attend, take notes, and collect a set of handouts for you.
Q: I'm applying for summer grants, and many of the applications refer to CARAT. What's the CARAT? Is it different from the applications for these grants?
A: CARAT (Common Application for Research and Travel) is confusing, and students often have many questions about it. The CARAT allows different funding bodies to gather information and then to share information with each other about who has won scholarships and for how much money. Each funding body still has its own application, though obviously some of them direct you to the CARAT in addition.
Q: Can you tell me my chances of winning the fellowship I'm applying for?
A: Generally not. We can help you assess whether you'll be a competitive candidate for particular fellowships. And in a few cases we can give you an idea of how many students apply and how many scholarships are awarded. For most fellowships, if we know this sort of thing we put it in the description on the List of Fellowships.
### Preparing Applications
Q: Can you give me feedback on my fellowship personal statement or proposal?
A: Yes! Please come see us during our office hours. Make sure to send us your essay no later than the day before our appointment.
Q: Will you help me proofread my essay?
A: No. We may point out some errors, but we will read primarily for content and substance. Try reading your essay aloud or reading one sentence or line at a time. We highly recommend getting your family members and/or friends to help you proofread and also to give you feedback on your essays.
Q: My essay exceeds the word limit. Can I submit it as-is? Can you help me cut it down to size?
A: We recommend that you submit essays that do not exceed the word limit. Some students have submitted essays that run over the word limit, though not grossly so (for example, 1050-1100 words for a 1000-word personal statement), and still succeeded in the fellowship competition. But that does not mean a future fellowship committee won't choose to enforce the word limit more strictly. We can give you lots of suggestions on how to modify your essays, but we won't go line by line and help you edit down to reach a word limit.
Q: Can you give me feedback on my CV, resume, or activity list for a fellowship application?
A: Yes! One of our former fellowships tutors, Courtney Peterson, has written a brief guide to CV's for Lowell House Fellowships applicants. Email lohofell@fas to get a copy of the guide. Once you've read it and revised your CV according to the suggestions in it, make an appointment to see us during our office hours and we can give you feedback on your CV. Make sure to email it to us no later than the day before you meet with us.
Q: How should I format my application to make it look professional?
A: For CV's or resumes, follow the guidelines in Courtney's guide:
For essays and proposals:
Use one-inch margins on all sides. (Note that Microsoft Word 2003 and earlier have 1.25" left and right margins as the default. Change them.)
Make sure that your text is justified against both margins; essays with left-justified text look unprofessional (all journal articles, books, and magazine and newspaper articles have justified text).
Many essays look good single-spaced with a line of white space in between paragraphs; if you do this, do not indent your paragraphs.
If you indent paragraphs, use 1/4" indents (Microsoft Word has a default tab stop of 1/2", which makes indents that are much too large).
Avoid unusual fonts; you can't go wrong with Times New Roman or Garamond. Serif fonts (like Times and Garamond) generally look better when printed; sans-serif fonts like Helvetica generally look better on computer screens. Avoid fixed width fonts like Courier that make it look like you're using a typewriter.
### Recommendations
Q: Can I get a letter of recommendation from my TF? What about a letter co-signed by the professor and TF?
A: It is acceptable to submit letters from teaching fellows for some fellowships. Some letters written by TF's can be very effective, but it is in most cases much better to submit a letter written by a faculty member. TF-written letters tend to be weaker for four reasons: 1) TF's have a smaller basis of comparison, since they have probably taught only a few sections, whereas faculty have in general taught many more students, often at multiple universities. A strong statement about your academic performance carries more weight if it comes from someone who has taught hundreds or thousands of students rather than dozens. 2) Professors' opinions carry more weight than graduate students' because of their relative stature. 3) TF's are less practiced at writing letters of recommendation. 4) TF's have had few occasions to read letters of recommendation, so they don't have as much of a sense of what makes a letter effective.
One possibility is to have a letter co-signed by a TF and a professor. Even when it is apparent that the letter was still written by the TF, this is marginally more effective, since it at least indicates that the professor has read the letter and endorses its contents. A better idea would be to ask your professor for a letter, let the professor know who your TF was, and have your TF email the professor to offer some input to the letter. If a professor writes a letter, she might offer her own opinion and also share what your TF says, perhaps by quoting the TF's comments in the letter.
When possible, faculty letters are better. So (pay attention sophomores) make sure that you get to know faculty members! We suggest that you set a goal of getting to know one faculty member each semester. For larger classes, you can do this for example by making an occasional trip to office hours or by inviting a professor to the faculty dinner at Lowell House.
Q: Does it matter whether my recommender is a full professor, assistant professor, or lecturer?
A: No. You should choose recommenders who will write you the strongest letters. Letters from assistant professors are very unlikely to carry less weight than letters from full professors. Also, a letter from a famous professor who isn't familiar with your work is much less effective than a letter from an assistant professor who has read your work and perhaps also interacted with you in class or lab. A strong letter from a famous professor (or someone known personally by a selection committee member) is likely to be very effective, but most fellowship winners have no such thing.
Q: Is it a good idea to get letters of recommendation from coaches, work supervisors, or research PI's?
A: In most cases, yes. Letters from people who have supervised you in some context can be very effective. However, it depends on the application. If you are applying to graduate school, a letter from your coach may not be as effective as an additional letter from a professor in your field. For rigorous academic fellowships, there's nothing wrong with submitting letters entirely from faculty members who've taught you at Harvard. But if you are submitting three or more letters of recommendation, it is often nice to have one of your letters be from a coach or summer boss to speak to other aspects of your accomplishments.
Q: Is it a good idea to get letters of recommendation from concentration advisors, freshman/sophomore advisors, or resident tutors? What about family friends and other people who know me well?
A: It depends, but generally not. The most effective letters come from people who have first-hand exposure to your accomplishments. That includes professors who have read and graded your work, and/or seen your contributions in section. It also includes coaches, research supervisors, and your boss at a summer job. These people have seen what you produce and they have been in a position to evaluate it. In contrast, your tutors and concentration advisors are likely people with whom you talk about what you're doing, but they don't get the same level of exposure to it, and they don't have the chance to compare you to your classmates, teammates, or co-workers. For almost all fellowships, it is much more important to have referees who know your work. It's much less effective to have referees whom you just feel know you well. There are some exceptions to this, such as PRISE and summer school proctorships, for which the applications request a letter from someone who can speak to the way you act in a residential community like Lowell House. But in most cases, references that are primarily character references will be weaker letters.
Q: Can I have a peer write a letter? What if my main activity has been Model UN or the Crimson and faculty members can't attest to my accomplishments?
A: Letters from peers are not likely to be considered, so we advise you not to submit them unless you have some very exceptional circumstance. Many students--probably a majority of students--devote a large amount of time to extracurricular activities in which there is no supervisor (aside from another student) who can attest to the student's accomplishments. That's not a big problem, and you won't be disadvantaged because other students will be in the same situation. You can highlight your contributions in your CV, and if it fits with your proposal you can elaborate on them in your essay. You may also get a chance to speak about them in an interview. For Marshall, Rhodes, and Mitchell Scholarships, it is also possible for these to be discussed in more depth in your institutional endorsement letter.
Q: How should I go about asking someone for a letter of recommendation? How much notice should I give, what materials should I provide, and how can I do this politely?
A: It is polite to request your letter at least two weeks before your recommender has to submit it. 3-4 weeks is preferable. If you ask less than two weeks before the deadline, be very apologetic and realize that you are calling in a favor (the exception to this is when the professor has previously written you a letter and can use the same one with little to no modification).
Have your professor notify you when the letter is submitted! It is definitely acceptable (and usually very necessary) to remind your professor several times during the days prior to the deadline. A few faculty submit letters well in advance, but most don't.
Q: I asked my professor or supervisor for a letter, and she told me to draft one and bring it to her to sign. Is this a good thing or a bad thing? What should I do?
A: This is a bad thing, and you should either try to convince this person to write the letter herself, or else you should probably find someone else to write you a letter. The letter will likely be weak because the most valuable things that a professor or supervisor can write in a letter are probably things that you won't even know to write. In addition, if the letter doesn't seem genuine to the committee that's reading it, your entire application will be viewed more skeptically. For faculty members, it is part of their job to write letters of recommendation for students; a professor who asks you to write a letter is being irresponsible. If your professor asks you to write the letter, you can push back and say that you think it would actually be much better if she wrote it herself. But if it seems like she isn't willing to do so, or is likely to write a brief and cursory letter, then you should seek a letter from someone else.
Q: Is it OK to get a recommendation from one of my high school teachers or someone else I worked with before coming to Harvard?
A: Generally not, unless you did something of national significance (e.g., published research) during high school. In any case, recommendations should attest to things you have accomplished since coming to college. And letters from high school teachers will be even less effective because your referee will be comparing you to a cohort of 17-year-olds.
Q: What is the House file and how do I use it?
A: There is a file in the Lowell House office where you can save copies of letters of recommendation that have been written for you. Here's how it works:
1) Print out the "Request for Recommendation" form from the Fellowships Resources page, fill out the appropriate section, and give it to your recommender, along with a university mail envelope addressed to Lowell House Office, 10 Holyoke Pl.
2) Your recommender will fill out her section of the form and attach her letter to it. Keep in mind that this letter is just a copy of a letter that your recommender is submitting for some other purpose; this doesn't constitute submitting the letter to anything.
3) At some point in the future, if you would like a photocopy of that letter used for another purpose, you can use this form to have the House office mail a hard copy of the letter to an address that you specify.
The best way to use the file is to have every recommender send a copy of her letter to the House file whenever you request a letter. That way, the letters remain archived and accessible for eventual future use. Putting letters in your House file is also useful for House tutors who may be assisting you with fellowship applications.
Even if there is an old letter from your recommender on file in the House office, you should always contact your recommender and ask him or her to send a fresh copy to the selection committee. Letters that are addressed and tailored to a specific competition are always better received. (It will be of limited value, for example, to send a photocopy of a letter addressed "Dear Phi Beta Kappa Selection Committee" to the selection committee for the Fulbright Scholarship.) Sending a photocopy of an old file letter should only be a last resort; in fact, some fellowships committees may even refuse to accept such a letter.
Q: I feel bad about asking the same professors for lots of letters. Should I ask someone else to write letters to prevent it from being too burdensome?
A: No! Professors don't feel bad when you keep asking them for letters (and that's part of their job). Besides, most of the time is spent actually writing the letter. Once the letter is written, it takes very little time for a professor to change the name of the program for which you are applying, print it, sign it, and mail it. For each application, ask whomever you think will write you the best letter for that fellowship application.
Q: Should I do anything to thank the people who write letters for me?
A: Yes! Write hand-written thank you notes, and deliver them promptly. Sometimes students give small gifts, but that's not at all necessary; just a simple card or note is great.
### Eligibility
Q: Can I get a fellowship as a sophomore?
A: Yes, you certainly can win a fellowship as a sophomore. There are fellowships for summer research and travel, which are definitely open to and often geared especially for sophomores. Have a look at the List of Fellowships on this website for fellowships for which you might be eligible.
Q: Are my grades high enough to win a fellowship?
A: It depends on your grades and on the fellowship. Some fellowships, such as the Churchill and Marshall, have strict GPA requirements; these apply regardless of your in-concentration GPA and regardless of what university you attend. If you fall below the threshold, you aren't eligible. Other fellowships, including most of Harvard's fellowships to the U.K., are rigorous academic fellowships that generally require roughly a 3.7 or higher but don't have a strict cut-off. Still others, like traveling fellowships, public service fellowships, and most summer grants don't depend strongly on your grades at all. Read about the descriptions in the List of Fellowships and see whether you think you're eligible. If you have any doubts, contact us.
Q: My last semester at Harvard will be in the fall rather than the spring. Do I count as a junior or as a senior?
Q: I'm doing a master's during my fourth year at Harvard. Do I still count as an undergraduate?
A: Yes. If you intend to remain at Harvard for four years, you count as a junior during your third year and as a senior during your fourth year. (Also, your A.B. and A.M. are awarded simultaneously at Commencement.)
Q: I have advanced standing and I'm planning to graduate in three years. Do I still count as senior?
A: Yes. If you intend to remain at Harvard for three years, you count as a senior for your final year and should be eligible for all of Harvard's fellowships for graduating seniors.
### Marshall and Rhodes Scholarships
Q: What kind of grades do I need to be competitive for these fellowships?
A: The Marshall Scholarship requires a 3.70 or higher; a very significant majority of winners have above a 3.85. For the Rhodes, Harvard usually says that about a 3.7 or higher is necessary, but in practice a 3.8 or higher is needed in most cases to have a chance at being invited to interview. No one with less than a 3.8 has won in the last 5+ years.
Q: When do I need to start my application?
A: Ideally, you should begin the process before you leave Cambridge at the end of your junior year. Before you depart for the summer, speak with the fellowships advisors and with faculty members. During the end of the semester and the first part of the summer, figure out which degree program you want to pursue. And by mid-summer, you should be working on an essay. To ensure sufficient time to obtain feedback and revise, aim to complete a good first draft before the end of July.
Q: Should I apply in my home region/district or in the Boston region (Marshall) or District 2 (Rhodes)?
A: For these competitions, you can apply either in the region where you attend school or the region where you live. All regions/districts are extremely competitive, and it's impossible to predict whether you'll have better chances by applying in one region compared to another. That said, there are several factors to consider. One is cost. Marshall pays for transportation to your interview; Rhodes applicants are responsible for paying their own way (Harvard doesn't reimburse you, though I don't know that anyone's ever asked). Another factor to consider is time: If you're invited to interview, it will be in mid-November, when you're quite busy with other things. You have the choice of flying across the country to interview or taking a 10-minute ride on the T or in a cab. Finally, you might consider the competition. There are great students in every district, but if you apply in the Boston region or District 2, many of the students competing against you will be your peers at Harvard. The selection committees don't have qualms about choosing multiple people from the same school, but you might not want to be in direct competition with your classmates. The one area in which you might be disadvantaged is at the endorsement round. Harvard always has many more students applying from Massachusetts than from other regions, so you may face stiffer competition at the endorsement round if you don't apply from your home district.
Q: I'm planning just to apply for the Marshall or just for the Rhodes. Is that a good idea, or should I apply for both of them?
A: In nearly all cases, we advise you to apply for both the Marshall and the Rhodes. All Rhodes applicants should certainly apply for the Marshall unless they have below a 3.70 (in which case they're unlikely to be competitive for the Rhodes anyway). Marshall applicants should apply for the Rhodes only if there's a suitable program at Oxford that interests them. Even if you think you're better suited for one program rather than another, it's worth putting your hat in the ring for both because the chances of winning either of them are so low. We know people who've won a Rhodes who seem more like Marshall candidates and vice versa.
Q: Can I edit my application after submitting it to Harvard for endorsement?
A: Yes. This is a good time to proofread and make final edits. However, you should not count on this period as a time to get your application in good shape; if it isn't already very strong by this point, you are unlikely to be endorsed. If you change your proposed course of study, it is essential to notify your fellowships tutor.
Q: Are there any restrictions for the Marshall on where I can study?
A: You may study at any university in the U.K.; however, if your first choice university is Cambridge, Oxford, or LSE, then your second choice university may not be any of those three. This information cannot be found in the Rules for Candidates or Memorandum of Guidance on the Marshall website; this rule is only stated in the official, online Marshall application. If your first and second choice universities don't fit these guidelines, you are unlikely to be endorsed by Harvard. Also, you need to read carefully on the Marshall website what degrees they will fund; in the past, they have not funded study toward a second B.A. or an MBA.
Q: How rigorous is Harvard's endorsement process?
A: Very. Only about 40-50% of Harvard's applicants receive university endorsement. Applications must be strong, polished, and thoroughly motivated. Applications that have not undergone extensive editing are unlikely to receive endorsement. Even if you are a varsity athlete and junior Phi Beta Kappa member with a 3.95 GPA, you should not take endorsement for granted. If a student like that submitted a hastily written essay or something little more than a narrative form of her CV, then she would be unlikely to be endorsed.
Q: What do the Rhodes folks mean by "activities list"?
A: It's essentially a CV with the education section omitted; the key is that you need to describe each of your activities and your role in them. All this is thoroughly described in the CV guide written by former fellowships tutor Courtney Peterson, which is available upon request by emailing lohofell@fas.harvard.edu.
Q: How hard is it to get a third year of funding?
A: Students pursuing Ph.D.'s are granted third-year funding nearly automatically. Students pursuing other degrees still have a good chance of obtaining funding for a third year, but it is not guaranteed.
### Fulbright Grants
Q: The Fulbright application procedure seems really complicated. What are the main parts of the application process?
A: About a month before the deadline, students must submit an "intent to apply" statement to OCS (ocsgrant@fas). This should be a rough draft of your proposal, about 1/2 to 1 page long (your full draft may be up to 1000 words).
Then, your full application, including letters of recommendation, is due in the OCS fellowships office in September. You will then be scheduled for an interview with a member of the Harvard faculty; these interviews tend to be very cordial and are not high-stress. Following the interview, OCS will write up an evaluation of your application and submit it to the national Fulbright committee along with your application. Everyone who applies gets forwarded on to the national round. We have no idea how much weight the U.S. Fulbright committee puts on Harvard's evaluation of your application.
The U.S. Fulbright committee will create shortlists of applicants, whose applications are then forwarded to the appropriate host country. You will be notified of your status in the competition at this point. If you make it to the host country round, in most cases you have about a 50% chance of winning a grant.
Finally, the host countries will determine the winners and you will be notified. This rarely happens before April and may not occur until June.
Q: Can I edit my application after submitting it for the Harvard round?
A: Yes. You are welcome to continue editing your application before the national submission deadline. However, Harvard's evaluation will be based only on the materials you submitted by the Harvard deadline. If you change what it is that you're proposing to do, you must contact your fellowships tutors.
Q: What do they mean by curriculum vitae? I've heard that this isn't a typical CV or resume.
A: What the Fulbright folks mean by curriculum vitae is an intellectual autobiography: what are your interests, and how did they develop. That is, it's more of a personal statement. The Fulbright committee wants you to include all the usual components of application essays: what do you want to do, why do you want to do it, how does it fit in with what you've done before, what motivates you, what do you see yourself contributing in the future, etc. For the Fulbright, you can spread out your essay into two statements, the 1000-word proposal and the 500-word personal statement. Due to the length constraints, some of the things you would ordinarily put in a personal statement will inevitably have to spill over into the proposal statement instead; that's fine.
Q: What are my chances of winning? Are teaching grants less competitive than full grants?
A: We can't tell you offhand, but on the Fulbright website, you should be able to find competition statistics for the most recent year: you can see how many applications were received and how many grants were given for each type of grant and destination country.
### U.K. Fellowships
Q: How do I figure out what are the good programs in the U.K. in my field?
Q: Should I contact people I want to work with?
A: Yes, definitely. It is important for you to establish that you can work with the faculty members of your choice (and that they will be remaining at your choice of university, instead of moving or taking a sabbatical); this is both to ensure that you'll have a satisfying educational experience and to show the selection committee that your proposed course of study is feasible.
If you are planning to do a Ph.D., it is critical to arrange an advisor in advance. Unlike most Ph.D. programs in the U.S. (in which you are admitted to the program without having a thesis supervisor), in the U.K. you are admitted to the program to work with someone in particular. So to get admitted, you need to have found an advisor who is willing to take you on. As a top Harvard student who would be coming with your own funding, you should have some good options.
Unless you already have contacts in the U.K., you'll just have to email professors out of the blue. Introduce yourself, give a brief overview of your background, explain that you're applying for a fellowship and hoping to study in their department, and express interest in working with them. You might ask whether they'll be taking students to work on projects that you're interested in. Not all professors will reply, and even fewer will reply promptly, so email several of them and leave yourself plenty of time to wait for replies.
Q: Do I also have to apply directly to the graduate school or program?
A: In almost all cases, yes. The scholarship or fellowship provides funding, but you must still get admitted to the graduate program. Thus, you must make sure that you meet the requirements for the graduate program. This is not to be taken for granted: a few programs have very high GPA minimums (studying international relations at Oxford may require a GPA around 3.85 or higher). This is especially important if you had a less-specialized concentration like Social Studies, or you're planning to do a graduate degree in a field that's different from your undergraduate concentration: many British graduate programs expect you to have the equivalent background to someone with a solid undergraduate degree in that discipline, so you should check with that department to ensure that you meet the requirements before you invest a whole lot of time in the application.
A few fellowships don't require you to obtain admission to a degree program. This includes the Harlech, which gives you visiting student status at Oxford, and the Harvard-Cambridge, which will fund a degree program but does not require you to enroll in one.
Q: What is the deadline for my program? If there are several application dates (may be called "gathered fields") or rolling admissions, is it fine to apply at any time?
Q: What's meant by "College" at Cambridge and Oxford?
Q: How competitive are Harvard's U.K. fellowships? What kind of grades do I need?
A: Most of them are very competitive. All except the Harvard-Cambridge explicitly require a strong academic record. For the Knox/Henry/von Clemm/Herchel Smith (non-science) and for the Eben Fiske, a 3.7 or higher is probably needed to be competitive. For the Paul Williams, perhaps a 3.6 or higher. For the Herchel Smith, at least a 3.5, but in-concentration GPA should be higher. The Harvard-Cambridge doesn't have a minimum GPA. Most but not all winners have very strong academic records. The selection committee gives strong consideration to students whose academic performance has improved significantly over the course of their time at Harvard.
### Traveling Fellowships
Q: What is a traveling fellowship?
A: Traveling fellowships are a strange and wonderful opportunity found, as far as we know, only at Harvard. It's a grant to fund a year of purposeful travel in a foreign country. We realize that "purposeful travel" is pretty vague. It means that you don't have a job and you're not studying for a degree, but you still have some goal or project that you intend to pursue during your year. They give you somewhere around $15,000 to $20,000 to last you for at least 9-10 months.
"Traveling fellowship" is also a misleading shorthand: winners travel to their destination countries, but not so much once they're there. These fellowships support a year of immersion in a foreign culture, which you achieve by engaging with the host community in some way. You're not limited to staying in one city or region, but you're expected not to be itinerant nor engaging superficially as a tourist might.
Your proposal is important because it needs to demonstrate your genuine interest in your destination country and frame your time there, describing what you will do and how you will interact with people. But the selection committee will fund you not because they think your proposal is important or meritorious, but because they think you stand to gain from the experience. They expect this to be a time of significant personal growth and reflection.
It's OK if you're planning to go on to med or law school afterward; it's also OK if you're not yet sure what you want to do and this year away will help you figure it out. But to convince the committee to fund you, you'll have to give them much more reason than wanting a gap year or being uncertain of your career path.
Q: What is the House nomination process for the Gardner/Shaw/Sheldon/Trustman and Christian/Segal traveling fellowships, and how seriously should I take it?
Please note that the House evaluation is no longer required starting with the 2009-2010 school year.
Q: What is the evaluation component of the application? How do I go about getting one and submitting it?
Please note that the House evaluation letter is not required starting with the 2009-2010 school year.
Q: Can I use this evaluation as a letter of recommendation for something else?
Please note that the House evaluation letter is not required starting with the 2009-2010 school year.
Q: I'm a sophomore or junior planning to take a year off. Are there any fellowships that will fund travel during my time away from Harvard?
A: Not that we're aware of. There are certainly fellowships that fund projects during the summer, but we don't know of any fellowships that will fund students who are on leave from Harvard. Look through the Guide to Grants and see whether there's anything that fits the bill. If you find one that seems promising, let us know and we'll put it in our List of Fellowships.
Q: Do I need a fellowship in order to do a Ph.D.?
A: No. If you're going for a Ph.D., you should expect not to pay tuition and to receive a stipend that will at least cover your living expenses (somewhere around $20,000/year, plus or minus a few thousand; in science, engineering, or economics it may be closer to $30,000). This money will come from one or a combination of these three primary sources:
Teaching assistantships
Research assistantships
Fellowships
A teaching assistantship (TA, or, at Harvard, TF) means that you teach part-time--typically 2 sections/semester, or 20 hours/week--and take classes and/or do research during the rest of your time. This is the default option if you don't have a fellowship and your advisor doesn't have grant money to pay for you (advisors typically have such funding only in fields like the experimental sciences, engineering, and economics).
A research assistantship (RA) means that your research advisor pays your tuition and stipend out of her grant money. An RA allows you to spend all your time on research. The only potential drawback is that it's tied to that advisor (for that term) and that your advisor may have enough to pay for some students but not all.
A fellowship gives you the most options. Your pay isn't tied to any particular activity, so you have maximum flexibility in whom you work with and how you structure your time. Some fellowships pay stipends that are a bit higher than an RA or a TA. Fellowships may come from outside sources, but some departments and universities may also have fellowships to offer their students.
(n.b.: Sometimes you also need separate support for summer work since TA's may not be available. This isn't something you need to worry about right now. Similarly, you don't need to worry about how much any given school will pay you until you've been admitted.)
Q: Are there fellowships that will fund professional school?
A: Very, very few. The Soros, Merage, and Jack Kent Cooke are among the ones that will. See the List of Fellowships page for more details. Most law and medical students pay with loans or with their own assets.
Q: Are there fellowships that will fund masters programs in the U.S.?
A: Very few. Some master's degrees come with funding or allow you to teach to support yourself. But many expect students to pay, and there aren't very many fellowships available. The few we know about are described in the List of Fellowships.
Q: I'm not a U.S. Citizen. Are there any fellowships I'm eligible for?
A: As far as we know, the only national fellowship for which you are eligible is the Jack Kent Cooke Fellowship. But you are probably eligible for whatever fellowships are offered by the universities to which you are applying, and in any case if you're applying for Ph.D.'s you will almost certainly get funding without needing a fellowship, as we explain above.
### Mitchell Scholarship
Q: Who is the institutional endorser?
A: The letters are signed by our Housemaster, Diana Eck, and the Dean of Harvard College. We will also need to see your other letters of recommendation, so you should contact your referees and have them email copies of their letters to the fellowships tutors (lohofell@fas.harvard.edu) as soon as they can.
Q: Do they really have their interviews on the same day as the Rhodes Scholarship interviews?
A: Yes. The Mitchell committee wants students who are committed to accepting a Mitchell Scholarship if offered one rather than students who see it as a second choice if they don't win a Rhodes.
# Lecture 7 - Symbolic quantum mechanics using SymPsi - Semiclassical equations of motion¶
Author: J. R. Johansson ([email protected]), http://jrjohansson.github.io, and Eunjong Kim.
Status: Preliminary (work in progress)
This notebook is part of a series of IPython notebooks on symbolic quantum mechanics computations using SymPy and SymPsi. SymPsi is an experimental fork and extension of the sympy.physics.quantum module in SymPy. The latest version of this notebook is available at http://github.com/jrjohansson/sympy-quantum-notebooks, and the other notebooks in this lecture series are also indexed at http://jrjohansson.github.io.
Requirements: A recent version of SymPy and the latest development version of SymPsi are required to execute this notebook. Instructions for how to install SymPsi are available here.
Disclaimer: The SymPsi module is still under active development and may change in behavior without notice, and the intention is to move some of its features to sympy.physics.quantum when they have matured and been tested. However, these notebooks will be kept up to date with the latest versions of SymPy and SymPsi.
## Setup modules¶
In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
In [2]:
import numpy as np
In [3]:
from sympy import *
init_printing()
In [4]:
from sympsi import *
from sympsi.boson import *
from sympsi.pauli import *
from sympsi.operatorordering import *
from sympsi.expectation import *
from sympsi.operator import OperatorFunction
## Semiclassical equations of motion¶
The dynamics of an open quantum system with a given Hamiltonian, $H$, and some interaction with an environment that acts on the system through the system operator $a$, and with rate $\kappa$, can often be described with a Lindblad master equation for the dynamics of the system density matrix $\rho$:
$$\frac{d}{dt}\rho = -i[H, \rho] + \kappa \mathcal{D}[a]\rho,$$
where the Lindblad superoperator $\mathcal{D}$ is
$$\mathcal{D}[a]\rho = a \rho a^\dagger -\frac{1}{2}\rho a^\dagger a - \frac{1}{2}a^\dagger a \rho.$$
One common approach to solve for the dynamics of this system is to represent the system operators and the density operator as matrices, possibly in a truncated state space, and solve the matrix-valued ODE problem numerically.
Another approach is to use the adjoint master equation for the system operators $X$:
$$\frac{d}{dt} X = i [H, X] + \kappa \mathcal{D}[a]X,$$
where $\mathcal{D}[a]X$ here denotes the adjoint dissipator $a^\dagger X a - \frac{1}{2}a^\dagger a X - \frac{1}{2}X a^\dagger a$,
and then solve for the dynamics of the expectation values of the relevant system operators. The advantage of this method is that the resulting equations are scalar rather than matrix-valued. The trade-off is that while the density matrix lets us calculate any same-time expectation value, with explicit ODEs for expectation values we must choose in advance which operators' expectation values to generate equations for.
We can easily generate an equation for the expectation value of a specific operator by multiplying the master equation for $\rho$ from the left with an operator $X$, and then take the trace over the entire equation. Doing this we obtain:
$$X\frac{d}{dt}\rho = -iX[H, \rho] + \kappa X\mathcal{D}[a]\rho$$
and taking the trace:
$${\rm Tr}\left(X\frac{d}{dt}\rho\right) = -i{\rm Tr}\left(X[H, \rho]\right) + \kappa {\rm Tr}\left(X\mathcal{D}[a]\rho\right)$$
using the cyclic permutation properties of traces:
$$\frac{d}{dt}{\rm Tr}\left(X\rho\right) = -i{\rm Tr}\left([X, H]\rho\right) + \kappa {\rm Tr}\left((\mathcal{D}[a]X) \rho\right)$$
we end up with an equation for the expectation value of the operator $X$:
$$\frac{d}{dt}\langle X\rangle = i\langle [H, X] \rangle + \kappa \langle \mathcal{D}[a]X \rangle$$
Note that this is a C-number equation, and therefore not as complicated to solve as the master equation for the density matrix. However, the problem with this C-number equation is that the expressions $[H, X]$ and $\mathcal{D}[a]X$ will in general introduce dependencies on other system operators, so we obtain a system of coupled C-number equations. If this system of equations closes when a finite number of operators are included, then we can use this method to solve for the dynamics of these expectation values exactly. If the system of equations does not close, which is often the case for coupled systems, then we can still use this method if we introduce some rule for truncating high-order operator expectation values (for example, by discarding high-order terms or by factoring them into products of lower-order expectation values). However, in this case the results are no longer exact, and the resulting equations are called semiclassical equations of motion.
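Before automating this with SymPsi, the cyclic-permutation step above can be spot-checked numerically. The sketch below (an illustration, not part of the original notebook) uses random NumPy matrices in place of the actual operators and verifies that ${\rm Tr}(X\,\mathcal{D}[a]\rho) = {\rm Tr}((\tilde{\mathcal{D}}[a]X)\,\rho)$, where $\tilde{\mathcal{D}}[a]X = a^\dagger X a - \frac{1}{2}a^\dagger a X - \frac{1}{2}X a^\dagger a$ is the adjoint dissipator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_matrix(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

X = random_matrix(n)                      # an arbitrary system operator
a = random_matrix(n)                      # stand-in for the collapse operator
rho = random_matrix(n)
rho = rho @ rho.conj().T
rho /= np.trace(rho)                      # make rho a valid density matrix

ad = a.conj().T
D_rho = a @ rho @ ad - 0.5 * rho @ ad @ a - 0.5 * ad @ a @ rho   # D[a] rho
D_adj_X = ad @ X @ a - 0.5 * ad @ a @ X - 0.5 * X @ ad @ a       # adjoint dissipator on X

# cyclic permutation under the trace moves the dissipator from rho onto X
assert np.isclose(np.trace(X @ D_rho), np.trace(D_adj_X @ rho))
```

The identity holds for arbitrary matrices, not just density matrices, since it relies only on the cyclic property of the trace.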
With SymPsi we can automatically generate semiclassical equations of motion for operators in a system described by a given Hamiltonian and a set of collapse operators that describe its coupling to an environment.
## Driven harmonic oscillator¶
Consider a driven harmonic oscillator, which interacts with a bath at a temperature that corresponds to $N_{\rm th}$ average photons. We begin by setting up symbolic variables for the problem parameters and the system operators in SymPsi:
In [5]:
w, t, Nth, Ad, kappa = symbols(r"\omega, t, n_{th}, A_d, kappa", positive=True)
In [6]:
a = BosonOp("a")
rho = Operator(r"\rho")
rho_t = OperatorFunction(rho, t)
In [7]:
H = w * Dagger(a) * a + Ad * (a + Dagger(a))
Eq(Symbol("H"), H)
Out[7]:
$$H = A_{d} \left({{a}^\dagger} + {a}\right) + \omega {{a}^\dagger} {a}$$
The master equation for this system can be generated using the master_equation function:
In [8]:
c_ops = [sqrt(kappa * (Nth + 1)) * a, sqrt(kappa * Nth) * Dagger(a)]
In [9]:
me = master_equation(rho_t, t, H, c_ops)
me
Out[9]:
$$\frac{\partial}{\partial t} {{\rho}(t)} = \kappa n_{{th}} {{a}^\dagger} {{\rho}(t)} {a} - \frac{\kappa n_{{th}}}{2} {a} {{a}^\dagger} {{\rho}(t)} - \frac{\kappa n_{{th}}}{2} {{\rho}(t)} {a} {{a}^\dagger} - \frac{\kappa {{a}^\dagger}}{2} \left(n_{{th}} + 1\right) {a} {{\rho}(t)} + \kappa \left(n_{{th}} + 1\right) {a} {{\rho}(t)} {{a}^\dagger} - \frac{\kappa {{\rho}(t)}}{2} \left(n_{{th}} + 1\right) {{a}^\dagger} {a} - i \left[A_{d} \left({{a}^\dagger} + {a}\right) + \omega {{a}^\dagger} {a},{{\rho}(t)}\right]$$
Equations for the system operators can be generated using the function operator_master_equation. For the specific case of the cavity operator $a$ we obtain:
In [10]:
# first setup time-dependent operators
a_t = OperatorFunction(a, t)
a_to_a_t = {a: a_t, Dagger(a): Dagger(a_t)}
H_t = H.subs(a_to_a_t)
c_ops_t = [c.subs(a_to_a_t) for c in c_ops]
In [11]:
# operator master equation for a
ome_a = operator_master_equation(a_t, t, H_t, c_ops_t)
Eq(ome_a.lhs, normal_ordered_form(ome_a.rhs.doit().expand()))
Out[11]:
$$\frac{d}{d t} {{{a}}(t)} = - i A_{d} - i \omega {{{a}}(t)} - \frac{\kappa {{{a}}(t)}}{2}$$
In [12]:
# operator master equation for n = Dagger(a) * a
ome_n = operator_master_equation(Dagger(a_t) * a_t, t, H_t, c_ops_t)
Eq(ome_n.lhs, normal_ordered_form(ome_n.rhs.doit().expand()))
Out[12]:
$$\frac{d}{d t} {{{{a}^\dagger}}(t)} {{{a}}(t)} + {{{{a}^\dagger}}(t)} \frac{d}{d t} {{{a}}(t)} = - i A_{d} {{{{a}^\dagger}}(t)} + i A_{d} {{{a}}(t)} + \kappa n_{{th}} - \kappa {{{{a}^\dagger}}(t)} {{{a}}(t)}$$
From these operator equations we see that the equation for $a$ depends only on the operator $a$, while the equation for $n$ depends on $n$, $a$, and $a^\dagger$. To solve the latter equation, we therefore also have to generate an equation for $a^\dagger$.
### System of semiclassical equations¶
In [13]:
ops, op_eqm, sc_eqm, sc_ode, ofm, oim = semi_classical_eqm(H, c_ops)
In [14]:
html_table([[Eq(Expectation(key), ofm[key]), sc_ode[key]] for key in operator_sort_by_order(sc_ode)])
Out[14]:
| Operator expectation | Equation of motion |
|---|---|
| $\left\langle {{a}^\dagger} \right\rangle = \operatorname{A_{0}}{\left (t \right )}$ | $\frac{d}{d t} \operatorname{A_{0}}{\left (t \right )} = i A_{d} + i \omega \operatorname{A_{0}}{\left (t \right )} - \frac{\kappa}{2} \operatorname{A_{0}}{\left (t \right )}$ |
| $\left\langle {a} \right\rangle = \operatorname{A_{1}}{\left (t \right )}$ | $\frac{d}{d t} \operatorname{A_{1}}{\left (t \right )} = - i A_{d} - i \omega \operatorname{A_{1}}{\left (t \right )} - \frac{\kappa}{2} \operatorname{A_{1}}{\left (t \right )}$ |
| $\left\langle {{a}^\dagger} {a} \right\rangle = \operatorname{A_{2}}{\left (t \right )}$ | $\frac{d}{d t} \operatorname{A_{2}}{\left (t \right )} = - i A_{d} \operatorname{A_{0}}{\left (t \right )} + i A_{d} \operatorname{A_{1}}{\left (t \right )} + \kappa n_{{th}} - \kappa \operatorname{A_{2}}{\left (t \right )}$ |
Since this is a system of linear ODEs, we can write it in matrix form:
In [15]:
A_eq, A, M, b = semi_classical_eqm_matrix_form(sc_ode, t, ofm)
A_eq
Out[15]:
$$- \frac{d}{d t} \left[\begin{matrix}\operatorname{A_{0}}{\left (t \right )}\\\operatorname{A_{1}}{\left (t \right )}\\\operatorname{A_{2}}{\left (t \right )}\end{matrix}\right] = \left[\begin{matrix}i A_{d}\\- i A_{d}\\\kappa n_{{th}}\end{matrix}\right] + \left[\begin{matrix}i \omega - \frac{\kappa}{2} & 0 & 0\\0 & - i \omega - \frac{\kappa}{2} & 0\\- i A_{d} & i A_{d} & - \kappa\end{matrix}\right] \left[\begin{matrix}\operatorname{A_{0}}{\left (t \right )}\\\operatorname{A_{1}}{\left (t \right )}\\\operatorname{A_{2}}{\left (t \right )}\end{matrix}\right]$$
We can solve for the steadystate by setting the left-hand side of the ODE to zero and solving the linear system of equations:
In [16]:
A_sol = M.LUsolve(-b)
The solutions for the three system operators are:
In [17]:
A_sol[ops.index(Dagger(a)*a)]
Out[17]:
$$- \frac{1}{\kappa} \left(\frac{A_{d}^{2}}{i \omega - \frac{\kappa}{2}} + \frac{A_{d}^{2}}{- i \omega - \frac{\kappa}{2}} - \kappa n_{{th}}\right)$$
In [18]:
A_sol[ops.index(a)]
Out[18]:
$$\frac{i A_{d}}{- i \omega - \frac{\kappa}{2}}$$
In [19]:
A_sol[ops.index(Dagger(a))]
Out[19]:
$$- \frac{i A_{d}}{i \omega - \frac{\kappa}{2}}$$
We can also solve for the steadystate directly from the ODE by setting its right-hand side to zero and using the SymPy solve function:
In [20]:
solve([eq.rhs for eq in sc_ode.values()], list(ofm.values()))
Out[20]:
$$\left \{ \operatorname{A_{0}}{\left (t \right )} : - \frac{2 i A_{d}}{2 i \omega - \kappa}, \quad \operatorname{A_{1}}{\left (t \right )} : - \frac{2 i A_{d}}{2 i \omega + \kappa}, \quad \operatorname{A_{2}}{\left (t \right )} : \frac{1}{4 \omega^{2} + \kappa^{2}} \left(4 A_{d}^{2} + 4 \omega^{2} n_{{th}} + \kappa^{2} n_{{th}}\right)\right \}$$
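The symbolic steady state can be cross-checked numerically by building the Liouvillian in a truncated Fock basis with plain NumPy. This is a sketch, not part of the original notebook: the truncation size and parameter values are illustrative assumptions, and the expected value $n_{th} + 4A_d^2/(4\omega^2+\kappa^2)$ is what the solve result above simplifies to.

```python
import numpy as np

Nf = 25                                   # Fock-space truncation (assumption)
w, Ad, kappa, nth = 1.0, 0.1, 0.2, 0.1    # illustrative parameter values

a = np.diag(np.sqrt(np.arange(1, Nf)), 1)     # truncated annihilation operator
ad = a.T
I = np.eye(Nf)
H = w * ad @ a + Ad * (a + ad)

def dissipator(c):
    """Matrix of D[c] on vec(rho); column stacking: vec(A X B) = kron(B.T, A) vec(X)."""
    cd = c.conj().T
    return (np.kron(c.conj(), c)
            - 0.5 * np.kron(I, cd @ c)
            - 0.5 * np.kron((cd @ c).T, I))

L = (-1j * (np.kron(I, H) - np.kron(H.T, I))
     + kappa * (nth + 1) * dissipator(a)
     + kappa * nth * dissipator(ad))

# steady state: L vec(rho) = 0 together with the normalization Tr(rho) = 1
A_lin = np.vstack([L, I.reshape(1, -1)])
b = np.zeros(Nf * Nf + 1, dtype=complex)
b[-1] = 1.0
rho = np.linalg.lstsq(A_lin, b, rcond=None)[0].reshape(Nf, Nf, order="F")

n_numeric = np.trace(ad @ a @ rho).real
n_symbolic = nth + 4 * Ad**2 / (4 * w**2 + kappa**2)
```

For this linear system the semiclassical equations close exactly, so the two values should agree up to truncation and round-off error.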
### Solving the ODEs¶
For systems with a small number of dependent operators we can solve the resulting system of ODEs directly:
In [21]:
sols = dsolve(list(sc_ode.values())); sols
Out[21]:
$$\left [ \operatorname{A_{0}}{\left (t \right )} = C_{1} e^{t \left(i \omega - \frac{\kappa}{2}\right)}, \quad \operatorname{A_{1}}{\left (t \right )} = C_{2} e^{t \left(- i \omega - \frac{\kappa}{2}\right)}, \quad \operatorname{A_{2}}{\left (t \right )} = - \frac{i A_{d} C_{1} e^{t \left(i \omega - \frac{\kappa}{2}\right)}}{i \omega + \frac{\kappa}{2}} + \frac{i A_{d} C_{2} e^{t \left(- i \omega - \frac{\kappa}{2}\right)}}{- i \omega + \frac{\kappa}{2}} + \frac{C_{3}}{e^{\kappa t}}\right ]$$
In [22]:
# hack
tt = [s for s in sols[0].rhs.free_symbols if s.name == 't'][0]
We also need to specify the initial conditions. Here the initial conditions are $\langle a(0) \rangle = \langle a^\dagger(0) \rangle = 2$ and $\langle a^\dagger(0)a(0) \rangle = 4$.
In [23]:
ics = {ofm[Dagger(a)].subs(tt, 0): 2,
ofm[a].subs(tt, 0): 2,
ofm[Dagger(a)*a].subs(tt, 0): 4}; ics
Out[23]:
$$\left \{ \operatorname{A_{0}}{\left (0 \right )} : 2, \quad \operatorname{A_{1}}{\left (0 \right )} : 2, \quad \operatorname{A_{2}}{\left (0 \right )} : 4\right \}$$
In [24]:
constants = set(sum([[s for s in sol.free_symbols if (str(s)[0] == 'C')] for sol in sols], [])); constants
Out[24]:
$$\left\{C_{1}, C_{2}, C_{3}\right\}$$
In [25]:
C_sols = solve([sol.subs(tt, 0).subs(ics) for sol in sols], constants); C_sols
Out[25]:
$$\left \{ C_{1} : 2, \quad C_{2} : 2, \quad C_{3} : \frac{1}{4 \omega^{2} + \kappa^{2}} \left(16 A_{d} \omega + 16 \omega^{2} + 4 \kappa^{2}\right)\right \}$$
In [26]:
sols_with_ics = [sol.subs(C_sols) for sol in sols]; sols_with_ics
Out[26]:
$$\left [ \operatorname{A_{0}}{\left (t \right )} = 2 e^{t \left(i \omega - \frac{\kappa}{2}\right)}, \quad \operatorname{A_{1}}{\left (t \right )} = 2 e^{t \left(- i \omega - \frac{\kappa}{2}\right)}, \quad \operatorname{A_{2}}{\left (t \right )} = - \frac{2 i A_{d} e^{t \left(i \omega - \frac{\kappa}{2}\right)}}{i \omega + \frac{\kappa}{2}} + \frac{2 i A_{d} e^{t \left(- i \omega - \frac{\kappa}{2}\right)}}{- i \omega + \frac{\kappa}{2}} + \frac{16 A_{d} \omega + 16 \omega^{2} + 4 \kappa^{2}}{\left(4 \omega^{2} + \kappa^{2}\right) e^{\kappa t}}\right ]$$
Now let's insert numerical values for the system parameters so we can plot the solution:
In [27]:
values = {w: 1.0, Ad: 0.0, kappa: 0.1, Nth: 0.0}
In [28]:
sols_funcs = [sol.rhs.subs(values) for sol in sols_with_ics]; sols_funcs
Out[28]:
$$\left [ 2 e^{t \left(-0.05 + 1.0 i\right)}, \quad 2 e^{t \left(-0.05 - 1.0 i\right)}, \quad \frac{4}{e^{0.1 t}}\right ]$$
In [29]:
times = np.linspace(0, 50, 500)
y_funcs = [lambdify([tt], sol_func, 'numpy') for sol_func in sols_funcs]
fig, axes = plt.subplots(len(y_funcs), 1, figsize=(12, 6))
for n, y_func in enumerate(y_funcs):
axes[n].plot(times, np.real(y_func(times)), 'r')
axes[2].set_ylim(0, 5);
## Driven dissipative two-level system¶
In [30]:
sx, sy, sz, sm, sp = SigmaX(), SigmaY(), SigmaZ(), SigmaMinus(), SigmaPlus()
In [31]:
Omega, gamma_0, N, t = symbols("\Omega, \gamma_0, N, t", positive=True)
values = {Omega: 1.0, gamma_0: 0.5, N: 1.75}
In [32]:
H = -Omega/2 * sx
H
Out[32]:
$$- \frac{\Omega {\sigma_x}}{2}$$
In [33]:
c_ops = [sqrt(gamma_0 * (N + 1)) * pauli_represent_x_y(sm),
sqrt(gamma_0 * N) * pauli_represent_x_y(sp)]
In [34]:
ops, op_eqm, sc_eqm, sc_ode, ofm, oim = semi_classical_eqm(H, c_ops)
In [35]:
html_table([[Eq(Expectation(key), ofm[key]), sc_ode[key]] for key in operator_sort_by_order(sc_ode)])
Out[35]:
| Operator expectation | Equation of motion |
|---|---|
| $\left\langle {\sigma_x} \right\rangle = \operatorname{A_{0}}{\left (t \right )}$ | $\frac{d}{d t} \operatorname{A_{0}}{\left (t \right )} = - N \gamma_{0} \operatorname{A_{0}}{\left (t \right )} - \frac{\gamma_{0}}{2} \operatorname{A_{0}}{\left (t \right )}$ |
| $\left\langle {\sigma_y} \right\rangle = \operatorname{A_{1}}{\left (t \right )}$ | $\frac{d}{d t} \operatorname{A_{1}}{\left (t \right )} = - N \gamma_{0} \operatorname{A_{1}}{\left (t \right )} + \Omega \operatorname{A_{2}}{\left (t \right )} - \frac{\gamma_{0}}{2} \operatorname{A_{1}}{\left (t \right )}$ |
| $\left\langle {\sigma_z} \right\rangle = \operatorname{A_{2}}{\left (t \right )}$ | $\frac{d}{d t} \operatorname{A_{2}}{\left (t \right )} = - 2 N \gamma_{0} \operatorname{A_{2}}{\left (t \right )} - \Omega \operatorname{A_{1}}{\left (t \right )} - \gamma_{0} \operatorname{A_{2}}{\left (t \right )} - \gamma_{0}$ |
In [36]:
A_eq, A, M, b = semi_classical_eqm_matrix_form(sc_ode, t, ofm)
A_eq
Out[36]:
$$- \frac{d}{d t} \left[\begin{matrix}\operatorname{A_{0}}{\left (t \right )}\\\operatorname{A_{1}}{\left (t \right )}\\\operatorname{A_{2}}{\left (t \right )}\end{matrix}\right] = \left[\begin{matrix}0\\0\\- \gamma_{0}\end{matrix}\right] + \left[\begin{matrix}- N \gamma_{0} - \frac{\gamma_{0}}{2} & 0 & 0\\0 & - N \gamma_{0} - \frac{\gamma_{0}}{2} & \Omega\\0 & - \Omega & - 2 N \gamma_{0} - \gamma_{0}\end{matrix}\right] \left[\begin{matrix}\operatorname{A_{0}}{\left (t \right )}\\\operatorname{A_{1}}{\left (t \right )}\\\operatorname{A_{2}}{\left (t \right )}\end{matrix}\right]$$
In [37]:
A_sol = M.LUsolve(-b)
The steadystate expectation value of $\sigma_x$:
In [38]:
A_sol[ops.index(sx)]
Out[38]:
$$0$$
The steadystate expectation value of $\sigma_y$:
In [39]:
A_sol[ops.index(sy)]
Out[39]:
$$- \frac{\Omega \gamma_{0}}{\left(- N \gamma_{0} - \frac{\gamma_{0}}{2}\right) \left(- 2 N \gamma_{0} + \frac{\Omega^{2}}{- N \gamma_{0} - \frac{\gamma_{0}}{2}} - \gamma_{0}\right)}$$
The steadystate expectation value of $\sigma_z$:
In [40]:
A_sol[ops.index(sz)]
Out[40]:
$$\frac{\gamma_{0}}{- 2 N \gamma_{0} + \frac{\Omega^{2}}{- N \gamma_{0} - \frac{\gamma_{0}}{2}} - \gamma_{0}}$$
Steadystate of $\sigma_+$:
In [41]:
pauli_represent_x_y(sp).subs({sx: A_sol[ops.index(sx)], sy: A_sol[ops.index(sy)]})
Out[41]:
$$- \frac{i \Omega \gamma_{0}}{2 \left(- N \gamma_{0} - \frac{\gamma_{0}}{2}\right) \left(- 2 N \gamma_{0} + \frac{\Omega^{2}}{- N \gamma_{0} - \frac{\gamma_{0}}{2}} - \gamma_{0}\right)}$$
Steadystate of $\sigma_-$:
In [42]:
pauli_represent_x_y(sm).subs({sx: A_sol[ops.index(sx)], sy: A_sol[ops.index(sy)]})
Out[42]:
$$\frac{i \Omega \gamma_{0}}{2 \left(- N \gamma_{0} - \frac{\gamma_{0}}{2}\right) \left(- 2 N \gamma_{0} + \frac{\Omega^{2}}{- N \gamma_{0} - \frac{\gamma_{0}}{2}} - \gamma_{0}\right)}$$
Alternatively, we can use the SymPy solve function to find the steadystate solutions:
In [43]:
solve([eq.rhs for eq in sc_ode.values()], list(ofm.values()))
Out[43]:
$$\left \{ \operatorname{A_{0}}{\left (t \right )} : 0, \quad \operatorname{A_{1}}{\left (t \right )} : - \frac{2 \Omega \gamma_{0}}{2 \Omega^{2} + \gamma_{0}^{2} \left(2 N + 1\right)^{2}}, \quad \operatorname{A_{2}}{\left (t \right )} : - \frac{\gamma_{0}^{2} \left(2 N + 1\right)}{2 \Omega^{2} + \gamma_{0}^{2} \left(2 N + 1\right)^{2}}\right \}$$
At zero temperature:
In [44]:
solve([eq.subs(N, 0).rhs for eq in sc_ode.values()], list(ofm.values()))
Out[44]:
$$\left \{ \operatorname{A_{0}}{\left (t \right )} : 0, \quad \operatorname{A_{1}}{\left (t \right )} : - \frac{2 \Omega \gamma_{0}}{2 \Omega^{2} + \gamma_{0}^{2}}, \quad \operatorname{A_{2}}{\left (t \right )} : - \frac{\gamma_{0}^{2}}{2 \Omega^{2} + \gamma_{0}^{2}}\right \}$$
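As with the oscillator, the zero-temperature steady state can be checked against a direct numerical solution of the master equation. The sketch below is illustrative (not part of the original notebook); the parameter values are assumptions, and the expected values are read off from the solve output above.

```python
import numpy as np

Omega, gamma0 = 1.0, 0.5                     # illustrative values, N = 0

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_minus

I2 = np.eye(2)
H = -Omega / 2 * sx
c = np.sqrt(gamma0) * sm                     # only the decay term survives at N = 0
cd = c.conj().T

# Liouvillian on vec(rho); column stacking: vec(A X B) = kron(B.T, A) vec(X)
L = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
     + np.kron(c.conj(), c)
     - 0.5 * np.kron(I2, cd @ c)
     - 0.5 * np.kron((cd @ c).T, I2))

A_lin = np.vstack([L, I2.reshape(1, -1)])    # append the Tr(rho) = 1 condition
b = np.zeros(5, dtype=complex)
b[-1] = 1.0
rho = np.linalg.lstsq(A_lin, b, rcond=None)[0].reshape(2, 2, order="F")

sy_numeric = np.trace(sy @ rho).real
sz_numeric = np.trace(sz @ rho).real
sy_symbolic = -2 * Omega * gamma0 / (2 * Omega**2 + gamma0**2)   # from the solve output
sz_symbolic = -gamma0**2 / (2 * Omega**2 + gamma0**2)
```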
## Versions¶
In [45]:
%reload_ext version_information
%version_information sympy, sympsi
Out[45]:
| Software | Version |
|---|---|
| Python | 3.4.1 (default, Sep 20 2014, 19:44:17) [GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] |
| IPython | 2.3.0 |
| OS | Darwin 13.4.0 x86_64 i386 64bit |
| sympy | 0.7.5-git |
| sympsi | 0.1.0.dev-0c6e514 |

Sun Oct 12 21:42:08 2014 JST
# 3. Neural network
## What is a Neural Network?
An artificial neural network, or simply a neural network, can be defined as a biologically inspired computational model consisting of a network architecture composed of artificial neurons. This structure contains a set of parameters, which can be adjusted to perform certain tasks.
Neural networks have universal approximation properties, which means that they can approximate any continuous function to any desired degree of accuracy.
The layer types most used in regression and classification applications are the perceptron, scaling, unscaling, bounding, and probabilistic layers.
In other types of applications, such as computer vision or speech recognition, other types of layers, such as convolutional or associative, are commonly used.
## 3.1. Perceptron layers
The most important layers of a neural network are the perceptron layers (also called dense layers). Indeed, they allow the neural network to learn.
The following figure shows a perceptron neuron, which is the basic unit of a perceptron layer. The perceptron neuron receives information as a set of numerical inputs $x_1,\ldots,x_n$. This information is then combined with a bias $b$ and a set of weights $w_1,\ldots,w_n$ to produce a message in the form of a single numerical output $y$. The parameters of the neuron are the bias and the weights.
The combination function transforms the set of input values to produce a single combination or net input value, $$combination = bias + \sum_{i=1}^{n} weight_{i} \cdot input_{i}$$
The activation function defines the perceptron output in terms of its combination, $$output = activation(combination)$$
The activation function of the perceptrons composing each layer determines the type of function that the neural network represents. Some of the most common activation functions are the linear, hyperbolic tangent, logistic and rectified linear.
### Linear activation function
The output of a perceptron with linear activation function is simply the combination of that neuron. $$activation = combination$$
### Hyperbolic tangent activation function
The hyperbolic tangent is one of the most used activation functions when constructing neural networks. It is a sigmoid function which varies between -1 and +1. $$activation = tanh(combination)$$
### Logistic activation function
The logistic is another type of sigmoid function. It is very similar to the hyperbolic tangent, but in this case it varies between 0 and 1. $$activation = \frac{1}{1+e^{-combination}}$$
### Rectified linear activation function
The rectified linear activation function, also known as ReLU, is one of the most used activation functions. It is zero when the combination is negative and equal to the combination when the combination is zero or positive. $$activation = \left\{ \begin{array}{ll} 0 & \textrm{if } combination < 0 \\ combination & \textrm{if } combination \geq 0 \end{array} \right.$$
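Assuming NumPy, a single perceptron neuron with the activation functions above can be sketched as follows (the function name and interface are illustrative, not Neural Designer's API):

```python
import numpy as np

def perceptron_output(inputs, weights, bias, activation="tanh"):
    """Forward pass of one perceptron neuron: combination, then activation."""
    combination = bias + np.dot(weights, inputs)   # net input
    if activation == "linear":
        return combination
    if activation == "tanh":
        return np.tanh(combination)
    if activation == "logistic":
        return 1.0 / (1.0 + np.exp(-combination))
    if activation == "relu":
        return np.maximum(0.0, combination)
    raise ValueError(f"unknown activation: {activation}")

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.25])
y = perceptron_output(x, w, bias=0.1, activation="relu")   # combination = 0.1, so y = 0.1
```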
You can read the article Perceptron: The main component of neural networks for a more detailed description about this important neuron model.
In this regard, a perceptron layer is a group of perceptron neurons having connections to the same inputs and sending outputs to the same destinations.
## 3.2. Scaling layer
In practice, it is always convenient to scale the inputs so that they all have a proper range.
In the context of neural networks, the scaling function can be thought as a layer connected to the inputs of the neural network. The scaling layer contains some basic statistics on the inputs. They include the mean, standard deviation, minimum and maximum values.
Some scaling methods widely used in practice are the minimum-maximum, the mean-standard deviation, and the standard deviation methods.
### Minimum and maximum scaling method
The minimum and maximum method processes unscaled inputs in any range to produce scaled inputs which fall between -1 and 1. This method is usually applied to variables with a uniform distribution. $$scaled\_input = 2\,\frac{input-minimum}{maximum-minimum}-1$$
### Mean and standard deviation scaling method
The mean and standard deviation method scales the inputs so that they will have mean 0 and standard deviation 1. This method is usually applied to variables with a normal (or Gaussian) distribution. $$scaled\_input = \frac{input-mean}{standard\_deviation}$$
### Standard deviation scaling method
The standard deviation scaling method produces inputs with standard deviation 1. This is usually applied to half-normal distributions, that is, variables which are centered at zero and have only positive values. $$scaled\_input = \frac{input}{standard\_deviation}$$
All scaling methods are linear and, in general, produce similar results. In all cases, the scaling of the inputs in the data set must be synchronized with the scaling of the inputs in the neural network. Neural Designer does that without any intervention by the user.
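The three scaling methods can be sketched with NumPy (for illustration only; Neural Designer applies its scaling layer internally):

```python
import numpy as np

inputs = np.array([10.0, 20.0, 30.0, 40.0])

# minimum-maximum: maps the data range onto [-1, 1]
min_max_scaled = 2.0 * (inputs - inputs.min()) / (inputs.max() - inputs.min()) - 1.0

# mean-standard deviation: zero mean and unit standard deviation
mean_std_scaled = (inputs - inputs.mean()) / inputs.std()

# standard deviation only: unit standard deviation (for half-normal variables)
std_scaled = inputs / inputs.std()
```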
## 3.3. Unscaling layer
The scaled outputs from a neural network must be unscaled to produce values in the original units.
In the context of neural networks, the unscaling function can be interpreted as an unscaling layer connected to the outputs of the perceptron layers. An unscaling layer contains some basic statistics on the outputs. They include the mean, standard deviation, minimum and maximum values.
Four unscaling methods widely used in practice are the minimum-maximum, the mean-standard deviation, the standard deviation, and the logarithmic methods.
### Minimum and maximum unscaling method
The minimum and maximum method unscales variables that have been previously scaled to have minimum -1 and maximum +1, to produce outputs in the original range, $$unscaled\_output = minimum+0.5\,(scaled\_output+1)(maximum-minimum)$$
### Mean and standard deviation unscaling method
The mean and standard deviation method unscales variables that have been previously scaled to have mean 0 and standard deviation 1, $$unscaled\_output = mean + scaled\_output\cdot standard\_deviation$$
### Standard deviation unscaling method
The standard deviation method unscales variables that have been previously scaled to have standard deviation 1, to produce outputs in the original range,
$$unscaled\_output = scaled\_output\cdot standard\_deviation$$
### Logarithmic unscaling method
The logarithmic method unscales variables that have been previously subjected to a logarithmic transformation, $$unscaled\_output = minimum\\+0.5(\exp{(scaled\_output)}+1)(maximum-minimum)$$
In all cases, the scaling of the targets in the data set must be synchronized with the unscaling of the outputs in the neural network. Neural Designer does that without any intervention by the user.
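The unscaling methods are simply the inverses of the corresponding scaling methods. A minimal sketch (again with illustrative names, not the actual API):

```python
def minimum_maximum_unscale(y, minimum, maximum):
    """Map y from [-1, 1] back to [minimum, maximum]."""
    return minimum + 0.5 * (y + 1.0) * (maximum - minimum)

def mean_std_unscale(y, mean, std):
    """Invert mean/standard-deviation scaling."""
    return mean + y * std

def std_unscale(y, std):
    """Invert standard-deviation scaling."""
    return y * std

# Round trip: unscaling exactly inverts the corresponding scaling.
value = 7.5
scaled = 2.0 * (value - 0.0) / (10.0 - 0.0) - 1.0
assert minimum_maximum_unscale(scaled, 0.0, 10.0) == value
```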
## 3.4. Bounding layer
In many cases, the output needs to be limited between two values. For instance, the quality of a product might be constrained to lie between 1 and 5 stars.
The bounding function can be interpreted as a bounding layer connected to the outputs of the unscaling layer. It uses the following formula
$$bounded\_output = \left\{ \begin{array}{ll} lower\_bound, & output < lower\_bound \\ output, & lower\_bound \leq output \leq upper\_bound \\ upper\_bound, & output > upper\_bound \end{array} \right.$$
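The bounding formula above is a plain clamp; a one-line sketch (the function name is ours):

```python
def bound(output, lower_bound, upper_bound):
    """Clamp an unscaled output to [lower_bound, upper_bound]."""
    return max(lower_bound, min(output, upper_bound))
```

For the star-rating example, `bound(7.2, 1, 5)` returns `5`.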
## 3.5. Probabilistic layer
In classification problems, outputs are usually interpreted in terms of probabilities of class membership. In this way, the probabilistic outputs will always fall in the range [0, 1], and the sum of all will always be 1.
In the context of neural networks, the probabilistic output function can be interpreted as an additional layer connected to the last perceptron layer.
There are several probabilistic output methods. Four of the most common are the binary, continuous, competitive, and softmax methods.
### Binary probabilistic method
The binary method is used in binary classification problems. Here the output can either take the value 1 (positive) or 0 (negative).
The decision threshold is the probability above which the output is classified as positive. The default value is 0.5. In this way, the probabilistic output can be calculated as:
$$probabilistic\_output = \left\{ \begin{array}{lll} 0 & if & output < decision\_threshold \\ 1 & if & output \geq decision\_threshold \end{array} \right.$$
### Continuous probabilistic method
This method is also used in binary classification problems. Here the output can take any value between 0 and 1. $$probabilistic\_output = \left\{ \begin{array}{lll} 0 & if & output < 0 \\ output & if & 0 \leq output \leq 1 \\ 1 & if & output > 1 \end{array} \right.$$
### Competitive probabilistic method
The competitive method is used in multi-class classification problems. It assigns a probability of one to the output with the greatest value, and a probability of zero to the rest. $$probabilistic\_output = \left\{ \begin{array}{lll} 1 & if & output = maximum(outputs) \\ 0 & if & output \neq maximum(outputs) \end{array} \right.$$
### Softmax probabilistic method
This method is also used in multi-class classification problems. It is a continuous probabilistic function, which guarantees that the outputs always fall in the range [0, 1] and sum to 1. $$probabilistic\_output_i = \frac{e^{output_i}}{\sum_j e^{output_j}}$$
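The competitive and softmax methods can be sketched in a few lines (illustrative code, not Neural Designer's implementation):

```python
import math

def softmax(outputs):
    """Continuous probabilities: each lies in [0, 1] and they sum to 1."""
    m = max(outputs)  # subtracting the maximum improves numerical stability
    exps = [math.exp(o - m) for o in outputs]
    total = sum(exps)
    return [e / total for e in exps]

def competitive(outputs):
    """Probability 1 for the largest output, 0 for the rest."""
    m = max(outputs)
    return [1.0 if o == m else 0.0 for o in outputs]
```

Note that softmax preserves the ordering of the raw outputs, while the competitive method discards all information except which output is largest.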
As we have seen, a neural network might be composed of different types of layers, depending on the particular needs of the predictive model.
Next, we describe the most common neural networks configurations for each application type.
## 3.6. Network architecture
A neural network can be symbolized as a graph, where nodes represent neurons and edges represent connections between neurons. An edge label represents the parameter of the neuron that the connection feeds into.
Most neural networks, even biological neural networks, exhibit a layered structure. Therefore, layers are the basis to determine the architecture of a neural network.
A neural network is built by organizing layers of neurons in a network architecture. The characteristic network architecture here is the so-called feed-forward architecture. In a feed-forward neural network, layers are arranged in a sequence, so that neurons in any layer are connected only to neurons in the next layer.
The next figure represents a neural network with 4 inputs, several layers of different types and 3 outputs.
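The feed-forward pass described above can be sketched as a simple composition of layers. This is a minimal pure-Python illustration, assuming a tanh activation (Neural Designer supports several activation functions):

```python
import math

def perceptron_layer(inputs, weights, biases):
    """One dense layer: weights[j] holds neuron j's weights, biases[j] its bias."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, layers):
    """Feed-forward pass: each layer's outputs feed only the next layer."""
    for weights, biases in layers:
        inputs = perceptron_layer(inputs, weights, biases)
    return inputs
```

For example, a single neuron with weights `[0.5, 0.5]` and bias `0.0` maps the inputs `[1.0, -1.0]` to `tanh(0) = 0.0`.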
### Approximation neural networks
An approximation model usually contains a scaling layer, several perceptron layers, and an unscaling layer. A neural network for approximation might also contain a bounding layer.
Most of the time, two layers of perceptrons are enough to represent the data set. For very complex data sets, deeper architectures with three, four, or more layers of perceptrons might be required.
The following figure represents a neural network to estimate the power generated by a combined cycle power plant as a function of meteorological and plant variables. This neural network has 4 inputs and 1 output. It consists of a scaling layer (yellow), a perceptron layer with 4 neurons (blue), a perceptron layer with 1 neuron (blue) and an unscaling layer (red).
### Classification neural networks
A classification model usually requires a scaling layer, two perceptron layers, and a probabilistic layer. It might also contain a principal components layer.
Most of the time, two layers of perceptrons are enough to represent the data set.
The following figure is a binary classification model for the diagnosis of breast cancer from fine-needle aspirates. This neural network has 9 inputs and 1 output. It consists of a scaling layer (yellow), a layer of 1 perceptron (blue), a layer of 1 perceptron (blue) and a probabilistic layer (red).
|
|
# Question mark or bold citation key instead of citation number
I've browsed the forums and found a number of posts that have addressed this issue, but none of the solutions seem to work for me. I have the following script that I just copied from the bibtex home page to get familiar with it. Instead of the citation number I get a question mark. I compile using Latex+Bibtex+Latex+Latex+PDFLatex+ViewPDF just as has been previously suggested and the problem persists.
\documentclass[11pt]{article}
\usepackage{cite}
\begin{document}
\title{My Article}
\author{Nobody Jr.}
\date{Today}
\maketitle
Blablabla said Nobody ~\cite{Nobody06}.
\bibliography{mybib}
\bibliographystyle{plain}
\end{document}
My bibliography (Bib.bbl)
@misc{ Nobody06,
author = "Nobody Jr",
title = "My Article",
year = "2006" }
Looking at previous posts one thing that is concerning is that my .bbl looks empty as shown below. Further, I don't have a .blg
\begin{thebibliography}{}
\end{thebibliography}
• not addressing the question itself, ..., but if the ~ before \cite is intended to keep the cross-reference from being broken to a new line, the input shown -- Nobody ~\cite -- won't do that. the space character preceding the ~ will (1) happily allow a line break, and (2) double the width of the space before the xref when it's printed. should be Nobody~\cite to have the no-break effect. – barbara beeton Jul 19 '12 at 17:57
Since this question comes up so often, I thought I'd try to supplement ArTourter's correct answer with a more general comment.
# What does a question mark mean
It means that somewhere along the line the combination of LaTeX and BibTeX has failed to find and format the citation data you need for the citation: LaTeX can see you want to cite something, but doesn't know how to do so.
### Missing citations show up differently in biblatex
If you are using biblatex you will not see a question mark, but instead you will see your citation key in bold. For example, if you have an item in your .bib file with the key Jones1999 you will see Jones1999 in your PDF.
## How does this all work
To work out what's happening, you need to understand how the process is (supposed to) work. Imagine LaTeX and BibTeX as two separate people. LaTeX is a typesetter. BibTeX is an archivist. Roughly the process is supposed to run as follows:
1. LaTeX (the typesetter) reads the manuscript through and gives three pieces of information to BibTeX (the archivist): a list of the references that need to be cited, extracted from the \cite commands; a note of a file where those references can be found, extracted from the \bibliography command; a note of the sort of formatting required, extracted from the \bibliographystyle command.
2. BibTeX then goes off, looks up the data in the file it has been told to read, consults a file that tells it how to format the data, and generates a new file containing that data in a form that has been organised so that LaTeX can use it (the .bbl file).
3. LaTeX then has to take that data and typeset the document - and may indeed need more than one 'run' to do so properly (because there may be internal relationships within the data, or with the rest of the manuscript, which BibTeX neither knows nor cares about, but which matter for typesetting).
Your question-mark tells you that something has gone wrong with this process.
### More biblatex and biber notes:
• If you are using biblatex, the style information is located in the options passed to the biblatex package, and the raw data is in the \addbibresource command.
• If you are using biblatex, the stage described as BibTeX in this answer is generally replaced with a different, and more cunning, archivist, Biber.
## What to do
The first thing to do is to make sure that you have actually gone through the whole process at least once: that is why, to deal with any new citation, you will always need at least a LaTeX run (to prepare the information that needs to be handed to BibTeX), one BibTeX run, and one or more subsequent LaTeX runs. So first, make sure you have done that. Please note that latex and bibtex/biber need to be run on your main file (without the file ending), in other words on the basename of your main file: you do not run any commands on the .bib file.
latex MainFile
bibtex MainFile
latex MainFile
latex MainFile
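As an aside, this run sequence can be automated: the latexmk tool (shipped with TeX Live and MiKTeX) re-runs latex and bibtex as many times as needed, and it detects biblatex documents so that Biber is called automatically:

```shell
# one command replaces the whole latex -> bibtex -> latex -> latex cycle
latexmk -pdf MainFile
```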
If you still have problems, then something has gone wrong somewhere. And it's nearly always something about the flow of information.
Your first port of call is the BibTeX log (.blg) file. That will usually give you the information you need to diagnose the problem. So open that file (which will be called blah.blg where 'blah' is the name of your source file).
In a roughly logical order:
1. BibTeX did not find the style file. That's the file that tells it how to format references. In this case you will have an error, and BibTeX will complain I couldn't open the style file badstyle.bst. If you are trying to use a standard style, that's almost certainly because you have not spelled the style correctly in your \bibliographystyle command - so go and check that. If you are trying to use a non-standard style, it's probably because you've put it somewhere TeX can't find it. (For testing purposes, I find, it's wise to remember that it will always be found if it's in the same directory as your source file; but if you are installing using the facilities of your TeX system -- as an inexperienced person should be - you are unlikely to get that problem.)
2. BibTeX did not find the database file. That's the .bib file containing the data. In that case the log file will say I couldn't open database file badfile.bib, and will then warn you that it didn't find database files. The cure is the same: go back and check you have spelled the filename correctly, and that it is somewhere TeX can find it (if in doubt, put it in the folder with your source file).
3. BibTeX found the file, but it doesn't contain citation data for the thing you are trying to cite. Now you will just get, in the log-file: Warning--I didn't find a database entry for "yourcitation". That's what happened to you. You might think that you should have got a type 2 error: but you didn't because as it happens there is a file called mybib.bib hanging around on the system (as kpsewhich mybib.bib will reveal) -- so BibTeX found where it was supposed to look, but couldn't find the data it needed there. But essentially the order of diagnosis is the same: check you have the right file name in your \bibliography command. If that's all right, then there is something wrong with that file, or with your citation command. The most likely error here is that you've either forgotten to include the data in your .bib file, or you have more than one .bib file that you use and you've sent BibTeX to the wrong one, or you've mis-spelled the citation label (e.g. you've done \cite{nobdoy06} for \cite{nobody06}).
4. There's something wrong with the formatting of your entry in the .bib file. That's not uncommon: it's easy (for instance) to forget a comma. In that case you should have errors from BibTeX, and in particular something like I was expecting a ',' or a '}' and you will be told that it was skipping whatever remains of this entry. Whether that actually stops any citation being produced may depend on the error; I think BibTeX usually manages to produce something -- but biblatex can get totally stumped. Anyway, check and correct the particular entry.
### biblatex and biber notes
If you are using biblatex, then generally you will also be using the Biber program instead of BiBTeX program to process your bibliography, but the same general principles apply. Hence the compilation sequence becomes
latex MainFile
biber MainFile
latex MainFile
## Summary
The order of diagnosis is as follows:
1. Have I run LaTeX, BibTeX (or Biber), LaTeX, LaTeX?
2. Look at the .blg file, which will help mightily in answering the following questions.
3. Has BibTeX/Biber found my style file? (Check you have a valid \bibliographystyle command and that there is a .bst with the same name where it can be found.)
4. Has BibTeX/Biber found my database? (Check that the \bibliography command names it correctly and that the file can be found.)
5. Has it found the right database?
6. Does the database contain an entry which matches the citation I have actually typed?
7. Is that entry valid?
8. Finally: When you have changed something, don't forget that you will need to go through the same LaTeX -- BibTeX (or Biber) -- LaTeX -- LaTeX run all over again to get it straight. (That's not actually quite true: but until you have more of a feel for the process it's a safe assumption to make.)
• You say it in the first sentence, this question comes very often and now I know where to send people looking for an answer... – matth Jul 19 '12 at 9:05
• fwiw, i'll scrape the answer to improve the faq answer on the same topic (the current faq answer doesn't even touch on biblatex/biber, since i've never used either...). my reuse doesn't add much to the coverage of your work, but it helps me -- ok? – wasteofspace Oct 26 '13 at 18:49
• In my case, most of the time it was a missing comma (,), quotation mark ("), or a wrong field. – srodriguex Apr 25 '15 at 22:17
• Thank you very much for the concise explanation. I have always seen the command with several latex but never understood the reason behind it. – rkachach Feb 11 '16 at 21:12
• Is there a way to automate this? I don't want to have to run latex, then bibtex, then latex twice, everytime I cite something new. – becko Oct 31 '17 at 10:10
The syntax for the \bibliography{} command is \bibliography{file1,file2,...}
in your case you seem to be calling a file called mybib when your bib file is in fact Bib.
Also note that the bibtex file should have the .bib extension; the .bbl file will be created by bibtex.
You should therefore rename your bibliography file mybib.bib and get rid of the extra {} in the \bibliography{mybib}{} call, and then recompile. This should fix your problem.
• note: if you don't see a .blg file, you likely have not run bibtex yet. latex doesn't process the .bib file directly: it needs to be run once (to create its list of requests for bibtex), then you need to run bibtex to process this list and deliver the requests. Then you can run latex again and it can incorporate these into your output pdf. So assuming a masterfile called masterfile.tex, you need to latex masterfile; bibtex masterfile; latex masterfile Second time, latex will have the files it needs to insert bibtex's output into its own latex output… – tim Feb 15 '16 at 23:18
|
|
# MATH 251: Calculus 3, SET8
## 14: Partial Derivatives
### 14.3: Partial Derivatives
These problems are done with the CAS. See Hand Solutions for details.
#### 1. [924/10]
We estimate the partial derivatives from the contour plot using difference quotients.
f_x_2_1 = (14-10) / (3-2)
f_x_2_1 = 4
f_y_2_1 = (8-10) / (2-1)
f_y_2_1 = -2
#### 2. [924/16]
As well as ordinary derivatives (from Calculus 1), the diff command computes partial derivatives as well.
syms x y
f = x^2*y - 3*y^4
f =
fx = diff(f,x), fy = diff(f,y)
fx =
fy =
#### 3. [924/30]
Recall Part 2 of the Fundamental Theorem of Calculus and Properties of Integrals.
syms alpha beta t
F(alpha,beta) = int(sqrt(t^3 + 1), t, alpha, beta);
F_alpha = simplify(diff(F,alpha)), F_beta = simplify(diff(F,beta))
F_alpha(alpha, beta) =
F_beta(alpha, beta) =
#### 4. [924/40]
Here we do an experiment with a specific number of variables, n = 5.
This may help you to see the general pattern, which is obtained via the Chain Rule.
In general, u = sin(x1 + 2*x2 + ... + n*xn), whence the partial derivative of u with respect to xj is j*cos(x1 + 2*x2 + ... + n*xn).
X = sym('x', [1 5])
X =
u = sin(X(1) + 2*X(2) + 3*X(3) + 4*X(4) + 5*X(5)) % sample with n = 5
u =
M = [];
for j = 1:5
M = [M; [j diff(u, X(j))]];  % note: differentiate u, the function defined above
end
M % column 1: j; column 2: partial derivative of u w.r.t. the jth variable
M =
#### 5. [925/44]
syms x y z
f(x,y,z) = x^(y*z), fz = diff(f,z), e = exp(sym(1))
f(x, y, z) =
fz(x, y, z) =
e = e
fz_at_P = fz(e,1,0)
fz_at_P = 1
#### 6. [925/50]
Here we look ahead to the end of Section 14.5: The Chain Rule. There we find a much quicker way to do implicit differentiation. See page 943 of textbook.
( It sure beats the Calc 1 way of doing things since it's a one-step formula! )
syms x y z
F = y*z + x*log(y) - z^2 % Let F = LHS - RHS of the equation given, understood to be set to zero.
F =
zx = -diff(F,x) / diff(F,z)
zx =
zy = -diff(F,y) / diff(F,z)
zy =
#### 7. [925/66]
By hand, you would compute a first, second, and third partial derivative in succession.
That's what the diff command is doing internally and just displaying the final result.
syms r s t
g = exp(r) * sin(s*t), g_rst = simplify( diff(g, r,s,t) )
g =
g_rst =
#### 8. [925/76]
Let's check all six functions at once! We see that the 1st and 3rd functions are not solutions of Laplace's equation u_xx + u_yy = 0; the others are solutions.
syms x y
u = [x^2+y^2 x^2-y^2 x^3+3*x*y^2 1/2*log(x^2+y^2) sin(x)*cosh(y)+cos(x)*sinh(y) exp(-x)*cos(y)-exp(-y)*cos(x)].'
u =
L = simplify( diff(u,x,2) + diff(u,y,2) )
L =
#### 9. [926/82]
The requisite partial derivatives are evaluated to determine rates of change of temperature in the specified directions.
syms x y
T(x,y) = 60 / (1+x^2+y^2)
T(x, y) =
Tx = diff(T,x), Ty = diff(T,y)
Tx(x, y) =
Ty(x, y) =
Tx_P = Tx(2,1), Ty_P = Ty(2,1)
Tx_P =
Ty_P =
#### 10. [927/98]
Parameterize the curve of intersection of the paraboloid z = 6 - x - x^2 - 2y^2 and the plane x = 1 as r(t) = (1, t, 4 - 2t^2). The point (1, 2, -4) corresponds to t = 2. Now proceed in the usual manner.
syms t u x y
r(t) = [1 t 4-2*t^2], Dr = diff(r,t), s1 = sym(1)
r(t) =
Dr(t) =
s1 = 1
P = r(2), v = Dr(2)
P =
v =
L(u) = P + u*v
L(u) =
z = 6 - x - x^2 - 2*y^2
z =
%
figure
fsurf(z, [-2 4 -2 4], 'MeshDensity', 16); hold on
fplot3(s1, t, 4-2*t^2, [-2 4], 'm', 'LineWidth', 3)
fplot3(s1, u+2, -8*u-4, [-5 3], 'r', 'LineWidth', 3)
axis([-2 4 -2 4 -45 35])
view(-53,53)
xlabel('x'); ylabel('y'); zlabel('z')
title('SET8, 927/98')
|
|
# 1990 JAMB Mathematics Past Questions & Answers - page 1
### JAMB CBT App
Study offline with EduPadi JAMB CBT app that has so many features, including thousands of past questions, JAMB syllabus, novels, etc.
1
Simplify $$\frac{4\frac{3}{4} - 6\frac{1}{4}}{4\frac{1}{5} \times 1\frac{1}{4}}$$
A -7$$\frac{7}{8}$$
B $$\frac{-2}{7}$$
C $$\frac{-10}{21}$$
D $$\frac{10}{21}$$
Correct Option: B
Solution
4$$\frac{3}{4}$$ - 6$$\frac{1}{4}$$ = $$\frac{19}{4}$$ - $$\frac{25}{4}$$............(A)
4$$\frac{1}{5}$$ x 1$$\frac{1}{4}$$ = $$\frac{21}{5}$$ x $$\frac{5}{4}$$.............(B)
Now work out the value of A and the value of B and then find the value of $$\frac{A}{B}$$
A = $$\frac{19}{4}$$ - $$\frac{25}{4}$$
= $$\frac{-6}{4}$$
B = $$\frac{21}{5}$$ x $$\frac{5}{4}$$
= $$\frac{105}{20}$$
= $$\frac{21}{4}$$
$$\frac{A}{B}$$ = $$\frac{-6}{4}$$ $$\div$$ $$\frac{21}{4}$$
= $$\frac{-6}{4}$$ x $$\frac{4}{21}$$
= $$\frac{-24}{84}$$
= $$\frac{-2}{7}$$
2
The H.C.F. of $$a^2bx + ab^2x$$ and $$a^2b - b^3$$ is
A b
B a + b
C b(a $$\div$$ b)
D $$abx(a^2 - b^2)$$
Correct Option: B
Solution
$$a^2bx + ab^2x$$; $$a^2b - b^3$$
$$abx(a + b)$$; $$b(a^2 - b^2)$$
= $$b(a + b)(a - b)$$
∴ H.C.F. = (a + b)
3
Correct $$241.34 \times (3 \times 10^{-3})^2$$ to 4 significant figures
A 0.0014
B 0.001448
C 0.0022
D 0.002172
Correct Option: D
Solution
First work out the expression and then correct the answer to 4 s.f.
$$241.34 \times (3 \times 10^{-3})^2$$
= $$241.34 \times 3^2 \times \frac{1}{10^6}$$
(Note that $$10^{-3} \times 10^{-3} = \frac{1}{10^6}$$)
= $$\frac{2172.06}{10^6}$$
= 0.00217206
= 0.002172 (4 s.f)
4
At what rate would a sum of N100.00 deposited for 5 years raise an interest of N7.50?
A 1$$\frac{1}{2}$$%
B 2$$\frac{1}{2}$$%
C 1.5%
D 25%
Correct Option: C
Solution
Interest I = $$\frac{PRT}{100}$$
∴ R = $$\frac{100I}{PT}$$
= $$\frac{100 \times 7.50}{100 \times 5}$$
= $$\frac{750}{500}$$
= $$\frac{3}{2}$$
= 1.5%
5
Three children shared a basket of mangoes in such a way that the first child took $$\frac{1}{4}$$ of the mangoes and the second $$\frac{3}{4}$$ of the remainder. What fraction of the mangoes did the third child take?
A $$\frac{3}{16}$$
B $$\frac{7}{16}$$
C $$\frac{9}{16}$$
D $$\frac{13}{16}$$
Correct Option: A
Solution
You can use any whole number (e.g. 1, 2, 3) to represent all the mangoes in the basket.
If the first child takes $$\frac{1}{4}$$ it will remain 1 - $$\frac{1}{4}$$ = $$\frac{3}{4}$$
Next, the second child takes $$\frac{3}{4}$$ of the remainder
which is $$\frac{3}{4}$$ i.e. find $$\frac{3}{4}$$ of $$\frac{3}{4}$$
= $$\frac{3}{4}$$ x $$\frac{3}{4}$$
= $$\frac{9}{16}$$
the fraction remaining now = $$\frac{3}{4}$$ - $$\frac{9}{16}$$
= $$\frac{12 - 9}{16}$$
= $$\frac{3}{16}$$
6
Simplify and express in standard form $$\frac{0.00275 \times 0.0064}{0.025 \times 0.08}$$
A $$8.8 \times 10^{-1}$$
B $$8.8 \times 10^{-2}$$
C $$8.8 \times 10^{-3}$$
D $$8.8 \times 10^{3}$$
Correct Option: C
Solution
$$\frac{0.00275 \times 0.0064}{0.025 \times 0.08}$$
Removing the decimals = $$\frac{275 \times 64}{2500 \times 800}$$
= $$\frac{88}{10^4}$$
= $$8.8 \times 10^{1} \times 10^{-4}$$
= $$8.8 \times 10^{-3}$$
7
Three brothers in a business deal share the profit at the end of a contract. The first received $$\frac{1}{3}$$ of the profit and the second $$\frac{2}{3}$$ of the remainder. If the third received the remaining N12 000.00, how much profit did they share?
A N60 000.00
B N54 000.00
C N48 000.00
D N42 000.00
Correct Option: B
Solution
use "T" to represent the total profit. The first receives $$\frac{1}{3}$$ T
remaining, 1 - $$\frac{1}{3}$$
= $$\frac{2}{3}$$T
The seconds receives the remaining, which is $$\frac{2}{3}$$ also
$$\frac{2}{3}$$ x $$\frac{2}{3}$$ x $$\frac{4}{9}$$
The third receives the left over, which is $$\frac{2}{3}$$T - $$\frac{4}{9}$$T = ($$\frac{6 - 4}{9}$$)T
= $$\frac{2}{9}$$T
The third receives $$\frac{2}{9}$$T which is equivalent to N12000
If $$\frac{2}{9}$$T = N12, 000
T = $$\frac{12 000}{\frac{2}{9}}$$
= N54, 000
8
Simplify $$\sqrt{160r^2 + \sqrt{71r^4 + \sqrt{100r^8}}}$$
A $$9r^2$$
B 12$$\sqrt{3r}$$
C 13r
D $$\sqrt{13r}$$
Correct Option: C
Solution
$$\sqrt{160r^2 + \sqrt{71r^4 + \sqrt{100r^8}}}$$
Simplifying from the innermost radical and progressing outwards, we have
$$\sqrt{100r^8} = 10r^4$$, so $$\sqrt{71r^4 + 10r^4} = \sqrt{81r^4} = 9r^2$$
$$\sqrt{160r^2 + 9r^2} = \sqrt{169r^2}$$
= 13r
9
Simplify $$\sqrt{27}$$ + $$\frac{3}{\sqrt{3}}$$
A 4$$\sqrt{3}$$
B $$\frac{4}{\sqrt{3}}$$
C 3$$\sqrt{3}$$
D $$\frac{\sqrt{3}}{4}$$
Correct Option: A
10
Simplify $$3\log_6 9 + \log_6 12 + \log_6 64 - \log_6 72$$
A 5
B 7776
C log631
D (7776)6
Correct Option: A
Solution
$$3\log_6 9 + \log_6 12 + \log_6 64 - \log_6 72$$
= $$\log_6 9^3 + \log_6 12 + \log_6 64 - \log_6 72$$
= $$\log_6 729 + \log_6 12 + \log_6 64 - \log_6 72$$
= $$\log_6 \frac{729 \times 12 \times 64}{72}$$ = $$\log_6 7776$$
= $$\log_6 6^5$$ = 5$$\log_6 6$$ = 5
N.B: $$\log_6 6 = 1$$
|
|
# GATE Papers >> EEE >> 2015 >> Question No 105
Question No. 105
Consider a function f(x) = 1-$\left|x\right|$ on -1$\le$ x $\le$ 1. The value of x at which the function attains a maximum, and the maximum value of the function are:
##### Answer : (C) 0, 1
Solution of Question No 105 of GATE 2015 EEE Paper
The graph of f(x) = 1 - |x| looks like as shown below
So the maximum occurs at x = 0, where f(0) = 1
|
|
# Influence of Temperature on Biochemical Reactions [closed]
Short question
How (quantitatively speaking) does temperature influence the rate of decay of proteins? I am looking for some general number/function: the influence of temperature on an average protein.
Long question
For computer modeling purposes, I am looking for some referenced quantitative measurements of the effect(s) of temperature on the dynamic of biochemical reactions. In particular, my question is:
How does temperature influence the rate of protein decay?
I am looking for a function (possibly expressing the Michaelis-Menten constant as a function of temperature) that I can plug into my algorithms to incorporate the influence of various temperatures on the process I want to simulate. The model under simulation is a eukaryote such as Saccharomyces telluris (a yeast), for example, living in some common range of temperatures [5-35 °C]. I am not thinking of any particular protein; rather, I am looking for the approximate effect of temperature on the decay rate of an "average protein". I welcome any kind of expression of how temperature influences the rate of decay. Some values using the Eyring equation, the Van't Hoff equation or a $Q_{10}$ coefficient (measured in some reasonable range [5 °C - 35 °C]), for example, would also fit my needs. I would be very surprised if there were not enough literature on the subject to get some average impact of temperature on protein decay rates, but as a biologist (and not a biochemist) I can't wrap my head around this problem and can't find any satisfying article. Thanks for your precious help!
I am not looking for...
I am NOT looking for a theoretical explanation of how temperature influences these reactions (I have a basic understanding of the importance of the activation energy as displayed on a Maxwell-Boltzmann distribution, and of the Michaelis-Menten equation). I am just looking for some empirical observations of how temperature influences protein decay rates. I am not looking for an accurate value, only for some estimation that I can plug into my algorithms.
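To make "plug into my algorithms" concrete, here is the kind of one-liner I mean, using a $Q_{10}$ coefficient (the default q10 = 2.0 below is only a placeholder, not a measured value; the measured coefficient for real proteins is exactly what I am asking for):

```python
def q10_rate(rate_ref, temp_ref, temp, q10=2.0):
    """Scale a reference rate to another temperature with the Q10 rule:
    rate(T) = rate(T_ref) * q10 ** ((T - T_ref) / 10).
    Q10 values of roughly 2-3 are often quoted for enzymatic processes,
    but the value for any real protein must come from measurements.
    """
    return rate_ref * q10 ** ((temp - temp_ref) / 10.0)

# example: with Q10 = 2, a 10 degree rise doubles the rate
assert q10_rate(1.0, 25.0, 35.0) == 2.0
```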
Update
Reading the comments, I realize that my question is too broad, in the sense that there might be too much variance in how different proteins respond to temperature to draw any general tendency. If this is so, someone willing to answer may restrict the question to transcription factors, to General Transcription Factors (GTFs) or even to TFIIA.
|
|
# 7.06 Group actions and covering spaces, 1
## Video
Below the video you will find accompanying notes and some pre-class questions.
## Notes
### Properly discontinuous group actions
(0.10) Recall that a group $$G$$ acts continuously on $$X$$ if for each $$g\in G$$ there exists a homeomorphism $$\rho(g)\colon X\to X$$ such that $$\rho(gh)=\rho(g)\circ\rho(h)$$ and $$\rho(1)=id_X$$. The quotient $$X/G$$ is the set of equivalence classes, where $$x\sim\rho(g)x$$ for all $$g\in G$$, equipped with the quotient topology.
(1.36) Suppose that $$G$$ acts continuously on $$X$$. Suppose moreover that for all $$x\in X$$ there is an open set $$U$$ containing $$x$$ such that $$U\cap \rho(g)(U)=\emptyset$$ for all $$g\neq 1$$ in $$G$$. Then the quotient map $$q\colon X\to X/G$$ is a covering map.
(3.10) Consider the action of $$\mathbf{Z}$$ on $$\mathbf{R}$$ given by $$\rho(n)(x)=x+n$$. For any $$x\in\mathbf{R}$$, for sufficiently small $$\epsilon$$, the interval $$(x-\epsilon,x+\epsilon)$$ is disjoint from any of its translates under the action of $$\mathbf{Z}$$. The theorem tells us that the quotient map $$q\colon\mathbf{R}\to \mathbf{R}/\mathbf{Z}$$ is a covering map. This is the covering map $$x\mapsto e^{i2\pi x}$$ we have been using for a while now.
(5.00) Given a point $$x\in X$$ and its equivalence class $$[x]\in X/G$$ there exists an open neighbourhood $$U\subset X$$ of $$x$$ such that $$\rho(g)(U)$$ is disjoint from $$U$$ unless $$g=1$$. Let $$V=q(U)$$. In this section, we saw that the quotient map for a quotient by a group action is an open map, so $$V$$ is an open set.
(6.45) The set $$q^{-1}(V)$$ equals the union of all translates of $$U$$ under the group action, that is $q^{-1}(V)=\coprod_{g\in G}\rho(g)(U).$ To prove that $$q$$ is a covering map, we need to show that there is a homeomorphism $$h\colon q^{-1}(V)\to V\times F$$ for some discrete set $$F$$ such that $$pr_1\circ h=q$$ (where $$pr_1\colon V\times F\to V$$ is the projection to the first factor).
(8.14) The discrete set $$F$$ will be the group $$G$$. Each $$\rho(g)(U)$$ is homeomorphic to $$U$$ (via the homeomorphism $$\rho(g)$$) and $$U$$ is homeomorphic to $$V$$ via the map $$q$$. To see that $$q|_U\colon U\to V$$ is a homeomorphism, note that it is continuous and open (so if it is bijective then it will be a homeomorphism) and that it is surjective (because $$V=q(U)$$ by definition) and injective because if $$a,b\in U$$ satisfy $$q(a)=q(b)$$ then $$\rho(g)(a)=b$$ for some $$g\in G$$, so $$\rho(g)(U)\cap U\neq\emptyset$$, so $$g=1$$, so $$a=b$$.
(11.00) Since $$q^{-1}(V)=\coprod_{g\in G}\rho(g)(U)$$, we can define $$h\colon q^{-1}(V)\to V\times G$$ as $h(\rho(g)(u))=(q(u),g).$ We have now seen that this is a homeomorphism and $$q(\rho(g)(u))=q(u)=pr_1(q(u),g)$$.
(13.36) The condition from the theorem is that for all $$x\in X$$ there is an open set $$U$$ containing $$x$$ such that $$U\cap \rho(g)(U)=\emptyset$$ for all $$g\neq 1$$ in $$G$$. An action satisfying this condition is called properly discontinuous.
### Examples
(13.58) Let $$X$$ be a metric space and suppose that $$G$$ acts by isometries (i.e. $$d(\rho(g)(x),\rho(g)(y))=d(x,y)$$ for all $$g\in G$$ and $$x,y\in X$$). Suppose moreover that there exists $$c>0$$ such that for all $$x\in X$$ and all $$g\neq 1$$ in $$G$$ we have $d(x,\rho(g)(x))\geq c.$ Then the action is properly discontinuous.
(16.12) Pick a point $$x\in X$$ and take the metric ball of radius $$r\in(0,c/2)$$ centred at $$x$$. Then $$B_r(x)\cap\rho(g)(B_r(x))$$ is empty if $$g\neq 1$$. Otherwise there is some point $$y\in B_r(x)\cap\rho(g)(B_r(x))$$ and so $$c/2>r>d(x,y)$$ and $$c/2>r>d(\rho(g)(x),y)$$, so $$c>d(x,\rho(g)(x))$$ by the triangle inequality, which contradicts the hypothesis.
(18.12) In our earlier example of $$\mathbf{Z}$$ acting on $$\mathbf{R}$$, translations are isometries for the standard metric on $$\mathbf{R}$$ and the hypothesis of the theorem is satisfied by $$c=1/2$$ ($$d(x,x+n)\geq 1$$ for any integer $$n\neq 0$$).
More generally, $$\mathbf{Z}^n$$ acts on $$\mathbf{R}^n$$ via $\rho(k_1,\ldots,k_n)(x_1,\ldots,x_n)=(x_1+k_1,\ldots,x_n+k_n)$ and $$d(\mathbf{x},\rho(\mathbf{k})(\mathbf{x}))=\sqrt{\sum k_i^2}\geq 1$$. In this case, the quotient map gives a cover of the $$n$$-dimensional torus by $$\mathbf{R}^n$$.
(20.23) Take $$S^n\subset\mathbf{R}^{n+1}$$ to be the sphere of radius 1 and $$G=\mathbf{Z}/2$$. There is a $$G$$-action on $$S^n$$ where the nontrivial element acts as the antipodal map $$x\mapsto -x$$. The distance $$d(x,-x)$$ is always equal to 2 (using the metric that just takes distances in the ambient Euclidean space) or equal to $$\pi$$ (using the metric that takes distances along paths that stay on the sphere), so with either of these metrics we could take $$c=1$$. This gives a covering map $$S^n\to S^n/(\mathbf{Z}/2)$$. The quotient $$S^n/(\mathbf{Z}/2)$$ is called the real projective space $$\mathbf{RP}^n$$.
In the next video, we will see that if $$G$$ acts properly discontinuously on $$X$$ and $$X$$ is simply-connected then the fundamental group of $$X/G$$ is isomorphic to $$G$$. This will allow us to say $$\pi_1(S^1)=\mathbf{Z}$$, $$\pi_1(T^n)=\mathbf{Z}^n$$ and $$\pi_1(\mathbf{RP}^n)=\mathbf{Z}/2$$ just because these spaces are constructed as quotients by the corresponding group actions.
## Pre-class questions
1. Prove that the map $$p_n\colon S^1\to S^1$$, $$p_n(e^{i\theta})=e^{in\theta}$$, is a covering map by considering a suitable $$\mathbf{Z}/n$$-action on $$S^1$$ and showing it is properly discontinuous.
|
|
### A new record of a mammal in China and new provincial records in Xizang, Hubei and Sichuan
LIU Shaoying, LIU Yingxun, MENG Guanliang, ZHOU Chengran, LIU Yang, LIAO Rui
1. (1 Sichuan Academy of Forestry, Chengdu 610081) (2 BGI-Shenzhen, Shenzhen 518083) (3 Key Laboratory of Bio-Resource and Eco-Environment of Ministry of Education, College of Life Sciences, Sichuan University, Chengdu 610065)
• Online:2020-05-30 Published:2020-05-28
### A new record of a mammal in China, the white-tailed mountain vole (Alticola albicauda), and new provincial mammal records in Xizang, Hubei and Sichuan
1. (1 Sichuan Academy of Forestry, Chengdu 610081) (2 BGI-Shenzhen, Shenzhen 518083) (3 Key Laboratory of Bio-Resource and Eco-Environment of Ministry of Education, College of Life Sciences, Sichuan University, Chengdu 610064)
Abstract:
Recent surveys of small mammals in China discovered new national and provincial records. Taxonomic and phylogenetic studies based on morphological and molecular comparisons were used to assess the collections. Alticola albicauda, collected on the Pamir Plateau in Tashikuergan county, Xinjiang, is a new record for China. The tail of this species is wholly white, ending in a white tuft, and the belly is pure white. A phylogenetic analysis based on Cyt b groups the specimen with Alticola albicauda, and this clade is the sister of Alticola argentatus; the K2P distance between them is 5%. Further, Phodopus roborovskii was captured in Xizang and is a new record for the Tibet Autonomous Region. It does not differ morphologically from conspecifics elsewhere in China, and its Cyt b K2P distance from a population in Ningxia is 0.6%. Meanwhile, the collections verify the presence of Eothenomys fidelis in Sichuan for the first time. The characters of its teeth and the proportion of TL/HBL are the same as in topotypes from Lijiang, Yunnan, but the specimens are smaller than the topotypes and fall in a different subclade of the Cyt b tree. An average K2P distance of only 1.1% separates the two populations, suggesting they are conspecific. Finally, the collections verify the presence of Ochotona xunhuaensis in Hubei for the first time. All specimens have a congenial tragus and a flattened skull, which are diagnostic characters of the species, and the Cyt b K2P distance between the Hubei population and topotypes is 1.9%.
|
|
# DX on Small Projects
## My Opinion and how I manage my own projects
— 7 min
Continuing my series, I will take a look at what tools and workflows I use to manage my small projects. I will also explain some of the very opinionated guidelines that I follow.
We will specifically talk about code that will be published, and can be consumed publicly by anyone. This has some implications on the structure of the code.
## # Maintaining a public API
I do have quite a strong opinion on bundling and how to best publish / expose code that you write, which has implications on how you consume that code.
Writing a small and focused library means that you should ideally have only one, or very limited and explicit set of entry points.
The problem is that, in theory, people can import any file that is included in an npm package. Some will start relying on internal implementation details they really shouldn’t, and they will complain if you break things by re-organizing your internal code.
People will just happily import { SomeInternalClass } from "your-library/some/internal/file".
## # Bundling Code
One way to avoid this is to bundle your code, which I highly recommend everyone do. I am a big fan of, and an early adopter of and contributor to, rollup, and one project I would like to show off to highlight some of my recommendations is rollup-plugin-dts, which you can use to also bundle up TS type definitions alongside your code.
The README of rollup-plugin-dts shows a clear example of how to best use it.
## # Managing Dependencies
I am still surprised by how often people get this wrong, or by how little thought they put into it.
For example, there are multiple sources out there that explain why libraries should not pin their dependencies, but rather delegate that choice to the users of that library.
Another important thing to understand is the difference between direct dependencies and peerDependencies. The yarn blog has a good article about that. TLDR: When your library’s users should not care about, or even know of, a dependency, put it into dependencies. If your library is used alongside or together with some other dependency, put it into peerDependencies. For example, rollup-plugin-dts puts both rollup and typescript into peerDependencies, because it can’t work independently of those two, and someone using rollup-plugin-dts will have to use the other two as well.
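To make the distinction concrete, here is a sketch of how such a package manifest might look (the version ranges are purely illustrative assumptions, not the actual ones rollup-plugin-dts uses):

```json
{
  "name": "rollup-plugin-dts",
  "peerDependencies": {
    "rollup": "^2.0.0",
    "typescript": "^4.0.0"
  },
  "devDependencies": {
    "@types/node": "^14.0.0",
    "jest": "^26.0.0"
  }
}
```

Note how @types/node lives in devDependencies, not in dependencies.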
Something else I see quite often, and which I think is just wrong, is libraries putting @types into their dependencies. This is only valid for other @types packages!
Why? Because @types should never ever be used in production. They are by definition devDependencies. Just because users of your-library happen to also use typescript and get typechecking errors because they are missing @types/node does not mean that @types/node belongs in dependencies. The code will run in production without it!
## # Support Targets
A little bit related to dependencies: I would recommend publishing code in the most recent JS dialect possible that you can run natively in the most up-to-date runtime. Let your users pick a support target; don’t force transpiled code or polyfills on them. A short test showed that not transpiling async/await but rather using it natively cut the bundle size by ~10%, and more importantly, it cut the startup time of the code by ~25%.
This sadly is one of the disadvantages of JS being a language that relies on a runtime. :-(
When talking about a target, I would also encourage people to publish code that targets a standard module system, by which I mean native import/export syntax. That way the user of your library has the choice of how best to consume it, such as by bundling it with the rest of their code.
Sadly though, this goal is at odds with being able to run that code natively in node. :-( One solution is to publish the code both as commonjs and as native modules, which however is also at odds with using deep import paths, which I previously argued you should avoid anyway :-).
But the takeaway is to publish code in a way that is friendly to bundlers. That also has implications for your dependencies, which need to be bundler-friendly as well, which sadly most code still is not.
## # Testing and Linting
Today’s post is about small libraries, such as rollup-plugin-dts, of which I wrote, say, ~98% myself. This means I don’t really need a complex linting setup, apart from the format-on-save that the IDE provides. I will focus more on linting in a future post.
Being a small and focused library also means it’s easy to test, which I usually put quite some effort into, aiming for as close to 100% coverage as possible.
For this I use jest in combination with ts-jest. Being small also means that the convenience of having a single command to both typecheck and test my code including code coverage outweighs the disadvantage of that workflow being slow. Running the testsuite takes around ~6 seconds for rollup-plugin-dts and maybe ~10 seconds for intl-codegen. The slowness probably comes from the fact that both tools do typechecking using TS as part of the tests themselves, rather than the tooling itself.
One nice thing about software engineering itself is that you are constantly challenged, and you need to re-evaluate all your decisions and opinions all the time, which makes you grow. There is a saying that if you are not somewhat ashamed of your own code you wrote a year ago, you didn’t really grow as a developer. But I digress.
So, while yes, I do like jest for its convenience and especially for its expect matchers and snapshot testing, I also dislike it at the same time. It is an overly large behemoth that tries to do too much, and creates a lot of problems doing so.
One example is buggy handling of TS code, because, well, jest aims to support TS out of the box but fails to do so ever so subtly. And while it supports file-level mocking, the way it does that is not always obvious and can lead to quite surprising problems. You can mock the local ./send.ts file by creating a ./__mocks__/send.ts, but jest will then also use that mock for import X from "send" in a completely different part of your codebase. This was surprising. But once I figured it out, it also kind of explains why jest spews tons of warnings when you have two mocks named ./__mocks__/index.ts in different parts of your codebase.
Learning from this, I would recommend just avoiding file-based mocking. I will also re-evaluate my opinion about having a test runner run your tests. Maybe it would be a better idea to build a testsuite as a dedicated executable that you can run, which explicitly uses a testing library internally for organizational purposes, a concept that for example zora advocates. I have grown quite wary of tools that force you to organize your code in a certain way.
I think I will experiment with this concept in rollup-plugin-dts and intl-codegen in the future.
|
|
# Find the coordinates of the points on the curve
• August 15th 2012, 06:12 PM
Find the coordinates of the points on the curve
I've encountered the following question:
Find the coordinates of the points on the curve $y=\frac{cos(x)}{2+sin(x)}, 0 \le x < 2\pi$, where the tangent is horizontal.
My plan was to take the derivative and then try to find where x goes to infinity by taking the limit. Things quickly fell apart.
I managed to get the derivative of $f'(x)=\frac{-sin^2(x)-2sin(x)-cos^2(x)}{(2+sin(x))^2}$ however when I check my work with WolframAlpha I'm told I'm wrong with $f'(x)=-\frac{2sin(x)+1}{(2+sin(x))^2}$. Now I can see the identity. However I see -1 and WolframAlpha is saying +1. My train of thought is $-sin^2(x)-cos^2(x)=-1$. So where is my basic algebra failing me with +1?
Furthermore, is my thought process correct by thinking I need to find the limit as $x\rightarrow\infty$?
• August 15th 2012, 08:58 PM
rainer
Re: Find the coordinates of the points on the curve
Your derivative and Wolfram's agree. Note that in the numerator $-\sin^2{x}-\cos^2{x}-2\sin{x}=-(2\sin{x}+1)$
Not sure why you would want to be concerned about the limit of y as x goes to infinity. Think more about the slope of the tangent line. What is the slope of a horizontal tangent line? Now that you have the derivative of the curve, what does the derivative tell you about slope?
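Following this up numerically (my own sketch, not from the thread): the derivative $-(2\sin{x}+1)/(2+\sin{x})^2$ vanishes exactly where $\sin{x}=-\tfrac{1}{2}$, i.e. at $x=7\pi/6$ and $x=11\pi/6$ on $[0,2\pi)$.

```python
import math

# horizontal tangents: f'(x) = 0  <=>  sin x = -1/2  on  [0, 2*pi)
xs = [7 * math.pi / 6, 11 * math.pi / 6]
points = [(x, math.cos(x) / (2 + math.sin(x))) for x in xs]
# y = cos x / (2 + sin x) evaluates to -1/sqrt(3) and +1/sqrt(3) respectively
```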
|
|
# Electric Potentials graphs
1. Sep 22, 2004
### vsage
The electric potential along the x-axis (in kv) is plotted versus the value of x, (in meters). Evaluate the x-component of the electrical force (in Newtons)on a charge of 5.10 micro-C located on the x-axis at x=2.8 m.
http://www.geocities.com/vsage3/p.bmp
I tried finding the value of kV at x = 2.8 so I would have this:
dV/dx = E = F/q
q = 5.1e-06C
Hint: Use graphical techniques to evaluate the electric field, i.e., the x component of the electric field is the negative of the change of the potential with respect to x. Careful with units of potential (given in kV) and of charge (micro-C). In order to check the sign, remember in which direction the positive charge moves when located at the given position.
(this was given with the problem)
Any ideas? I have to get the value within 3% of what the computer says so that might be why I'm having such a hard time.
Edit: here is the value I got the time I tried it
V = Electric field * distance
V = Force / charge * distance
-2500V = F / 5.1e-06 * 2.8
F = -0.00455N but it's wrong according to the computer.
Edit: thanks, but I got the answer wrong too many times and I can't correct it. I heeded what you said but apparently I don't have a good enough grasp of the subject to apply it :(
Last edited by a moderator: Sep 22, 2004
2. Sep 22, 2004
### robphy
E= -dV/dx, that is, minus the slope of the V-vs-x graph.
V=Ed only when E is uniform.
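To make this concrete, here is a sketch with hypothetical graph readings (the actual plot is not reproduced here, so V1, V2, x1, x2 below are invented numbers): estimate the slope of V versus x around x = 2.8 m, negate it to get E, then multiply by the charge.

```python
# Hypothetical readings from the V-vs-x graph (NOT the real plot):
# suppose V falls linearly from 2.0 kV at x = 2.0 m to 0.0 kV at x = 4.0 m.
V1, V2 = 2.0e3, 0.0   # volts (note the kV -> V conversion)
x1, x2 = 2.0, 4.0     # metres
q = 5.10e-6           # coulombs (5.10 micro-C)

E = -(V2 - V1) / (x2 - x1)  # V/m: minus the slope of the potential
F = q * E                   # newtons, on the charge
```

The sign of F then tells you which way the positive charge is pushed, as the hint says.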
|
|
# Solving a System?
• Feb 4th 2009, 11:06 PM
Solving a System?
Hey Guys!
This is my last question of the Unit that I am planning to send in tomorrow morning.
My text does not provide me with enough examples or information to help me solve this question so I'm looking for some answers please!
Solve the following system:
2^(x+3y) =128 and 2^(3x-7)=2
(Surprised)
Thank You for any help anyone can provide to me!
• Feb 4th 2009, 11:12 PM
red_dog
$\left\{\begin{array}{ll}2^{x+3y}=2^7\\2^{3x-7}=2^1\end{array}\right.\Rightarrow\left\{\begin{array}{ll}x+3y=7\\3x-7=1\end{array}\right.$
Now you have to solve the last system.
• Feb 4th 2009, 11:37 PM
That is actually as far as I got....
• Feb 5th 2009, 04:41 AM
Jester
Quote:
Originally Posted by red_dog
$\left\{\begin{array}{ll}2^{x+3y}=2^7\\2^{3x-7}=2^1\end{array}\right.\Rightarrow\left\{\begin{array}{ll}x+3y=7\\3x-7=1\end{array}\right.$
Now you have to solve the last system.
Quote:
That is actually as far as I got....
From red-dog's work, solve the second for x
$x = \frac{8}{3}$
then substitute into the first to get y.
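Carrying this out with exact fractions (a sketch):

```python
from fractions import Fraction

# second equation: 3x - 7 = 1  =>  x = 8/3
x = Fraction(8, 3)
# first equation:  x + 3y = 7  =>  y = (7 - x) / 3
y = (7 - x) / 3
```

So the solution is x = 8/3, y = 13/9.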
|
|
Next: Normal Zeeman effect Up: Angular momentum and its Previous: Angular momentum and its
#### Angular momentum of coupled systems
We have seen earlier that we can represent angular momentum states using the quantum numbers j and mj. Now, suppose we have two sources of angular momentum, represented by the operators J1 and J2. How may we represent the composite state?
It follows from the commutation relations for angular momentum that two representations are possible:
Uncoupled representation: | j1 mj1 j2 mj2 >
Coupled representation: | j1 j2 j mj >
j1, j2, mj1, mj2 are the quantum numbers for the two angular momentum observables J1 and J2.
j and mj are the corresponding quantum numbers of the composite angular momentum J = J1 + J2, where
The quantum number j is obtained from the so-called Clebsch-Gordan series: j = |j1 - j2|, |j1 - j2| + 1, ..., j1 + j2, and the quantum number mj is obtained as follows:
mj = - j to + j in integral steps. Similarly, mj1 = - j1 to + j1 in integral steps and mj2 = - j2 to + j2 in integral steps.
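As a small illustration of the series (my own sketch, not part of the original notes), the allowed j values and the dimension check sum(2j+1) = (2j1+1)(2j2+1):

```python
from fractions import Fraction

def allowed_j(j1, j2):
    """Clebsch-Gordan series: j = |j1-j2|, |j1-j2|+1, ..., j1+j2."""
    j1, j2 = Fraction(j1), Fraction(j2)
    j = abs(j1 - j2)
    values = []
    while j <= j1 + j2:
        values.append(j)
        j += 1
    return values

# e.g. coupling j1 = 1 with j2 = 1/2 gives j = 1/2 and j = 3/2
js = allowed_j(1, Fraction(1, 2))
```

The dimension check confirms that no states are lost in passing between the two representations.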
The two representations can be visualized pictorially using the vector-model of coupled angular momentum.
First we show the uncoupled representation below in the form of an animation.
Look carefully at the animation. The two blue arrows attached to the sides of the two cones represent the two angular momentum vectors J1 and J2. The red arrow represents the total angular momentum J. |J1|, |J2| and the z components J1z and J2z all remain fixed. However |J| does change. Hence in this representation, the corresponding quantum numbers j1, j2, mj1, mj2 may be specified together, but not j.
Next we show the coupled representation below in the form of an animation.
Look carefully again at the animation. The two blue arrows attached to the sides of the two cones represent the two angular momentum vectors J1 and J2 as before, and the red arrow again represents the total angular momentum J. However, note that the two cones are now tilted with respect to the vertical z-axis and have J as their common axis. |J1| and |J2| remain fixed but the corresponding z components J1z and J2z vary. However |J| and its z component Jz remain fixed. J1 and J2 remain locked together, the angle between them remaining constant. Hence in this representation, the corresponding quantum numbers j1, j2, j, mj may be specified together, but not mj1 and mj2. Also note that although mj1 and mj2 vary, their sum mj remains fixed.
Quantization of angular momentum has many interesting consequences, as we shall see subsequently.
Abhijit Poddar
2007-09-27
|
|
# Tag Info
14
One of the advantages is purely on the human side of security. From RFC 6238's abstract: The HOTP algorithm specifies an event-based OTP algorithm, where the moving factor is an event counter. The present work bases the moving factor on a time value. A time-based variant of the OTP algorithm provides short-lived OTP values, which ...
8
The HOTP standard describes the resynchronization algorithm (section 7.4). Basically, the server remembers the last value $C$ of the counter for which a correct password was presented. When a new password is to be verified, the server tries $C+1$, $C+2$... until one matches, or $C+w$ is reached for some $w$ called the "window size". The intended scenario is ...
6
It looks to me that the original intent was to make sure that all bits of the hash digest have an equal chance to contribute to the truncated portion. But one of the properties of a secure hash function is to ensure that a single bit change results in a cascade that yields changing bits across the entire digest. If you don't trust this property in the hash ...
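To make the dynamic truncation concrete, here is a minimal HOTP sketch in Python following RFC 4226 (HMAC-SHA1 over an 8-byte big-endian counter); a TOTP value is the same function applied to a time-derived counter:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226, section 5.2)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte choose an offset,
    # then 31 bits starting there become the code
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# TOTP (RFC 6238) just derives the counter from the clock:
#   hotp(key, int(time.time() // 30))
```

With the RFC 4226 test key `12345678901234567890`, this reproduces the published test vectors (e.g. counter 0 gives 755224).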
4
Why stop at 8 digits? 10 digits will be even more secure. Or 12. The output of the HOTP algorithm is 160 bits so you could go all the way to about 48 digits. Bottom line: 6 digits is secure enough for most applications and that is all that counts. Any more is inconvenient for the user and slightly more expensive when used in a hardware token (8 digit ...
4
The usual resynchronization method involves getting several consecutive codes from the token and then running the algorithm once with a very large look-ahead window until the set of consecutive codes are found. The number of consecutive codes needed depends on how far off the token is. With a typical token, two codes would suffice to handle a desynch of ...
3
There is no "fresh client" with HOTP. The whole counter business is based on the idea that there is a single client, who maintains his counter which is more-or-less synchronized with the server counter. The synchronization window is just a way to cope with small unsynchronization events which come from realistic situations (e.g. your 3-year-old played with ...
3
It is for user experience reasons, as you surmise, but the security is not compromised as much as you may think. Most implementations use 6 digit HOTP/TOTP schemes and design their implementation of the scheme to give them a security level they are comfortable with. For HOTP, the key parameter that allows 6 digits to be secure enough is the throttling ...
3
From RFC 4226: 7.4. Resynchronization of the Counter Although the server's counter value is only incremented after a successful HOTP authentication, the counter on the token is incremented every time a new HOTP is requested by the user. Because of this, the counter values on the server and on the token might be out of synchronization. ...
2
Yes, HOTP can include a PIN/Password also. If you check RFC 4226, it says Composite Shared Secrets It may be desirable to include additional authentication factors in the shared secret K. These additional factors can consist of any data known at the token but not easily obtained by others. Examples of such data include: PIN or ...
2
I agree that Gilles' interpretation in the comments is the only one that makes sense; the RFC clearly contains an editorial error, and should read either (emphasis indicates corrections): "If the value calculated by the authentication server matches the value calculated by the client, then the HOTP value is validated." or: "If the value received by ...
2
So I was finally able to work this out. The pin code isn't important and is simply used to decrypt the activation code locally (not sure why our server asks for it in that case). The activation code is a base32 encoding of a seed where every fifth character acts as a checksum for the previous four. The seed is then passed through KDF1 to generate the ...
1
It is very hard to estimate the time it takes for an attacker to brute-force a password. This is because HMAC-SHA256 can be calculated billions of times per second on some devices, while other devices take a millisecond to calculate one HMAC. BTW, I understood that the password is actually a 128-bit cryptographic key, i.e. it may contain any ...
1
It looks like unnecessary window dressing to me. As far as I can see, there is absolutely no reason to use this scheme instead of just choosing the first four bytes of the hash. It looks like unnecessary complexity -- or, as fgrieu put it, over-engineering. If the hash function is any good, then all this should be unnecessary. And if the hash function ...
1
RFC 4226, section 7.5 defines two shared key generation schemes: deterministic and random. I would suggest that you use the deterministic scheme, which only requires the server to store a single "master key": "Deterministic Generation A possible strategy is to derive the shared secrets from a master secret. The master secret will be stored at ...
1
As I understand, the user's token normally can't be reset (without destroying it). So, the assistance would consist in either giving a new token to the user (and declaring the old one invalid), or in stepping the server ahead until it matches again (i.e. running the algorithm once with a really large window size).
|
|
Question
# A tin of oil was $\dfrac{4}{5}$ full. When 6 bottles of oil was taken out and 4 bottles of oil was poured into it was $\dfrac{3}{4}$ full. How many bottles of oil can the tin contain?(a) 10(b) 20(c) 30(d) 40
Hint: Let x be the number of bottles of oil the tin can contain. The tin goes from $\dfrac{4}{5}$ full to $\dfrac{3}{4}$ full when 6 bottles are taken out and 4 bottles are poured in, i.e. a net removal of $6-4=2$ bottles. So $\dfrac{4}{5}$ of x minus $\dfrac{3}{4}$ of x equals 2; solve this equation to find x.
Let x be the number of bottles of oil the tin can contain. We are given that the tin is initially $\dfrac{4}{5}$ full.
Then 6 bottles of oil are taken out and 4 bottles are poured in, after which the tin is $\dfrac{3}{4}$ full.
From these two conditions, the drop from $\dfrac{4}{5}$ of x to $\dfrac{3}{4}$ of x equals the net removal of $6-4=2$ bottles:
$\dfrac{4}{5}x-\dfrac{3}{4}x=6-4$
\begin{align} & \dfrac{16x-15x}{20}=2 \\ & \Rightarrow \dfrac{x}{20}=2 \\ & \Rightarrow x=40 \\ \end{align}
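A quick check of the answer, option (d), using exact fractions:

```python
from fractions import Fraction

x = 40  # bottles the tin can contain
# going from 4/5 full to 3/4 full should equal the net removal of 2 bottles
net_removed = Fraction(4, 5) * x - Fraction(3, 4) * x
```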
Note: To solve these types of questions we must be careful with the calculations, as a slip in arithmetic is the only mistake we are likely to make here; the question itself is simple and straightforward. Moreover, we must be aware that $\dfrac{4}{5}$ is greater than $\dfrac{3}{4}$, which is why the left-hand side of the equation comes out positive.
|
|
~/imallett (Ian Mallett)
Energy-Conserving Blinn-Phong Specular
For BRDFs in graphics, we typically ignore energy conservation, in favor of energy normalization (that is, energy is lost, but at least it isn't gained). Most people are satisfied with that because it models the shadowing and masking effects in microfacet theory. Unfortunately, it completely ignores multiple scattering. The correct answer, then, is somewhere in-between the extremely lossy models typically used and an idealized model that loses none.
This page is my attempt to derive such an idealized model for the Blinn-Phong specular BRDF. I do not succeed. To the first person who can solve the final integral (even just the inner one, as this would make numerically integrating the outer one easier) in closed form (no sums/integrals/etc.) in terms of elementary functions (sin/cos/etc., no hypergeometric operators/etc., although constants are okay), I will send \$50. The same bounty will be given if the correct answer to the entire problem is obtained a different way.
The (scalar variant of the) Blinn-Phong specular BRDF (at a point $\vec{x}$ with normal $\vec{N}$, and with energy-conservation factor $\alpha$ and specular exponent $n$) is:
$f_r(\vec{\omega}_i,\vec{\omega}_o) ~:=~ \alpha (\vec{H} \cdot \vec{N})^n = \alpha \left( \frac{ \vec{\omega}_i + \vec{\omega}_o }{ \left \| \vec{\omega}_i + \vec{\omega}_o \right \| }\cdot\vec{N} \right)^n$
We want to solve for $\alpha$.
First, energy-conservation means the following:
$\int_{\Omega_{\vec{N}}} f_r(\vec{\omega}_i,\vec{\omega}_o) \left( \vec{N} \cdot \vec{\omega}_i \right) d \vec{\omega}_i = 1$
We can now substitute in and rearrange:
$\int_{\Omega_{\vec{N}}} \alpha (\vec{H} \cdot \vec{N})^n \left( \vec{N} \cdot \vec{\omega}_i \right) d \vec{\omega}_i = 1\\ \int_{\Omega_{\vec{N}}} \left( \frac{ \vec{\omega}_i + \vec{\omega}_o }{ \left \| \vec{\omega}_i + \vec{\omega}_o \right \| }\cdot\vec{N} \right)^n \left( \vec{N} \cdot \vec{\omega}_i \right) d \vec{\omega}_i = \frac{1}{\alpha}$
To compute $\left \| \vec{\omega}_i + \vec{\omega}_o \right \|$ is actually surprisingly simple. We can consider both vectors to lie in a plane, separated by angle $\gamma$. Without loss of generality, for purposes of computing the length, we can move to 2D: suppose $\vec{\omega}_o:=<0,1>$ and $\vec{\omega}_i:=<\sin(\gamma),\cos(\gamma)>$. Their sum is $<\sin(\gamma),1+\cos(\gamma)>$ and the length of that sum is (Pythagorean theorem):
$\left \| \vec{\omega}_i + \vec{\omega}_o \right \| = \sqrt{ \sin^2(\gamma) + (1+\cos(\gamma))^2 } = \sqrt{ 2 + 2 \cos(\gamma)} = 2 \cos \left(\frac{\gamma}{2}\right)$
Since $\gamma$ is just $\arccos(\vec{\omega}_i \cdot \vec{\omega}_o)$, we have:
$\left \| \vec{\omega}_i + \vec{\omega}_o \right \| = 2 \cos \left(\frac{\gamma}{2}\right) = 2 \cos \left(\frac{1}{2} \arccos(\vec{\omega}_i \cdot \vec{\omega}_o) \right) = 2 \sqrt{\frac{\vec{\omega}_i \cdot \vec{\omega}_o + 1}{2}} = \sqrt{2} \sqrt{\vec{\omega}_i \cdot \vec{\omega}_o + 1}$
Substitute this into the integral:
$\frac{1}{\alpha} = \int_{\Omega_{\vec{N}}} \left( \frac{ \vec{\omega}_i + \vec{\omega}_o }{ \sqrt{2} \sqrt{\vec{\omega}_i \cdot \vec{\omega}_o + 1} }\cdot\vec{N} \right)^n \left( \vec{N} \cdot \vec{\omega}_i \right) d \vec{\omega}_i$
At this point, you have the basic formulas. Now it's time to try to solve them.
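Although no closed form is known, $\frac{1}{\alpha}$ is easy to estimate numerically, which is useful for checking any candidate solution. The sketch below (my own, not part of the derivation) Monte Carlo integrates the left-hand side over the hemisphere; for $n=0$ it must reproduce $\int_{\Omega_{\vec{N}}} (\vec{N}\cdot\vec{\omega}_i)\,d\vec{\omega}_i = \pi$, which makes a handy sanity check.

```python
import math
import random

def inv_alpha(n, theta_o, samples=100_000, seed=0):
    """Monte Carlo estimate of 1/alpha = integral of (H.N)^n (N.w_i) over the hemisphere."""
    rng = random.Random(seed)
    wo = (math.sin(theta_o), 0.0, math.cos(theta_o))
    total = 0.0
    for _ in range(samples):
        # uniform direction on the upper hemisphere (N = +z)
        z = rng.random()
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        wi = (r * math.cos(phi), r * math.sin(phi), z)
        hx, hy, hz = wi[0] + wo[0], wi[1] + wo[1], wi[2] + wo[2]
        norm = math.sqrt(hx * hx + hy * hy + hz * hz)
        total += (hz / norm) ** n * wi[2]   # (H.N)^n (N.w_i)
    return 2.0 * math.pi * total / samples  # hemisphere pdf = 1/(2*pi)
```

Any closed-form $\alpha$ would have to match these estimates across $n$ and $\theta_o$.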
The attempt I pushed most on uses spherical coordinates. First, (re-)define $\vec{\omega}_o$ and $\vec{\omega}_i$ in spherical coordinates:
$\vec{\omega}_o = \begin{bmatrix} x_o = \sqrt{1-y_o^2}\\ y_o = \vec{N} \cdot \vec{\omega}_o\\ 0 \end{bmatrix},~~~~ \vec{\omega}_i = \begin{bmatrix} \cos(\Delta\theta)~\cdot &\!\!\!\!\! \cos(\phi)\\ &\!\!\!\!\! \sin(\phi)\\ \sin(\Delta\theta)~\cdot &\!\!\!\!\! \cos(\phi) \end{bmatrix}$
Convert to spherical coordinates, substitute in, and simplify:
\begin{align*} \frac{1}{\alpha} &= \int_0^{\pi/2}\int_0^{2\pi} \left( \frac{ \vec{\omega}_i + \vec{\omega}_o }{ \sqrt{2} \sqrt{\vec{\omega}_i \cdot \vec{\omega}_o + 1} }\cdot\vec{N} \right)^n \left( \vec{N} \cdot \vec{\omega}_i \right) \cdot \cos(\phi) \cdot d \Delta\theta \cdot d \phi \\ &= \int_0^{\pi/2}\int_0^{2\pi} \left( \frac{ \cos(\phi) + y_o }{ \sqrt{2} \sqrt{\smash[b]{ \cos(\Delta\theta)\underbrace{\cos(\phi)\sqrt{1-y_o^2}}_A + \sin(\phi) y_o + 1 }} } \right)^n \cos^2(\phi) \cdot d \Delta\theta \cdot d \phi \\[14pt] &= \int_0^{\pi/2} \cos^2(\phi) \int_0^{2\pi} \left( \frac{ \cos(\phi) + y_o }{ \sqrt{2 A} \sqrt{\smash[b]{ \cos(\Delta\theta) + \underbrace{\frac{\sin(\phi) y_o + 1}{A}}_B }} } \right)^n \cdot d \Delta\theta \cdot d \phi \\[14pt] &= \int_0^{\pi/2} \left(\frac{\cos(\phi)+y_o}{\sqrt{2 A}}\right)^n \cos^2(\phi) \int_0^{2\pi}\left( \cos(\Delta\theta) + B \right)^{-n/2} \cdot d \Delta\theta \cdot d \phi \end{align*}
The inner integral here is surprisingly difficult, even if you assume $n/2$ is an integer. If you can solve this integral, you get the prize mentioned above!
COMMENTS
Ian Mallett - Contact - Donate - 2018 -
|
|
# Central Tendency | Mean | Example
Mean, median and mode are methods that summarize statistical data numerically. A given set of statistical data can be represented either graphically or numerically. Data can be described graphically with a bar graph, line graph, pie chart or frequency curve; all of these convey information about the behavior of the data, for example through the variation and peaks of the graph.
## Mean
The statistical mean gives the central value of the data. It is simply the average of the given numbers, and it is most commonly used for finding averages, for example: a class average, the average height of a group of students, or the average daily temperature during the month of August.
It is represented by $\bar{x}$ and read as “x-bar”.
Mathematically, it can be written as;
$\bar{x}= \frac{x_1+x_2+x_3+\cdots+x_n}{n}$
where $x_1, x_2, x_3, \ldots, x_n$ are the values being summed in the numerator and $n$ in the denominator is the total number of values.
### Example
A sample consists of 6 numbers. The numbers are 7, 12, 15, 13, 11 and 6. Find the mean of given data.
Solution
By using mean formula.
$\bar{x}= \frac{7+12+15+13+11+6}{6}$
$\bar{x}\approx 10.67$
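The same computation as a quick script:

```python
from statistics import mean

data = [7, 12, 15, 13, 11, 6]
x_bar = mean(data)  # 64 / 6, approximately 10.67
```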
|
|
Total displacement of sail boat
1. Sep 25, 2006
needhelp83
A sailboat tacking against the wind moves as follows:
5.8 km at 45 degrees east of north, and then
4.5 km at 50 degrees west of north.
The entire motion takes 1 h 15 min
What is the total displacement?
What is the average velocity of this section of the trip?
What is the speed, if it is assumed to be constant?
Total Displacement
/\x = 5.8*sin(45) + 4.5*sin(-50)= 6.116 km
/\y = 5.8*cos(45) + 4.5*cos(-50)= 7.389 km
/\d = sqrt(/\x^2 + /\y^2) = 9.592 km = 9592 m
Average Velocity
v= r/t
How do I figure out the rest from this information? Any help would be appreciated.
2. Sep 25, 2006
big man
The speed (assuming it is constant through the trip) is just the total distance travelled divided by the time it took to travel that distance.
Average velocity is displacement divided by the time.
3. Sep 25, 2006
needhelp83
Average velocity=displacement/time
Average velocity=9.592 km/1.25 h=7.623 km/h
4. Sep 25, 2006
big man
Yeah, that's the right formula.
However, looking at your calculation for displacement I don't think it's right. I don't think you're going to have a right angle triangle with those angles. I think that you will have to use the law of cosines to calculate displacement.
5. Sep 25, 2006
junior_J
hmmm .... Im new to vectors (and Physics as a whole) ..... but i did manage to do some calculations using the cosine law and it seems that big man is correct . The displacement should be around 10.3 Kms . As far as the average velocity is concerned ... well its about 2.3 ms^-1 . It says that the speed is assumed to be constant , so ... use this equation s = ut where s = total distance , t = total time and u = constant speed ....
BTW which book are u going through ??
6. Sep 25, 2006
junior_J
sorry i think i got it wrong .... it said west of north damn !! anyways i think u get the picture right ?
7. Sep 25, 2006
big man
Welcome junior_j!!
I actually calculated the displacement to be roughly 7 km.
The angle opposite the displacement line is 85 degrees. Is that what you got?
EDIT: haha that was a quick post. Disregard this post ;)
8. Sep 25, 2006
junior_J
Now that you mentioned it, I'm doing it again :)
Yes, those are what I got: the angle opposite the resultant vector is 85 degrees, and the resultant displacement is 7 km.
9. Sep 25, 2006
needhelp83
Alright here's another shot...
Total Displacement
180-angle A-angle B=angle C
180-45-50=85
v3^2=v1^2+v2^2-2(v1)(v2)cos(C)
v3^2=5.8^2+4.5^2-(52.2)cos(85)
v3^2=49.341
v3=7.024 km
Average velocity=displacement/time
Average velocity=7.024 km/1.25 h=5.619 km/h
How would I calculate the speed?
Constant Acceleration
10. Sep 25, 2006
big man
speed = distance/time
So 4.5 + 5.8 = 10.3 km. That is your total distance covered in the given time interval.
Edit: Forgot to say that your above working is correct as well for the average velocity.
11. Sep 25, 2006
needhelp83
speed=distance/time
speed = (4.5 km + 5.8 km)/1.25 h = 8.24 km/h
12. Sep 25, 2006
big man
Yup that's it!!!
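For anyone who wants to check the thread's numbers, the whole computation can be sketched in a few lines of Python using the component method (east = x, north = y); it reproduces the 7.02 km displacement, 5.62 km/h average velocity, and 8.24 km/h speed found above:

```python
import math

# Leg 1: 5.8 km at 45 degrees east of north; leg 2: 4.5 km at 50 degrees west of north.
dx = 5.8 * math.sin(math.radians(45)) - 4.5 * math.sin(math.radians(50))
dy = 5.8 * math.cos(math.radians(45)) + 4.5 * math.cos(math.radians(50))
displacement = math.hypot(dx, dy)       # magnitude of total displacement, ~7.02 km

t = 1.25                                # 1 h 15 min expressed in hours
avg_velocity = displacement / t         # displacement / time, ~5.62 km/h
speed = (5.8 + 4.5) / t                 # total distance / time = 8.24 km/h

print(round(displacement, 3), round(avg_velocity, 3), round(speed, 2))
```

Note that the component method agrees with the law-of-cosines result: handling the signs of the east/west components correctly is equivalent to using the 85-degree angle between the two legs.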
# Learning to Predict Vehicle Trajectories with Model-based Planning CoRL 2021
Hong Kong University of Science and Technology
### Abstract
Predicting the future trajectories of on-road vehicles is critical for autonomous driving. In this paper, we introduce a novel prediction framework called PRIME, which stands for Prediction with Model-based Planning. Unlike recent prediction works that use neural networks to model scene context and produce unconstrained trajectories, PRIME generates accurate, feasibility-guaranteed future trajectory predictions: it guarantees trajectory feasibility by exploiting a model-based generator that produces future trajectories under explicit constraints, and it achieves accurate multimodal prediction by using a learning-based evaluator to select among them. We conduct experiments on the large-scale Argoverse Motion Forecasting Benchmark, where PRIME outperforms state-of-the-art methods in prediction accuracy, feasibility, and robustness under imperfect tracking.
(PRIME had been ranked 1st on the Argoverse Motion Forecasting Challenge until March 2021.)
### Motivation & Key idea
In traffic scenarios, most vehicles operate under their inherent kinematic constraints (e.g., non-holonomic motion) while complying with the road structure (e.g., lane connectivity, static obstacles) and semantic information (e.g., traffic lights, speed limits). All these kinematic and environmental constraints explicitly regularize the trajectory space.
However, most existing prediction approaches model traffic agents as points and produce sequences of future positions without constraints. Such constraint-free predictions may be inconsistent with kinematic or environmental characteristics, introducing massive uncertainty into the predicted future states. Consequently, the downstream planning module inevitably bears extra burden and may even suffer the "freezing robot problem."
Moreover, recent learning-based prediction models follow the typical paradigm of generating trajectory predictions by network regression, which relies heavily on long-term tracking results. But in dense driving scenarios where the target is momentarily occluded or suddenly appears within the sensing range, tracking results are discontinuous or too short, and prediction accuracy degrades under such imperfect tracking.
To overcome these challenges, we propose a novel prediction architecture called PRIME. The key idea is to exploit a model-based motion planner as the prediction generator, sampling feasible future trajectories under explicit constraints, together with a deep neural network as the prediction evaluator, modeling implicit interactions and selecting future trajectories by scoring. This architecture yields accurate, feasible, and robust trajectory predictions.
More specifically, the model-based generator (left) samples the target's feasible future trajectories $$\mathcal{T}$$ from its real-time state $$\mathbf{s}_{tar}^0$$ and the map $$\mathcal{M}$$, explicitly imposing kinematic and environmental constraints to guarantee trajectory feasibility; the learning-based evaluator (right) receives the feasible trajectories $$\mathcal{T}$$ and all observed tracks $$\mathcal{S}$$ to model the implicit interactions among traffic agents, and selects a final set of feasible trajectories $$\mathcal{T}_{tar}\subset\mathcal{T}$$ as the prediction result.
### Framework Overview
The model-based generator searches reachable paths $$\mathcal{P}$$ through the map with depth-first search and samples a set of feasible future trajectories $$\mathcal{T}$$ with a Frenet planner. This part is detailed in our paper.
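As an illustration of what Frenet-frame trajectory sampling can look like, the sketch below enumerates terminal lateral offsets and terminal speeds along a reference path, fits smooth quintic lateral profiles, and integrates simple longitudinal motion. This is a minimal toy, not the paper's actual planner; every function name, boundary condition, and sampling grid here is an assumption made for illustration:

```python
import itertools
import numpy as np

def quintic_lateral(d0, dT, T):
    """Quintic lateral profile d(t) with d(0)=d0, d(T)=dT and zero
    velocity/acceleration at both ends (an illustrative simplification)."""
    # d(t) = d0 + a3*t^3 + a4*t^4 + a5*t^5; solve the 3x3 boundary system.
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([dT - d0, 0.0, 0.0])
    a3, a4, a5 = np.linalg.solve(A, b)
    return lambda t: d0 + a3*t**3 + a4*t**4 + a5*t**5

def sample_frenet_trajectories(d0, v0, horizon=3.0, n_steps=30):
    """Enumerate terminal lateral offsets and terminal speeds to get a
    small set of candidate (s, d) trajectories in the Frenet frame."""
    ts = np.linspace(0.0, horizon, n_steps)
    candidates = []
    for d_end, v_end in itertools.product([-1.0, 0.0, 1.0],
                                          [0.5 * v0, v0, 1.5 * v0]):
        d_profile = quintic_lateral(d0, d_end, horizon)
        s = v0 * ts + 0.5 * ((v_end - v0) / horizon) * ts**2  # constant accel
        d = np.array([d_profile(t) for t in ts])
        candidates.append(np.stack([s, d], axis=1))            # (n_steps, 2)
    return candidates

trajs = sample_frenet_trajectories(d0=0.2, v0=8.0)  # 9 candidate trajectories
```

In a real planner, each candidate would additionally be checked against kinematic limits (curvature, acceleration) and map constraints before being passed to the evaluator.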
The learning-based evaluator first encodes scene context given by $$(\mathcal{P}, \mathcal{T}, \mathcal{S})$$, including $$l$$ paths in $$\mathcal{P}$$, $$(m+1)$$ history tracks in $$\mathcal{S}$$ and $$n$$ future trajectories in $$\mathcal{T}$$. The implicit agent-map interactions are learned in the subsequent attention modules: P2T and P2F propagate the spatial information of each reference path $$\mathcal{P}_i$$ into history tracks and corresponding future trajectories, and A2A takes track tensors from P2T to capture the multi-agent interactions. As the path-based Frenet coordinate is used in our dual spatial representation, P2T, P2F, and A2A operate for each path, while F2F fuses all the future trajectories processed by P2F to obtain a global understanding of the reachable space. Subsequently, each feasible trajectory $$\mathcal{T}_{j,k}$$ can query its track tensor $$\mathbf{X}_j(\mathbf{s}_{tar})$$ from P2T, interaction tensor $$\mathbf{Y}_j(\mathbf{s}_{tar})$$ from A2A and future tensor $$\mathbf{Z}(\mathcal{T}_{j,k})$$ from F2F, and it is scored by feeding the concatenation of these tensors to fully-connected layers. Finally, the evaluator ranks all feasible future trajectories in $$\mathcal{T}$$ by score and outputs a final set of $$K$$ predicted trajectories.
### Qualitative Results
Qualitative results under various scenarios on the Argoverse validation set. The model-based generator produces the set of feasibility-guaranteed future trajectories $$\mathcal{T}$$ (blue), which regularizes the target vehicle's future trajectory space well. The learning-based evaluator selects $$K$$ trajectories from $$\mathcal{T}$$ as multimodal prediction results (red), where the depth of red indicates their probability.
### [Supp. 1] Comparison with Fully Learning-based Prediction
Compared with mainstream learning-based methods that generate unconstrained trajectory predictions with neural networks, the main difference of our proposed PRIME framework is that it explicitly constrains the prediction space and thereby ensures trajectory feasibility. Here we use LaneGCN, an open-source model among the current state of the art, as a representative of fully learning-based prediction models, and demonstrate some common failures of kinematically and environmentally infeasible predictions below.
Due to kinematic constraints, vehicles cannot take a sudden turn at high speed (1st row in Fig. 6) or reverse their moving direction (2nd row in Fig. 6). Likewise, predictions that turn across lane boundaries (1st row in Fig. 7) or head into oncoming lanes (2nd row in Fig. 7) violate environmental constraints. Moreover, the counter-intuitive bidirectional trajectories predicted by LaneGCN (2nd row in Fig. 7) also reveal that fully learning-based prediction relies on relatively long tracks for regressing trajectories and may degrade given only short tracks.
In some of the above examples, PRIME and LaneGCN appear to show comparable performance when evaluated by minADE$$_6$$ and minFDE$$_6$$, yet their impacts on downstream planning differ greatly. The infeasible trajectories generated by LaneGCN introduce massive uncertainty into the predicted future states, imposing redundant burden on an autonomous vehicle's decision making and motion planning. Especially in dense traffic, where multiple surrounding vehicles must be predicted, the negative impact of infeasible predictions is further aggravated. By contrast, PRIME regularizes the future trajectory space (blue) with the given constraints and thus makes accurate and reasonable future predictions (red).
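For reference, the minADE$$_K$$/minFDE$$_K$$ metrics used above are simple to compute: among $$K$$ candidate trajectories, take the average (respectively, final-step) L2 error of the best candidate against the ground truth. The sketch below assumes plain NumPy arrays and is not the benchmark's official evaluation code:

```python
import numpy as np

def min_ade_fde(preds, gt):
    """minADE_K / minFDE_K for one agent.
    preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth."""
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) pointwise errors
    ade = dists.mean(axis=1).min()   # best candidate's average displacement
    fde = dists[:, -1].min()         # best candidate's final displacement
    return ade, fde

# Toy check: two candidates, one offset laterally by 1 m, one exact.
gt = np.linspace(0, 10, 6)[:, None] * np.array([1.0, 0.0])   # straight line
preds = np.stack([gt + np.array([0.0, 1.0]), gt])
ade, fde = min_ade_fde(preds, gt)
```

Because the minimum is taken over candidates, a model can score well on these metrics even if its other candidates are infeasible, which is exactly the gap discussed above.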
### [Supp. 2] Impacts Caused by Defective Data
Although Argoverse is one of the most recognized benchmarks for trajectory prediction thanks to its high-quality trajectory and map annotation, some of its ground-truth trajectories are not completely correct. The common issues stem from the tracking method used to annotate the data, including position oscillation (Fig. 8(a)) and ID switches (Fig. 8(b)), where the ground-truth trajectory suddenly jumps to a neighboring agent. Such defective cases worsen our method's performance indicators (ADE/FDE-based metrics) in the quantitative evaluation, even though the smooth trajectories predicted by PRIME are clearly more realistic and reasonable.
### [Supp. 3] Failure Cases
We demonstrate failure cases of our method on the Argoverse validation set in Fig. 9. The failures mostly stem from deviations in estimating the target vehicle's current state $$\mathbf{s}_{tar}^0$$. Although the sampling-based strategy in our generator can compensate for inaccurate state estimation to some extent, estimating heading and velocity from sequences of centroid positions becomes intractable under severe data noise. For example, position oscillation in a short history track makes the heading direction hard to estimate, as shown in Fig. 9(a); as a result, the ground-truth trajectory lies outside the span of the resulting prediction space. When the position sequence oscillates too strongly, even the velocity estimate is affected: as exemplified in Fig. 9(b), the future trajectory space fails to cover the ground-truth trajectory because the target's low velocity is estimated inaccurately, leading to a relatively large displacement error in the prediction results.
Nonetheless, state estimation accuracy could be improved by incorporating more information. For instance, the vehicle's bounding box from detection provides geometric information beyond discrete positions, which would enable more robust and accurate state estimation for prediction targets.
### Runtime Analysis
The inference frequency of our prediction framework depends on the scene complexity, sampling density, and computing power. On an Intel i7-7820X, generating a single trajectory on a single thread takes 0.1~0.2 ms on average. Since each trajectory sample is produced independently, the model-based trajectory generator can be highly parallelized to provide full coverage of the future prediction space with satisfactory real-time performance. The learning-based evaluator is a lightweight network with only 1.02 million parameters; its inference time on an NVIDIA 2080 Ti is 8~12 ms. Overall, the whole PRIME framework satisfies the real-time requirements of autonomous driving.
(Note: the current C++ implementation of the model-based part runs in one tenth the time of the previous Python implementation, i.e., sampling 1k trajectories on a single thread takes ~10 ms.)
### BibTeX
@inproceedings{song2021learning,
title={Learning to Predict Vehicle Trajectories with Model-based Planning},
author={Haoran Song and Di Luan and Wenchao Ding and Michael Y Wang and Qifeng Chen},
booktitle={5th Annual Conference on Robot Learning},
year={2021},
}
### Monte Carlo Gridworld
Easily share your publications and get them in front of Issuu’s. Menu; Academics ICSE ; 1st Standard; 2nd Standard. This is a problem that can occur with some deterministic policies in the gridworld environment. Monte Carlo Methods. m and updateVfield. The members of the production and cast of each selected programme will be invited to Monte-Carlo to present their work through premiere public screenings, conferences, press activities and to claim to win one of the prestigious Golden Nymphs. 强化学习系列(六):时间差分算法(Temporal-Difference Learning) 7027 2018-07-28 一、前言 在强化学习系列(五):蒙特卡罗方法(Monte Carlo)中,我们提到了求解环境模型未知MDP的方法——Monte Carlo,但该方法是每个episode 更新一次(episode-by-episode)。. 1 INTRODUCTION Monte Carlo Tree Search (MCTS) is a best-first search which uses Monte Carlo methods to probabilistically sample actions in a given. Basically we can produce n simulations starting from random points of the grid, and let the robot move randomly to the four directions until a termination state is achieved. 1, but use action values (see section 5. Race Track. Thomas Gabor, Jan Peter, Thomy Phan, Christian Meyer, and Claudia Linnhoff-Popien, „Subgoal-Based Temporal Abstraction in Monte-Carlo Tree Search“, in 28th International Joint Conference on Artificial Intelligence (IJCAI ’19), 2019, pp. ca, Canada's largest bookstore. Monte Carlo: requires just the state and action space SARSA Example: Windy Gridworld – reward = -1 for all transitions until termination at goal state. Deep Learning using Tensorflow Training Deep Learning using Tensorflow Course: Opensource since Nov,2015. With this book, you'll explore the important RL concepts and the implementation of algorithms in PyTorch 1. If nsamples are taken, an estimate for the degree of factoredness introduced in Equa-tion 1 for a private utility gis given by: 1 n d Xn i=1 u[(g(A) g(A A ;t + Ri)) (G(A) G(A A ;t + Ri))]. We all learn by interacting with the world around us, constantly experimenting and interpreting the results. 
It did so without learning from games played by humans. Rl gridworld. The data for the learning curves is generated as fol-lows: after every 1000 steps (actions) the greedy pol-icy is evaluated offline to generate a problem specific performance metric. Search; Courses. The Monte Carlo approach to solve the gridworld task is somewhat naive but effective. temporal-difference learning. 2 Monte-Carlo Control 3 On-Policy Temporal-Di erence Learning 4 O -Policy Learning 5 Summary. Stanford Libraries' official online search tool for books, media, journals, databases, government documents and more. The multi-armed bandit problem and the explore-exploit dilemma Ways to calculate means and moving averages and their relationship to stochastic gradient descent Markov Decision Processes (MDPs) Dynamic Programming Monte Carlo Temporal Difference (TD) Learning (Q-Learning and SARSA) Approximation Methods (i. –This is what a batch Monte Carlo method gets •If we consider the sequentiality of the problem, then we would set V(A)=. 3 (Lisp) Chapter 5: Monte Carlo Methods. Open source interface to reinforcement learning tasks. Its only input features are the black and white stones from the board. via Monte Carlo). mp4 8,095 KB 045 Policy Evaluation in Windy Gridworld. TD learning solves some of the problem of MC learning and in the conclusions of the second post I described one of these problems. The third major group of methods in reinforcement learning is called Temporal Differencing (TD). I will briefly review classical large sample approximations to posterior distributions (e. ” With an image like this, we dare not disagree. The data for the learning curves is generated as fol-lows: after every 1000 steps (actions) the greedy pol-icy is evaluated offline to generate a problem specific performance metric. 
Monte Carlo Monte Carlo Intro (3:10) Monte Carlo Policy Evaluation (5:45) Monte Carlo Policy Evaluation in Code (3:35) Policy Evaluation in Windy Gridworld (3:38) Monte Carlo Control (5:59) Monte Carlo Control in Code (4:04) Monte Carlo Control without Exploring Starts (2:58) Monte Carlo Control without Exploring Starts in Code (2:51) Monte. Barto: Reinforcement Learning: An Introduction 9 Advantages of TD Learning. The actions are the standard four—up, down, right, and left—but in the middle region the resultant next states are shifted upward by a. Dynamic programming methods are well developed mathematically, but require a. Revving it up at work, good progress on Udacity and a casual 20K practice run! | Weekly Report 91 28 May 2018. A new Approach for Quantifying Root-Reinforcement of Streambanks: the Rip Root Model. Basically we can produce n simulations starting from random points of the grid, and let the robot move randomly to the four directions until a termination state is achieved. In part 3 we do some simple Q learning to teach the agent to play cart pole. Course materials: Lecture: Slides-1a, Slides-1b, Background reading: C. Monte-Carlo Introduction Dans cette partie, nous voyons comment associer l'idée de la programmation dynamique avec l'idée de Monte-Carlo (MC). Monte Carlo Methods. Your implementation of Monte Carlo Exploring Starts algorithm appears to be working as designed. This reinforcement process can be applied to computer programs allowing them to solve more complex problems that classical programming cannot. new: Browser Search Plugins Login Register Register. MCTS incrementally builds up a search tree, which stores the visit countsN(s t), N s t;a t, and the val-uesV (s t) andQ(s t;a t) for each simulated state and action. Humans learn best from feedback—we are encouraged to take actions that lead to positive results while deterred by decisions with negative consequences. 1, but use action values (see section 5. 
X Reinforcement Learning Cookbook, Deep Learning With R For Beginners, Hands-On Deep Learning Architectures With Python e Python Machine Learning By Example. Sutton and A. MCTS Monte-Carlo Tree Search [1, 2] has had much publicity recently due to their successful application in solving Go [13]. CSDN提供最新最全的ballade2012信息,主要包含:ballade2012博客、ballade2012论坛,ballade2012问答、ballade2012资源了解最新最全的ballade2012就上CSDN个人信息中心. • Dynamic Programming & Monte Carlo methods on Gambler’s problem • Temporal-Difference methods on Windy Gridworld problem • Function Approximation and TD(0) on Random Walk problem • Semi-gradient Sarsa on Mountain Car problem. Monte Carlo Methods We're working with a small grid world example, with an agent who would like to make all the way to the state and the bottom right corner as qiukly as possible. Offline Monte Carlo Tree Search. You can run your UCB_QLearningAgent on both the gridworld and PacMan domains with the following commands. For example, if the policy took the left action in the start state, it would never terminate. Monte Carlo 방식은 모든 Action에 대한 Value를 평균을 내면 그 state의 value를 알 수 있다는 아이디어로 시작되었다. 本文共 1484 个字,阅读需 4分钟. Lastly, we take the Blackjack challenge and deploy model free algorithms that leverage Monte Carlo methods and Temporal Difference (TD, more specifically SARSA) techniques. Humans learn best from feedback—we are encouraged to take actions that lead to positive results while deterred by decisions with negative consequences. Each class of methods has its strengths and weaknesses. A simulação de Monte Carlo é comum em análises de mercado, sendo muito usada, por exemplo, para se estimar resultados futuros de um projetos, investimentos ou negócios. Abstract (Framed for a general scientific audience): The gridworld is the canonical example for Reinforcement Learning from exact state-transition dynamics and discrete actions. Q&A for students, researchers and practitioners of computer science. 
强化学习系列(六):时间差分算法(Temporal-Difference Learning) 7027 2018-07-28 一、前言 在强化学习系列(五):蒙特卡罗方法(Monte Carlo)中,我们提到了求解环境模型未知MDP的方法——Monte Carlo,但该方法是每个episode 更新一次(episode-by-episode)。. The actions are the standard four—up, down, right, and left—but in the middle region the resultant next states are shifted upward by a. python package for fast shortest path computation on 2D grid or polygon maps. Book recipes, as well as real-world examples will help you master various RL techniques such as dynamic programming, Monte Carlo simulations, time difference and queue learning for you will also find an overview for specific application art techniques. 4-5 Central Limit Theorem Applied to 4-Way Gridworld One-Step Errors. The recipes in the book, along with real-world examples, will help you master various RL techniques, such as dynamic programming, Monte Carlo simulations, temporal difference, and Q-learning. Monte-Carlo Policy Gradient. 5의 Monte-Carlo와 같이 model-free한 방법으로써, Temporal Difference Methods에 대해 다루겠습니다. Monte-Carlo tree search inset shows sequences of actions taken during 1 simulation depth (e. action_space. Model-free value estimation 1 - Monte Carlo 기법 : Approximated DP (ADP) 기법 중 하나인 몬테카를로 기법을 설명하고 Grid world에서 MC를 구현 및 결과 설명 3. Learn Hacking, Photoshop, Coding, Programming, IT & Software, Marketing, Music and more. Monte Carlo Reinforcement Learning. For more information on these agents, see Q-Learning Agents and SARSA Agents. 5: Windy Gridworld Figure 6. This article is a continuation of the previous article, which was on-policy Monte Carlo methods. Your implementation of Monte Carlo Exploring Starts algorithm appears to be working as designed. 2 words related to Monte Carlo: Monaco, Principality of Monaco. Reinforcement Learning Algorithms with Python: Learn, understand, and develop smart algorithms for addressing AI challenges [Lonza, Andrea] on Amazon. The vector r. 
It’s a technique that simply interpolates (using the coefficient λ \lambda λ ) between Monte Carlo and TD updates In the limit λ = 0 \lambda=0 λ. Thomas Stanford CS234: Reinforcement Learning, Guest Lecture May 24, 2017. 36MB 04 Markov Decision Proccesses/026 The Markov Property. The Logic of Adaptive Behavior: Knowledge Representation and Algorithms for Adaptive Sequential Decision Making under Uncertainty in First-Order and Relational Domains. The book Monte Carlo Techniques in Radiation Therapy (CRC Press, Taylor & Francis, Seco and Verhaegen) can be ordered via this link. Jean-Gabriel Domergue. You'll even teach your agents how to navigate Windy Gridworld, a standard exercise for finding the optimal path even with special conditions!. I will briefly review classical large sample approximations to posterior distributions (e. There is one dilemma that all…. Dynamic Programming: Policy evaluation and policy iteration algorithms with gridworld and supply chain problems. See the complete profile on LinkedIn and discover Wangyu (Castiel)’s connections and jobs at similar companies. Monte-Carlo가 value function으로 policy를 improve하려면 MDP model를 알아야 하는데, 이는 mode-free method가 되지 않는다. Implement the MC algorithm for policy evaluation in Figure 5. 75 –This is correct for the maximum likelihood estimate of a Markov model generating the data –i. Issuu is a digital publishing platform that makes it simple to publish magazines, catalogs, newspapers, books, and more online. python gridworld. Menu; Academics ICSE ; 1st Standard; 2nd Standard. Monte carlo gridworld. Chapter 6: Temporal Difference Learning Objectives of this chapter: Introduce Temporal Difference (TD) learning Focus first on policy evaluation, or prediction, methods. Program schedule of IJCAI 19. Monte Carlo Simulation and Reinforcement Learning Part 1: Introduction to Monte Carlo simulation for RL with two example algorithms playing blackjack. 
"Monte-Carlo tree search as regularized policy optimization", Grill et al 2020 {DM} (AlphaZero/MuZero) (A gridworld with different number of agents present, PPO. This book develops the use of Monte Carlo methods in. sutton 교수의 Reinforcement Learning An Introduction을 읽고 공부하기 3. To increase complexity, we assume that there are obstacles located in different squares of the world. In this exercise you will learn techniques based on Monte Carlo estimators to solve reinforcement learning problems in which you don't know the environmental behavior. Lecture 4: Model-Free Prediction. 机器学习之Grid World的Monte Carlo算法解析. m, state2cells. 9 learning rate • Monte carlo updates vs bootstrapping Start goal. My setting is a 4x4 gridworld where reward is always -1. reset() for _ in range(1000): env. Grokking Deep Reinforcement Learning. 2, using the equiprobable random policy. 12: Racetrack The gridworld is the canonical example for Reinforcement Learning from exact state-transition dynamics and discrete actions. Lecture 5: Model-Free Control On-Policy Temporal-Di erence Learning. Sutton and A. Soap Bubble. 博客 Example3. Offline Monte Carlo Tree Search. With this book, you'll explore the important RL concepts and the implementation of algorithms in PyTorch 1. mp4 7,993 KB Please note that this page does not hosts or makes available any of the listed filenames. 10 shows a standard gridworld, with start and goal states, but with one difference: there is a crosswind upward through the middle of the grid. 3 Monte Carlo Control without Exploring Starts. For each simulation we save the 4 values: (1) the initial state, (2) the action taken. As the course ramps up, it shows you how to use dynamic programming and TensorFlow-based neural networks to solve GridWorld, another OpenAI Gym challenge. GridWorld实训答案. The goal is to find the shortest path from START to END. 
Aliased Gridworld Example Example: Aliased Gridworld (3) An optimalstochasticpolicy will randomly move E or W in grey states ˇ (wall to N and S, move E) = 0:5 ˇ (wall to N and S, move W) = 0:5 It will reach the goal state in a few steps with high probability Policy-based RL can learn the optimal stochastic policy. A policy is a function ˇ: A S!R. Complete policy : The complete expert's policy π E is provided to LPAL. Offline Monte Carlo Tree Search. Published as a conference paper at ICLR 2019 Reward Constrained Policy Optimization Chen Tessler 1, Daniel J. You will learn about core concepts of reinforcement learning, such as Q-learning, Markov models, the Monte-Carlo process, and deep reinforcement learning. Open source interface to reinforcement learning tasks. It's taken $280 million and more than four years, but in March, the famed Hôtel de Paris Monte-Carlo, regarded as one of the world's most luxurious hotels, will debut its dramatic renovation in full. number” in Monte Carlo methods we need to satisfy theorem 1. For more information on these agents, see Q-Learning Agents and SARSA Agents. Der 2014er Roman "Monte Carlo" des belgischen Autors ist 2016 ins Deutsche übersetzt worden – um "De Bewaker" von 2009 hingegen, seinerzeit mit dem Literaturpreis der Europäischen Union. Two main approaches:. 04 Markov Decision Proccesses/025 Gridworld. Monte Carlo Tree Search (MCTS) is a best-first search algorithm that has produced many breakthroughs in AI research. Cliff Walking and other gridworld examples) and a large class of stochastic environments (including Blackjack). Docker allows for creating a single environment that is more likely to work on all systems. In this article the off-policy Monte Carlo methods will be presented. It's taken$280 million and more than four years, but in March, the famed Hôtel de Paris Monte-Carlo, regarded as one of the world's most luxurious hotels, will debut its dramatic renovation in full. 
Monte Carlo Monte Carlo Intro (3:10) Monte Carlo Policy Evaluation (5:45) Monte Carlo Policy Evaluation in Code (3:35) Policy Evaluation in Windy Gridworld (3:38) Monte Carlo Control (5:59) Monte Carlo Control in Code (4:04) Monte Carlo Control without Exploring Starts (2:58) Monte Carlo Control without Exploring Starts in Code (2:51) Monte. Learning control for a communicating mobile robot, on our recent research on machine learning for control of a robot that must, at the same time, learn a map and optimally transmit a data buffer. Complete policy : The complete expert's policy π E is provided to LPAL. We consider the problem of learning to follow a desired trajectory when given a small number of demonstrations from a sub-optimal expert. 5: Windy Gridworld Shown inset below is a standard gridworld, with start and goal states, but with one di↵erence: there is a crosswind running upward through the middle of the grid. This article is a continuation of the previous article, which was on-policy Monte Carlo methods. 042 Monte Carlo Intro. Monte-Carlo (MC): Approximate the true value function. CSDN提供最新最全的ballade2012信息,主要包含:ballade2012博客、ballade2012论坛,ballade2012问答、ballade2012资源了解最新最全的ballade2012就上CSDN个人信息中心. Gridworld! Actions: north, south, east, west; deterministic. Abstract: We propose a simple model for genetic adaptation to a changing environment, describing a fitness landscape characterized by two maxima. DeepMind Pycolab is a customizable gridworld game engine. 81 MB] 046 Monte Carlo Control. Barto: Reinforcement Learning: An Introduction 3 Monte Carlo: TD:! Use V to estimate remaining return n-step TD:. In addition to its ability to function in a wide. Specs 85 Monte Carlo Engine Learnsmart Answer Key Accounting Cow Testes Dissection 2012 Ap Gridworld Solutions Free John Deere 4039 Workshop Manual Manual Hp 48gx. 
As I promised in the second part I will go deeper in model-free reinforcement learning (for prediction and control), giving an overview on Monte Carlo (MC) methods. Monte Carlo methods only learn when an episode terminates. Lecture 4: Model-Free Prediction. just run the agent following the policy the first time that state s is visited in an episode and do following calculation Every-Visit Monte-Carlo policy evaluation. Sutton and A. jl Author JuliaPOMDP. Policy is currently equiprobable randomwalk. Monte Carlo is an unbiased estimator of the value function compared to TD methods. Artificial Intelligence CS 165A Feb27, 2020 Instructor:Prof. , 2012) and we sum together multiple acquisition functions derived from these kernel parameter samples (figure 9). of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 13–17, 2019, IFAAMAS, 3 pages. With this book, you'll explore the important RL concepts and the implementation of algorithms in PyTorch 1. They quickly learn during the episode that such policies are poor, and. For this study we have largely transfered that code over to Python. 2 Monte-Carlo(MC)法をわかりやすく解説 ・モデル法とモデルフリー法のちがい ・MC法による最適状態行動価値関数Q(s,a)の求め方とポイント ・簡易デモ(python):Gridworld (2種類MC法の実行と比較:概念を理解する). How can we compute ? Compute by averaging the observed returns after on the trajectories in which was visited. The figure below is a standard grid-world, with start. 32 Markov Decision Process. In GridWorld, an agent starts off at one square (START) and moves (up, down, left, right) around a 2D rectangular grid of size (x, y) to find a designated square (END). Safe Reinforcement Learning Philip S. Docker allows for creating a single environment that is more likely to work on all systems. Offline Monte Carlo Tree Search. The data for the learning curves is generated as fol-lows: after every 1000 steps (actions) the greedy pol-icy is evaluated offline to generate a problem specific performance metric. 
We present an algorithm that (i) extracts the—initially unknown—desired trajectory from the sub-optimal expert’s demonstrations and (ii) learns a local model suitable for control along the learned trajectory. sample() # your agent here (this takes random actions) observation, reward, done, info = env. 8, Code for Figures 3. ipynb; MC methods learn directly from episodes of experience. 75 MB] 044 Monte Carlo Policy Evaluation in Code. **Udemy - Artificial Intelligence Reinforcement Learning in Python** The complete guide to artificial intelligence and machine learning, prep for deep reinforce Udemy - Artificial Intelligence Reinforcement Learning in Python. Can be used on-line No model of the world necessary. Lớp Math cung cấp một phương thức mang tên random để trả lại một số phẩy động giữa 0. Documentation Help Center. In the first and second post we dissected dynamic programming and Monte Carlo (MC) methods. 3: The optimal policy and state-value function for blackjack found by Monte Carlo ES. 1 INTRODUCTION Monte Carlo Tree Search (MCTS) is a best-first search which uses Monte Carlo methods to probabilistically sample actions in a given. My setting is a 4x4 gridworld where reward is always -1. In this case, of course, don't run it to infinity!. 8 gridworld. It requires move. ! State-value function ! for equiprobable ! random policy;! γ = 0. Monte-Carlo Introduction Dans cette partie, nous voyons comment associer l'idée de la programmation dynamique avec l'idée de Monte-Carlo (MC). In this exercise you will learn techniques based on Monte Carlo estimators to solve reinforcement learning problems in which you don't know the environmental behavior. As the course ramps up, it shows you how to use dynamic programming and TensorFlow-based neural networks to solve GridWorld, another OpenAI Gym challenge. 
Monte-Carlo (MC) methods approximate the true value function directly: the value of a state s is computed by averaging the total rewards of several traces starting from s. The agent still maintains tabular value functions but does not require an environment model and learns from experience. This stands in contrast to the gridworld example seen before, where the full behavior of the environment was known and could be modeled. The same sampling idea powers Monte Carlo Tree Search (MCTS), a best-first search which uses Monte Carlo methods to probabilistically sample actions: MCTS incrementally builds up a search tree, which stores the visit counts N(s_t) and N(s_t, a_t) and the values V(s_t) and Q(s_t, a_t) for each simulated state and action, and it has been applied to a wide range of challenging environments. A benchmark we will return to is the Windy Gridworld: undiscounted, episodic, reward −1 until the goal is reached.
The Monte Carlo method has an advantage over dynamic programming: it does not have to know the transition probabilities and the reward function beforehand. The differences between dynamic programming, Monte Carlo methods, and temporal-difference learning can be teased apart, then tied back together in a unified way; each class of methods has its strengths and weaknesses.
One caveat is that MC methods can only be applied to episodic MDPs, since no update happens until a return is available. Exploration is a second caveat: for example, if a deterministic policy took the left action in the start state, it would never terminate, and plain Monte Carlo control has no built-in recovery from such a policy. Between MC and one-step TD sits TD(λ), a technique that simply interpolates (using the coefficient λ) between Monte Carlo and TD updates: in the limit λ = 1 it recovers Monte Carlo, and at λ = 0 it reduces to one-step TD. Windy Gridworld is a grid problem on a 7 × 10 board in which the agent moves up, right, down, or left at each step. A useful comparison of Monte Carlo updates versus bootstrapping uses a 25 × 25 gridworld with +100 reward for reaching the goal, 0 reward otherwise, and a discount of 0.9.
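The 7 × 10 Windy Gridworld fits in a few lines of code. This is a sketch of the standard layout (start at (3, 0), goal at (3, 7), per-column wind strengths as in the usual textbook example); the function and variable names are my own:

```python
# Windy Gridworld sketch: reward -1 per step, episodic, undiscounted.
# In the middle columns an upward wind shifts the resultant next state.
WIND = [0, 0, 0, 1, 1, 1, 2, 2, 1, 0]  # wind strength per column
ROWS, COLS = 7, 10
START, GOAL = (3, 0), (3, 7)
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    """Apply an action plus the current column's wind.
    Returns (next_state, reward, done)."""
    r, c = state
    dr, dc = ACTIONS[action]
    r = r + dr - WIND[c]          # wind pushes the agent upward
    c = c + dc
    r = min(max(r, 0), ROWS - 1)  # stay on the board
    c = min(max(c, 0), COLS - 1)
    done = (r, c) == GOAL
    return (r, c), -1, done
```

For instance, moving right from (3, 6) lands the agent two rows higher at (1, 7), because the wind strength in column 6 is 2.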
To recap the two estimators: first-visit MC averages the return following only the first visit to s in each episode, while every-visit MC averages the returns following every visit. From prediction we then move to control: policy evaluation with Monte Carlo methods, Monte Carlo control with exploring starts, Monte Carlo control without exploring starts, and off-policy Monte Carlo methods. One warning to keep in mind throughout: the Monte Carlo policy-gradient estimator has extremely high variance.
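Monte Carlo control without exploring starts can be sketched with an ε-greedy policy and incremental averaging of returns. The corridor environment below is hypothetical (states 0 to 4, start at 2, terminal reward +1 only at the right end), chosen so the whole control loop fits in a few lines; it is an illustration of the idea, not anyone's reference implementation:

```python
import random
from collections import defaultdict

ACTIONS = [-1, 1]  # step left or right along the corridor

def run_episode(Q, eps):
    """One episode under the current epsilon-greedy policy."""
    state, traj = 2, []
    while 0 < state < 4:
        if random.random() < eps:
            a = random.choice(ACTIONS)               # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(state, act)])  # exploit
        traj.append((state, a))
        state += a
    return traj, (1.0 if state == 4 else 0.0)        # terminal reward only

def mc_control(episodes=5000, eps=0.1):
    Q = defaultdict(float)
    counts = defaultdict(int)
    for _ in range(episodes):
        traj, G = run_episode(Q, eps)
        for sa in traj:                              # every-visit update
            counts[sa] += 1
            Q[sa] += (G - Q[sa]) / counts[sa]        # incremental mean
    return Q

random.seed(1)
Q = mc_control()
```

After training, the greedy action in the start state should point toward the rewarding end of the corridor; the ε-greedy policy is what lets the agent discover that end in the first place.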
Lastly, we take the Blackjack challenge and deploy model-free algorithms that leverage Monte Carlo methods and temporal-difference techniques (TD, more specifically SARSA). The TD material has its own objectives: introduce temporal-difference learning and focus first on policy evaluation, or prediction, methods.
REINFORCE, a Monte-Carlo policy-gradient method (episodic), is typically demonstrated on the gridworld example from Chapter 13 of Sutton and Barto; Monte-Carlo policy iteration has three problems of its own, which motivate the control refinements above. A concrete prediction exercise is evaluating a random policy in the small gridworld: consider the 4 × 4 gridworld shown below. There is no discounting (γ = 1); states 1 to 14 are not terminal, and the grey states are terminal; all transitions have reward −1, and there are no transitions out of the terminal states; if a transition would lead out of the grid, the agent stays where it is; the policy moves north, south, east, and west with equal probability. Finally, a caution: off-policy Monte-Carlo learning with importance sampling can suffer from infinite variance, which is why it is really a bad idea in practice.
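The random-policy evaluation just described can be reproduced with a short first-visit Monte Carlo sketch. Everything below follows the setup in the text (4 × 4 grid, two terminal corners, reward −1 per transition, equiprobable policy, γ = 1); the code itself is mine, not taken from any particular source:

```python
import random
from collections import defaultdict

SIZE = 4
TERMINALS = {(0, 0), (SIZE - 1, SIZE - 1)}
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # N, S, W, E with equal probability

def step(state, move):
    """Moves that would leave the grid leave the agent where it is."""
    r = min(max(state[0] + move[0], 0), SIZE - 1)
    c = min(max(state[1] + move[1], 0), SIZE - 1)
    return (r, c)

def episode_from(state):
    """(state, reward) pairs under the equiprobable random policy."""
    out = []
    while state not in TERMINALS:
        out.append((state, -1))            # every transition has reward -1
        state = step(state, random.choice(MOVES))
    return out

def first_visit_mc(n_episodes=20000):
    returns = defaultdict(list)
    nonterminal = [(r, c) for r in range(SIZE) for c in range(SIZE)
                   if (r, c) not in TERMINALS]
    for _ in range(n_episodes):
        ep = episode_from(random.choice(nonterminal))
        firsts = {}                        # state -> index of its first visit
        for t, (s, _) in enumerate(ep):
            if s not in firsts:
                firsts[s] = t
        G = 0
        for t in range(len(ep) - 1, -1, -1):  # walk backwards through the episode
            s, r = ep[t]
            G += r                            # gamma = 1
            if firsts[s] == t:                # record G only for the first visit
                returns[s].append(G)
    return {s: sum(g) / len(g) for s, g in returns.items()}

random.seed(0)
V = first_visit_mc()
```

The estimates should approach the known values for this gridworld (−14 for states adjacent to a terminal corner, down to −22 for the far corners), with states nearer a terminal worth more.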
These methods require completing entire episodes before the value function can be updated: MC waits until an episode ends, then uses the return obtained, discounting the rewards collected through time, to update the value of each state visited in that episode.
This material covers Markov decision processes, dynamic programming, Monte Carlo methods, and temporal-difference learning, which introduce the basic principles and key terms of reinforcement learning and set the foundation for more advanced topics. So what is Monte Carlo simulation in general? Also known as the Monte Carlo method (MMC), it is a series of probability calculations that estimate an outcome by repeated random sampling; it is common in market analysis, for example to estimate the future results of a project, an investment, or a business. The Monte Carlo approach to solving the gridworld task applies the same idea and is somewhat naive but effective: produce n simulations starting from random points of the grid, and let the agent move randomly in the four directions until a terminal state is reached; the value of a state is then the average of the returns observed from it. The third major group of methods in reinforcement learning, beyond dynamic programming and Monte Carlo, is called temporal differencing (TD).
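Outside RL, this "average over random samples" recipe is the whole method. A classic toy estimation, assuming nothing beyond the standard library: approximate π by sampling points uniformly in the unit square and counting how many fall inside the quarter circle.

```python
import random

# Monte Carlo estimation of pi: the fraction of random points in the unit
# square that land inside the quarter circle approaches pi/4.
random.seed(0)
N = 100_000
inside = sum(1 for _ in range(N)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4 * inside / N
```

With 100,000 samples the standard error is about 0.005, so the estimate reliably lands near 3.14; value estimation in RL inherits exactly this square-root-of-N convergence.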
Example: the aliased gridworld. Under partial observability the agent's features describe only whether there is a wall to the N, E, S, and W, so two distinct grey states look identical to the agent. A deterministic policy must then act the same way in both aliased states, for example always go left, so depending on the start state the agent might get stuck and never reach the goal. A stochastic policy that sometimes goes left and sometimes right in the aliased states escapes, which makes this one of the rare settings where a stochastic policy is strictly better than any deterministic one. (Compare the earlier gridworld conventions: if an action would take the agent off the grid, there is no move but the reward is −1; other actions produce reward 0, except actions that move the agent out of the special states A and B.)
Monte Carlo methods in a small gridworld: we work with an agent who would like to make it all the way to the state in the bottom-right corner as quickly as possible, learning values purely from sampled episodes. In Sutton and Barto this is the material of Chapter 5 (Monte Carlo Methods), which follows the dynamic-programming treatment of the gridworld in Chapter 4 (policy evaluation, Gridworld Example 4.1).
Monte Carlo methods also power planning. Monte Carlo Tree Search is an approach to online planning: it attempts to pick the best action for the current situation by simulating interactions with the environment. For model-free control, the outline is: on-policy Monte-Carlo control, on-policy temporal-difference learning, and off-policy learning. Empirically, results on gridworld tasks suggest that methods using temporal differencing (TD) are superior to direct Monte Carlo estimation (MC); refusing to bootstrap is not a good idea if we only have a finite amount of time and data. And recall the failure mode from before: it is possible for a policy-improvement step to generate a policy under which episodes never terminate, and there is no recovery from this built into the Monte Carlo algorithm.
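The "simulate interactions to pick an action" idea can be shown without a tree at all. Below is flat Monte Carlo planning on a hypothetical 1-D corridor (states 0 to 6, a −10 cliff on the left, a +10 goal on the right, −1 per step): for each candidate action we run random rollouts from the resulting state and keep the action with the best average return. MCTS refines exactly this by growing a tree and sampling promising actions more often.

```python
import random

def step(state, action):              # action in {-1, +1}
    s = state + action
    if s <= 0:
        return 0, -10.0, True         # fell off the left edge
    if s >= 6:
        return 6, +10.0, True         # reached the goal
    return s, -1.0, False

def rollout(state):
    """Uniformly random rollout until termination; returns the return."""
    total, done = 0.0, False
    while not done:
        state, r, done = step(state, random.choice([-1, 1]))
        total += r
    return total

def mc_plan(state, n_rollouts=2000):
    """Flat Monte Carlo planning: average rollout returns per action."""
    best_a, best_v = None, float("-inf")
    for a in (-1, 1):
        s2, r, done = step(state, a)
        v = r if done else r + sum(rollout(s2) for _ in range(n_rollouts)) / n_rollouts
        if v > best_v:
            best_a, best_v = a, v
    return best_a

random.seed(0)
action = mc_plan(3)
```

From the middle of the corridor the rollout averages clearly favor stepping toward the goal, so the planner picks +1 without ever consulting a model's transition probabilities, only the simulator.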
The gridworld is the canonical example for reinforcement learning from exact state-transition dynamics and discrete actions. Dynamic programming occupies one extreme, demanding a full model of those dynamics; at the other extreme, Monte Carlo (MC) methods have no model and rely solely on experience from agent-environment interaction.
The graph structure grown by Monte Carlo tree search might make you think of a range of algorithms you are already familiar with: you can think of these as smart ways of exploring the possibly very large branching structures that can spring up. Back in the tabular setting, the Windy Gridworld assigns reward −1 per time-step until the goal is reached, and Cliff Walking adds a high-penalty region that makes the on-policy/off-policy distinction visible. In the next section we present an on-policy TD control method and compare it against Monte Carlo control on these tasks.
These ingredients combine in modern systems: DeepCubeA builds on DeepCube, a deep reinforcement learning algorithm that solves the Rubik's cube using a policy and value function combined with Monte Carlo tree search (MCTS); the Monte-Carlo tree search inset in such figures shows the sequences of actions taken during one simulation. Eligibility traces tell a similar story at the tabular scale: in the Monte Carlo / TD / Sarsa(λ) gridworld comparison, after one trial the Sarsa(λ) agent has much more information about how to get to the goal, because credit is propagated backwards along the whole trajectory rather than one step at a time.
As the course ramps up, it shows you how to use dynamic programming and TensorFlow-based neural networks to solve GridWorld, another OpenAI Gym challenge. Dynamic programming methods are well developed mathematically, but require a complete and accurate model of the environment. Sarsa needs no such model: the classic figure shows the results of Sarsa applied to a gridworld (shown inset) in which movement is altered by a location-dependent, upward "wind". The actions are the standard four, up, down, right, and left, but in the middle region the resultant next states are shifted upward by the wind, whose strength varies by column; a trajectory under the optimal policy is also shown. TD learning solves some of the problems of MC learning, and in the conclusions of the second post I described one of these problems.
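For comparison with the Monte Carlo methods above, here is a self-contained sketch of one-step Sarsa on that windy gridworld (standard 7 × 10 layout, reward −1 per step, undiscounted); the code is illustrative, not the figure's original source:

```python
import random
from collections import defaultdict

WIND = [0, 0, 0, 1, 1, 1, 2, 2, 1, 0]      # upward wind strength per column
ROWS, COLS = 7, 10
START, GOAL = (3, 0), (3, 7)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def move(state, action):
    r = min(max(state[0] + action[0] - WIND[state[1]], 0), ROWS - 1)
    c = min(max(state[1] + action[1], 0), COLS - 1)
    return (r, c), -1

def eps_greedy(Q, s, eps):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def sarsa(episodes=500, alpha=0.5, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        s = START
        a = eps_greedy(Q, s, eps)
        while s != GOAL:
            s2, r = move(s, a)
            a2 = eps_greedy(Q, s2, eps)
            # one-step Sarsa: bootstrap on the action actually chosen next
            target = r + (0.0 if s2 == GOAL else Q[(s2, a2)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s2, a2
    return Q

random.seed(0)
Q = sarsa()

# Greedy rollout from START; the optimal route takes 15 steps.
s, steps = START, 0
while s != GOAL and steps < 100:
    a = max(ACTIONS, key=lambda act: Q[(s, act)])
    s, _ = move(s, a)
    steps += 1
```

Unlike the Monte Carlo learners, Sarsa updates Q after every single step, which is exactly why it copes well here: early random-walk episodes in the wind are very long, and waiting for them to finish before learning anything would be wasteful.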
You can run your UCB_QLearningAgent on both the gridworld and Pac-Man domains with the following commands:

python gridworld.py -a q -k 100 -g BookGrid -u UCB_QLearningAgent
python pacman.py -p PacmanUCBAgent -x 2000 -n 2010 -l smallGrid

Remember from last week that both domains have a number of available layouts. The Monte Carlo method for reinforcement learning, by contrast, learns directly from episodes of experience without any prior knowledge of MDP transitions: basically we can produce n simulations starting from random points and average the observed returns.
Implement reinforcement learning techniques and algorithms with the help of real-world examples and recipes. Key features: use PyTorch 1.x. Figure 21: Gridworld derived from image 442 in AOI-5 Khartoum. TD(λ) is a technique that simply interpolates (using the coefficient λ) between Monte Carlo and TD updates; in the limit λ = 0 it reduces to one-step TD, and λ = 1 recovers Monte Carlo. The Monte Carlo method is a computational method that uses random numbers and statistics to solve problems. Monte Carlo methods don't require a model and are conceptually simple, but are not suited for step-by-step incremental computation. As an example, this method was applied to models and systems whose results are known, in order to compare those known results with the ones obtained in this work. This is what a batch Monte Carlo method gets; if we consider the sequentiality of the problem instead, then we would set V(A) = 3/4. You can think of these as smart ways of exploring the possibly very large branching structures that can spring up. Multiagent Monte Carlo Tree Search. Monte Carlo Control without Exploring Starts. How do these results hold up in deep RL, which deals with perceptually complex environments? CS 188: Artificial Intelligence, Spring 2007, Lecture 23: Reinforcement Learning III, 4/17/2007, Srini Narayanan, ICSI and UC Berkeley. Monte Carlo methods only learn when an episode terminates. The recipes in the book, along with real-world examples, will help you master various RL techniques, such as dynamic programming, Monte Carlo simulations, temporal difference, and Q-learning. You can run your UCB_QLearningAgent on both the gridworld and PacMan domains with the following commands.
A common approach is to implement a simulator of the stochastic dynamics of the MDP and a Monte Carlo optimization algorithm that invokes this simulator to solve the MDP. Evans, Owain - Active Reinforcement Learning with Monte-Carlo Tree Search - 2018-03-13. Example: Windy Gridworld. Monte-Carlo Tree Search (MCTS) [1, 2] has had much publicity recently due to its successful application in solving Go [13]. maze1fvmc.m: simulation of an exploration-algorithm-based goalkeeper. Monte Carlo ES Control. MC is model-free: no knowledge of MDP transitions or rewards is needed. Policy Iteration, Jack's Car Rental Example, Figure 4.2 (Lisp). Lastly, we take the Blackjack challenge and deploy model-free algorithms that leverage Monte Carlo methods and Temporal Difference (TD, more specifically SARSA) techniques. A trajectory under the optimal policy is also shown. AIXI Tutorial Part II, John Aslanides and Tom Everitt: Intuitions, Approximations, and the Real World. Welcome to the second part of the dissecting reinforcement learning series. The Monte Carlo Tree Search has to be slightly modified to handle a stochastic MDP.
Sarsa avoids this trap, because it learns during the episode that such policies are poor. The Monte Carlo approach to solving the gridworld task is somewhat naive but effective. Figure 4-5: Central Limit Theorem applied to 4-way Gridworld one-step errors. Recap, Incremental Monte Carlo Algorithm: the incremental sample-average procedure is V(s) ← V(s) + (1/n(s))[G − V(s)], where n(s) is the number of first visits to state s; note that we make one update, for each state, per episode. One could pose this as a generic constant step-size algorithm, which is useful in tracking non-stationary problems (task + environment). Offline Monte Carlo Tree Search. Lecture 5: Model-Free Control, On-Policy Temporal-Difference Learning. MC updates the value function with the return obtained after an episode ends, discounting over time the rewards received in each state. Gridworld, using the equiprobable random policy. Multi-Agent Systems. For more information on these agents, see Q-Learning Agents and SARSA Agents. The goal is to find the shortest path from START to END. Monte-Carlo Policy Gradient (likelihood ratios): ∇_θ E[R(S, A)] = E[∇_θ log π_θ(A|S) R(S, A)] (see previous slide). This is something we can sample, so our stochastic policy-gradient update is θ_{t+1} = θ_t + α R_{t+1} ∇_θ log π_{θ_t}(A_t|S_t); in expectation this is the actual policy gradient, so this is a stochastic gradient algorithm. The Monte Carlo method has an advantage over Dynamic Programming in that it does not have to know the transition probabilities and the reward system beforehand.
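The incremental sample-average recap above can be written out in a few lines, and the constant step-size variant shows the recency-weighted behavior that helps with non-stationary problems. Both function names are made up for this sketch.

```python
def incremental_mean(values):
    """Incremental sample average: mean <- mean + (x - mean) / n.
    Gives exactly the batch average, with O(1) memory."""
    mean, n = 0.0, 0
    for x in values:
        n += 1
        mean += (x - mean) / n
    return mean

def recency_weighted(values, alpha=0.1):
    """Constant step-size variant: an exponential recency-weighted average,
    better suited to tracking non-stationary targets."""
    est = 0.0
    for x in values:
        est += alpha * (x - est)
    return est
```

Feeding `recency_weighted` a sequence that switches from 0 to 1 halfway through shows why constant step sizes track change: the estimate moves close to 1, whereas the exact sample average would remain stuck near 0.5.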
Monte Carlo (MC) Method, demo code: monte_carlo_demo.m. This example shows how to solve a grid world environment using reinforcement learning by training Q-learning and SARSA agents. Monte Carlo Control in Code. Ideally suited to improve applications like automatic controls, simulations, and other adaptive systems, an RL algorithm takes in data from its environment and improves its accuracy. A Python package for fast shortest-path computation on 2D grid or polygon maps. Monte Carlo introduction: in this part, we see how to combine the idea of dynamic programming with the idea of Monte Carlo (MC).
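As a concrete instance of training a SARSA agent on a grid world, here is a tabular ε-greedy SARSA sketch for the classic windy gridworld mentioned above (7 rows, 10 columns, upward wind per column, reward -1 per step). The hyperparameters and names are illustrative, not from any toolbox referenced in these notes.

```python
import random

def sarsa_windy_gridworld(episodes=500, alpha=0.5, eps=0.1, gamma=1.0, seed=0):
    """Tabular epsilon-greedy SARSA on the windy gridworld."""
    rng = random.Random(seed)
    rows, cols = 7, 10
    wind = [0, 0, 0, 1, 1, 1, 2, 2, 1, 0]          # wind strength per column
    start, goal = (3, 0), (3, 7)
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
    Q = {((r, c), a): 0.0
         for r in range(rows) for c in range(cols) for a in range(4)}

    def step(s, a):
        dr, dc = actions[a]
        r = min(max(s[0] + dr - wind[s[1]], 0), rows - 1)  # wind pushes up
        c = min(max(s[1] + dc, 0), cols - 1)
        return (r, c)

    def policy(s):
        if rng.random() < eps:                     # explore
            return rng.randrange(4)
        return max(range(4), key=lambda a: Q[(s, a)])

    steps_last = 0
    for _ in range(episodes):
        s, a = start, policy(start)
        steps_last = 0
        while s != goal:
            s2 = step(s, a)
            a2 = policy(s2)
            # SARSA: bootstrap on the action actually taken next
            target = 0.0 if s2 == goal else gamma * Q[(s2, a2)]
            Q[(s, a)] += alpha * (-1.0 + target - Q[(s, a)])
            s, a = s2, a2
            steps_last += 1
    return Q, steps_last
```

Early episodes are long while the table is uninformative; after a few hundred episodes the greedy path settles near the 15-step optimum, with occasional exploratory detours.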
|
|
# [tex-hyphen] Why does "\-" not work?
Barbara Beeton bnb at ams.org
Sun Aug 21 14:40:29 CEST 2016
On Sun, 21 Aug 2016, Adrian Fronda wrote:
Dear Barbara, dear Phil,
\selectlanguage{british}
in the root file before the inclusion of all files, but it did not help. No automatic hyphenation happens, and no hyphenation occurs when I use "\-".
Also when I follow Phil's remark and write
\hyphenation{im-pas-sive-ly} impassively
the word "impassively" is not hyphenated as it should be.
Do you have any other ideas? As I said, ideally I would prefer to use "\-" to the more cumbersome \hyphenation{} command.
perhaps it wasn't entirely clear --
the \hyphenation{...} command is
intended to be used *in the preamble*,
where it will (is supposed to) affect
*all* instances of the specified word(s)
in the entire text. so if a particular
word occurs frequently near the end of
a line, that is a less labor-intensive method.
however, the failure of \- to work
at all is very mysterious.
like claudio, whose response i have now
read, i constructed a small test file,
but created the text myself, setting at
the right margin a word whose hyphenation
is known to be faulty (or at least very
peculiar) using the british patterns.
as claudio found, there is no problem
with any of the packages you load, so
the problem must be in one of your text
files.
-- bb
here is the test file i used:
\documentclass[14pt, oneside, a4paper]{book}
Fisipojakene}}
\title{Beletristics}
\date{\today}
\usepackage[utf8]{inputenc}
\usepackage[german,british]{babel}
\selectlanguage{british}
\usepackage{cjhebrew}
\usepackage{graphicx}
\usepackage{url}
\usepackage[authoryear,round,semicolon]{natbib}
\makeindex
\usepackage{makeidx}
\usepackage{lmodern}
\usepackage{textcomp}
\usepackage{verbatim}
\usepackage[T1]{fontenc}
\usepackage[pdftex,unicode=true,%
bookmarks,plainpages=false]{hyperref}
\begin{document}
\frontmatter
\maketitle
\mainmatter
Here is some text that should occupy more than one line. x The word
alternate is placed to appear at the end of a line, where, according
to the preloaded hyphenation rules (faulty) for British, it should be
hyphenated incorrectly.
Here is some text that should occupy more than one line. x The word
alter\-nate is placed to appear at the end of a line, where, according
to the preloaded hyphenation rules (faulty) for British, it should be
hyphenated incorrectly.
Here is some text that should occupy more than one line. The word
alternate is placed to appear at the end of a line, where, according
to the preloaded hyphenation rules (faulty) for British, it should be
hyphenated incorrectly.
\end{document}
|
|
# Why is the Mean Square Error (MSE) used in the Peak Signal-to-Noise Ratio (PSNR) calculation rather than the Root Mean Square Error (RMSE)?
Peak signal-to-noise ratio (PSNR) is calculated with
$$\text{PSNR} = 10 \log_{10} \frac{\text{MAX}^2}{\text{MSE}},$$
with MSE the mean square error and MAX the maximum possible signal value.
Why is the MSE used in calculating the PSNR, rather than RMSE (root mean square error)?
They are the same quantity; the square inside the logarithm simply becomes a factor of 2 outside it:
$$\text{PSNR} = 10 \log_{10} \frac{\text{MAX}^2}{\text{MSE}} = 20 \log_{10} \frac{\text{MAX}}{\sqrt{\text{MSE}}} = 20 \log_{10} \frac{\text{MAX}}{\text{RMSE}},$$
so using MSE rather than RMSE is purely a matter of convention.
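The identity between the MSE and RMSE forms is easy to check numerically. The function names below are mine for this sketch, not a standard API:

```python
import math

def psnr_from_mse(mse, max_val=255.0):
    """PSNR in dB from a precomputed mean square error."""
    return 10.0 * math.log10(max_val ** 2 / mse)

def psnr_from_rmse(rmse, max_val=255.0):
    """Identical quantity: the square root inside the log
    becomes a factor of 2 outside it (10 -> 20)."""
    return 20.0 * math.log10(max_val / rmse)
```

For 8-bit images (MAX = 255) an MSE of 100, i.e. an RMSE of 10, gives about 28.1 dB either way.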
|
|
# Simulation and Observation
To determine if an observation is consistent with the spherical Earth model, we can create simulations to understand the expected result, and then see if they match the actual observation.
Flat-Earthers like to reject the results of simulations as unreal, "not real-world observation." In reality, the simulations are presented not to dispute their observations, but to demonstrate that those observations are consistent with what we expect if Earth is a sphere.
We often witness flat-Earthers present their observation and claim that it somehow “proves” a flat Earth. Usually, it is due to their misunderstanding about physics or the geometries involved. One way to debunk their assertions is to show them a simulation of their observation. This way, we know how it would appear if Earth is a sphere 6371 km in radius. If the result of simulation matches with their observation, we can conclude that the observation is consistent with the spherical Earth model, and thus, does not disprove it.
Flat-Earthers would usually attempt to discredit the simulation by mentioning it is “just a simulation, not an actual observation,” as if the simulation was intended to dispute their observation. In the majority of the cases, nobody is trying to dismiss flat-Earthers’ observation as being faked. The simulations were created to show that their observation matches the expectation if Earth is a sphere, and, therefore, does not disprove spherical Earth.
This response probably reflects their own behavior when the roles are reversed: if we show them an observation that supports a spherical Earth, many of them will quickly dismiss it as fake.
|
|
# IPAB2016 (Intense and Powerful Accelerator Beams for industrial and energy application)
"Villi" Meeting Room (INFN Laboratori Nazionali di Legnaro)
### "Villi" Meeting Room
#### INFN Laboratori Nazionali di Legnaro
Viale dell'Università, 2 Legnaro (Padova), Italy
Description
The workshop "IPAB2016" will be held from 14th to 15th March 2016 at Laboratori Nazionali di Legnaro, Legnaro ITALY. The Conference programme will begin on Monday, March 14th at 9.00 and will close on Tuesday, March 15th in the afternoon (about 17.00). A visit to nearby Consorzio RFX is included.
The meeting is organized in the framework of the Accelerator Application Network Activity (Work Package 4 of the Eucard2 project), to foster knowledge of the application capability of accelerators.
Registration is free of charge.
International Scientific Committee: M. Cavenago (Chair), INFN Legnaro, Italy; V. Antoni, CNR and RFX, Padua, Italy; R. Edgecock, Huddersfield University, UK; H. Owen, Manchester University, UK; F. Taccogna, CNR-Nanotech, Italy. Local Organizing Committee: M. Cavenago (Chair), INFN Legnaro, Italy; E. Fagotti, INFN Legnaro, Italy; M.C. Buoso, INFN Legnaro, Italy; G. Serianni, CNR and RFX, Padua, Italy; P. Veltri, INFN Legnaro and RFX, Italy
Sponsored by
and the Accelerator Application Network Activity, Work Package 4. EuCARD-2 is an Integrating Activity Project for coordinated Research and Development on Particle Accelerators, co-funded by the European Commission under the FP7 Capacities Programme
Participants
• Alessandro Minarello
• Alessio Galatà
• Andrea Pisent
• Angeles Faus-Golfe
• Antonio Alessandro Rossi
• Augusto Lombardi
• Carlo Roncolato
• Chun Loong
• Daniela Campo
• Daniele Ceccato
• Dario Nicolosi
• David Bruton
• David Mascali
• Demetre Zafiropoulos
• Denis Conventi
• Diego Marcuzzi
• Dmytro Rafalskyi
• Emanuele Sartori
• Enrico Fagotti
• Ernesto Giuffreda
• Francesco Grespan
• Francesco Taccogna
• Freddy Poirier
• Frédéric Stichelbaut
• Gabriel Pantelias
• Gianluigi Serianni
• Giorgia Terzoudi
• Giuseppe Castro
• Juan Esposito
• Katsuyoshi Tsumori
• Laszlo Sajo Bohus
• Lorenzo Neri
• Lorenzo Pranovi
• Luca Silvestrin
• Lucia Sarchiapone
• Luciano Calabretta
• Marco Barbisan
• Marco Cavenago
• Marco Ripani
• Maria Crista Buoso
• Maria Francesca Moisio
• Mario Maggiore
• Massimiliano Rome'
• Massimo Ferrario
• Mauro Pavei
• Michela De Muri
• Micol Pasquali
• Mieko Kashiwagi
• Piergiorgio Antonini
• Piergiorgio Sonato
• Pierluigi Veltri
• Roberto Cherubini
• Roberto Piovan
• Ruggero Pengo
• Santo Gammino
• Stefan Briefi
• Stefania Canella
• Timur Kulevoy
• Ursel Fantz
• Vanni Antoni
• Vanni Toigo
• Vincenzo Variale
Contact
• Monday, 14 March
• 08:30 09:30
Registration 1h "Villi" Meeting Room
• 09:30 09:40
Welcome and Introductory remarks (IPAB2016 Chair: M. Cavenago) 10m "Villi" Meeting Room
• 09:40 11:00
Morning Session (Chair: P. Veltri) "Villi" Meeting Room
Convener: Pierluigi Veltri (LNL)
• 09:40
Powerful RF ion sources for fusion 30m
The neutral beam injection system for the international fusion experiment ITER is based on large and powerful RF-driven ion sources which have to deliver extracted negative hydrogen ion currents of 66 A (H‾) and 57 A (D‾) being accelerated to 870 keV and 1 MeV, respectively. The hydrogen plasma is generated in eight individual cylindrical RF-drivers for which a total RF power of up to 800 kW at a frequency of 1 MHz will be available. The plasma expands in a rectangular expansion source with a width of 0.9 m and a height of 1.9 m. The negative hydrogen ions are created by utilizing the conversion of hydrogen atoms at a surface with low work function, for which caesium is evaporated into the source. Extraction takes place from 1280 apertures with a diameter of 14 mm each, resulting in an extraction area of 0.2 m2. In order to prevent damage to the grid system, the co-extracted electron current has to be kept below the extracted ion current. The source must be operated at a pressure of 0.3 Pa at maximum to limit the ion losses in the accelerator. Another challenging requirement concerns the beam duration and homogeneity: beams up to 3600 s have to be achieved with deviations in the uniformity over the large beam below 10%. The RF prototype source for ITER (1/8 scale) has been successfully developed in the last years. At the test facility BATMAN it was demonstrated that the required negative ion current density can be achieved at 0.3 Pa with an electron-to-ion ratio below one for short pulses (4 s), whereas MANITU demonstrated long pulse operation up to one hour for hydrogen and deuterium but with reduced beam parameters. RADI was a size scaling experiment without extraction, for the purpose of proving the modular driver concept and homogeneously illuminating an area of half the size of the ITER source.
As the required parameters have not been achieved simultaneously in a large source, the facility ELISE has been set up as part of the R&D roadmap of the European ITER domestic agency F4E. ELISE is dedicated to demonstrate the required negative hydrogen densities (extracted: 329 A/m2 H‾, 286 A/m2 D‾) at an electron-to-ion ratio of less than one for a source of the same width but only half the height of the ITER source (0.9 x 1 m2). Consequently, the ELISE source is driven by four RF drivers for which a total RF power of 360 kW is available. Since the first plasma in March 2013, ELISE has made enormous progress towards the ITER parameters: first stable one hour discharges with 10 s beam pulses every 3 min (limited by the available high voltage power supply) are demonstrated in hydrogen and deuterium, however at reduced RF power only. The limiting factor in the source performance is the amount and the temporal stability of co-extracted electrons, which is in particular a challenge for deuterium. Advanced beam diagnostics reveal that the requirements in beam uniformity of these large beams (about 1x1 m2) can be met.
Speaker: Ursel Fantz (Max-Planck-Institut fuer Plasmaphysik)
• 10:10
Development of a 1 MeV electrostatic accelerator for fusion application at JAEA 30m
This paper reports the activities on the development of negative ion accelerators for fusion applications at the Japan Atomic Energy Agency (JAEA). In the International Thermonuclear Experimental Reactor (ITER) and JT-60 Super Advanced (JT-60SA), high-current and high-energy negative ion beams are required to be produced. The beam currents and energies for ITER and JT-60SA are 40 A (200 A/m2), 1 MeV for 3600 s and 22 A (130 A/m2), 0.5 MeV for 100 s, respectively. In order to realize those accelerators, an electrostatic accelerator with multiple acceleration stages and apertures has been proposed by JAEA and adopted as the reference accelerator for both machines. The accelerator features a large grid with > 1000 apertures for high current and multiple acceleration stages for high energy. The major issues are stable vacuum insulation and suppression of the direct-interception loss of the negative ions. Careful experimental studies on the vacuum insulation clarified that the voltage holding varies with the square root of the gap length between the grids and with a power-law scaling in the number of apertures. From this result, the structure of the accelerator has been designed. As for the suppression of the direct interception, the beamlet deflections due to the residual magnetic field and space charge are suppressed by an offset aperture displacement and a "Kerb plate" forming a local compensating electric field. In the test with the ITER mockup accelerator with 9 apertures, a 980 keV, 185 A/m2 beam was successfully accelerated for 0.4 s in 2011. Since 2011, JAEA has concentrated on the extension of the pulse duration. One of the issues is a reinforcement of the extraction grid (EXG), where the extracted electrons are dumped. For this, the cooling channels in the EXG are moved from the previous backside to the front side of the heat-receiving surface. This new EXG allows the surface temperature to be reduced to the allowable level of 150 °C.
In addition, the power loading of the acceleration grids is reduced. The measurement of the power loading in the acceleration grids reveals that the secondary electrons induced by the direct interception of the negative ions with the EXG are one of the origins of the power loading of the downstream acceleration grids. The EXG is further modified with enlarged apertures and offset displacement distance. With these measures, the pulse duration has been successfully extended to 60 s at 0.97 MeV and 190 A/m2. This is the first demonstration in the world of long-pulse acceleration for ITER- and JT-60SA-relevant intense negative ion beams. The achieved pulse duration is limited only by the capability of the power supply: no degradation of the voltage holding or beam acceleration has been observed up to 60 s. Further tests are planned after the upgrade of the power supply.
Speaker: Mieko Kashiwagi (Japan Atomic Energy Agency)
• 10:40
Determination of plasma parameters via optical emission spectroscopy at CERN’s Linac4 H– ion source 20m
At the accelerator complex of CERN an upgrade of the LHC injector chain is being implemented. This upgrade includes the installation of a linear accelerator based on negative hydrogen ions, the Linac4. The ion source of Linac4 relies on inductive RF coupling with an external coil for discharge generation (RF frequency 2 MHz, maximum RF power 100 kW). In general, the H– ions can be generated via two processes: first, the volume process, where H– is created from vibrationally excited hydrogen molecules by electron impact dissociation. For the second one, the surface process, caesium is evaporated into the source in order to establish a surface with low work function. H– is produced from hydrogen ions and atoms impinging on that surface. As caesium is very reactive, the stability of the H– production rate is an issue for using the surface process. However, the H– production is generally enhanced strongly compared to the volume process and it is accompanied by a reduction of the co-extracted electron current. In order to optimize the H– yield for both processes, a detailed knowledge of the plasma parameters and the dominant control parameters is mandatory. Insight in the plasma parameters can be obtained via optical emission spectroscopy (OES) and the evaluation of the results with collisional radiative models. These models balance the de- and excitation processes of all relevant atomic or molecular states in the discharge. Hence, modelling the measured population densities yields plasma parameters like the electron density and temperature. For the Linac4 ion source, high resolution OES measurements of the hydrogen plasma have been carried out, considering the atomic Balmer radiation and the molecular Fulcher emission (a transition, located between 590 and 650 nm). The plasma parameters obtained from the evaluation of these measurements are presented for a variation of the gas pressure and RF power.
Speaker: Stefan Briefi (AG EPP, Universität Augsburg)
• 11:00 11:20
Coffee break 20m "Villi" Meeting Room
• 11:20 13:00
Morning Session (Chair: P. Veltri) "Villi" Meeting Room
• 11:20
Development, injection and diagnostics for LHD Injectors 40m
The neutral beam injector (NBI) is the most powerful and reliable plasma heating device in nuclear fusion research. The NBI is essential to sustain the plasma current generating the confinement magnetic field in tokamak machines such as ITER and JT-60SA. At the National Institute for Fusion Science (NIFS), two positive-ion-based NBI (p-NBI) and three negative-ion-based NBI (n-NBI) are installed to improve the performance of the plasma confined in the Large Helical Device (LHD). The p-NBI and n-NBI are designed to inject beams into LHD hydrogen plasmas with energies / powers of 40 keV / 6 MW and 180 keV / 5 MW, respectively. The former injectors generate low-temperature background LHD plasmas and are also applied as diagnostic beams to measure the ion temperature. The latter injectors heat up the background plasmas generated with p-NBI. The designed beam energies and powers have been successfully achieved. Using those injectors, many important results related to current drive, high ion temperature, L-H transition and high β plasma experiments have been achieved. As the next phase of the LHD experiment, we have scheduled deuterium plasma confinement experiments. Filament-arc discharge is applied to generate plasmas in the magnetic configuration. The energy of the p-NBI systems is increased up to 60 keV and 80 keV with an injection power of 9 MW. On the other hand, the beam energy of the n-NBI systems is kept fixed, so the current densities of the deuterium negative ions need to be enhanced. In order to understand the detailed mechanisms of negative ion production and extraction, comprehensive diagnostics of the hydrogen plasma in an ion source have been started. In the beam extraction region, an ion-ion plasma, whose electron density is less than 1 %, is formed by seeding caesium (Cs) into the plasma.
The response of the ion-ion plasma to an electrostatic field is very different from that of a normal electron-positive-ion plasma, and its shielding character depends on the magnetic structure. In the diagnostics, we have measured the spatial distributions of the densities, flows and temperatures of electrons, positive ions and negative ions. Taking into account the energy relation of incident and outgoing particles, our experimental results indicate that the parent particle of the negative ion is the proton. Electrons and positive hydrogen ions are transported via ambipolar diffusion from the plasma generation region to the caesiated surface for negative-ion production, the plasma-grid surface. The negative ions have a very low energy of ~0.1 eV. The diagnostic results indicate that the negative-ion production is not governed by a simple particle picture and that the production rate is not controllable by the bias potential applied to the plasma-grid surface. This suggests that the geometric or magnetic structures need to be changed to increase the negative-ion yield. A new plasma-grid structure for negative-ion production will be discussed.
Speaker: Katsuyoshi Tsumori (National Institute for Fusion Science, The Graduate University for Advanced Studies)
• 12:00
Status of NBI for ITER and the related test facility 30m
Two Neutral Beam Injectors (NBI) will provide a substantial fraction of the heating power necessary to ignite thermonuclear fusion reactions in ITER. The development of the NBI system at unprecedented parameters (40 A of negative ion current accelerated up to 1 MV) requires a strong demonstration activity, which was endorsed by ITER to optimise the crucial components and systems. A test facility, PRIMA (Padova Research on ITER Megavolt Accelerator), is presently in the final phase of construction at Consorzio RFX (Padova, Italy) in the CNR research area and will house two experiments, named SPIDER and MITICA. A full-size negative ion source, SPIDER (Source for the Production of Ions of Deuterium Extracted from Rf plasma), will be operated in the facility to demonstrate the creation and extraction of a D-/H- current up to 50/60A on a wide surface (more than 1m2) with uniformity within 10%. The second experimental device is the prototype of the whole ITER injector, MITICA (Megavolt ITer Injector and Concept Advancement), aiming to develop the knowledge and the technologies to guarantee the successful operation of the two injectors to be installed in ITER, including the capability of 1MV voltage holding at low pressure. The beam source is the key component of the system, whose design results from a tradeoff between requirements of the optics and real grids with finite thickness and thermo-mechanical constraints due to the cooling needs and the presence of permanent magnets. The experimental effort is supplemented by numerical simulations devoted to the optimisation of the accelerator optics and to the estimation of heat loads and currents on the various surfaces. In this contribution the main physics aspects of NBIs and the requirements of the test facilities MITICA and SPIDER will be discussed and the design and the status of the main components and systems will be described. 
Particularly a review of the accelerator physics and a comparison between the designs of the SPIDER and MITICA accelerators will be presented.
Speaker: Gianluigi Serianni (Consorzio RFX (CNR, ENEA, INFN, UNIPD, Acciaierie Venete SpA))
• 12:30
Discussion D1: new concepts and spin-off of fusion injectors 30m
Can the large effort in developing powerful ion sources and accelerators produce spin-offs? In which application areas? Questions on the simulation of multicomponent plasmas (H/D/Cs, 3rd day discussions) and on other session topics are also welcome. Minutes prepared by the conveners (P. Veltri, M. Cavenago) may be posted on the website, as well as short presentations (3 slides max) received for the discussion.
Speakers: Marco Cavenago (LNL), Pierluigi Veltri (LNL)
• 13:00 14:00
Lunch 1h
• 14:00 14:10
Bus to Consorzio RFX 10m "Villi" Meeting Room
• 14:10 16:00
Visit to Consorzio RFX (convener: V. Antoni) 1h 50m "Villi" Meeting Room
• 16:00 16:10
Bus back to LNL (please be on time boarding bus) 10m "Villi" Meeting Room
• 16:10 16:35
Coffee break 25m "Villi" Meeting Room
• 16:35 19:15
Afternoon session (Chair: V. Antoni) "Villi" Meeting Room
• 16:35
Welcome and introduction to LNL (G. Fiorentini, LNL Director) 15m
Speaker: Giovanni Fiorentini (FE)
• 16:50
The Applications of Particle Accelerators in Europe (APAE) 30m
Originally developed to investigate the fundamental laws of nature, particle accelerators accelerate charged particles to incredibly high speeds before using them for a variety of purposes. Today, accelerators are far more than a tool for fundamental research, and their significant role in industry and society means that they have a very important, but often unseen, impact on our everyday lives. Over 30,000 particle accelerators are in use all over the world. In fact, until recently, most people had one in their sitting room. They allow beams of particles to be produced and used for a range of applications in a number of different areas, including health, industry, energy production, security and environment. The key questions at the start of the project "The Applications of Particle Accelerators in Europe (APAE)" are: Why do we need accelerators? Where are they? What will be the impact of particle accelerators on tomorrow's society? What are the needs for the future? The aim of the project is to create a European document equivalent to "Accelerators for America's Future", but focused on applications of interest in Europe and for which technology developed for research can have an impact. The document is intended for policy makers; as a result, it will be in two parts: an executive summary focussing on the main issues for each country, in the appropriate language, and a supporting document in English. WP4 of EuCARD-2 is organizing the project.
Speaker: Angeles Faus-Golfe (Instituto de Fisica Corpuscular)
• 17:20
Double beam satellite propulsion 30m
In this talk I present a brief review of dual beam sources used in space propulsion. Dual beam propulsion is currently represented by a few innovative concepts under development. These concepts promise several advantages, such as precise control of the spacecraft potential, reduced background plasma, and removal of a dedicated neutralization system, which increases general robustness and improves mass/dimension properties. In addition, dual propulsion concepts scale down well, enabling efficient propulsion systems for small spacecraft such as CubeSats and nano-sats. One of the dual beam propulsion concepts is the PEGASES concept (acronym for "plasma propulsion with electronegative gases"), where an electronegative plasma discharge is used to create alternated beam packets of positive and negative ions. Efficient broad beam negative ion extraction is possible due to very high plasma electronegativity, i.e. the ratio between the negative ion and electron density (which reaches 5000); under these conditions the plasma response is similar for both positive and negative bias, since the electron influence on the sheath formation is negligible. By applying a square waveform to the gridded ion acceleration system, positive and negative ions are alternately accelerated up to high velocities (>40 km/s), providing a thrust force in the direction opposite to the ion acceleration. The generated beam is quasi-neutral, and the spacecraft potential can be controlled by changing the duty cycle of the acceleration voltage waveform. The absence of electrons in the generated beam is expected to reduce background plasma formation, and should in addition decrease beam divergence in the presence of a weak magnetic field, which can be important for future missions devoted to targeted space debris removal. Use of this source is however limited to very electronegative propellants, typically based on fluorine, iodine or fullerenes.
Another successful dual beam propulsion concept discussed here is based on an ion-electron source with an RF acceleration principle. Briefly, this concept uses the plasma self-bias effect in an RF-powered gridded system, providing quasi-simultaneous ion-electron acceleration. Heavy ions are accelerated by an on-average dc electric field, while electrons are co-extracted through the same extraction system in the short moments when the oscillating plasma potential reaches low values. A first proof of concept has already been achieved, demonstrating efficiency similar to that of traditional gridded ion thrusters. The ion and electron fluxes emitted by the source are equal, helping to achieve much better beam neutralization than in a traditional system with a neutralizer. The experiments demonstrate that the emitted flow of electrons is highly directional, so the thruster plume can be precisely localized. A strong advantage of this concept is its significant technology heritage, owing to its similarity with already operated ion thrusters.
Speaker: Dmytro Rafalskyi (LPP-Ecole Polytechnique)
• 17:50
Progress on Microwave Discharge Ion Sources for high-intensity proton and light-ion beam production at INFN-LNS 30m
The diffusion of large-scale facilities based on high-intensity linear accelerators for both fundamental and applied research has triggered the development of multi-mA proton/light-ion sources. At INFN-LNS a great deal of work has been done on the modelling, design and construction of several Microwave Discharge Ion Sources, according to the demands of different projects such as TRASCO, ESS, Daeδalus, BNCT, etc. The availability of advanced simulation tools, as well as synergies with research groups working in the thermonuclear fusion field, made it possible to develop innovative solutions in terms of RF coupling, magnetic field design and mechanics, significantly improving the performance and overall reliability of the systems. An overview of the high-intensity proton sources developed since the end of the 1990s will be given, with particular emphasis on the innovative design of the Proton Source for the European Spallation Source, now entering the commissioning phase at LNS, which is based on a versatile magnetic field system. Specific attention will be paid to modelling and diagnostics efforts, which allow a more advanced mastering of wave-to-plasma interaction and beam formation processes.
Speaker: Santo Gammino (INFN-LNS)
• 18:20
Review of LNL Accelerators for Applied Physics: AN2000, CN and related experiments 20m
A review of the two LNL small accelerators mainly dedicated to applied physics (AN2000 and CN) is given, followed by their foreseen use in the coming years. The applied-physics experiments performed in the last 2 years at AN2000 and CN are also described, with special attention to those relevant to accelerator-based analytical and diagnostic techniques (PIXE and others). Perspectives for the LNL small accelerators in this field will also be given.
Speaker: Stefania Canella (INFN-LNL)
• 18:40
Discussion D2: Poster highlights; free discussion 35m
A space where brief oral summaries of posters may be shown, if desired, as well as other materials for discussion. Minutes prepared by the convener(s) may be posted on the website, as well as short presentations (3 slides max) received for the discussion.
Speaker: Vanni Antoni (CNR)
• 16:35 19:15
Poster session Cafeteria Hall
### Cafeteria Hall
#### INFN Laboratori Nazionali di Legnaro
• 16:35
Beam optics and magnet studies for neutralizer storage rings 2h 40m
The design of efficient storage rings with an acceptance large enough that a neutralizer gas cell can be inserted requires both linear matrix formalism and full field-tracking calculations. Moreover, large magnet apertures must be considered. First, an unbiased search of suitable lattices is needed. The matrix formalism is simple enough to allow the use of symbolic manipulation programs, with s the beam direction, x, y the transverse coordinates, and Mx, My the corresponding transport matrices: the conditions |trace(Mx)| < 2 and |trace(My)| < 2 can be reduced (automatically) to simple inequalities for the lattice side lengths. Numerical optimizations are also discussed. Unlike in usual storage rings, the primary beam is consumed in a few passages through the neutralizer cell, so angled injection seems possible. Field-tracking simulation needs a rapid method to calculate the field from pole footprints and shape, preferably avoiding the use of differential formulas. The proposed method is compared with the analytic result for flat poles. After determining suitable magnet poles, full 3D magnets are designed for verification. Analogies with Fixed-Field Alternating Gradient (FFAG) accelerators are noted.
Speaker: Marco Cavenago (INFN-LNL)
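As an illustrative aside (an editorial sketch, not part of the presented work), the |trace| < 2 stability condition quoted in the abstract can be checked numerically for a simple thin-lens FODO cell; the focal length f and drift length L below are arbitrary example values.

```python
def mul(M, N):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def drift(L):
    """Transverse transfer matrix of a field-free drift of length L."""
    return ((1.0, L), (0.0, 1.0))

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (negative f: defocusing)."""
    return ((1.0, 0.0), (-1.0 / f, 1.0))

def fodo_trace(f, L):
    """Trace of one cell: focusing quad, drift, defocusing quad, drift."""
    M = mul(mul(mul(thin_quad(f), drift(L)), thin_quad(-f)), drift(L))
    return M[0][0] + M[1][1]

# For a thin-lens FODO cell the trace works out to 2 - L**2/f**2,
# so |trace| < 2 reduces to the simple inequality f > L/2.
print(fodo_trace(1.0, 1.0))   # 1.0  -> stable
print(fodo_trace(0.4, 1.0))   # -4.25 -> unstable
```

This is the same kind of reduction the abstract mentions: the trace condition collapses to a simple inequality between lattice lengths.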
• 16:35
High current negative ion source with Planar Funnel extraction grid 2h 40m
High current, large negative (H-, D-) ion sources will produce the neutral beams needed for fusion plasma heating in ITER. To reach very high negative ion beam currents, a cesium coating of the plasma grid surface is usually used. The cesium coating, however, poses problems for the maintenance and operation of the ion source, and therefore alternative ways to enhance the extracted ion current should be investigated. One such alternative could be the use of a Planar Funnel Extraction (PFE) grid, recently proposed for applications where a very high extraction efficiency is required [1]. In this contribution the idea of applying the PFE to a negative ion source instead of the cesium coating will be discussed. [1] A. Chaudhary et al., Review of Scientific Instruments 85, 105101 (2014); doi: 10.1063/1.4897480
Speaker: Vincenzo Variale (INFN-BA)
• 16:35
High current storage rings for neutral beam injectors 2h 40m
The gas neutralizer used to produce a neutral hydrogen (or deuterium) beam from a H- (or D-) beam has an intrinsic limitation on conversion efficiency and requires a residual ion dump for the remaining H- and the produced H+. By recirculating the H- beam through the gas neutralizer N times (with N up to 4), the conversion efficiency can be increased and the ion dump size greatly reduced. In other words, the gas neutralizer becomes one element of a large-acceptance H- storage ring, which is studied here both with linear theory and with numerical simulation. Among several practical solutions, the rectangular lattices with M=2 and M=4 bending dipoles seem the most convenient for initial studies; the symmetry number (number of equal sections per turn) is S=2 for both lattices. It is important to control secondary ion accumulation inside storage rings, which has both beneficial effects (reduction of space charge) and unwanted effects (beam stripping in dipoles); clearing electrodes may be useful. Among advanced concepts, note first that controlling the secondary plasma may also produce a plasma neutralizer, with higher efficiency. Second, adding a H+ storage ring with a long straight section in common is a convenient way to exploit H+ and H- mutual neutralization, so that in principle the conversion efficiency may approach unity. Applications to fusion and other uses of dual beam technology are also reviewed.
Speaker: Marco Cavenago (INFN-LNL)
• 16:35
New thermal neutron source at LNL 2h 40m
In the framework of the MUNES project, a new neutron source was developed at the CN electrostatic accelerator of the Legnaro National Laboratories. Neutrons are produced through the Be(p,n) reaction using a thin beryllium foil target brazed on a copper base. A 5 MeV, 3 microA proton beam is focused onto the target so as to reach a 500 W/cm^2 power density on the beryllium, the same as for the MUNES high-intensity accelerator. A heavy water-graphite moderator is used for neutron thermalization. Preliminary results show that a 1.2*10^6 s^-1*cm^-2 neutron flux density can be generated at the extraction window with a uniformity better than 1% over a 25 cm diameter circular area. The neutron spectrum is more than 90% thermal with a very low gamma contamination.
Speaker: Enrico Fagotti (INFN-LNL)
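As a quick back-of-the-envelope check of the quoted figures (an editorial sketch, not from the abstract itself): a 5 MeV, 3 microampere proton beam carries 15 W, so a 500 W/cm^2 power density implies a beam spot of about 0.03 cm^2, roughly 2 mm across.

```python
import math

energy_MeV = 5.0        # proton kinetic energy (from the abstract)
current_uA = 3.0        # beam current (from the abstract)
power_density = 500.0   # target power density, W/cm^2 (from the abstract)

# Beam power in watts: E (MeV) * I (uA) gives W directly (1e6 * 1e-6 cancels).
beam_power_W = energy_MeV * current_uA           # 15 W
spot_area_cm2 = beam_power_W / power_density     # 0.03 cm^2
spot_diam_mm = 20.0 * math.sqrt(spot_area_cm2 / math.pi)  # 2*sqrt(A/pi), cm -> mm

print(beam_power_W, spot_area_cm2, round(spot_diam_mm, 2))  # 15.0 0.03 1.95
```

So the stated power density corresponds to focusing the full 15 W beam onto a spot just under 2 mm in diameter, consistent with a tightly focused electrostatic-accelerator beam.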
• 16:35
Recent results of NIO1 negative ion source and future improvements (P. Veltri; see talk) 2h 40m
Speaker: Pierluigi Veltri (LNL)
• 16:35
Review of LNL Accelerators for Applied Physics: AN2000, CN and related experiments (S. Canella, see talk) 2h 40m
Speaker: Stefania Canella (LNL)
• 16:35
Surface dependence for laser - induced target current by plastic materials 2h 40m
Laser-matter interactions are a wide field of physics in which many parameters are involved. A small change in one of them can lead to a completely different time evolution of the physical system. In this work, the characterization of a plastic target subjected to laser irradiation has been studied. Particular focus was devoted to the interaction of the target with the grounded chamber as a whole, which we attempted to understand by varying the target-holder surface ratio. The resulting current and particle signals show an anomalous behaviour when this ratio is equal to 1.
Speaker: Ernesto Giuffreda (Università del Salento, Lecce)
• 19:15 19:30
Bus to the Restaurant 15m "Villi" Meeting Room
• 19:30 22:30
Working dinner 3h (Restaurant)
• 23:00 23:20
Bus back to Padova hotels 20m (Bus)
• Tuesday, 15 March
• 09:00 10:45
Morning session (Chair: A. Faus-Golfe) "Villi" Meeting Room
• 09:00
Compact FFAG for Radioisotope Production 35m
A design of a Fixed Field Alternating Gradient (FFAG) accelerator has been made for the production of radioisotopes, in particular 99m-Tc and a number of therapeutic isotopes currently in short supply. In addition to having fixed magnetic fields, this machine is isochronous at the level of 0.3% up to at least 28 MeV and hence able to operate in continuous wave (CW) mode. Detailed tracking studies with the OPAL (Object Oriented Parallel Accelerator Library) code, including the effects of space charge, have demonstrated the ability to accelerate a beam with a current of up to 20 mA, significantly larger than achievable with any current cyclotron. The accelerator is able to deliver beams of both protons and alpha particles. Two target options for the production of radioisotopes are being considered. The first uses a thin internal target. The huge acceptance of the accelerator allows the beam to be recirculated many times, the lost energy being restored on each cycle. In this way the production of 99m-Tc, for example, can take place at the optimum energy. The second option is to use an electrostatic deflector and septum for extraction. This will allow the clean extraction of high-current beams, for example alphas for the production of therapeutic isotopes.
Speaker: David Bruton (University of Huddersfield)
• 09:35
High power, low energy accelerators for neutron production: the MUNES and IFMIF-EVEDA cases 35m
Two high-power, low-energy accelerators for neutron production are under construction and testing at LNL-INFN. MUNES couples a 30 mA, 5 MeV proton beam to a Be target to generate a neutron flux of 10^14 n/s, with a spectrum centered in the 2 MeV region. This neutron flux can be moderated to generate a thermal or epithermal neutron source for different applications, among them Boron Neutron Capture Therapy and nuclear waste characterization. The whole accelerator is produced by INFN in collaboration with local industry. IFMIF aims to produce an intense neutron flux to test and qualify materials suitable for the construction of fusion power plants. The final facility will produce a 10^14 n/(s*cm^2) neutron flux at 14.1 MeV energy. It is based on an international collaboration between F4E and JAEA. In this framework INFN is producing the high-intensity RFQ that accelerates the 125 mA deuteron beam up to 5 MeV.
Speaker: Enrico Fagotti (INFN-LNL)
• 10:10
Particle accelerators for the production of medical radioisotopes 35m
Radioactive isotopes play a key role in biology to unearth fundamental cellular processes. By acting as labeling tags, gamma-emitting radionuclides are useful tools to visualize the interaction of molecular probes targeting specific biomolecules in living organisms by means of external detectors. This information is an essential component of the current paradigm of molecular imaging, a diagnostic approach aimed at elucidating the origin and intrinsic nature of diseases at the molecular level. In turn, this fundamental knowledge can be used to develop more efficient therapeutic strategies that are tailored to a single individual based on his/her chemical profile (chemotype). A key discovery has been that there exists a subset of radioisotopes that naturally manifest favorable biological properties for diagnostic and therapeutic purposes because their associated elements either have a recognized key biological role or mimic the behavior of biologically active elements. A classical example is provided by iodine radioisotopes widely employed for imaging thyroid function and for therapy of thyroid cancer as a result of the natural involvement of iodine in thyroid metabolism. Other relevant examples are offered by rubidium-82 mimicking potassium ions for imaging cardiac function, and strontium-89 and radium-223 employed in the treatment of bone cancer as analogs of calcium ions. Although some radioisotopes are obtained through nuclear reactions characterized by high cross sections for neutron or proton irradiation of suitable targets, some biologically relevant radionuclides are extremely difficult to obtain by conventional methods, relying on available nuclear reactors and low-energy, low-current cyclotrons, in sufficient amounts to allow their widespread medical use. These include, among others, copper, zinc, iron and manganese radioisotopes having highly interesting nuclear and biological properties suitable for both diagnosis and therapy. 
This lecture will review the crucial role played by nuclear physics in developing efficient methods for the production of medical radionuclides and current challenges in achieving satisfactory yields of formation of highly interesting and potentially useful radionuclides that are still not available in sufficient amounts to the medical community.
Speaker: Adriano Duatti (University of Ferrara)
• 09:00 10:45
Poster session (continuation) Cafeteria Hall
• 10:45 11:00
Conference photo 15m "Villi" Meeting Room
• 11:00 11:20
Coffee break 20m "Villi" Meeting Room
• 11:20 13:00
Morning session (Chair: A. Faus-Golfe) "Villi" Meeting Room
• 11:20
Experience with a high power cyclotron for radioisotope production 30m
The Arronax Public Interest Group (GIP) is a facility that hosts a multi-particle cyclotron and several laboratories, dedicated mainly to radioisotope production and also to in-beam experiments for radiochemistry, radiobiology and physics. The multi-particle cyclotron has been running for these productions and experiments since the end of 2010. Its use has increased over the years, reaching more than 4000 hours of RF-time in 2015. This required extending the operating range of the machine over several orders of magnitude in intensity, from 1 pA up to 350 uA for protons on target, at several particle energies. The multi-particle capability of the machine is also abundantly used for radionuclide production and radiobiology. The Arronax facility, as well as the cyclotron and its use, will be detailed with the scope of radioisotope production at high intensity. Ongoing and needed adaptations will also be presented. Acknowledgements: Several of the projects are supported in part by the "Agence Nationale de la Recherche" programme "Investissements d'Avenir", Equipex ArronaxPlus n°ANR-11-EQPX-0004.
Speaker: Freddy Poirier (Arronax / CNRS)
• 11:50
Status of the High Intensity Proton Beam Facility at LNL 30m
In 2013 the SPES (Selective Production of Exotic Species) project entered the construction phase at the Laboratori Nazionali di Legnaro (LNL). The project, whose main goal is research in nuclear physics with radioactive beams, foresees the construction of a new building hosting an accelerator able to deliver protons up to an energy of 70 MeV and a current of 700 uA (50 kW of beam power). The design of the new facility has been expanded and upgraded to take advantage of the dual simultaneous extraction of beams from the cyclotron, in order to provide a multipurpose high-intensity irradiation facility. Today the new facility is partially completed, and the cyclotron supplied by the BEST Theratronics company (Canada), with the related beam transport lines, is under commissioning. The status of the commissioning of the high-power accelerator and the capabilities of the facility as a multipurpose high-intensity proton beam laboratory will be presented.
Speaker: Mario Maggiore (INFN-LNL)
• 12:20
Discussion D3: application of accelerators 40m
Which applications of accelerators will benefit from large currents? Which perspectives for development? Which new collaboration themes (between participants) can be identified? Minutes prepared by the convener(s) may be posted on the website, as well as short presentations (3 slides max) received for the discussion.
• 11:20 13:00
Poster session (continuation) Cafeteria Hall
• 13:00 14:00
Lunch 1h
• 14:00 14:30
Visit to LNL - High Intensity Proton Beam Facility (SPES) 30m "Villi" Meeting Room
• 14:30 16:00
Afternoon session (Chair: M. Cavenago) "Villi" Meeting Room
• 14:30
Perspectives about the production of multiply-charged ions at high intensities: Innovative schemes of microwave-to-plasma matching 30m
The production of multiply charged ions at medium-high intensity in Electron Cyclotron Resonance Ion Sources requires a trade-off between the plasma density ne and the ion confinement time τi (well-known scaling laws state that I ∝ ne/τi while the attainable charge state scales with neτi). Any additional boost of currents with respect to the state of the art (e.g. tens or hundreds of μA of Ar14+, Xe34+, etc.), especially for ions at intermediate charge states, will require a change of paradigm in the plasma generation mechanism. Nowadays the plasma is sustained by an electromagnetic wave resonantly interacting with the plasma electrons, but this implies an intrinsic limitation in density due to the well-known electromagnetic cut-off issue. In the near future, inner-plasma modal conversion (i.e. microwaves triggering the formation of plasma waves) may support the generation of highly overdense plasmas in simplified magnetostatic field structures. The paper will include recent results obtained at LNS with a compact-size ECRIS prototype in which a highly overdense plasma (ten times the cutoff density) has been generated via O-X-B modal conversion at 3.75 GHz with a low RF power level (<100 W). The same technique may be applied to B-minimum traps with appropriate RF launchers. Advanced diagnostic tools for mastering the conversion mechanism and ensuring the best coupling of the incoming microwave radiation will be described as well.
Speaker: David Mascali (INFN-LNS)
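To put a number on the cut-off issue (an illustrative calculation, not from the talk): the critical density at which the plasma frequency equals the heating frequency follows from n_crit = eps0 * m_e * omega^2 / e^2, so at 3.75 GHz "ten times the cutoff density" corresponds to roughly 1.7e18 m^-3.

```python
import math

# Physical constants (SI, CODATA values)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e  = 9.1093837015e-31   # electron mass, kg
q_e  = 1.602176634e-19    # elementary charge, C

def critical_density(f_hz):
    """Electron density at which the plasma frequency equals the wave
    frequency f; above this density an O-mode wave no longer propagates."""
    omega = 2.0 * math.pi * f_hz
    return eps0 * m_e * omega ** 2 / q_e ** 2

n_crit = critical_density(3.75e9)   # ~1.74e17 m^-3 at 3.75 GHz
print(f"{n_crit:.2e}  {10 * n_crit:.2e}")
```

The quoted "ten times overdense" result therefore sits about an order of magnitude above what resonant heating alone could sustain, which is the point of the O-X-B conversion scheme.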
• 15:00
Recent results of NIO1 negative ion source and future improvements 20m
Neutral Beam Injectors (NBI) based on negative ion conversion are fundamental to increase the plasma temperature in magnetic confinement fusion devices. In the framework of the accompanying activities in support of the ITER NBI test facility, a relatively compact radiofrequency (RF) ion source named NIO1 (Negative Ion Optimization phase 1) is being developed and tested in Padua, Italy, in a collaboration between Consorzio RFX and INFN. This contribution reports the recent status of the experiment, including the operation in air and oxygen and the first beam measurements. Future improvements to enhance the negative ion current and reduce the amount of co-extracted electrons are also discussed.
Speaker: Pierluigi Veltri (INFN-LNL)
• 15:20
Discussion D4: industrial application of intense (negative) ion source 40m
Conclusive discussion on the applications shown at the conference. Perspectives of ion sources for hydrogen and deuterium, and for other elements (negative and/or positive ions). Perspectives for (new) collaborations. Minutes prepared by the convener(s) may be posted on the website, as well as short presentations (3 slides max) received for the discussion.
Speaker: Marco Cavenago (LNL)
• 14:30 16:00
Poster session (continuation) Cafeteria Hall
• 16:00 16:20
Coffee break 20m "Villi" Meeting Room
• 16:20 17:20
Afternoon session (Chair: M. Cavenago) "Villi" Meeting Room
• 16:20
Thruster for satellite propulsion and negative ions 30m
Low-pressure, ExB, partly (electron) magnetized plasmas play a key role in different plasma-based devices, such as Hall-effect thrusters and negative ion sources. In the first, the magnetic field allows better electron confinement, increasing the propellant ionization and ion acceleration efficiencies, while in the second the magnetic filter field reduces the electron temperature and density, increasing the survival probability of negative ions and reducing the co-extracted electron current. Nevertheless, unwanted phenomena occur, related to self-organized structures formed in the region of high magnetic field: azimuthal fluctuations in Hall-effect thrusters and plasma asymmetry in negative ion sources lead to increased electron cross-field transport. In this contribution, results from self-consistent particle-based models will be presented and discussed, considering optional alternative configurations.
Speaker: Francesco Taccogna (CNR-Nanotec-PLasMI)
• 16:50
Discussion D5: dissemination of result and/or next workshop 30m
Future publication of the conference results can be discussed, as well as ideas for new workshop themes in the Accelerator Application Network of the EUCARD2 project, and general questions. Closing remarks will follow. Minutes prepared by the convener(s) may be posted on the website, as well as short presentations (3 slides max) received for the discussion.
• 16:20 17:20
Poster session (continuation) Cafeteria Hall
• 17:20 17:30
Closing remarks 10m "Villi" Meeting Room
• 17:40 18:10
Bus back to Padova hotels 30m
• Wednesday, 16 March
• 09:30 12:00
Workgroup on simulations (conveners: F. Taccogna, M. Cavenago): plasmas vs high current beams. "Ceolin" Meeting Room
### "Ceolin" Meeting Room
#### INFN Laboratori Nazionali di Legnaro
• 12:00 14:30
Workgroups (conveners: F. Taccogna, M. Cavenago, P. Veltri): cesium in simulations; new concepts; free discussion. "Villi" Meeting Room
• 13:00 14:00
Lunch 1h
# Math Help - derivatives of Trigonometric Functions
1. ## derivatives of Trigonometric Functions
Here is another one I am stuck on.
5sinx/(1+cosx)
this is what I did:
(1+cosx)d/dx[5sinx]-(5sinx)d/dx[1+cosx]/(1+cosx)^2
([1+cos(x)]*(-5)*cos(x)-[5+sin(x)]*[-sin(x)])/([1+cos(x)]^2)
Where did I go wrong here?
Thank you!
Keith
2. Originally Posted by keith
5sinx/(1+cosx)
$f(x)=\frac{5\sin x}{1+\cos x}=5\sin x\cdot(1+\cos x)^{-1}$
Use the product rule.
3. Originally Posted by Krizalid
$f(x)=\frac{5\sin x}{1+\cos x}=5\sin x\cdot(1+\cos x)^{-1}$
Use the product rule.
I looked over the problem and noticed it should have been:
(1+cosx)(5cosx) not -(5cosx)
Thanx anyway
4. Originally Posted by keith
(1+cosx)(5cosx)
If you wanna derive this, then just expand $5\cos x+5\cos^2x$
Then apply simple rules of derivation.
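For completeness, carrying the corrected numerator through the quotient rule and using $\sin^2x+\cos^2x=1$, the answer simplifies nicely:

```latex
\frac{d}{dx}\left[\frac{5\sin x}{1+\cos x}\right]
  = \frac{(1+\cos x)(5\cos x)-5\sin x\,(-\sin x)}{(1+\cos x)^2}
  = \frac{5\cos x+5\cos^2 x+5\sin^2 x}{(1+\cos x)^2}
  = \frac{5(1+\cos x)}{(1+\cos x)^2}
  = \frac{5}{1+\cos x}
```

So the fix Keith noticed (the sign on $5\cos x$) is the only thing standing between the original attempt and this closed form.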
# source:doc/proposals/concurrency/text/concurrency.tex@27dde72
Last change on this file since 27dde72 was 27dde72, checked in by Thierry Delisle <tdelisle@…>, 5 years ago
Major update to the concurrency proposal to be based on multiple files
File size: 42.8 KB
% ======================================================================
% ======================================================================
\chapter{Concurrency}
% ======================================================================
% ======================================================================

\section{Basics}
The basic features that concurrency tools need to offer are support for mutual-exclusion and synchronization. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the limited access. On the other hand, synchronization enforces the relative ordering of execution, and synchronization tools are used to guarantee that event \textit{X} always happens before \textit{Y}.

\subsection{Mutual-Exclusion}
As mentioned above, mutual-exclusion is the guarantee that only a fixed number of threads can enter a critical section at once. However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use. Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level mutual-exclusion methods, which sacrifice some performance in order to improve ease of use, often by either guaranteeing that some problems cannot occur (e.g. being deadlock free) or by offering a more explicit coupling between data and the corresponding critical section. For example, the \CC \code{std::atomic<T>} offers an easy way to express mutual-exclusion on a restricted set of features (e.g. reading/writing large types atomically). Another challenge with low-level locks is composability. Locks are said not to be composable because it takes careful organizing for multiple locks to be used at once while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.

\subsection{Synchronization}
As with mutual-exclusion, low-level synchronization primitives often offer great performance and good flexibility at the cost of ease of use. Again, higher-level mechanisms often simplify usage by adding better coupling between synchronization and data, for example message passing, or by offering simple solutions to otherwise involved challenges. An example of this is barging. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time synchronization happens around a critical section, where threads must acquire said critical section in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. This is called barging: event \textit{X} tries to effect event \textit{Y}, but another thread races to grab the critical section and emits \textit{Z} before \textit{Y}. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.

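To make the barging problem concrete outside of \CFA, the following Python sketch (an editorial illustration using low-level primitives, not part of the proposal) shows the usual low-level defence: the waiting thread re-checks its predicate in a loop, so a thread that barges in between the signal and the wake-up cannot violate the \textit{X}-before-\textit{Y} ordering.

```python
import threading

lock = threading.Lock()
cond = threading.Condition(lock)
ready = False          # becomes True once event X has happened
order = []

def waiter():
    with cond:
        # Re-check the predicate in a loop: a barging thread may acquire
        # the lock between notify() and this thread's wake-up, so a
        # single if-test would not be safe.
        while not ready:
            cond.wait()
        order.append("Y")   # event Y, guaranteed to follow X

def signaller():
    global ready
    with cond:
        ready = True        # event X
        order.append("X")
        cond.notify()

tw = threading.Thread(target=waiter)
ts = threading.Thread(target=signaller)
tw.start(); ts.start()
tw.join(); ts.join()
print(order)                # ['X', 'Y']
```

The burden of writing and re-checking the predicate falls entirely on the programmer here, which is exactly the kind of boilerplate a higher-level construct can absorb.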
% ======================================================================
% ======================================================================
\section{Monitors}
% ======================================================================
% ======================================================================
A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with Object-Oriented Languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OOP semantics. The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
\begin{cfacode}
	typedef /*some monitor type*/ monitor;
	int f(monitor & m);

	int main() {
		monitor m;  //Handle m
		f(m);       //Routine using handle
	}
\end{cfacode}

% ======================================================================
% ======================================================================
\subsection{Call semantics} \label{call}
% ======================================================================
% ======================================================================
The above monitor example displays some of the intrinsic characteristics of monitors. Indeed, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because, at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable.

40Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Pass through can be both generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following to implement an atomic counter :
41
\begin{cfacode}
    monitor counter_t { /*...see section $\ref{data}$...*/ };

    void ?{}(counter_t & nomutex this); //constructor
    size_t ++?(counter_t & mutex this); //increment

    //need for mutex is platform dependent here
    void ?{}(size_t * this, counter_t & mutex cnt); //conversion
\end{cfacode}

Here, the constructor (\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor's mutual exclusion when constructing. This semantic is justified because an object not yet constructed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword, depending on whether or not reading a \code{size_t} is an atomic operation.

Having both \code{mutex} and \code{nomutex} keywords could be argued to be redundant, based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)}, it is reasonable that it should default to the safest option, \code{mutex}. On the other hand, having the unqualified routine \code{void foo(counter_t & this)} mean \code{nomutex} is unsafe by default and may easily cause subtle errors. In fact, \code{nomutex} is the ``normal'' parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that ``this routine is not special''. Another alternative is to make exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documenting, but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know, without a doubt, whether or not a parameter is a monitor. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword.


The next semantic decision is to establish when \code{mutex} may be used as a type qualifier. Consider the following declarations:
\begin{cfacode}
int f1(monitor & mutex m);
int f2(const monitor & mutex m);
int f3(monitor ** mutex m);
int f4(monitor *[] mutex m);
int f5(graph(monitor*) & mutex m);
\end{cfacode}
The problem is to identify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2}, it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to identify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even if they were, making sure objects are only acquired once becomes non-trivial. This can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To keep everyone as sane as possible~\cite{Chicken}, this project imposes the requirement that a routine may only acquire one monitor per parameter, and it must be the type of the parameter with one level of indirection (ignoring potential qualifiers). Also note that while routine \code{f3} could be supported, meaning that monitor \code{**m} is acquired, passing an array to this routine would be type safe and yet result in undefined behaviour, because only the first element of the array is acquired. This is especially true for non-copyable objects like monitors, where an array of pointers is the simplest way to express a group of monitors. However, this ambiguity is part of the C type-system with respect to arrays. For this reason, \code{mutex} is disallowed in contexts where arrays may be passed:

\begin{cfacode}
int f1(monitor & mutex m); //Okay : recommended case
int f2(monitor * mutex m); //Okay : could be an array but probably not
int f3(monitor [] mutex m); //Not Okay : array of unknown length
int f4(monitor ** mutex m); //Not Okay : could be an array
int f5(monitor *[] mutex m); //Not Okay : array of unknown length
\end{cfacode}

Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual-exclusion, \CFA uses an explicit mechanism to acquire mutual-exclusion. A consequence of this approach is that it extends naturally to multi-monitor calls.
\begin{cfacode}
int f(MonitorA & mutex a, MonitorB & mutex b);

MonitorA a;
MonitorB b;
f(a,b);
\end{cfacode}
The capacity to acquire multiple locks before entering a critical section is called \emph{\gls{group-acquire}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to routines using the same monitors as arguments. However, since \CFA monitors use multi-acquisition locks, users can effectively force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects the acquiring order:
\begin{cfacode}
    void foo(A & mutex a, B & mutex b) { //acquire a & b
        ...
    }

    void bar(A & mutex a, B & /*nomutex*/ b) { //acquire a
        ... foo(a, b); ... //acquire b
    }

    void baz(A & /*nomutex*/ a, B & mutex b) { //acquire b
        ... foo(a, b); ... //acquire a
    }
\end{cfacode}
The multi-acquisition monitor lock allows a monitor lock to be acquired by both \code{bar} and \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz}, the monitors are acquired in opposite order.

However, such use leads to the lock acquiring-order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behaviour. As shown on several occasions\cit, solving this problem requires:
\begin{enumerate}
    \item Dynamically tracking the monitor-call order.
    \item Implementing rollback semantics.
\end{enumerate}
While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is prohibitively complex \cit. In \CFA, users simply need to be careful when acquiring multiple monitors at the same time.

Finally, for convenience, monitors support multiple acquiring, i.e., acquiring a monitor while already holding it does not cause a deadlock. It simply increments an internal counter, which is then used to release the monitor once the number of acquires and releases match up. This is particularly useful when monitor routines use other monitor routines as helpers or for recursion. For example:
\begin{cfacode}
monitor bank {
    int money;
    log_t usr_log;
};

void deposit( bank & mutex b, int deposit ) {
    b.money += deposit;
    b.usr_log | "Adding" | deposit | endl;
}

void transfer( bank & mutex mybank, bank & mutex yourbank, int me2you) {
    deposit( mybank, -me2you );
    deposit( yourbank, me2you );
}
\end{cfacode}

% ======================================================================
% ======================================================================
\subsection{Data semantics} \label{data}
% ======================================================================
% ======================================================================
Once the call semantics are established, the next step is to establish data semantics. Indeed, until now a monitor is used simply as a generic handle, but in most cases monitors contain shared data. This data should be intrinsic to the monitor declaration to prevent any accidental use of data without its appropriate protection. For example, here is a complete version of the counter shown in section \ref{call}:
\begin{cfacode}
monitor counter_t {
    int value;
};

void ?{}(counter_t & this) {
    this.value = 0;
}

int ?++(counter_t & mutex this) {
    return ++this.value;
}

//need for mutex is platform dependent here
void ?{}(int * this, counter_t & mutex cnt) {
    *this = (int)cnt;
}
\end{cfacode}

This counter is used as follows:
\begin{center}
\begin{tabular}{c @{\hskip 0.35in} c @{\hskip 0.35in} c}
\begin{cfacode}
//shared counter
counter_t cnt1, cnt2;

    ...
\end{cfacode}
\end{tabular}
\end{center}
Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writing.

% ======================================================================
% ======================================================================
\subsection{Implementation Details: Interaction with polymorphism}
% ======================================================================
% ======================================================================
Depending on the choice of semantics for when monitor locks are acquired, the interaction between monitors and \CFA's concept of polymorphism can be complex to support. However, it is shown that entry-point locking solves most of the issues.

First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data, because accessing the data requires knowledge of the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines.

Before looking into complex control flow, it is important to present the difference between the two acquiring options: \gls{callsite-locking} and \gls{entry-point-locking}, i.e., acquiring the monitors before making a mutex routine call, or as the first instruction of the mutex routine call. For example:
\begin{center}
\setlength\tabcolsep{1.5pt}
\begin{tabular}{|c|c|c|}
Code & \gls{callsite-locking} & \gls{entry-point-locking} \\
\CFA & pseudo-code & pseudo-code \\
\hline
\begin{cfacode}[tabsize=3]
void foo(monitor& mutex a){



    //Do Work
    //...

}

void main() {
    monitor a;



    foo(a);

}
\end{cfacode} & \begin{pseudo}[tabsize=3]
foo(& a) {



    //Do Work
    //...

}

main() {
    monitor a;
    //calling routine
    //handles concurrency
    acquire(a);
    foo(a);
    release(a);
}
\end{pseudo} & \begin{pseudo}[tabsize=3]
foo(& a) {
    //called routine
    //handles concurrency
    acquire(a);
    //Do Work
    //...
    release(a);
}

main() {
    monitor a;



    foo(a);

}
\end{pseudo}
\end{tabular}
\end{center}

\Gls{callsite-locking} is inefficient, since any \code{dtype} routine may have to obtain some lock before calling a routine, depending on whether or not the type passed is a monitor. However, with \gls{entry-point-locking}, calling a monitor routine becomes exactly the same as calling it from anywhere else. Note that the \code{mutex} keyword relies on the type resolver rather than a separate language mechanism, which means that in cases where a generic monitor routine is actually desired, writing a mutex routine is possible with the proper trait. This is possible because monitors are designed in terms of a trait. For example:
\begin{cfacode}
//Incorrect
//T is not a monitor
forall(dtype T)
void foo(T * mutex t);

//Correct
//this function only works on monitors
//(any monitor)
forall(dtype T | is_monitor(T))
void bar(T * mutex t);
\end{cfacode}


% ======================================================================
% ======================================================================
\section{Internal scheduling} \label{insched}
% ======================================================================
% ======================================================================
In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronisation. With monitors, this is generally achieved with internal or external scheduling as in\cit. Since internal scheduling of single monitors is mostly a solved problem, this proposal concentrates on extending internal scheduling to multiple monitors at once. Indeed, like the \gls{group-acquire} semantics, internal scheduling extends to multiple monitors in a way that is natural to the user but requires additional complexity on the implementation side.

First, here is a simple example of such a technique:

\begin{cfacode}
    monitor A {
        condition e;
    }

    void foo(A & mutex a) {
        ...
        // Wait for cooperation from bar()
        wait(a.e);
        ...
    }

    void bar(A & mutex a) {
        // Provide cooperation for foo()
        ...
        // Unblock foo at scope exit
        signal(a.e);
    }
\end{cfacode}

There are two details to note here. First, \code{signal} is a delayed operation: it only unblocks the waiting thread when it reaches the end of the critical section. This is needed to respect mutual-exclusion. Second, in \CFA, a \code{condition} has no particular need to be stored inside a monitor, beyond any software-engineering reasons. Here, routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering.

An important aspect to take into account here is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantee offers the benefit of not having to loop around waits in order to check that a condition is still met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity, but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention, as well as extending internal scheduling to multiple monitors, is the main source of complexity in the design of \CFA concurrency.

% ======================================================================
% ======================================================================
\subsection{Internal Scheduling - multi monitor}
% ======================================================================
% ======================================================================
It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code examples. Note that for simplicity, in the following snippets of pseudo-code, waiting and signalling are done using an implicit condition variable, like Java's built-in monitors.

\begin{multicols}{2}
\begin{pseudo}
acquire A
    wait A
release A
\end{pseudo}

\columnbreak

\begin{pseudo}
acquire A
    signal A
release A
\end{pseudo}
\end{multicols}

The previous example shows the simple case of having two threads (one for each column) and a single monitor A. One thread acquires before waiting (atomically blocking and releasing A) and the other acquires before signalling. An important detail to note is that both \code{wait} and \code{signal} must be called with the proper monitor(s) already acquired. This restriction is hidden on the user side in \uC, as it is a logical requirement for barging prevention.

A direct extension of the previous example is the \gls{group-acquire} version:

\begin{multicols}{2}
\begin{pseudo}
acquire A & B
    wait A & B
release A & B
\end{pseudo}

\columnbreak

\begin{pseudo}
acquire A & B
    signal A & B
release A & B
\end{pseudo}
\end{multicols}

This version uses \gls{group-acquire} (denoted using the \& symbol), but the presence of multiple monitors does not add any particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers more monitors. On the implementation side, however, handling multiple monitors does add a degree of complexity, as the next few examples demonstrate.

While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. However, for monitors as for locks, it is possible to write a program using nesting without encountering any problems if nesting is done correctly. For example, the next pseudo-code snippet acquires monitors A then B before waiting, while only acquiring B when signalling, effectively avoiding the nested monitor problem.

\begin{multicols}{2}
\begin{pseudo}
acquire A
    acquire B
        wait B
    release B
release A
\end{pseudo}

\columnbreak

\begin{pseudo}

acquire B
    signal B
release B

\end{pseudo}
\end{multicols}

The next example is where \gls{group-acquire} adds a significant layer of complexity to the internal signalling semantics.

\begin{multicols}{2}
\begin{pseudo}[numbers=left]
acquire A
    // Code Section 1
    acquire A & B
        // Code Section 2
        wait A & B
        // Code Section 3
    release A & B
    // Code Section 4
release A
\end{pseudo}

\columnbreak

\begin{pseudo}[numbers=left, firstnumber=10]
acquire A
    // Code Section 5
    acquire A & B
        // Code Section 6
        signal A & B
        // Code Section 7
    release A & B
    // Code Section 8
release A
\end{pseudo}
\end{multicols}

It is particularly important to pay attention to code sections 8 and 3, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{group-acquire} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should ``release A \& B'' (line 17), it must actually transfer ownership of monitor B to the waiting thread. This ownership transfer is required in order to prevent barging. Since the signalling thread still needs monitor A, simply waking up the waiting thread is not an option, because it would violate mutual exclusion. We are therefore left with three options:

\subsubsection{Delaying signals}
The first and most obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that the moment when the last lock is no longer needed is the correct time to transfer ownership, as it fits most closely the behaviour of single-monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantics viable by simply changing monitors to monitor collections.
\begin{multicols}{2}
Waiter
\begin{pseudo}[numbers=left]
acquire A
    acquire A & B
        wait A & B
    release A & B
release A
\end{pseudo}

\columnbreak

Signaller
\begin{pseudo}[numbers=left, firstnumber=6]
acquire A
    acquire A & B
        signal A & B
    release A & B
    //Secretly keep B here
release A
//Wakeup waiter and transfer A & B
\end{pseudo}
\end{multicols}
However, this solution can become much more complicated depending on what is executed while secretly holding B. Indeed, nothing prevents a user from signalling monitor A on a different condition variable:
\begin{multicols}{2}
\begin{pseudo}
acquire A
    acquire A & B
        wait A & B
    release A & B
release A
\end{pseudo}

\begin{pseudo}
acquire A
    wait A
release A
\end{pseudo}

\columnbreak

\begin{pseudo}
acquire A
    acquire A & B
        signal A & B
    release A & B
    //Secretly keep B here
    signal A
release A
//Who wakes up the other thread?
\end{pseudo}
\end{multicols}

The goal in this solution is to avoid the need to transfer ownership of a subset of the condition monitors. However, this goal is unreachable in the previous example since \TODO

\subsubsection{Dependency graphs}
In the previous pseudo-code, there is a solution which satisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases A, and the waiter then transfers ownership of A back when it releases it, the problem is solved. Dynamically finding the correct order is therefore the second possible solution. The problem is that it effectively boils down to resolving a dependency graph of ownership requirements: even the simplest of code snippets requires two transfers, and the number of transfers seems to grow in a manner closer to polynomial. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfers and has multiple solutions:

\begin{multicols}{2}
\begin{pseudo}
acquire A
    acquire B
        acquire C
            wait A & B & C
        release C
    release B
release A
\end{pseudo}

\columnbreak

\begin{pseudo}
acquire A
    acquire B
        acquire C
            signal A & B & C
        release C
    release B
release A
\end{pseudo}
\end{multicols}
Since resolving a dependency graph is a complex and expensive endeavour, this solution is not the preferred one.

\subsubsection{Partial signalling}
Finally, the solution that was chosen for \CFA is to use partial signalling. Consider the following case:

\begin{multicols}{2}
\begin{pseudo}[numbers=left]
acquire A
    acquire A & B
        wait A & B
    release A & B
release A
\end{pseudo}

\columnbreak

\begin{pseudo}[numbers=left, firstnumber=6]
acquire A
    acquire A & B
        signal A & B
    release A & B
    // ... More code
release A
\end{pseudo}
\end{multicols}

The partial-signalling solution transfers ownership of monitor B at line 10 but does not wake the waiting thread, since it is still using monitor A. Only when it reaches line 11 does it actually wake up the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released, and conditionally waking threads if all conditions are met. Contrary to the other solutions, this solution quickly hits an upper bound on implementation complexity.

% ======================================================================
% ======================================================================
\subsection{Signalling: Now or Later}
% ======================================================================
% ======================================================================
An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread waiting for cooperation. This is achieved using the \code{signal_block} routine\footnote{name to be discussed}.

Here is an example highlighting the difference in behaviour:
\begin{center}
\begin{tabular}{|c|c|}
\code{signal} & \code{signal_block} \\
\hline
\begin{cfacode}
monitor M { int val; };
condition c;

void foo(M & mutex m ) {
    m.val++;
    sout| "Foo:" | m.val |endl;

    wait( c );

    m.val++;
    sout| "Foo:" | m.val |endl;
}

void bar(M & mutex m ) {
    m.val++;
    sout| "Bar:" | m.val |endl;

    signal( c );

    m.val++;
    sout| "Bar:" | m.val |endl;
}
\end{cfacode}&\begin{cfacode}
monitor M { int val; };
condition c;

void foo(M & mutex m ) {
    m.val++;
    sout| "Foo:" | m.val |endl;

    wait( c );

    m.val++;
    sout| "Foo:" | m.val |endl;
}

void bar(M & mutex m ) {
    m.val++;
    sout| "Bar:" | m.val |endl;

    signal_block( c );

    m.val++;
    sout| "Bar:" | m.val |endl;
}
\end{cfacode}
\end{tabular}
\end{center}
Assuming that \code{val} is initialized to 0, that each routine is called from a separate thread, and that \code{foo} is always called first, the previous code yields the following output:

\begin{center}
\begin{tabular}{|c|c|}
\code{signal} & \code{signal_block} \\
\hline
\begin{pseudo}
Foo: 1
Bar: 2
Bar: 3
Foo: 4
\end{pseudo}&\begin{pseudo}
Foo: 1
Bar: 2
Foo: 3
Bar: 4
\end{pseudo}
\end{tabular}
\end{center}

As mentioned, \code{signal} only transfers ownership once the current critical section exits, resulting in the second ``Bar'' line being printed before the second ``Foo'' line. On the other hand, \code{signal_block} immediately transfers ownership to the waiting thread in \code{foo}, causing an inversion of the output. Obviously, this means that \code{signal_block} is a blocking call, which only resumes once the signalled thread exits the critical section.

% ======================================================================
% ======================================================================
\subsection{Internal scheduling: Implementation} \label{insched-impl}
% ======================================================================
% ======================================================================
\TODO


% ======================================================================
% ======================================================================
\section{External scheduling} \label{extsched}
% ======================================================================
% ======================================================================
An alternative to internal scheduling is to use external scheduling.
\begin{center}
\begin{tabular}{|c|c|}
Internal Scheduling & External Scheduling \\
\hline
\begin{ucppcode}
_Monitor Semaphore {
    condition c;
    bool inUse;
public:
    void P() {
        if(inUse) wait(c);
        inUse = true;
    }
    void V() {
        inUse = false;
        signal(c);
    }
}
\end{ucppcode}&\begin{ucppcode}
_Monitor Semaphore {

    bool inUse;
public:
    void P() {
        if(inUse) _Accept(V);
        inUse = true;
    }
    void V() {
        inUse = false;

    }
}
\end{ucppcode}
\end{tabular}
\end{center}
This method is more constrained and explicit, which may help users tone down the non-deterministic nature of concurrency. Indeed, as the example above demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (e.g., \uC) or in terms of data (e.g., Go). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling: loose object definitions and multi-monitor routines. The example above shows a simple use of \code{accept} versus \code{wait}/\code{signal} and its advantages.

In the case of internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor. This entails that routine \code{P} may have acquired mutual exclusion several times while a thread was blocked. On the other hand, external scheduling guarantees that while a thread was waiting, no routine other than \code{V} could acquire the monitor.

% ======================================================================
% ======================================================================
\subsection{Loose object definitions}
% ======================================================================
% ======================================================================
In \uC, monitor declarations include an exhaustive list of monitor operations. Since \CFA is not object oriented, this model becomes both more difficult to implement and less clear for the user:

\begin{cfacode}
    monitor A {};

    void f(A & mutex a);
    void g(A & mutex a) { accept(f); }
\end{cfacode}

However, external scheduling is an example where implementation constraints become visible from the interface. Indeed, since there is no hard limit to the number of threads trying to acquire a monitor concurrently, performance is a significant concern. Here is the pseudo-code for the entering phase of a monitor:

\begin{center}
\begin{tabular}{l}
\begin{pseudo}
    if monitor is free
        enter
    elif monitor accepts me
        enter
    else
        block
\end{pseudo}
\end{tabular}
\end{center}

For the \pscode{monitor is free} condition, it is easy to implement a check that can evaluate the condition in a few instructions. However, a fast check for \pscode{monitor accepts me} is much harder to implement, depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue, as in the following figure:

\begin{center}
{\resizebox{0.4\textwidth}{!}{\input{monitor}}}
\end{center}

There are other alternatives to this picture, but in this case implementing a fast accept check is relatively easy. Indeed, simply updating a bitmask when the acceptor queue changes is enough to have a check that executes in a single instruction, even with a fairly large number (e.g., 128) of mutex members. However, this approach relies on the fact that all the acceptable routines are declared with the monitor type. For OO languages, this does not compromise much, since monitors already have an exhaustive list of member routines. However, for \CFA this is not the case; routines can be added to a type anywhere after its declaration. It is important to note that the bitmask approach does not actually require an exhaustive list of routines, but it does require a dense unique ordering of routines with an upper bound, and that ordering must be consistent across translation units.
The alternative would be to have a picture more like this one:

\begin{center}
{\resizebox{0.4\textwidth}{!}{\input{ext_monitor}}}
\end{center}

Not storing the queues inside the monitor means that the storage can vary between routines, allowing for more flexibility and extensions. Storing an array of function pointers would solve the issue of uniquely identifying acceptable routines. However, the single-instruction bitmask compare has been replaced by dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling may now require additional searches on calls to accept, to check whether a routine is already queued.

At this point, a decision must be made between flexibility and performance. Many design decisions in \CFA achieve both; for example, polymorphic routines add significant flexibility, but inlining them means the optimizer can easily remove any runtime cost. Here, however, the cost of flexibility cannot be trivially removed.

In either case, here are a few syntax alternatives: \\
\begin{center}
{\renewcommand{\arraystretch}{1.5}
\begin{tabular}[t]{l @{\hskip 0.35in} l}
\hline
\multicolumn{2}{ c }{\code{accept} on type}\\
\hline
Alternative 1 & Alternative 2 \\
\begin{lstlisting}
mutex struct A
accept( void f(A & mutex a) )
{};
\end{lstlisting} &\begin{lstlisting}
mutex struct A {}
accept( void f(A & mutex a) );

\end{lstlisting} \\
Alternative 3 & Alternative 4 \\
\begin{lstlisting}
mutex struct A {
	accept( void f(A & mutex a) )
};

\end{lstlisting} &\begin{lstlisting}
mutex struct A {
	accept :
		void f(A & mutex a);
};
\end{lstlisting}\\
\hline
\multicolumn{2}{ c }{\code{accept} on routine}\\
\hline
\begin{lstlisting}
mutex struct A {};

void f(A & mutex a);

accept( void f(A & mutex a) )
void g(A & mutex a) {
	/*...*/
}
\end{lstlisting}&\\
\end{tabular}
}
\end{center}

Another aspect to consider is what happens if multiple overloads of the same routine exist. For the time being, it is assumed that an accept applies to every overload of the named routine, regardless of which overload is called. However, this could easily be refined in the future.

% ======================================================================
% ======================================================================
\subsection{Multi-monitor scheduling}
% ======================================================================
% ======================================================================

External scheduling, like internal scheduling, becomes orders of magnitude more complex with the introduction of multi-monitor syntax. Even in the simplest possible case, some new semantics need to be established:
\begin{cfacode}
	accept( void f(mutex struct A & mutex this))
	mutex struct A {};

	mutex struct B {};

	void g(A & mutex a, B & mutex b) {
		accept(f); //ambiguous, which monitor?
	}
\end{cfacode}

The obvious solution is to specify the correct monitor as follows:

\begin{cfacode}
	accept( void f(mutex struct A & mutex this))
	mutex struct A {};

	mutex struct B {};

	void g(A & mutex a, B & mutex b) {
		accept( b, f );
	}
\end{cfacode}

This is unambiguous. Both locks are acquired and held; when routine \code{f} is called, the lock for monitor \code{a} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{b}). This behavior can be extended to the multi-monitor accept statement as follows.

\begin{cfacode}
	accept( void f(mutex struct A & mutex, mutex struct B & mutex))
	mutex struct A {};

	mutex struct B {};

	void g(A & mutex a, B & mutex b) {
		accept( b, a, f );
	}
\end{cfacode}

Note that the set of monitors passed to the \code{accept} statement must be entirely contained in the set of monitors already acquired by the routine. Using \code{accept} in any other context is undefined behaviour.

% ======================================================================
% ======================================================================
\subsection{Implementation Details: External scheduling queues}
% ======================================================================
% ======================================================================
Supporting multi-monitor external scheduling means that some kind of entry queue must be used that is aware of both monitors. However, acceptable routines must be aware of the entry queues, which means the queues must be stored inside at least one of the monitors to be acquired. This, in turn, requires a systematic algorithm for disambiguating which queue is relevant, regardless of user ordering. The proposed algorithm falls back on the monitor lock ordering and specifies that the monitor acquired first holds the relevant entry queue. This assumes that the lock-acquisition order is static for the lifetime of all concerned objects, which is a reasonable constraint. This choice has two consequences. First, the entry queue of the highest-priority monitor is no longer a true FIFO queue: instead of simply holding the waiting threads in order of arrival, its entries also record the second mutex, so another thread with the same highest-priority monitor but a different lowest-priority monitor may arrive first yet enter the critical section after a thread with the correct pairing. Second, since it may not be known at compile time which monitor will be the lowest-priority monitor, every monitor needs the corresponding queues, even though probably half the multi-monitor queues will go unused for the entire duration of the program.

% ======================================================================
% ======================================================================
\section{Other concurrency tools}
% ======================================================================
% ======================================================================
% \TODO
|
|
anonymous one year ago For the reaction PCl₅(g) + heat ⇌ PCl₃(g) + Cl₂(g), what will happen when the volume is increased?
1. anonymous
I can't really figure out what you mean by your equation...
2. aaronq
i think it's: $$\sf PCl_{5~(g)} + heat \rightarrow PCl_{3~(g)} + Cl_{2~(g)}$$
|
|
# For which fields K is every subring of K…?
This question was inspired by
How to prove that the subrings of the rational numbers are noetherian?
which some people found too routine to be of interest. So I have decided to liven things up a bit with the following questions. In the interest of full disclosure, I have not thought seriously about these questions, and I think that I probably could answer at least some of them myself, but I do think they are interesting and, if I may say so, educational.
Find all (commutative!) fields $K$ such that every (unital!) subring $R$ of $K$ is:
a) a principal ideal domain.
b) a Dedekind domain.
c) a Noetherian domain.
I mean here to be asking three different questions, one for each condition. Evidently the classes of such fields are nondecreasing from a) to b) and from b) to c).
If you would like to answer the question with a), b) or c) replaced by some other standard property of commutative rings -- especially if it yields a different class of fields than in the first three questions -- please feel free.
d) a Dedekind domain if it is integrally closed?
e) a PID if it is integrally closed?
Regarding question (c), I can tell you exactly which integral domains have only Noetherian subrings by quoting the aptly titled Integral domains with Noetherian subrings by Robert Gilmer:
If $K$ is the field of fractions and $char(K)=0$, we just need $[K:\mathbb{Q}]<\infty$.
If $char(K)=p$ with prime subfield $k$, we need $K$ to be either finite or a finite algebraic extension of a $k[X]$ for some transcendental $X$.
I guess this pretty much restricts the answers to questions (a) and (b)...
Sounds good. With those fields as an upper bound, the other two questions should be easier to answer. Would you consider sketching the proof? – Pete L. Clark Mar 27 '10 at 17:37
Also, when the characteristic is positive, I believe the correct condition is that either $K$ is algebraic (not necessarily finite) over $\mathbb{F}_p$ or is a finite extension of $\mathbb{F}_p(t)$. – Pete L. Clark Mar 27 '10 at 17:56
Ha yes, of course, I used the wrong brackets and inadvertently missed infinitely many examples. I need to work on my copying skills... – dke Mar 28 '10 at 15:23
Let me put together the previous two answers (plus epsilon) to give an answer to all three questions.
Step 1: By Gilmer's theorem, a field $K$ has all its subrings Noetherian iff:
(i) It is a finite extension of $\mathbb{Q}$, or
(ii) It is an algebraic extension of $\mathbb{F}_p$ or a finite extension of $\mathbb{F}_p(t)$.
Step 2: Suppose $K$ is a number field which is not $\mathbb{Q}$. We may write $K = \mathbb{Q}[\alpha]$ for some algebraic integer $\alpha$. Then $R = \mathbb{Z}[2\alpha]$ is a non-integrally closed subring of $K$ so is not a Dedekind domain. So the only field of characteristic $0$ which has every subring a Dedekind domain is $\mathbb{Q}$, in which case (by the previous question) every subring is a PID.
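To make Step 2 concrete (this worked instance is mine, not part of the original answer), take $K = \mathbb{Q}(i)$, so one may choose $\alpha = i$ and $R = \mathbb{Z}[2i]$:

```latex
R \;=\; \mathbb{Z}[2i] \;=\; \mathbb{Z} + 2i\,\mathbb{Z}, \qquad
i^2 + 1 = 0, \qquad i \in K \setminus R .
```

Thus $i$ is a root of the monic polynomial $x^2+1 \in R[x]$, hence integral over $R$, yet $i \notin R$; so $R$ is not integrally closed and in particular not a Dedekind domain.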
Step 3: Suppose $K$ has characteristic $p > 0$. If $K$ is algebraic over $\mathbb{F}_p$, then every subring is a field, hence also Dedekind and a PID. If $K$ is a finite extension of $\mathbb{F}_p(t)$ then it admits a subring of the form $\mathbb{F}_p[t^2,t^3]$, which is not integrally closed.
So the fields for which every subring is a Dedekind ring are $\mathbb{Q}$ and the algebraic extensions of $\mathbb{F}_p$. For all such fields, every subring is in fact a PID.
For (a) and (b), when the characteristic is positive and K is not algebraic over the prime field, there is a subring of the form k[t^2,t^3] which is not a PID and not Dedekind.
When the characteristic is zero, for (b), since a Dedekind domain is required to be integrally closed by definition, once K is a number field different from ℚ, one can find subrings of the ring of integers that are not integrally closed.
For (d), note that if R is Dedekind with field of fractions K, and if R' is any ring between R and K, then the local rings of R' (localisation with respect to a maximal ideal) will be a subset of the local rings of R. Thus they will all be DVRs. Since the ring of integers in an algebraic number field is Dedekind, this shows that for any number field K, we have that every integrally closed subring is a Dedekind domain.
|
|
Vestn. Novosib. Gos. Univ., Ser. Mat. Mekh. Inform., 2006, Volume 6, Issue 2, Pages 33–56 (Mi vngu230)
The scales of spaces $L_p$ and their connection with Orlicz spaces
A. E. Mamontov
Abstract: We describe the classes of measurable functions which are estimated in the spaces $L_p$ with the norm $\omega(p)$ for all $p\in(\alpha,\beta)$. It is well-known for some simple functions $\omega$ and $\beta=+\infty$ that such classes are embedded into the appropriate Orlicz spaces. In this article we study the connection between these classes and other symmetric (Lorentz, Marcinkiewicz and Orlicz) spaces for arbitrary $\omega$, $\alpha$ and $\beta$. Our main goal is to show two-sided embedding or coincidence with Orlicz spaces.
Full text: PDF file (337 kB)
References: PDF file HTML file
Document Type: Article
UDC: 517.982
Citation: A. E. Mamontov, “The scales of spaces $L_p$ and their connection with Orlicz spaces”, Vestn. Novosib. Gos. Univ., Ser. Mat. Mekh. Inform., 6:2 (2006), 33–56
Citation in format AMSBIB
\Bibitem{Mam06} \by A.~E.~Mamontov \paper The scales of spaces $L_p$ and their connection with Orlicz spaces \jour Vestn. Novosib. Gos. Univ., Ser. Mat. Mekh. Inform. \yr 2006 \vol 6 \issue 2 \pages 33--56 \mathnet{http://mi.mathnet.ru/vngu230}
|
|
# Chapter 4 - Section 4.1 - Properties of a Parallelogram - Exercises: 4a
17.9
#### Work Step by Step
A parallelogram has two pairs of parallel sides, and its opposite sides are equal in length. QP is opposite MN, and the length of MN is given as 17.9; therefore the length of QP is 17.9.
|
|
# A grave misunderstanding
When I started using Scala, I was very confused by “for comprehensions” for a very long time. My mistake was to treat <- as an “unwrapper”. For example, in the following, <- “unwraps” the collection/container/monad, as expected.
import scala.util.Try

def main(args: Array[String]): Unit = {
  for { x <- Option(1) } { println(x) } // prints 1
  for { x <- List(2) } { println(x) } // prints 2
  for { x <- Try(3) } { println(x) } // prints 3
}
We also know that a for comprehension is sugar for flatMap, map and withFilter1. So we can combine enumerators like the following to produce very succinct and elegant code.
val _: List[Int] = for {
x <- List(1)
y <- List(x + 1)
} yield y
// Is sugar for
val z: List[Int] = List(1).flatMap(x => List(x + 1).map(y => y))
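When a guard appears in the comprehension, the withFilter part of the desugaring shows up as well; a small sketch (my example, not from the original post):

```scala
object GuardDemo {
  def main(args: Array[String]): Unit = {
    val evens: List[Int] = for {
      x <- List(1, 2, 3, 4)
      if x % 2 == 0
    } yield x
    // Is sugar for
    val evens2: List[Int] = List(1, 2, 3, 4).withFilter(x => x % 2 == 0).map(x => x)
    println(evens)  // prints List(2, 4)
    println(evens2) // prints List(2, 4)
  }
}
```
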
So far so good. Sometimes, we want to do some computation over the enumerators:
def isOdd(x: Int): Option[Int] = if ((x % 2) == 0) None else Some(x)
val z: List[Int] = for {
x <- List(1)
y <- isOdd(x)
} yield y
This makes sense at first if we treat <- as unwrapping the Option[Int]. However, that is the wrong mental model. If we replace Option with Try, the code fails to compile.
def isOdd(x: Int): Try[Int] = Try(if ((x % 2) == 0) throw new IllegalArgumentException("even") else x) // Note that this is now Try instead of Option
val z: List[Int] = for {
x <- List(1)
y <- isOdd(x) // This will fail to compile
} yield y
This took me by surprise for an embarrassingly long time. I was confused as to why it was fine to use a for comprehension with List and Option, but not List and Try. Similarly, combining Try and Option fails too2.
Recall that the type signature of flatMap on F[A] is def flatMap[B](f: A => F[B]): F[B] (or equivalently, F[A] => (A => F[B]) => F[B]) and that F stays the same. So it makes sense that combining Try and Option wouldn't work. However, why is it that List and Option works? It turns out that the implementation of flatMap for List takes a GenTraversableOnce, and there is an implicit conversion from Option to an Iterable which satisfies that trait.
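Given that, one way to make the List and Try combination compile (a sketch against the Scala 2 standard library; the object name is mine) is to go through Option explicitly, so the implicit Option-to-Iterable conversion applies:

```scala
import scala.util.Try

object TryDemo {
  def isOdd(x: Int): Try[Int] =
    Try(if (x % 2 == 0) throw new IllegalArgumentException("even") else x)

  def main(args: Array[String]): Unit = {
    val z: List[Int] = for {
      x <- List(1, 2, 3)
      y <- isOdd(x).toOption // Try -> Option -> (implicitly) Iterable
    } yield y
    println(z) // prints List(1, 3)
  }
}
```
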
I think this is an example of the kind of thing that catches people moving to Scala, and it probably should be emphasized by books introducing the language.
1. If a linguist were to look at this and try to derive the syntactic rule, she'd be confused too. Let's replace List, Option, and Try with the symbols A, B, C, and let combining them in a for comprehension be the binary operator +. Then the observed rules are: A + A, B + B, C + C, and A + B are fine, but A + C and B + C do not work. It is unclear why things break for C. ↩︎
|
|
# The European Physical Journal C
## List of Papers (Total 3,658)
#### Momentum dissipation and holographic transport without self-duality
We explore the response of the momentum dissipation introduced by spatial linear axionic fields in a holographic model without self-duality, which is broken by a Weyl tensor coupling to the Maxwell field. It is found that for a positive Weyl coupling parameter $$\gamma >0$$, the momentum dissipation, characterized by the parameter $${\hat{\alpha }}$$, drives an incoherent metallic state...
#### A note on thin-shell wormholes with charge in F(R)-gravity
In their recent work (Eiroa and Aguirre in Eur Phys J C 76:132, 2016), Eiroa and Aguirre introduced thin-shell wormholes in $$F\left( R\right) =R+\alpha R^{2}$$-gravity coupled with the Maxwell electromagnetic field. Here in this note we shall address an interesting feature of their results which has been missed. It will be shown that thin-shell wormhole can not be formed in the...
We perform a non-perturbative study of the scale-dependent renormalisation factors of a complete set of dimension-six four-fermion operators without power subtractions. The renormalisation-group (RG) running is determined in the continuum limit for a specific Schrödinger Functional (SF) renormalisation scheme in the framework of lattice QCD with two dynamical flavours ($$N_...
#### Discovery potential for directional Dark Matter detection with nuclear emulsions
Direct Dark Matter searches are nowadays one of the most fervid research topics with many experimental efforts devoted to the search for nuclear recoils induced by the scattering of Weakly Interactive Massive Particles (WIMPs). Detectors able to reconstruct the direction of the nucleus recoiling against the scattering WIMP are opening a new frontier to possibly extend Dark Matter...
#### Quasilocal first law of black hole dynamics from local Lorentz transformations
Quasilocal formulations of black hole are of immense importance since they reveal the essential and minimal assumptions required for a consistent description of black hole horizon, without relying on the asymptotic boundary conditions on fields. Using the quasilocal formulation of Isolated Horizons, we construct the Hamiltonian charges corresponding to local Lorentz...
#### Holevo bound of entropic uncertainty in Schwarzschild spacetime
For a pair of incompatible quantum measurements, the total uncertainty can be bounded by a state-independent constant. However, such a bound can be violated if the quantum system is entangled with another quantum system (called memory); the quantum correlation between the systems can reduce the measurement uncertainty. On the other hand, in a curved spacetime, the presence of the...
#### Mass and angular momentum of black holes in 3D gravity theories with first order formalism We apply the Wald formalism to obtain masses and angular momenta of black holes in three dimensional gravity theories using the first order formalism. Wald formalism suggests that the entropy of a black hole can be defined by an integration of a conserved charge on the bifurcation horizon, and mass and angular momentum of a black hole as an integration of some charge variation... #### Stratified scalar field theories of gravitation with self-energy term and effective particle Lagrangian We construct a general stratified scalar theory of gravitation from a field equation that accounts for the self-interaction of the field and a particle Lagrangian, and calculate its post-Newtonian parameters. Using this general framework, we analyze several specific scalar theories of gravitation and check their predictions for the solar system post-Newtonian effects. #### Measurement of prompt and nonprompt charmonium suppression in \(\text {PbPb}$$ collisions at 5.02 $$\,\text {Te}\text {V}$$
The nuclear modification factors of $${\mathrm {J}/\psi }$$ and $$\psi \text {(2S)}$$ mesons are measured in $$\text {PbPb}$$ collisions at a centre-of-mass energy per nucleon pair of $$\sqrt{\smash [b]{s_{_{\text {NN}}}}} = 5.02\,\text {Te}\text {V}$$. The analysis is based on $$\text {PbPb}$$ and $$\mathrm {p}\mathrm {p}$$ data samples collected by CMS at the LHC in 2015...
#### Precise determination of $$\alpha _{S}(M_Z)$$ from a global fit of energy–energy correlation to NNLO+NNLL predictions
We present a comparison of the computation of energy–energy correlation in $$e^{+}e^{-}$$ collisions in the back-to-back region at next-to-next-to-leading logarithmic accuracy matched with the next-to-next-to-leading order perturbative prediction to LEP, PEP, PETRA, SLC and TRISTAN data. With these predictions we perform an extraction of the strong coupling constant taking into...
#### Analysis of b quark pair production signal from neutral 2HDM Higgs bosons at future linear colliders
In this paper, the b quark pair production events are analyzed as a source of neutral Higgs bosons of the two Higgs doublet model type I at linear colliders. The production mechanism is $$e^{+}e^{-} \rightarrow Z^{(*)} \rightarrow HA \rightarrow b{\bar{b}}b{\bar{b}}$$ assuming a fully hadronic final state. The analysis aim is to identify both CP-even and CP-odd Higgs bosons in...
#### Dark matter direct detection of a fermionic singlet at one loop
The strong direct detection limits could be pointing to dark matter – nucleus scattering at loop level. We study in detail the prototype example of an electroweak singlet (Dirac or Majorana) dark matter fermion coupled to an extended dark sector, which is composed of a new fermion and a new scalar. Given the strong limits on colored particles from direct and indirect searches we...
#### Gaussian processes reconstruction of dark energy from observational data
In the present paper, we investigate the dark energy equation of state using the Gaussian processes analysis method, without confining a particular parametrization. The reconstruction is carried out by adopting the background data including supernova and Hubble parameter, and perturbation data from the growth rate. It suggests that the background and perturbation data both...
#### Some results on a black hole with a global monopole in Poincaré gravity
The aim of this work is to study the thermodynamics and spin current of a system corresponding to a black hole containing a global monopole in the context of Poincaré gravity theory which is an extension of general relativity, in the sense that the intrinsic angular momentum of matter is also a source of gravitational interaction. Thus, in this work we find the solution...
#### Palatini formulation of f(R, T) gravity theory, and its cosmological implications
We consider the Palatini formulation of f(R, T) gravity theory, in which a non-minimal coupling between the Ricci scalar and the trace of the energy-momentum tensor is introduced, by considering the metric and the affine connection as independent field variables. The field equations and the equations of motion for massive test particles are derived, and we show that the...
#### On stability of a neutron star system in Palatini gravity
We formulate the generalized Tolman–Oppenheimer–Volkoff equations for the $$f(\hat{R})$$ Palatini gravity in the case of static and spherical symmetric geometry. We also show that a neutron star can be a stable system independently of the form of the functional $$f(\hat{R})$$.
#### Non-perturbative to perturbative QCD via the FFBRST
Recently a new type of quadratic gauge was introduced in QCD in which the degrees of freedom are suggestive of a phase of abelian dominance. In its simplest form it is also free of Gribov ambiguity. However this gauge is not suitable for usual perturbation theory. The finite field dependent BRST (FFBRST) transformation is a method established to interrelate generating functionals...
#### The minimal axion minimal linear $$\sigma$$ model
The minimal SO(5) / SO(4) linear $$\sigma$$ model is extended including an additional complex scalar field, singlet under the global SO(5) and the Standard Model gauge symmetries. The presence of this scalar field creates the conditions to generate an axion à la KSVZ, providing a solution to the strong CP problem, or an axion-like-particle. Different choices for the PQ charges...
#### White dwarfs with a surface electrical charge distribution: equilibrium and stability
The equilibrium configuration and the radial stability of white dwarfs composed of charged perfect fluid are investigated. These cases are analyzed through the results obtained from the solution of the hydrostatic equilibrium equation. We regard that the fluid pressure and the fluid energy density follow the relation of a fully degenerate electron gas. For the electric charge...
#### Stephani cosmology: entropically viable but observationally challenged
Inhomogeneous cosmological models such as the Stephani universes could, in principle, provide an explanation for the observed accelerated expansion of the Universe. Working with a concrete, popular model of the Stephani cosmology – the Stephani-Da̧browski model, we found that it is entropically viable. We also comment on the energy conditions and the two-sheeted geometry of the...
This paper deals with the cancellation mechanism, which identifies the energy density of space-time expansion in an empty universe with the zero-point energy density and avoids the scale discrepancy with the observed energy density (cosmological constant problem). Using an intrinsic degree of freedom which describes the coupling of a variable cosmological term $$\varLambda...
#### Search for a new heavy gauge-boson resonance decaying into a lepton and missing transverse momentum in 36 fb$$^{-1}$$ of pp collisions at $$\sqrt{s} = 13$$ TeV with the ATLAS experiment
The results of a search for new heavy $$W^\prime$$ bosons decaying to an electron or muon and a neutrino using proton–proton collision data at a centre-of-mass energy of $$\sqrt{s}~=~13$$ TeV are presented. The dataset was collected in 2015 and 2016 by the ATLAS experiment at the Large Hadron Collider and corresponds to an integrated luminosity of 36.1 $$\text{ fb }^{-1}$$. As...
#### Bouncing cosmological solutions from $$f(\mathsf{R,T})$$ gravity
In this work we study classical bouncing solutions in the context of $$f(\mathsf{R},\mathsf{T})=\mathsf{R}+h(\mathsf{T})$$ gravity in a flat FLRW background using a perfect fluid as the only matter content. Our investigation is based on introducing an effective fluid through defining effective energy density and pressure; we call this reformulation as the “effective picture...
#### Scalar pair production in a magnetic field in de Sitter universe
The production of scalar particles by the dipole magnetic field in de Sitter expanding universe is analyzed. The amplitude and probability of transition are computed using perturbative methods. A graphical study of the transition probability is performed obtaining that the rate of pair production is important in the early universe. Our results prove that in the process of pair...
#### Analysis of the structure of $$\Xi (1690)$$ through its decays
The mass and pole residue of the first orbitally and radially excited $$\Xi$$ state as well as the ground state residue are calculated by means of the two-point QCD sum rules. Using the obtained results for the spectroscopic parameters, the strong coupling constants relevant to the decays $$\Xi (1690)\rightarrow \Sigma K$$ and $$\Xi (1690) \rightarrow \Lambda K$$ are calculated...
|
|
### Topic: Question about selective saturation (STD and 1D-NMR)
#### riboswitch
• Regular Member
• Posts: 30
• Mole Snacks: +1/-1
• Gender:
• Molecular Biology Student
##### Question about selective saturation (STD and 1D-NMR)
« on: November 20, 2018, 10:09:13 AM »
I am having a bit of a problem understanding what "selective saturation" means in the one-dimensional NMR technique called "Saturation Transfer Difference", or STD. I only understood the concept of saturation when I was studying the process of spin relaxation in NMR. If I remember correctly, saturation is a phenomenon that occurs when the Boltzmann equilibrium distribution of nuclear spins is perturbed such that the population of the α spins (Nα) equals the population of the β spins (Nβ). However, I have no idea whether this is the same "saturation" concept used in Saturation Transfer Difference, which is used to detect transient protein-ligand binding.
What little I understand about STD is the following: on-resonance spectrum and off-resonance spectrum of a sample are collected and a difference between them is determined. Subtraction between the two spectra will contain the resonances of the small molecule when there is interaction between the ligand and the protein. But I just don't understand how the phenomenon of saturation plays a role in all of this.
Thanks in advance for the help.
#### Corribus
• Chemist
• Sr. Member
• Posts: 2722
• Mole Snacks: +438/-20
• Gender:
• A lover of spectroscopy and chocolate.
##### Re: Question about selective saturation (STD and 1D-NMR)
« Reply #1 on: November 20, 2018, 11:00:16 AM »
I am by no means an NMR theory specialist. With that in mind, from my understanding, the proton signals in the protein are selectively saturated (based on the definition you provided). Spin diffusion results in desaturation, if you will, of the protein signals faster than the intrinsic relaxation timescale, and simultaneous activation of nearby coupled proton spins. This process depends on the distance of the ligands from the activated protein protons. A somewhat crude analogy would be FRET in fluorescence spectroscopy, if you are familiar with that.
Have you seen this paper published in J. Chem. Educ.?
https://pubs.acs.org/doi/10.1021/ed101169t
What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent? - Richard P. Feynman
#### riboswitch
##### Re: Question about selective saturation (STD and 1D-NMR)
« Reply #2 on: November 20, 2018, 12:05:01 PM »
Have you seen this paper published in J. Chem. Educ.?
https://pubs.acs.org/doi/10.1021/ed101169t
Thank you! So, quick definitions from that paper:
• Off-resonance spectrum - This refers to the spectrum that is recorded without selective saturation of the protein protons. The signal intensities of this spectrum are referred to in the paper as $I_0$;
• On-resonance spectrum - This refers to the spectrum that is recorded with selective saturation of the protein protons. The signal intensities of this spectrum are referred to in the paper as $I_{SAT}$;
From the two spectra, a difference spectrum is determined through subtraction between the signal intensities of the off-resonance spectrum and on-resonance spectrum:
$$I_{STD} = I_{0}-I_{SAT}$$
In the difference spectrum, only the signals of the ligand that received saturation transfer from the protein will remain. According to the paper, the saturation transfer from protein to ligand occurs via spin diffusion, through the so-called nuclear Overhauser effect (NOE). If I remember correctly, the NOE is the modification of the signal intensity of one resonance by saturation of another. For example, in a very simple AX system in which the two spins interact through a magnetic dipole–dipole interaction, if we saturate the transition of X (that is, we equalize the populations of the X levels), we observe that the signal intensity of A is either enhanced or diminished.
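To make the subtraction concrete, here is a toy numeric sketch of the difference spectrum $I_{STD} = I_0 - I_{SAT}$ (the peak names and intensities are made up for illustration, not real spectra): only signals attenuated by saturation transfer survive the subtraction.

```python
peaks = ["ligand_H1", "ligand_H2", "free_molecule_H"]

# Off-resonance: no saturation of the protein, all peaks at full intensity.
I_0 = {"ligand_H1": 1.00, "ligand_H2": 1.00, "free_molecule_H": 1.00}

# On-resonance: protons of a binding ligand lose intensity via saturation
# transfer from the protein; a non-binding molecule is untouched.
I_SAT = {"ligand_H1": 0.70, "ligand_H2": 0.85, "free_molecule_H": 1.00}

# The difference spectrum keeps only the binding ligand's signals.
I_STD = {p: I_0[p] - I_SAT[p] for p in peaks}
print(I_STD)
```

The non-binding molecule subtracts to zero, which is why STD reports only on ligands that actually contact the protein.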
Also, according to the paper, in order to observe STD effects, the dissociation constant $K_D$ of the protein–ligand complex must be lower than $10^{-3}$ M but higher than $10^{-8}$ M. If the ligand binds more tightly than this acceptable $K_D$ range, then according to the paper, relaxation occurs and saturation transfer does not occur.
Have I understood and summarized the concepts well?
Also, right now I'm confused about why saturation transfer doesn't occur when relaxation occurs. Why? (Sorry for asking stupid questions...)
#### Corribus
• Chemist
• Sr. Member
• Posts: 2722
• Mole Snacks: +438/-20
• Gender:
• A lover of spectroscopy and chocolate.
##### Re: Question about selective saturation (STD and 1D-NMR)
« Reply #3 on: November 21, 2018, 12:08:02 PM »
Actually, I cannot access that paper. By the title it just sounded useful. Your summary sounds reasonable to me, though.
Regarding your final question - basically in NMR you are creating a nonequilibrium situation when you hit the sample with the RF pulse. The protons then relax back down to the equilibrium state (the reference state imposed by the static background magnetic field) with a characteristic amount of time once the RF pulse is over. This relaxation, and the associated energy difference between the magnetic spin states, is what creates the NMR spectrum. In the STD experiment, the rate of relaxation competes with the rate of spin transfer to nearby nuclei in the ligands: if the transfer process is slower than the characteristic relaxation time, because - say - the ligands are too far away, then you won't observe any transfer. On the other hand, if the transfer process is faster than relaxation, then you will observe changes in the nuclear spins of the ligands. You basically have two processes that lead to two end-points, and which process is faster determines what you observe in the experiment.
A crude analogy might be this: suppose you get a paycheck of 100 dollars, which creates a nonequilibrium state. The drive back to equilibrium (not having spending money) happens with a characteristic rate (average time it takes you to find something on Amazon to buy). There is a chance you may deposit the money in the bank, and the chance depends on how far away the bank is. You could express this also as an average rate of transfer of funds to the bank. If the bank is close, you are more likely to drive to the bank and deposit the money before you spend it somewhere (rate of transfer exceeds the rate of spending), and you will observe money appearing in your bank account. If the bank is far, the average amount of time it takes to get to the bank and deposit money exceeds the amount of time it takes to spend the money on junk (rate of spending exceeds the rate of transfer), and you rarely see a transfer of money to your bank. It's all about what rate dominates the overall process to determine what is the outcome you observe.
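The rate competition in the analogy can also be sketched with a toy simulation (my own illustration, not part of the thread): two competing first-order processes, where the fraction that "transfers" before it "relaxes" is set by the ratio of the rates.

```python
import random

def branching_fraction(k_transfer, k_relax, trials=100_000, seed=1):
    """Simulate two competing exponential (first-order) processes and
    return the fraction of trials in which transfer happens first."""
    rng = random.Random(seed)
    transferred = 0
    for _ in range(trials):
        t_transfer = rng.expovariate(k_transfer)  # waiting time for transfer
        t_relax = rng.expovariate(k_relax)        # waiting time for relaxation
        if t_transfer < t_relax:
            transferred += 1
    return transferred / trials

# Analytically the fraction is k_transfer / (k_transfer + k_relax) = 0.75 here.
print(branching_fraction(3.0, 1.0))
```

When the transfer rate dominates (the "bank" is close), nearly every trial ends in a transfer; when relaxation dominates, almost none do.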
« Last Edit: November 21, 2018, 12:40:56 PM by Corribus »
What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent? - Richard P. Feynman
|
|
# Advanced SQL: Approximating π Using the Monte Carlo Method in Postgres
Databases are commonly used as dumb storage bins for CRUD application data. However, this doesn’t do them justice: database systems such as PostgreSQL are much more capable and versatile than that, supporting a wide range of operations across many different data types.
The standard way of interfacing with Postgres is SQL – you know, that thing you may have briefly learned in webdev class where you SELECT stuff FROM a set of tables WHERE some condition is met. But that’s a far cry from what you can achieve when taking advantage of the full feature set of this declarative and – surprise! – Turing complete programming language.
In this post, I’ll describe how to approximate $\pi$ with a fairly compact SQL query. 1
This series of posts is loosely based on what I learned in the Advanced SQL lecture held by Torsten Grust in the summer of 2017 at the University of Tübingen. Take a look at the lecture slides for in-depth explanations and a wide array of examples.
## Theory
Some math first:2 You might remember that the area $A$ of a circle with radius $r$ is

$A = \pi r^2$.
We can divide by $r^2$ to get an equation for $\pi$:

$\pi = \frac{A}{r^2}$
That means that as long as we know the radius and find a way of estimating $A$, we can estimate $\pi$!
Now let’s imagine our circle centered in a tight square box:
Note that the square’s side length is $2r$, yielding the square area
$B = (2r)^2 = 4r^2$.
If we generate $n$ random points in this square, we can count how many points $m$ fall inside the circle. That's the Monte Carlo method – generating random samples and estimating based on the observed distribution: 3

$\frac{m}{n} \approx \frac{A}{B}$

This gives us an approximation of the fraction of the square that is occupied by the circle, which in turn lets us approximate $\pi$:

$\pi = \frac{A}{r^2} = \frac{4A}{B} \approx \frac{4m}{n}$
That means that $\pi$ is approximately four times the number of points inside the circle divided by the total number of points within the bounds of our square. This is starting to sound easy to implement!
To make things a bit more straightforward later on, let’s agree to set $r = 0.5$ and place the circle’s center at $m = (0.5, 0.5)$. This conveniently turns our $2r \times 2r$ square into a unit square:
Counting the random points in this example, we notice that $m = 79$ out of $n = 100$ points fall inside the circle. Let's plug these values into the formula we derived above and see what we get:

$\pi \approx \frac{4m}{n} = \frac{4 \cdot 79}{100} = 3.16$
Not too far off!
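If you want to sanity-check the estimator outside the database first, the same sampling scheme is a few lines of Python (an illustration of the theory above, not part of the SQL implementation):

```python
import random

def approximate_pi(n: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi: sample n points in the unit square and
    count those inside the circle of radius 0.5 centred at (0.5, 0.5)."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            inside += 1
    return 4 * inside / n

print(approximate_pi(1_000_000))
```

The estimate converges slowly (error shrinks like $1/\sqrt{n}$), which is exactly what we'll see in the SQL version below as well.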
## Implementation
Before showing you the query, here are some “advanced” SQL features that will come in handy:
• The random() function returns a pseudo-randomly generated floating-point value between 0 and 1. 4
• Postgres happens to have built-in support for geometric shapes: point(x, y) represents a point with the given x and y coordinates, and circle(p, r) creates a circle with radius r around the point p. 5
We can thus create a random point using point(random(), random()). Our unit circle from above can be generated using circle(point(0.5, 0.5), 0.5).
• Natively representing shapes is not very useful without some common operations on them. For example, the @> operator checks if the shape given in the left argument envelops the right argument – you could read it as “contains”.
We’ll use this to check if our unit circle contains each random point: circle(point(0.5, 0.5), 0.5) @> point(random(), random()) returns a boolean ready to be used in our query’s WHERE clause.
• In case you haven’t used it before: generate_series(min, max) generates a single-column table containing the range of numbers from min to max.
We won’t actually need these numbers, instead we just want to ensure that the WHERE clause of our query is executed n times, so by convention we’ll call the returned table _ in our FROM clause.
• Postgres supports setting global parameters using \set name value and retrieving the value using :name. We’ll use this to store our sample size n because we need to reference it twice in our query.
Putting all of this together and combining it with our approximation formula for $\pi$, we end up with something along the lines of the following query:
\set n 1000000
SELECT 4 * count(*) :: float / :n AS pi
FROM generate_series(1, :n) AS _
WHERE circle(point(0.5, 0.5), 0.5) @> point(random(), random());
To run it, simply spin up psql, paste the query, press return and bask in the glory of what we’ve achieved today:
$ psql
psql=# \set n 1000000
psql=# SELECT 4 * count(*) :: float / :n AS pi
psql-# FROM generate_series(1, :n) AS _
psql-# WHERE circle(point(0.5, 0.5), 0.5) @> point(random(), random());
+----------+
|    pi    |
+----------+
| 3.140892 |
+----------+
(1 row)

Time: 682.926 ms

## Addendum: Accidental LaTeX Implementation

If you're only here for the SQL query, go away.

The visualizations in the "theory" section above have been drawn in LaTeX/TikZ6, including the random points used to approximate $\pi$ in the SQL query. At some point I realized that it wouldn't be too hard to add a counter to the \foreach loop that's responsible for filling the points with different shades of gray depending on whether they fall inside the circle. 7 This enables keeping track of the number of points inside the circle. Because the iteration count is known, computing the corresponding approximation for $\pi$ works the same way as in the SQL query's SELECT clause. Displaying the result as part of the drawing then was trivial, as you can see once you scroll down a bit.
First, here’s the code: \RequirePackage{luatex85} \documentclass{standalone} \usepackage{fontenc,unicode-math} \setmainfont[Ligatures=TeX]{TeX Gyre Pagella} \setmathfont[Ligatures=TeX]{TeX Gyre Pagella Math} \usepackage{tikz} \usetikzlibrary{calc} \begin{document} \scalebox{2}{ \begin{tikzpicture}[scale=3.5] % axes \draw [<->,thick] (0,1.2) node (yaxis) [above] {$y$} |- (1.2,0) node (xaxis) [right] {$x$}; % draw circle \coordinate (m) at (0.5,0.5); \draw (m) circle (0.5cm) node (mlabel) [right] {$m$}; \fill[black] (m) circle (0.4pt); \draw[dashed] (yaxis |- m) node[left] {$0.5$} -| (xaxis -| m) node[below] {$0.5$}; \draw[dashed] (m) -- node[right] {$r$} (0.23,0.08); % draw rectangle \coordinate (one) at (1,1); \draw (0,0) rectangle (one); \draw[dashed] (yaxis |- one) node[left] {$1$} -| (xaxis -| one) node[below] {$1$}; % draw random points \pgfmathsetmacro{\i}{100} \newcounter{inpoints} \setcounter{inpoints}{0} \pgfmathsetseed{3455632} \def\incolor{gray!50!black} \def\outcolor{gray!50!white} \foreach \p in {1,...,\i} { \pgfmathsetmacro{\x}{0.5*rand+0.5} \pgfmathsetmacro{\y}{0.5*rand+0.5} \pgfmathparse{(\x-0.5)^2+(\y-0.5)^2} \pgfmathsetmacro{\dist}{\pgfmathresult} \ifdim\dist pt < 0.25pt \addtocounter{inpoints}{1} \fill[fill=\incolor] (\x,\y) circle (0.25pt); \else \fill[fill=\outcolor] (\x,\y) circle (0.25pt); \fi } \pgfmathparse{int(\i-\theinpoints)} \pgfmathsetmacro{\theoutpoints}{\pgfmathresult} \pgfmathparse{(4*\theinpoints/\i)} \node[above of=m,yshift=1.15cm,xshift=0.18cm] {$\pi \approx \frac{4 \cdot \textcolor{\incolor}{\theinpoints}}{\textcolor{\outcolor}{\theoutpoints} + \textcolor{\incolor}{\theinpoints}} \approx \pgfmathresult\$};
\end{tikzpicture}
}
\end{document}
Simply adjust the random seed 3455632 and optionally the iteration count \i in the LaTeX document above, compile8 and you’ll observe a different distribution of the points and, most likely, a slightly different approximation of $\pi$ below the plot. You can also change the colors of the points to your liking:
Turning this visualization into an animation by making the points appear one by one (while continually adjusting the approximation) would be interesting. Consider that an exercise for the reader. 😉
1. Note that we'll use some PostgreSQL-specific functions; however, due to SQL's Turing completeness, everything we do here is theoretically possible in any standards-compliant RDBMS.
2. Bear with me here.
3. If points are too boring, you could do the same with darts. As long as you’re not too good at darts.
4. As usual, the range is actually $[0, 1)$, meaning that $1$ is not included. As usual, this doesn’t really matter for the problem at hand.
5. Other RDBMSes have similar features, see here or here
6. The excellent pdf2svg utility (which can be installed via Homebrew) was used to convert to a web-accessible format.
7. Which can be tested by checking whether (\x-0.5)^2+(\y-0.5)^2 (where 0.5 is both the x and y coordinate of the circle’s center) is smaller than 0.25
8. If you remove lines 1 and 3-5, you can use any LaTeX engine, otherwise you’ll be constrained to LuaLaTeX.
|
|
# Infinite Product Equality
Let $\{I_n\}_{n\in\mathbb{N}}$ be a sequence of intervals in the form
$$I_n = \Big [ \frac{q_n}{b_n}, \frac{q_n + 1}{b_n} \Big),$$
where $q_{n}$ is some integer, for all $n\in\mathbb{N}$. Define the sequences of real numbers $\{ y_n \}_{n\in\mathbb{N}}$ and $\{ z_n \}_{n\in\mathbb{N}}$ by
$$y_n = \frac{q_n + 1}{b_n}\;\;\;\;\text{ and }\;\;\;\; z_n = \frac{q_n + 3/2}{b_n}.$$
I want to find an upper bound for
$$\Delta_n = \Big | \prod_{i=0}^{n-1}[1+a_i\sin(b_i\pi z_n)] -\prod_{i=0}^{n-1}[1+a_i\sin(b_i\pi y_n)- \sigma_i]\Big|$$
Where $\{a_n\}_{n\in\mathbb{N}}$ and $\{b_n\}_{n\in\mathbb{N}}$ are real number sequences such that
$$\sum a_n < \infty\;\;\;\;\text{and }\;\;\;\; b_n = \prod_{i=0}^{n}p_i,$$
$\{p_n\}_{n\in\mathbb{N}}$ is a prime sequence such that
$$\lim_{n\to\infty}\frac{2^n}{a_n p_n} = 0.$$
Johan Thim proved [2] that $$\Delta_n \leq \frac{b\pi}{p_n}2^{n-2},$$
by the following: Define the constants
$$a = \prod_{i=0}^{\infty}(1 - a_i)\;\;\;\;\text{ and }\;\;\;\; b=\prod_{i=0}^{\infty}(1+a_i).$$
and the partial products $$W_n(x) = \prod_{i=0}^{n}[1+a_i\sin(b_i\pi x)].$$
Given a natural $k>n$, the quotient $b_k/b_n$ is an even integer. Thus, for some $r_k\in\mathbb{Z}$,
$$\sin(b_k\pi y_n) = \sin(2r_k\pi(q_n+1)) = 0 \;\text{ and }\; \sin(b_k\pi z_n) = \sin(2r_k\pi(q_n + 3/2)) = 0.$$
And for $k=n$, we have $$\sin(b_k \pi y_n) = 0\;\;\text{ and }\;\; \sin(b_k\pi z_n) = -(-1)^{q_n}.$$
Therefore, $$\Delta_n = W_{n-1}(z_n)[1+a_n\sin(b_n\pi z_n)] - W_{n-1}(y_n)[1+a_n\sin(b_n\pi y_n)]=$$ $$= W_{n-1}(z_n) - W_{n-1}(y_n) - (-1)^{q_n}a_n W_{n-1}(z_n)$$
I understood everything until here. The trouble comes with the following steps:
Thus, there is a $\sigma_k\in\mathbb{R}$ such that
$$a_k\sin(b_k\pi z_n) = a_k\sin(b_k\pi y_n) + \sigma_k.$$
and for $|W_{n-1}(z_n) - W_{n-1}(y_n)|$, it's true that
$$|W_{n-1}(z_n) - W_{n-1}(y_n)| = \Big | \prod_{i=0}^{n-1}[1+a_i\sin(b_i\pi z_n)] -\prod_{i=0}^{n-1}[1+a_i\sin(b_i\pi y_n)- \sigma_i]\Big| =$$ $$= \Big | \sum_{i=0}^{2^{n-1}-1}\sigma_{l_i}\big( \prod_{j\in I_i}\sigma_j\big)\big( \prod_{j\in J_i}[1+a_j\sin(b_j\pi y_n)]\big)\Big | \leq \frac{\pi}{2p_n} \sum_{i=0}^{2^{n-1}-1} \Big (\prod_{j\in J_i}|1+a_j|\Big)\leq \frac{b\pi}{2p_n}(2^{n-1}-1)\leq\frac{b\pi}{p_n}2^{n-2}$$
For $I_i, J_i\subseteq \mathbb{N}$ and some natural $l_i$. Therefore $$|\Delta_n|\geq a_n\Big(a - \frac{2^{n-2}}{a_n p_n}b\pi\Big)$$
My question is concerning the step in which Thim states the equality
$$\Big | \prod_{i=0}^{n-1}[1+a_i\sin(b_i\pi z_n)] -\prod_{i=0}^{n-1}[1+a_i\sin(b_i\pi y_n)- \sigma_i]\Big| =$$ $$= \Big | \sum_{i=0}^{2^{n-1}-1}\sigma_{l_i}\big( \prod_{j\in I_i}\sigma_j\big)\big( \prod_{j\in J_i}[1+a_j\sin(b_j\pi y_n)]\big)\Big |.$$
Why is that true?
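This is not a full answer, but one way to convince yourself the equality is at least of the right shape: writing $u_i = 1+a_i\sin(b_i\pi y_n)$, the difference of the two products expands, term by term, into a sum over the non-empty subsets of indices that pick up a $\sigma$ factor. How exactly those subset terms are grouped into Thim's indexed sum I can't verify here, but the underlying expansion identity can be checked numerically (my own sketch):

```python
import itertools
import math
import random

def product_difference_expansion(u, s):
    """Expand prod(u_i + s_i) - prod(u_i) as a sum over the non-empty
    subsets S of {0..n-1} of prod_{i in S} s_i * prod_{j not in S} u_j."""
    n = len(u)
    total = 0.0
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            in_S = set(S)
            total += math.prod(s[i] for i in S) * \
                math.prod(u[j] for j in range(n) if j not in in_S)
    return total

rng = random.Random(0)
u = [rng.uniform(0.5, 1.5) for _ in range(5)]
s = [rng.uniform(-0.1, 0.1) for _ in range(5)]
lhs = math.prod(ui + si for ui, si in zip(u, s)) - math.prod(u)
assert abs(lhs - product_difference_expansion(u, s)) < 1e-12
```

Each term of the expansion is a product of some $\sigma_j$'s times the untouched factors $1+a_j\sin(b_j\pi y_n)$, which is exactly the form of the terms in the quoted sum.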
[1] - Wen, Liu : A Nowhere Differentiable Continuous Function Constructed by Infinite Products - The American Mathematical Monthly, Vol. 109, No. 4 (Apr., 2002), pp. 378-380.
[2] - Thim, Johan: Continuous Nowhere Differentiable Functions - December 2003, Departament of Mathematics, Luleå University of Technology.
-
This is not an answer, but you might like to look at this handout (math.cmu.edu/~bobpego/21131/nowhdiff.pdf) slightly simplifying Liu Wen's nice construction, which I prepared for a basic analysis class. – Bob Pego Mar 22 '12 at 16:57
Dear @BobPego, the page is off. – Paulo Henrique Mar 23 '12 at 12:19
Sorry! Try this one. – Bob Pego Mar 24 '12 at 15:04
|
|
# What is the derivative of a vector with respect to another vector?
1. Sep 17, 2010
### yungman
I am confused. I have never seen the derivative of a vector with respect to another vector. When I go on the web, the articles just show divergence, curl, gradient, etc., but not the derivative of a vector with respect to another vector.
For example what is
$$\frac{d(\vec{x}-\vec{x_0})^2}{d \vec{x}} ?$$
For $$\vec{x_0}$$ is a constant vector.
The book seems to imply:
$$\frac{d[(\vec{x}-\vec{x_0})^2]}{d \vec{x}} = 2(\vec{x}-\vec{x_0}) \frac{d \vec{x}}{d \vec{x}} = 2(\vec{x}-\vec{x_0})$$
I guess I don't know how to do a derivative like this. Can anyone help? I have looked through the multiple variable book and nothing like this. The variable is always scalar. The closest I seen is:
$$\int_C \vec{F} \cdot d\vec{r} \;=\; \int_C \vec{F} \cdot \hat{r}dr$$
But this is not exactly what the book discribed.
The only one that is remotely close is Directional Derivative which I don't think so.
Last edited: Sep 18, 2010
2. Sep 18, 2010
### CompuChip
Wow, I agree that is really confusing notation.
Probably, they mean
$$(\vec x - \vec x_0)^2 = (\vec x - \vec x_0) \cdot (\vec x - \vec x_0)$$
so the square is actually a scalar.
Then in components, you could write
$$\left( \frac{\mathrm d [(\vec x - \vec x_0)^2] }{ \mathrm d\vec x } \right)_j = \frac{\mathrm d [(\vec x - \vec x_0)^2] }{ \mathrm d\vec x_j } = 2(\vec x - \vec x_0)_j = 2(\vec x_j - (\vec{x_0})_j)$$
If you doubt this, you can write out
$$(\vec x - \vec x_0) \cdot (\vec x - \vec x_0) = \left( \sum_{i = 1}^n (\vec x_i)^2 \right) - 2 \left( \sum_{i = 1}^n (\vec x_i) (\vec x_0)_i \right) + \left( \sum_{i = 1}^n ((\vec{x_0})_i)^2 \right)$$
and use that
$$\frac{\mathrm d}{\mathrm d \vec x_j} \left( \sum_{i = 1}^n (\vec x_i)^2 \right) = 2 \vec x_j$$
etc
3. Sep 18, 2010
### George Jones
Staff Emeritus
What book?
4. Sep 18, 2010
### D H
Staff Emeritus
The derivative of a vector function of a vector $$\vec f(\vec x)$$ with respect to a vector $$\vec x$$ is a 1-1 tensor, with the i,j element being
$$\frac{\partial f_i(\vec x)}{\partial x_j}$$
However, $(\vec x - \vec x_0)^2$ is not a vector function. It is a scalar. You are just calculating the gradient:
$$\nabla f(\vec x) = \sum_j \frac{\partial f(\vec x)}{\partial x_j}\hat x_j$$
Note that the gradient looks a lot like a vector. It is better thought of as being a covector.
So what about the gradient of $f(\vec x) = (\vec x - \vec x_0)^2$? Expanding this, we get
$$f(\vec x) = (\vec x - \vec x_0)\cdot (\vec x - \vec x_0) = \sum_i (x_i - x_{0,i})^2$$
Taking the gradient, the jth component of the gradient is
$$\left(\nabla f(\vec x)\right)_j = \sum_i 2 (x_i - x_{0,i}) \frac{\partial x_i}{\partial x_j} = \sum_i 2 (x_i - x_{0,i})\delta_{ij} = 2(x_j - x_{0,j})$$
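The derivation above can be double-checked numerically (a quick sketch of my own, not from the book): a central finite difference of $f(\vec x) = (\vec x - \vec x_0)^2$ should agree with the analytic gradient $2(\vec x - \vec x_0)$.

```python
def f(x, x0):
    """f(x) = |x - x0|^2 as a scalar function of the vector x."""
    return sum((xi - x0i) ** 2 for xi, x0i in zip(x, x0))

def numerical_gradient(x, x0, h=1e-6):
    """Central finite-difference approximation of the gradient of f at x."""
    grad = []
    for j in range(len(x)):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        grad.append((f(xp, x0) - f(xm, x0)) / (2 * h))
    return grad

x, x0 = [1.0, -2.0, 0.5], [0.3, 0.1, -1.0]
analytic = [2 * (xi - x0i) for xi, x0i in zip(x, x0)]
numeric = numerical_gradient(x, x0)
assert all(abs(g - a) < 1e-5 for g, a in zip(numeric, analytic))
```

The componentwise agreement is just the $\delta_{ij}$ bookkeeping from the derivation, made concrete.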
5. Sep 18, 2010
### yungman
The book is PDE by Strauss, p. 194 to p. 195. It is part of the derivation of the Green's function for the sphere; the part is about the normal derivative of G. It talked about differentiation with respect to $\vec{x}$ and some very funky statement I still don't understand. But the later part just went back to the ordinary definition of the normal derivative:
$$\frac{\partial G}{\partial n} = \nabla G \cdot \hat{n}$$
and derived the equation accordingly as if nothing happened!!!! So it is a non-question at this point. Strauss is not a good book by any stretch. I just cannot find any PDE book that covers the Green's function, and the EM book that I ordered is still in shipment!!!
Thanks
Alan
6. Sep 19, 2010
### George Jones
Staff Emeritus
amazon.com's search function lets me look at some, but not all, of the pages in the book. Do you mean the statement
on page 185?
Notice that equation (10) on page 185 gives $G$ as a function of both $\bold{x}$ and $\bold{x}_0$, so the quoted statement just means that normal partial derivatives, gradients, divergences, etc., are with respect to the coordinates of $\bold{x}$ and not with respect to the coordinates of $\bold{x}_0$. The quoted statement does not actually mean "take the derivative with respect to a vector."
7. Sep 19, 2010
### yungman
Yes, that is the sentence I was referring to. I just took it literally as the derivative with respect to $\vec{x}$. I have absolutely no issue with the normal derivative. That is the reason I posted that I have no question on the complete derivation.
I have not gone into the exercises yet. I still have one more question, regarding a zero vector, in my other post that I got stuck on!!! If you can help, that would be really appreciated.
Thanks.
|
|
Which of the following statements are IMPOSSIBLE? Choose all that apply.
The rocket's speed was measured to be 1.78c.
The rocket's rest length is 689 m. An observer flying by measured the rocket to be 341 m long.
A rocket flying towards the Sun at 0.56c measured the speed of the photons (particles of light) emitted by the Sun to be c.
An inertial reference frame had an acceleration of 3 m/s².
The proper time interval between two events was measured to be 235 s. The time interval between the same two events (as measured by an observer not in the proper frame) was 188 s.
|
|
TheInfoList
A geographic information system (GIS) is a conceptualized framework that provides the ability to capture and analyze spatial and geographic data. GIS applications (or GIS apps) are computer-based tools that allow the user to create interactive queries (user-created searches), store and edit spatial and non-spatial data, analyze spatial information output, and visually share the results of these operations by presenting them as maps.[1][2][3]
Geographic information science (or, GIScience)—the scientific study of geographic concepts, applications, and systems—is commonly initialized as GIS, as well.[4]
Geographic information systems are utilized in multiple technologies, processes, techniques and methods. They are attached to various operations and numerous applications that relate to: engineering, planning, management, transport/logistics, insurance, telecommunications, and business.[2] For this reason, GIS and location intelligence applications are at the foundation of location-enabled services that rely on geographic analysis and visualization.
GIS provides the capability to relate previously unrelated information through the use of location as the "key index variable". Locations and extents that are found in the Earth's spacetime are able to be recorded through the date and time of occurrence, along with x, y, and z coordinates, representing longitude (x), latitude (y), and elevation (z). All Earth-based, spatial–temporal location and extent references should be relatable to one another, and ultimately, to a "real" physical location or extent. This key characteristic of GIS has begun to open new avenues of scientific inquiry and studies.
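As a toy illustration of location as a "key index variable" (the datasets, coordinates, and field names here are entirely hypothetical), two otherwise unrelated datasets can be related simply by joining on shared coordinates:

```python
# Two unrelated datasets, both keyed by (lat, lon) — made-up values.
rainfall = {(48.52, 9.05): 710, (52.52, 13.40): 570}   # mm per year
elevation = {(48.52, 9.05): 341, (52.52, 13.40): 34}   # metres above sea level

# Join the datasets on the shared location key.
combined = {
    loc: {"rainfall_mm": rainfall[loc], "elevation_m": elevation[loc]}
    for loc in rainfall.keys() & elevation.keys()
}
print(combined[(48.52, 9.05)])  # {'rainfall_mm': 710, 'elevation_m': 341}
```

Real GIS software generalizes this idea with spatial indexes and geometric predicates (containment, intersection, distance) rather than exact key equality.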
|
|
# Changing Scales
##### This Changing Scales assessment also includes:
Pupils determine scale factors from one figure to another and the scale factor in the reverse direction. Scholars compute the percent changes between three figures.
CCSS: Designed
##### Instructional Ideas
• Lead a discussion on how to figure out scale factors when only given percentages
• Give the class the scale factor as a percent and ask them to draw two images that represent the scale factor
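To make the forward/reverse and percent framing concrete, here is a small worked example (the numbers are my own illustration, not taken from the assessment):

```python
# A side of 8 units on the original figure is enlarged to 12 units.
original_side = 8.0
scaled_side = 12.0

forward = scaled_side / original_side   # scale factor from figure A to B
reverse = original_side / scaled_side   # reverse direction: the reciprocal
percent_change = (forward - 1) * 100    # scale factor expressed as % change

print(forward)         # 1.5
print(reverse)         # about 0.667
print(percent_change)  # 50.0, i.e. a 50% increase
```

The reverse scale factor is always the reciprocal of the forward one, which is the relationship the activity asks pupils to discover.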
##### Classroom Considerations
• Learners should be familiar with finding scale factors
• 14th installment in a 20-part series
##### Pros
• The answer keys show the arithmetic steps needed to arrive at the solution
• Highlights the places where the mathematical practices are used within the lesson
##### Cons
• None
|
|
# Overview
Code4rena (C4) is an open organization consisting of security researchers, auditors, developers, and individuals with domain expertise in smart contracts.
A C4 code contest is an event in which community participants, referred to as Wardens, review, audit, or analyze smart contract logic in exchange for a bounty provided by sponsoring projects.
During the code contest outlined in this document, C4 conducted an analysis of the Sandclock smart contract system written in Solidity. The code contest took place between January 6 and January 12, 2022.
## Wardens
36 Wardens contributed reports to the Sandclock contest:
1. WatchPug (jtp and ming)
2. camden
3. jayjonah8
4. pauliax
5. Dravee
6. harleythedog
7. kenzo
8. leastwood
9. cmichel
10. hickuphh3
11. palina
12. defsec
13. danb
14. sirhashalot
15. pedroais
16. 0x1f8b
17. hyh
18. gzeon
19. Ruhum
20. Tomio
21. bugwriter001
22. shenwilly
23. cccz
24. p4st13r4 (0xb4bb4 and 0x69e8)
25. hubble (ksk2345 and shri4net)
26. ACai
27. pmerkleplant
28. ye0lde
29. Fitraldys
30. onewayfunction
31. certora
32. robee
33. tqts
This contest was judged by LSDan (ElasticDAO).
Final report assembled by itsmetechjay, CloudEllie, and liveactionllama.
# Summary
The C4 analysis yielded an aggregated total of 41 unique vulnerabilities and 58 total findings. All of the issues presented here are linked back to their original finding.
Of these vulnerabilities, 5 received a risk rating in the category of HIGH severity, 15 received a risk rating in the category of MEDIUM severity, and 21 received a risk rating in the category of LOW severity.
C4 analysis also identified 17 non-critical recommendations.
# Scope
The code under review can be found within the C4 Sandclock contest repository, and is composed of 9 smart contracts written in the Solidity programming language and includes 1400 lines of Solidity code.
# Severity Criteria
C4 assesses the severity of disclosed vulnerabilities according to a methodology based on OWASP standards.
Vulnerabilities are divided into three primary risk categories: high, medium, and low.
High-level considerations for vulnerabilities span the following key areas when conducting assessments:
• Malicious Input Handling
• Escalation of privileges
• Arithmetic
• Gas use
For further information regarding the severity criteria referenced throughout the submission review process, please refer to the documentation provided on the C4 website.
# High Risk Findings (5)
## [H-01] forceUnsponsor() may open a window for attackers to manipulate the _totalShares and freeze users’ funds at a certain deposit amount
Submitted by WatchPug
if (_force && sponsorAmount > totalUnderlying()) {
} else if (!_force) {
require(
"Vault: not enough funds to unsponsor"
);
}
underlying.safeTransfer(_to, sponsorToTransfer);
When sponsorAmount > totalUnderlying(), the contract will set sponsorToTransfer to totalUnderlying() and transfer it out, even if there are other depositors and totalShares > 0.
After that, and before anyone else deposits into the Vault, the attacker can send 1 wei of the underlying token and then call deposit() with 0.1 * 1e18. Since newShares = (_amount * _totalShares) / _totalUnderlyingMinusSponsored and _totalUnderlyingMinusSponsored is 1, with a tiny amount of underlying token newShares will become extremely large.
As we stated in issue #166, when the value of totalShares is manipulated precisely, the attacker can plant a bomb: the contract will stop working once the deposit/withdraw amount reaches a certain value, freezing users' funds.
However, this issue is not caused by a lack of reentrancy protection; therefore it can't be solved by the same solution as in issue #166.
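To see the magnitude of the manipulation, here is a hypothetical sketch of the share-minting arithmetic the report describes (the names mirror the report's formula; this is not the Vault's actual Solidity code):

```python
def new_shares(amount, total_shares, total_underlying_minus_sponsored):
    """newShares = _amount * _totalShares / _totalUnderlyingMinusSponsored,
    using integer division as Solidity would."""
    return amount * total_shares // total_underlying_minus_sponsored

# Normal conditions: 100 tokens backing 100e18 * 1e18 shares,
# so depositing 1 token mints ~1e18 shares.
assert new_shares(10**18, 100 * 10**18, 100 * 10**18) == 10**18

# After forceUnsponsor drains the vault, the attacker seeds it with 1 wei:
# now a 0.1-token deposit mints an astronomically large number of shares.
inflated = new_shares(10**17, 100 * 10**18, 1)
print(inflated)  # 10**37 shares for 0.1 tokens
```

With the denominator forced down to 1 wei, the share supply can be inflated at will, which is what makes the subsequent overflow "bomb" possible.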
#### Recommendation
Consider adding a minimum balance reserve (eg. 1e18 Wei) that cannot be withdrawn by anyone in any case. It can be transferred in alongside with the deployment by the deployer.
This should make it safe or at least make it extremely hard or expensive for the attacker to initiate such an attack.
naps62 (Sandclock) confirmed and commented:
@gabrielpoca @ryuheimat is this new?
ryuheimat (Sandclock) commented:
it’s new
gabrielpoca (Sandclock) commented:
yap, it’s interesting. The sponsor really is an issue
## [H-02] Withdrawers can get more value returned than expected with reentrant call
Submitted by camden, also found by cmichel and harleythedog
The impact of this is that users can get significantly more UST withdrawn than they would be alotted if they had done non-reentrant withdraw calls.
#### Proof of Concept
Here’s an outline of the attack:
• Assume the vault has 100 UST in it. The attacker makes two deposits of 100 UST and waits for them to be withdrawable.
• The attacker triggers a withdraw of one of their deposit positions.
• The vault code executes until it reaches this point: https://github.com/code-423n4/2022-01-sandclock/blob/a90ad3824955327597be00bb0bd183a9c228a4fb/sandclock/contracts/Vault.sol#L565
• Since the attacker is the claimer, the vault will call back to the attacker. Inside onDepositBurned, trigger another 100 UST deposit.
• Since claimers.onWithdraw has already been called, reducing the amount of shares, but the UST hasn't been transferred yet, the vault will compute the amount of UST to be withdrawn based on an unexpected value for _totalUnderlyingMinusSponsored (300). https://github.com/code-423n4/2022-01-sandclock/blob/a90ad3824955327597be00bb0bd183a9c228a4fb/sandclock/contracts/Vault.sol#L618
After the attack, the attacker will have significantly more than if they had withdrawn without reentrancy.
Here’s my proof of concept showing a very similar exploit with deposit, but I think it’s enough to illustrate the point. I have a forge repo if you want to see it, just ping me on discord. https://gist.github.com/CamdenClark/abc67bc1b387c15600549f6dfd5cb27a
#### Tools Used
Forge
#### Recommendation
Reentrancy guards.
Also, consider simplifying some of the shares logic.
naps62 (Sandclock) resolved:
## [H-03] Vaults with non-UST underlying asset vulnerable to flash loan attack on curve pool
Submitted by camden, also found by cccz, cmichel, danb, defsec, harleythedog, hyh, kenzo, leastwood, palina, pauliax, pmerkleplant, Ruhum, WatchPug, and ye0lde
In short, the NonUSTStrategy is vulnerable to attacks by flash loans on curve pools.
Here’s an outline of the attack:
• Assume there is a vault with DAI underlying and a NonUSTStrategy with a DAI / UST curve pool
• Take out a flash loan of DAI
• Exchange a ton of DAI for UST
• The exchange rate from DAI to UST has gone up (!!)
• Withdraw or deposit from vault with more favorable terms than market
• Transfer back UST to DAI
• Repay flash loan
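The price impact driving this attack can be sketched with a simplified constant-product pool (Curve's StableSwap invariant is flatter, so this is only directional; the pool sizes and amounts are hypothetical):

```python
def swap_dai_for_ust(pool_dai, pool_ust, dai_in):
    """Constant-product swap: deposit dai_in, withdraw UST so that
    pool_dai * pool_ust stays equal to the invariant k."""
    k = pool_dai * pool_ust
    new_dai = pool_dai + dai_in
    new_ust = k / new_dai
    ust_out = pool_ust - new_ust
    return new_dai, new_ust, ust_out

dai, ust = 1_000_000.0, 1_000_000.0
rate_before = ust / dai                                 # UST quoted per DAI
dai, ust, _ = swap_dai_for_ust(dai, ust, 500_000.0)     # flash-loan sized swap
rate_after = ust / dai
print(rate_before, rate_after)  # the quoted rate has shifted sharply
```

A single flash-loan-sized swap moves the pool's implied exchange rate far from the market rate, which is exactly the window in which the vault's deposit/withdraw accounting can be exploited.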
#### Proof of Concept
Here is my proof of concept: https://gist.github.com/CamdenClark/932d5fbeecb963d0917cb1321f754132
I can provide a full forge repo. Just ping me on discord.
#### Tools Used
Forge
#### Recommendation
Use an oracle
## [H-04] deposit() function is open to reentrancy attacks
Submitted by jayjonah8, also found by bugwriter001, camden, cccz, cmichel, danb, defsec, Fitraldys, harleythedog, hickuphh3, jayjonah8, kenzo, leastwood, onewayfunction, pedroais, and WatchPug
In Vault.sol the deposit() function is left wide open to reentrancy attacks. The function eventually calls _createDeposit() => _createClaim(), which calls depositors.mint(), which will then mint an NFT. When the NFT is minted, the sender will receive a callback which can then be used to call the deposit() function again before execution is finished. An attacker can do this, minting multiple NFTs for themselves. claimers.mint() is also called in the same function, which can also be used to call back into the deposit function before execution is complete. Since there are several state updates before and after NFTs are minted, this can be used to further manipulate the protocol, as with newShares, which is computed before minting. This is not counting what an attacker can do with cross-function reentrancy, entering several other protocol functions (like withdraw) before code execution is complete and further manipulating the system.
#### Proof of Concept
#### Recommendation
Reentrancy guard modifiers should be placed on the deposit(), withdraw() and all other important protocol functions to prevent devastating attacks.
ryuheimat (Sandclock) confirmed
## [H-05] sponsor() function is open to reentrancy attacks
Submitted by jayjonah8, also found by camden
In Vault.sol the sponsor() function does not have a reentrancy guard, allowing an attacker to reenter the function because the depositors.mint() function has a callback to the msg.sender. Since there are state updates after the call to depositors.mint(), this is especially dangerous. An attacker can make it so the totalSponsored amount is only updated once after calling mint() several times, since the update takes place after the callback. The same is true for the Sponsored event that is emitted.
#### Proof of Concept
https://github.com/code-423n4/2022-01-sandclock/blob/main/sandclock/contracts/Vault.sol#L244
#### Recommendation
A reentrancy guard modifier should be added to the sponsor() function in Vault.sol.
naps62 (Sandclock) confirmed and resolved
# Medium Risk Findings (15)
## [M-01] Late users will take more losses than expected when the underlying contract (EthAnchor) suffers investment losses
Submitted by WatchPug
Even though it is unlikely in practice, in theory the underlying contract (EthAnchor) may suffer investment losses, causing the PPS of the aUST token to decrease. (There is code in the codebase that considers this situation, e.g. the handling of depositShares > claimerShares.)
However, when this happens, late users will suffer greater losses than users who withdraw earlier. The last few users may lose all their funds, while the first users can get back 100% of their deposits.
#### Proof of Concept
// ### for deposits: d1, d2, d3, the beneficiary are: c1, c2, c2
depositAmount claimerShares
d1: + 100e18 c1: + 100e36
d2: + 100e18 c2: + 100e36
d3: + 100e18 c2: + 100e36
depositAmount of d1, d2, d3 = 100e18
c1 claimerShares: 100e36
c2 claimerShares: 200e36
total shares: 300e36
// ### when the PPS of AUST drop by 50%
// ### d2 withdraw
c2 claimerShares: 200e36
d2 depositAmount: 100e18
d2 depositShares: 300e36 * 100e18 / 150e18 = 200e36
Shares to reduce: 200e36
c2 claimerShares: 200e36 -> 0
c2 totalPrincipal: 200e18 -> 100e18
totalShares: 300e36 -> 100e36
underlying.safeTransfer(d2, 100e18)
totalUnderlyingMinusSponsored: 150e18 -> 50e18
#### Root Cause
When the strategy is losing money, share / underlying increases, so the computed depositShares (depositAmount * share / underlying) will increase unexpectedly.
While totalShares remains unchanged, the computed depositShares increases, distorting depositShares / totalShares, e.g. ∑ depositShares > totalShares.
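A quick numeric check (plain Python mirroring the simplified accounting in the example above, not the actual contract) confirms that after a 50% PPS drop the per-deposit share figures no longer sum to totalShares:

```python
# Units of 1e18 / 1e36 dropped for readability.
total_shares = 300.0        # c1: 100, c2: 200
deposits = [100.0, 100.0, 100.0]
total_underlying = 300.0

total_underlying *= 0.5     # EthAnchor loses 50%: PPS of aUST drops

# depositShares = depositAmount * totalShares / totalUnderlying
deposit_shares = [d * total_shares / total_underlying for d in deposits]

print(deposit_shares)       # each 100-token deposit now maps to 200 shares
print(sum(deposit_shares))  # 600 > totalShares (300): the distortion
```

The first withdrawer burns 200 of the 300 outstanding shares yet recovers their full 100-token deposit, leaving only 50 tokens behind for deposits nominally worth 200.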
#### Recommendation
In order to properly handle the investment loss of the strategy, consider adding a new storage variable called totalLoss to maintain a stable value of share / adjustedUnderlying.
adjustedUnderlying = underlying + totalLoss
CrisBRM (Sandclock) confirmed and disagreed with severity
dmvt (judge) changed severity and commented:
This is a classic medium risk when using the definition provided by Code4rena:
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
## [M-02] NonUSTStrategy.sol Improper handling of swap fees allows attacker to steal funds from other users
Submitted by WatchPug
NonUSTStrategy will swap the deposited non-UST assets into UST before depositing to EthAnchor. However, unlike many other yield farming vaults involving swaps (ZapIn), the swap fee is not attributed to the depositor correctly.
An attacker can exploit it for the swap fees paid by other users by taking a majority share of the liquidity pool.
#### Root Cause
The swap fee of depositing is not paid by the depositor but evenly distributed among all users.
#### Proof of Concept
Given:
• A NonUST vault and strategy is created for FRAX;
• The liquidity in the FRAX-UST curve pool is relatively small (< $1M).
The attacker can do the following:
1. Add $1M worth of liquidity to the FRAX-UST curve pool, getting a >50% share of the pool;
2. Deposit 1M FRAX to the vault, get a depositAmount of 1M;
3. The strategy will swap 1M FRAX to UST via the curve pool, paying a certain amount of swap fee;
4. Withdraw all the funds from the vault.
5. Remove the liquidity added in step 1, profit from the swap fee. (A majority portion of the swap fee paid in step 3 can be retrieved by the attacker as the attacker is the majority liquidity provider.)
If the vault happens to have enough balance (from a recent depositor), the attacker can now receive 1M of FRAX.
A more sophisticated attacker may combine this with issue #160 and initiate a sandwich attack in step 3 to get even higher profits.
As a result, all other users will suffer fund loss as the swap fee is essentially covered by other users.
#### Recommendation
Consider changing the way new shares are issued:
1. Swap from Vault asset (eg. FRAX) to UST in deposit();
2. Using the UST amount out / total underlying UST for the amount of new shares issued to the depositor.
In essence, the depositor should be paying for the swap fee and slippage.
CrisBRM (Sandclock) confirmed and disagreed with severity:
This is only an issue if we support low-liquidity Curve pools. We are also adding slippage control as per another issue, which would cause massive transfers using low-liquidity pools to revert, fully mitigating this. The likelihood of this happening would also be quite low, given that profitability would go down tremendously as Curve LPs would move to that pool to capture higher base fees, dissuading the attacker from continuing.
That being said, I do agree that the curve swap fee (0.04%) should be paid by each individual depositor.
dmvt (judge) changed severity and commented:
This requires a number of external factors to line up just right. It is a medium risk according to the definition provided by Code4rena.
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
## [M-03] Centralization Risk: Funds can be frozen when critical key holders lose access to their keys
Submitted by WatchPug
The current implementation requires trusted key holders (isTrusted[msg.sender]) to send transactions (initRedeemStable()) to initialize withdrawals from EthAnchor before the users can withdraw funds from the contract.
This introduces a high centralization risk, which can cause funds to be frozen in the contract if the key holders lose access to their keys.
#### Proof of Concept
Given:
• investPerc = 80%
• 1,000 users deposited 1M UST in total (1,000 per user on average), with 800k invested into aUST (EthAnchor).
If the key holders lose access to their keys ("hit by a bus"), the 800k will be frozen in EthAnchor as no one can call initRedeemStable().
#### Recommendation
See the recommendation on issue #157.
CrisBRM (Sandclock) confirmed and disagreed with severity:
Agree that there should be a way for users to call the uninvest functions themselves, subject to certain rules. Again, not sure I agree with the severity given the likelihood of the event transpiring. Consensus is: for UST vaults, allow depositors to call uninvest. For nonUST vaults that pay per curve swap, add a trusted multisig instead of just the backend's EOA.
dmvt (judge) changed severity and commented:
This issue requires external factors to align in a very negative way, but it would result in a potentially significant loss of funds. Because there is no direct attack path, it doesn't qualify as a high risk issue, but a medium risk per Code4rena definitions.
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
## [M-04] unsponsor, claimYield and withdraw might fail unexpectedly
Submitted by danb, also found by ACai, cmichel, harleythedog, leastwood, palina, pedroais, and WatchPug
totalUnderlying() includes the invested assets, which are not in the contract balance. When a user calls withdraw, claimYield, or unsponsor, the system might not have enough assets in its balance and the transfer would fail. In particular, force unsponsor will always fail, because it tries to transfer the entire totalUnderlying(), which the system doesn't have:
https://github.com/code-423n4/2022-01-sandclock/blob/main/sandclock/contracts/Vault.sol#L391
#### Recommendation
When the system doesn't have enough balance to make the transfer, withdraw from the strategy.
gabrielpoca (Sandclock) confirmed:
I'm not sure this is an issue. We are aware of it, and redeeming from the strategy won't fix it because it is asynchronous. This is why we have an investment percentage.
dmvt (judge) changed severity and commented:
This one is a hard issue to size, but I'm going to go with the medium risk rating provided by other wardens reporting this issue. This seems to amount to a bank-run-like issue similar to what can happen with DeFi lending protocols.
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
If the invested assets are compromised or locked, this could result in a loss of funds. Users of the protocol should be made aware of the risk. This risk exists with many DeFi protocols and probably shouldn't be a surprise to most users.
## [M-05] Add a timelock to BaseStrategy:setPerfFeePct
Submitted by Dravee
To give more trust to users: functions that set key/critical variables should be put behind a timelock.
#### Proof of Concept
https://github.com/code-423n4/2022-01-sandclock/blob/main/sandclock/contracts/strategy/BaseStrategy.sol#L249-L253
#### Tools Used
VS Code
#### Recommendation
Add a timelock to setter functions of key/critical variables.
naps62 (Sandclock) acknowledged:
While this is a valid suggestion, it doesn't necessarily indicate a vulnerability in the existing approach.
A timelock can indeed increase trust, but it never truly eliminates the same risk (i.e., once the timelock finishes, the same theoretical attacks from a malicious operator could happen anyway).
ryuheimat (Sandclock) commented:
We will set admin as a timelock.
## [M-06] totalUnderlyingMinusSponsored() may revert on underflow and malfunction the contract
Submitted by WatchPug
https://github.com/code-423n4/2022-01-sandclock/blob/a90ad3824955327597be00bb0bd183a9c228a4fb/sandclock/contracts/Vault.sol#L290-L293
function totalUnderlyingMinusSponsored() public view returns (uint256) {
    // TODO no invested amount yet
    return totalUnderlying() - totalSponsored;
}
As a function that many other functions depend on, totalUnderlyingMinusSponsored() can revert on underflow when sponsorAmount > totalUnderlying(), which is possible and has been considered elsewhere in this contract:
https://github.com/code-423n4/2022-01-sandclock/blob/a90ad3824955327597be00bb0bd183a9c228a4fb/sandclock/contracts/Vault.sol#L390-L392
if (_force && sponsorAmount > totalUnderlying()) {
    sponsorToTransfer = totalUnderlying();
}
#### Proof of Concept
• Underlying token = USDT
• Swap fee = 0.04%
• Sponsor calls sponsor() and sends 10,000 USDT
• totalSponsored = 10,000
• NonUSTStrategy.sol#doHardWork() swapped USDT for UST
• pendingDeposits = 9,996
• totalUnderlying() = 9,996
• Alice tries to call deposit(); the tx will revert due to underflow in totalUnderlyingMinusSponsored().
#### Recommendation
Change to:
function totalUnderlyingMinusSponsored() public view returns (uint256) {
    uint256 _totalUnderlying = totalUnderlying();
    if (totalSponsored > _totalUnderlying) {
        return 0;
    }
    return _totalUnderlying - totalSponsored;
}
naps62 (Sandclock) confirmed
## [M-07] Vault can't receive deposits if underlying token charges fees on transfer
Submitted by Ruhum, also found by harleythedog, Tomio, and WatchPug
Some ERC20 tokens charge a fee for every transfer.
If the underlying token of a vault is such a token, any deposit to the protocol will fail. Some tokens can have fees added later on, e.g. USDT, so those have to be covered too. Generally, the user would also receive fewer tokens when withdrawing in such a scenario, but that's not the protocol's fault. I rated the issue as medium since part of the protocol becomes unavailable in such a situation.
#### Proof of Concept
https://github.com/code-423n4/2022-01-sandclock/blob/main/sandclock/contracts/Vault.sol#L583-L585
_transferAndCheckUnderlying() is used to deposit to and sponsor the vault. It checks that after a safeTransferFrom() exactly the same amount is sent to the balance of the vault. But if fees are enabled, the values won't match, causing the function to revert. Thus, it won't be possible to deposit to or sponsor the vault in any way.
#### Recommendation
One possibility would be to simply not use ERC20 tokens with fees.
ryuheimat (Sandclock) disputed:
We don't use tokens with fees
naps62 (Sandclock) commented:
The only place where we mention USDT is in an old pitch deck (not up to date anymore). The codebase itself doesn't mention it, and all tests are done with USDC and DAI as examples.
dmvt (judge) commented:
I'm going to let this issue stand given that #164 is also valid. Supported or not, fee-on-transfer tokens would cause a loss of funds in the scenario described. As the USDT example shows (in both issues), many stables can be upgraded to add a fee later.
## [M-08] Medium: Consider alternative price feed + ensure _minLockPeriod > 0 to prevent flash loan attacks
Submitted by hickuphh3, also found by 0x1f8b
It is critical to ensure that _minLockPeriod > 0 because it is immutable and cannot be changed once set. A zero minLockPeriod will allow flash loan attacks to occur. Vaults utilising the nonUST strategy are especially susceptible to this attack vector since the strategy uses the spot price of the pool to calculate the total asset value.
#### Proof of Concept
Assume the vault's underlying token is MIM, and the curve pool to be used is the MIM-UST pool. Further assume that both the vault and the strategy hold substantial funds in MIM and UST respectively.
1. Flash loan MIM from the Uniswap V3 MIM-USDC pool (currently has ~3.5M in MIM at the time of writing).
2. Convert half of the loaned MIM to UST to inflate and deflate their prices respectively.
3. Deposit the other half of the loaned MIM into the vault. We expect curvePool.get_dy_underlying(ustI, underlyingI, ustAssets); to return a smaller amount than expected because of the previous step. As a result, the attacker is allocated more shares than expected.
4. Exchange UST back to MIM, bringing back the spot price of MIM-UST to a normal level.
5. Withdraw funds from the vault. The number of shares to be deducted is lower as a result of (4), with the profit being accounted for as yield.
6. Claim yield and repay the flash loan.
#### Recommendation
Ensure that _minLockPeriod is non-zero in the constructor. Also, given how manipulable the spot price of the pool can be, it would be wise to consider an alternative price feed.
// in Vault#constructor
require(_minLockPeriod > 0, 'zero minLockPeriod');
ryuheimat (Sandclock) disputed:
we don’t think it’s an issue.
dmvt (judge) commented:
This does potentially open assets up to flash loan risk. It is probably a good idea to have this variable guarded.
## [M-09] no use of safeMint() as safe guard for users
Submitted by jayjonah8, also found by bugwriter001, camden, palina, and sirhashalot
In Vault.sol the deposit() function eventually calls claimers.mint() and depositors.mint(). Calling mint this way does not ensure that the receiver of the NFT is able to accept it. \_safeMint() should be used, together with reentrancy guards, to protect the user, as it checks whether a user can properly accept an NFT and reverts otherwise.
#### Recommendation
Use \_safeMint() instead of mint().
ryuheimat (Sandclock) disagreed with severity:
I think _safeMint checks whether the recipient contract is able to accept the NFT; it does not involve any issues. However, we will use _safeMint.
gabrielpoca (Sandclock) commented:
@ryuheimat this is a non-issue. The mint functions called in the Vault’s deposit function are implemented by us, they just happen to be called mint.
dmvt (judge) commented:
The Depositors contract does use _safeMint, but the Claimers contract does not.
The deposit function on Vault also appears to lack reentrancy guards. The issue is valid and should be addressed, despite the fact that the warden clearly did not look at the Depositors contract to see that it already used _safeMint.
## [M-10] No setter for exchangeRateFeeder, whose address might change in future
Submitted by kenzo
EthAnchor’s docs state that “the contract address of ExchangeRateFeeder may change as adjustments occur”. BaseStrategy does not have a setter to change exchangeRateFeeder after deployment.
#### Impact
Inaccurate or outdated values from exchangeRateFeeder when calculating the vault's total invested assets.
While the strategy’s funds could be withdrawn from EthAnchor and migrated to a new strategy with correct exchangeRateFeeder, during this process (which might take time due to EthAnchor’s async model) the wrong exchangeRateFeeder will be used to calculate the vault’s total invested assets. (The vault’s various actions (deposit, claim, withdraw) can not be paused.)
#### Proof of Concept
The exchangeRateFeeder is being used to calculate the vault’s invested assets, which is used extensively to calculate the correct amount of shares and amounts: (Code ref)
function investedAssets() external view virtual override(IStrategy) returns (uint256) {
    uint256 underlyingBalance = _getUnderlyingBalance() + pendingDeposits;
    uint256 aUstBalance = _getAUstBalance() + pendingRedeems;
    return underlyingBalance + ((exchangeRateFeeder.exchangeRateOf(true)
        * aUstBalance) / 1e18);
}
EthAnchor documentation states that unlike other contracts, exchangeRateFeeder is not proxied and it’s address may change in future: “the contract address of ExchangeRateFeeder may change as adjustments occur. ” (ref)
## [M-11] Changing a strategy can be bricked
Submitted by kenzo, also found by danb and harleythedog
A vault wouldn’t let the strategy be changed unless the strategy holds no funds.
Since anybody can send funds to the strategy, a griefing attack is possible.
#### Impact
Strategy couldn’t be changed.
#### Proof of Concept
setStrategy requires strategy.investedAssets() == 0. (Code ref) investedAssets contains the aUST balance and the pending redeems: (Code ref)
uint256 aUstBalance = _getAUstBalance() + pendingRedeems;
So if a griefer sends 1 wei of aUST to the strategy before it is to be replaced, it would not be able to be replaced. The protocol would then need to redeem the aUST and wait for the process to finish - and the griefer can repeat his griefing. As they say, griefers gonna grief.
Consider keeping an internal aUST balance of the strategy, which will be updated upon deposit and redeem, and use it (instead of raw aUST balance) to check if the strategy holds no aUST funds.
Another option is to add capability for the strategy to send the aUST to the vault.
ryuheimat (Sandclock) confirmed
CloudEllie (C4) commented:
Warden kenzo requested that I add the following:
“Additionally, impact-wise: EthAnchor does not accept redeems of less than 10 aUST. This means that if a griefer only sends 1 wei aUST, the protocol would have to repeatedly send additional aUST to the strategy to be able to redeem the griefer’s aUST.”
## [M-12] investedAssets() Does Not Take Into Consideration The Performance Fee Charged On Strategy Withdrawals
Submitted by leastwood, also found by danb
The investedAssets() function is implemented by the vault's strategy contracts as a way to express a vault's investments in terms of the underlying currency. While the implementation of this function in BaseStrategy.sol and NonUSTStrategy.sol is mostly correct, it does not account for the performance fee charged by the treasury, as shown in finishRedeemStable().
Therefore, an attacker could avoid paying their fair share of the performance fee by withdrawing their assets before several calls to finishRedeemStable() are made and reenter the vault once the fee is charged.
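The mismatch can be illustrated with hypothetical numbers (not taken from the contest repo): investedAssets() reports the gross aUST value, while a redeeming user only ever realizes the value net of the performance fee:

```python
# Hypothetical position: cost basis 1,000 UST, aUST now worth 1,200 UST.
aust_value = 1200.0     # aUstBalance * exchangeRate / 1e18
original_ust = 1000.0   # what was originally deposited into Anchor
perf_fee_pct = 0.10     # assumed 10% performance fee on the gain

reported = aust_value   # what investedAssets() reports
perf_fee = max(aust_value - original_ust, 0.0) * perf_fee_pct
realizable = aust_value - perf_fee  # what a full redemption actually yields

# reported (1200) exceeds realizable (1180), so shares are priced
# against a total that holders can never fully withdraw.
print(reported, realizable)
```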
#### Proof of Concept
https://github.com/code-423n4/2022-01-sandclock/blob/main/sandclock/contracts/strategy/BaseStrategy.sol#L180-L204
function finishRedeemStable(uint256 idx) public virtual {
    require(redeemOperations.length > idx, "not running");
    Operation storage operation = redeemOperations[idx];
    uint256 aUstBalance = _getAUstBalance() + pendingRedeems;
    uint256 originalUst = (convertedUst * operation.amount) / aUstBalance;
    uint256 ustBalanceBefore = _getUstBalance();
    ethAnchorRouter.finishRedeemStable(operation.operator);
    uint256 redeemedAmount = _getUstBalance() - ustBalanceBefore;
    uint256 perfFee = redeemedAmount > originalUst
        ? (redeemedAmount - originalUst).percOf(perfFeePct)
        : 0;
    if (perfFee > 0) {
        ustToken.safeTransfer(treasury, perfFee);
        emit PerfFeeClaimed(perfFee);
    }
    convertedUst -= originalUst;
    pendingRedeems -= operation.amount;
    operation.operator = redeemOperations[redeemOperations.length - 1]
        .operator;
    operation.amount = redeemOperations[redeemOperations.length - 1].amount;
    redeemOperations.pop();
}
https://github.com/code-423n4/2022-01-sandclock/blob/main/sandclock/contracts/strategy/BaseStrategy.sol#L263-L277
function investedAssets()
    external
    view
    virtual
    override(IStrategy)
    returns (uint256)
{
    uint256 underlyingBalance = _getUnderlyingBalance() + pendingDeposits;
    uint256 aUstBalance = _getAUstBalance() + pendingRedeems;
    return
        underlyingBalance +
        ((exchangeRateFeeder.exchangeRateOf(true) *
            aUstBalance) / 1e18);
}
https://github.com/code-423n4/2022-01-sandclock/blob/main/sandclock/contracts/strategy/NonUSTStrategy.sol#L120-L136
function investedAssets()
    external
    view
    override(BaseStrategy)
    returns (uint256)
{
    uint256 underlyingBalance = _getUnderlyingBalance();
    uint256 aUstBalance = _getAUstBalance() + pendingRedeems;
    uint256 ustAssets = ((exchangeRateFeeder.exchangeRateOf(
        true
    ) * aUstBalance) / 1e18) + pendingDeposits;
    return
        underlyingBalance +
        curvePool.get_dy_underlying(ustI, underlyingI, ustAssets);
}
#### Tools Used
Manual code review. Discussions with the Sandclock team (mostly Ryuhei).
#### Recommendation
When calculating the investedAssets() amount (expressed in the underlying currency), consider calculating the expected performance fee to be charged if all the strategy's assets are withdrawn from the Anchor protocol. This should ensure that investedAssets() returns the most accurate amount, preventing users from gaming the protocol.
## [M-13] Incompatibility With Rebasing/Deflationary/Inflationary tokens
Submitted by defsec
The Strategy contracts do not appear to support rebasing/deflationary/inflationary tokens whose balance changes during transfers or over time. The necessary checks include at least verifying the amount of tokens transferred to contracts before and after the actual transfer to infer any fees/interest.
#### Recommendation
• Make sure the token vault accounts for any rebasing/inflation/deflation;
• Add support in contracts for such tokens before accepting user-supplied tokens;
• Consider checking the before/after balance on the vault.
naps62 (Sandclock) disputed:
we did not intend to support those currencies in the first place
dmvt (judge) commented:
As with issues #55 and #164, this oversight can cause a loss of funds and therefore constitutes a medium risk. Simply saying you don't support something does not mean that thing doesn't exist or won't cause a vulnerability in the future.
## [M-14] A Single Malicious Trusted Account Can Takeover Parent Contract
Submitted by leastwood, also found by hickuphh3
The requiresTrust() modifier is used on the strategy, vault and factory contracts to prevent unauthorised accounts from calling restricted functions. Once an account is considered trusted, they are allowed to add and remove accounts by calling setIsTrusted() as they see fit.
However, if any single account has its private keys compromised or decides to become malicious on their own, they can remove all other trusted accounts from the isTrusted mapping. As a result, they are effectively able to take over the trusted group that controls all restricted functions in the parent contract.
#### Proof of Concept
abstract contract Trust {
    event UserTrustUpdated(address indexed user, bool trusted);

    mapping(address => bool) public isTrusted;

    constructor(address initialUser) {
        isTrusted[initialUser] = true;
        emit UserTrustUpdated(initialUser, true);
    }

    function setIsTrusted(address user, bool trusted) public virtual requiresTrust {
        isTrusted[user] = trusted;
        emit UserTrustUpdated(user, trusted);
    }

    modifier requiresTrust() {
        require(isTrusted[msg.sender], "UNTRUSTED");
        _;
    }
}
Consider utilising Rari Capital’s updated Auth.sol contract found here. This updated contract gives the owner account authority over its underlying trusted accounts, preventing any single account from taking over the trusted group. The owner account should point to a multisig managed by the Sandclock team or by a community DAO.
naps62 (Sandclock) confirmed
dmvt (judge) changed severity and commented:
If this were to happen, funds would definitely be lost. Accordingly, this is a medium risk issue.
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
## [M-15] Check _to is not empty
Submitted by pauliax
Functions claimYield, \_withdraw, and \_unsponsor should validate that \_to is not the empty 0x0 address, to prevent accidental burns.
Consider implementing the proposed validation: require(\_to != address(0)).
dmvt (judge) commented:
In this case assets are at risk due to external factors. A zero address check makes sense.
# Disclosures
C4 is an open organization governed by participants in the community.
C4 Contests incentivize the discovery of exploits, vulnerabilities, and bugs in smart contracts. Security researchers are rewarded at an increasing rate for finding higher-risk issues. Contest submissions are judged by a knowledgeable security researcher and solidity developer and disclosed to sponsoring developers. C4 does not conduct formal verification regarding the provided code but instead provides final verification.
C4 does not provide any guarantee or warranty regarding the security of this project. All smart contract software should be used at the sole risk and responsibility of users.
# How do you simplify -2(3x-2)-2x?
Dec 30, 2016
See explanation below:
#### Explanation:
First, expand the term in parentheses $(3x - 2)$ by multiplying it by the term outside the parentheses, $-2$. Be careful to manage the signs correctly:
$(-2 \cdot 3x) + (-2 \cdot -2) - 2x \to$
$(-6x) + (+4) - 2x \to -6x + 4 - 2x$
Next, group like terms:
$-6x - 2x + 4$
Now, combine like terms:
$(-6 - 2)x + 4$
$-8x + 4$ or $4(1 - 2x)$
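As a sanity check, the original expression and the simplified form agree for every value of $x$; a few lines of Python make this concrete:

```python
# Evaluate both forms at integer test points and compare.
original = lambda x: -2 * (3 * x - 2) - 2 * x
simplified = lambda x: -8 * x + 4

print(all(original(x) == simplified(x) for x in range(-10, 11)))
```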
# Champernowne constant
The Champernowne constant is the real number whose decimal expansion is the concatenation of the positive integers written in order: $$C_{10}=0.12345678910111213141516171819202122232425\ldots$$
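The digits come from concatenating the positive integers in order, which a few lines of Python make explicit:

```python
def champernowne_digits(n_terms):
    """Concatenate 1, 2, 3, ..., n_terms to build the decimal expansion of C10."""
    return "".join(str(k) for k in range(1, n_terms + 1))

# First 41 decimal digits, matching the expansion shown above.
print("0." + champernowne_digits(25))
```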
# Generate Toothpick Sequence
## What is Toothpick Sequence?
According to Wikipedia
In geometry, the toothpick sequence is a sequence of 2-dimensional patterns which can be formed by repeatedly adding line segments ("toothpicks") to the previous pattern in the sequence.
The first stage of the design is a single "toothpick", or line segment. Each stage after the first is formed by taking the previous design and, for every exposed toothpick end, placing another toothpick centered at a right angle on that end.
This process results in a pattern of growth in which the number of segments at stage n oscillates with a fractal pattern between 0.45n2 and 0.67n2. If T(n) denotes the number of segments at stage n, then values of n for which T(n)/n2 is near its maximum occur when n is near a power of two, while the values for which it is near its minimum occur near numbers that are approximately 1.43 times a power of two. The structure of stages in the toothpick sequence often resemble the T-square fractal, or the arrangement of cells in the Ulam–Warburton cellular automaton.
All of the bounded regions surrounded by toothpicks in the pattern, but not themselves crossed by toothpicks, must be squares or rectangles. It has been conjectured that every open rectangle in the toothpick pattern (that is, a rectangle that is completely surrounded by toothpicks, but has no toothpick crossing its interior) has side lengths and areas that are powers of two, with one of the side lengths being at most two.
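As a reference, the growth rule described above can be simulated directly to reproduce the toothpick counts T(n) = 1, 3, 7, 11, 15, 23, … (this sketch assumes the standard convention of length-2 toothpicks centered on integer grid points; it counts segments rather than rendering ASCII art):

```python
from collections import Counter

def toothpick_counts(stages):
    """Return [T(1), ..., T(stages)] for the toothpick sequence."""
    # A toothpick is (center, orientation); 0 = horizontal, 1 = vertical.
    picks = {((0, 0), 1)}
    counts = []
    for _ in range(stages):
        counts.append(len(picks))
        # Count how many toothpicks touch each grid point (both ends and center).
        coverage = Counter()
        for (x, y), o in picks:
            dx, dy = (1, 0) if o == 0 else (0, 1)
            for p in ((x - dx, y - dy), (x, y), (x + dx, y + dy)):
                coverage[p] += 1
        # An exposed end touches exactly one toothpick; grow a perpendicular
        # toothpick centered on each exposed end.
        new = set()
        for (x, y), o in picks:
            dx, dy = (1, 0) if o == 0 else (0, 1)
            for end in ((x - dx, y - dy), (x + dx, y + dy)):
                if coverage[end] == 1:
                    new.add((end, 1 - o))
        picks |= new
    return counts

print(toothpick_counts(6))  # [1, 3, 7, 11, 15, 23]
```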
You must make a program or function that takes input from STDIN, a function argument, or a command-line argument and draws the toothpick fractal at that stage. Leading and trailing newlines are prohibited except where unavoidable. The bounding box must be minimal, including leading and trailing space. For the initial line, we draw two \ characters diagonally in space. The input is guaranteed to be less than two thousand. At least one line must have a non-space character. Trailing space is allowed.
## Test Cases
1
\
\
5
\
/\
/\
/ /\
\/\/\ \ \
\ \ \/\/\
\/ /
\/
\/
\
## CJam, 99 93 bytes
This got rather long...
"\ "_W%]{{Sf+W%z}4*2ew{2fewz{_sa"\//\\"4S**4/^_,3={:.e>W%2/\}&;}%z{\)a@.e>+}:Ff*}%{F}*}q~(*N*
Test it here. If you want to test larger inputs, like the 89 on Wikipedia, Dennis's TryItOnline uses the much faster Java interpreter under the hood and can handle inputs like that in a few seconds.
I'm sure there is a lot of room for improvement, and I'll add an explanation once I'm happier with the score...
Here is the output for N = 13:
\
/\
/\
/ /\
\/\/\ \ \
\ \/\/\/\
/\/\/
/ /\ \
\ /\/\ \ \
/\ / \/\/\ /\
/\ /\ /\/ \ \/\
/ /\/ /\/ /\ /\ \/\
\/\/\/\/\/\/\ \/\ \/\ \ \
\ \ \/\ \/\ \/\/\/\/\/\/\
\/\ \/ \/ /\/ /\/ /
\/\ \ /\/ \/ \/
\/ \/\/\ / \/
\ \ \/\/ \
\ \/ /
/\/\/
\/\/\/\ \
\ \ \/\/\
\/ /
\/
\/
\
For my own reference when golfing this further, some other ideas:
"\ "_W%]{{Sf+W%z}4*2few2ew::.{+_a"\//\\"4S**4/^_,3={:.e>W%\}&;2/}:z{\)a@.e>+}ff*{\)a@..e>+}*}ri(*N*
"\ "_W%]{{Sf+W%z}4*2ew{2fewz{_sa"\//\\"4S**4/^_,3={:.e>W%2/\}&;}%{.{\)a@.e>+}}*}%{\)a@.e>+}*}q~(*N*
# JavaScript (ES6), 263 bytes
n=>(o=(o=[..." ".repeat(n*2)]).map(_=>o.map(_=>s=c=" ")),(g=a=>s++<n&&g(q=[],a.map(p=>o[p[4]][p[3]]==c&&(o[y=p[1]][x=p[0]]=o[y-1][(b=+p[2])?x-1:x+1]="/\\"[b],q.push([x,++y,!b,b?x+1:x-1,y],[b?x-=2:x+=2,y-2,!b,x,y-3])))))([[n,n,1,n,n]]),o.map(r=>r.join).join
)
## Explanation
n=>( // n = desired stage
o= // o = output grid
// [ [ "\\", " " ], [ " ", "\\" ], etc... ]
(o=[..." ".repeat(n*2)]) // create an array the size of the grid
.map(_=>o.map(_=> // loop over it and return the output grid
s= // s = current stage (" " acts the same as 0)
c= // c = blank character
" " // initialise each element to " "
)),
(g= // g = compute stage function
a=> // a = positions to place toothpicks
// [ x, y, isBackslash, checkX, checkY ]
s++<n&& // do nothing if we have reached the desired stage
g(q=[], // q = positions for the next stage's toothpicks
a.map(p=> // p = current potential toothpick position
o[p[4]][p[3]]==c&&( // check the position to make sure it is clear
o[y=p[1]][x=p[0]]= // place bottom toothpick, x/y = position x/y
o[y-1][ // place top toothpick
(b=+p[2]) // b = isBackslash
?x-1:x+1 // top toothpick x depends on direction
]="/\\"[b], // set the location to the appropriate character
// Add the next toothpick positions
q.push([x,++y,!b,b?x+1:x-1,y],
[b?x-=2:x+=2,y-2,!b,x,y-3])
)
)
)
)([[n,n,1,n,n]]), // place the initial toothpicks
o.map(r=>r.join).join
// return the grid converted to a string
)
## Test
Stages: <input type="number" oninput='result.innerHTML=(
n=>(o=(o=[..." ".repeat(n*2)]).map(_=>o.map(_=>s=c=" ")),(g=a=>s++<n&&g(q=[],a.map(p=>o[p[4]][p[3]]==c&&(o[y=p[1]][x=p[0]]=o[y-1][(b=+p[2])?x-1:x+1]="/\\"[b],q.push([x,++y,!b,b?x+1:x-1,y],[b?x-=2:x+=2,y-2,!b,x,y-3])))))([[n,n,1,n,n]]),o.map(r=>r.join``).join`
`)
)(+this.value)' /><pre id="result"></pre>
# Ruby, 151 bytes
Golfed version uses only one loop, j, with i and k calculated on the fly.
->n{m=n*2
s=(' '*m+$/)*m
l=m*m+m
s[l/2+n]=s[l/2-n-2]=?\\
(n*l-l).times{|j|(s[i=j%l]+s[i-m-2+2*k=j/l%2]).sum==124-k*45&&s[i-m-1]=s[i-1+2*k]="/\\"[k]}
s}
Ungolfed in test program
This version uses 2 nested loops. A rarely used builtin is `sum`, which returns a crude checksum by adding all the bytes of an ASCII string.
f=->n{
m=n*2                   #calculate grid height / width
s=(' '*m+$/)*m          #fill grid with spaces, separated by newlines
l=m*m+m #calculate length of string s
s[l/2+n]=s[l/2-n-2]=?\\ #draw the first toothpick
(n-1).times{|j| #iterate n-1 times
l.times{|i| #for each character in the string
(s[i]+s[i-m-2+2*k=j%2]).sum==124-k*45&& #if checksum of current character + character diagonally above indicates the end of a toothpick
s[i-m-1]=s[i-1+2*k]="/\\"[k] #draw another toothpick at the end
}
}
s} #return value = s
puts f[gets.to_i]
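The magic constants come from ASCII arithmetic: a toothpick end is the pair {'\', ' '} (92 + 32 = 124) or {'/', ' '} (47 + 32 = 79 = 124 − 45), and no other pair of grid characters produces those sums, which is why the crude checksum is safe. A quick check (in Python, to match the other examples in this write-up):

```python
from itertools import product

# The only characters in the grid are space, '/', '\' and newline.
chars = ' /\\\n'

# 124 = ord('\\') + ord(' '), and 79 = 124 - 45 = ord('/') + ord(' ')
assert ord('\\') + ord(' ') == 124
assert ord('/') + ord(' ') == 124 - 45

# No other pair of grid characters collides with either checksum:
assert all({a, b} == {' ', '\\'} for a, b in product(chars, repeat=2)
           if ord(a) + ord(b) == 124)
assert all({a, b} == {' ', '/'} for a, b in product(chars, repeat=2)
           if ord(a) + ord(b) == 79)
```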
## G = C3×Dic18, order 216 = 2³·3³
### Direct product of C3 and Dic18
Series: Derived Chief Lower central Upper central
Derived series C1 — C18 — C3×Dic18
Chief series C1 — C3 — C9 — C18 — C3×C18 — C3×Dic9 — C3×Dic18
Lower central C9 — C18 — C3×Dic18
Upper central C1 — C6 — C12
Generators and relations for C3×Dic18
G = < a,b,c | a^3=b^36=1, c^2=b^18, ab=ba, ac=ca, cbc^-1=b^-1 >
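As a sanity check on the order in the header: the dicyclic factor Dic18 = ⟨b,c⟩ with b³⁶ = 1 and c² = b¹⁸ has every element of the form bⁱ or bⁱc, hence 72 elements, and the direct product with C3 triples that. In Python:

```python
# Dic18 = <b,c | b^36 = 1, c^2 = b^18, c*b*c^-1 = b^-1>: every element can be
# written as b^i or b^i*c with 0 <= i < 36, so the dicyclic factor has order 72.
dic18_order = 2 * 36
order = 3 * dic18_order        # direct product with C3
assert order == 216 == 2**3 * 3**3
```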
Smallest permutation representation of C3×Dic18
On 72 points
Generators in S72
(1 13 25)(2 14 26)(3 15 27)(4 16 28)(5 17 29)(6 18 30)(7 19 31)(8 20 32)(9 21 33)(10 22 34)(11 23 35)(12 24 36)(37 61 49)(38 62 50)(39 63 51)(40 64 52)(41 65 53)(42 66 54)(43 67 55)(44 68 56)(45 69 57)(46 70 58)(47 71 59)(48 72 60)
(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72)
(1 55 19 37)(2 54 20 72)(3 53 21 71)(4 52 22 70)(5 51 23 69)(6 50 24 68)(7 49 25 67)(8 48 26 66)(9 47 27 65)(10 46 28 64)(11 45 29 63)(12 44 30 62)(13 43 31 61)(14 42 32 60)(15 41 33 59)(16 40 34 58)(17 39 35 57)(18 38 36 56)
G:=sub<Sym(72)| (1,13,25)(2,14,26)(3,15,27)(4,16,28)(5,17,29)(6,18,30)(7,19,31)(8,20,32)(9,21,33)(10,22,34)(11,23,35)(12,24,36)(37,61,49)(38,62,50)(39,63,51)(40,64,52)(41,65,53)(42,66,54)(43,67,55)(44,68,56)(45,69,57)(46,70,58)(47,71,59)(48,72,60), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72), (1,55,19,37)(2,54,20,72)(3,53,21,71)(4,52,22,70)(5,51,23,69)(6,50,24,68)(7,49,25,67)(8,48,26,66)(9,47,27,65)(10,46,28,64)(11,45,29,63)(12,44,30,62)(13,43,31,61)(14,42,32,60)(15,41,33,59)(16,40,34,58)(17,39,35,57)(18,38,36,56)>;
G:=Group( (1,13,25)(2,14,26)(3,15,27)(4,16,28)(5,17,29)(6,18,30)(7,19,31)(8,20,32)(9,21,33)(10,22,34)(11,23,35)(12,24,36)(37,61,49)(38,62,50)(39,63,51)(40,64,52)(41,65,53)(42,66,54)(43,67,55)(44,68,56)(45,69,57)(46,70,58)(47,71,59)(48,72,60), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72), (1,55,19,37)(2,54,20,72)(3,53,21,71)(4,52,22,70)(5,51,23,69)(6,50,24,68)(7,49,25,67)(8,48,26,66)(9,47,27,65)(10,46,28,64)(11,45,29,63)(12,44,30,62)(13,43,31,61)(14,42,32,60)(15,41,33,59)(16,40,34,58)(17,39,35,57)(18,38,36,56) );
G=PermutationGroup([[(1,13,25),(2,14,26),(3,15,27),(4,16,28),(5,17,29),(6,18,30),(7,19,31),(8,20,32),(9,21,33),(10,22,34),(11,23,35),(12,24,36),(37,61,49),(38,62,50),(39,63,51),(40,64,52),(41,65,53),(42,66,54),(43,67,55),(44,68,56),(45,69,57),(46,70,58),(47,71,59),(48,72,60)], [(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36),(37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72)], [(1,55,19,37),(2,54,20,72),(3,53,21,71),(4,52,22,70),(5,51,23,69),(6,50,24,68),(7,49,25,67),(8,48,26,66),(9,47,27,65),(10,46,28,64),(11,45,29,63),(12,44,30,62),(13,43,31,61),(14,42,32,60),(15,41,33,59),(16,40,34,58),(17,39,35,57),(18,38,36,56)]])
C3×Dic18 is a maximal subgroup of
C6.D36 C3⋊Dic36 D12.D9 C12.D18 Dic18⋊S3 D12⋊D9 Dic9.D6 C3×Q8×D9
63 conjugacy classes
class  1  2  3A 3B 3C 3D 3E 4A 4B 4C 6A 6B 6C 6D 6E 9A···9I 12A···12H 12I 12J 12K 12L 18A···18I 36A···36R
order  1  2  3  3  3  3  3  4  4  4  6  6  6  6  6  9 ··· 9  12 ··· 12 12  12  12  12  18 ··· 18  36 ··· 36
size   1  1  1  1  2  2  2  2  18 18 1  1  2  2  2  2 ··· 2  2 ···  2  18  18  18  18  2 ···  2   2 ···  2
63 irreducible representations
dim     1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2
type    + + + + - + + - + -
image   C1 C2 C2 C3 C6 C6 S3 Q8 D6 D9 C3×S3 C3×Q8 Dic6 D18 S3×C6 C3×D9 Dic18 C3×Dic6 C6×D9 C3×Dic18
kernel  C3×Dic18 C3×Dic9 C3×C36 Dic18 Dic9 C36 C3×C12 C3×C9 C3×C6 C12 C12 C9 C3² C6 C6 C4 C3 C3 C2 C1
# reps  1 2 1 2 4 2 1 1 1 3 2 2 2 3 2 6 6 4 6 12
Matrix representation of C3×Dic18 in GL2(𝔽37) generated by
[10  0]   [32  0]   [ 0  1]
[ 0 10] , [ 0 22] , [36  0]
G:=sub<GL(2,GF(37))| [10,0,0,10],[32,0,0,22],[0,36,1,0] >;
C3×Dic18 in GAP, Magma, Sage, TeX
C_3\times {\rm Dic}_{18}
% in TeX
G:=Group("C3xDic18");
// GroupNames label
G:=SmallGroup(216,43);
// by ID
G=gap.SmallGroup(216,43);
# by ID
G:=PCGroup([6,-2,-2,-3,-2,-3,-3,72,169,79,3604,208,5189]);
// Polycyclic
G:=Group<a,b,c|a^3=b^36=1,c^2=b^18,a*b=b*a,a*c=c*a,c*b*c^-1=b^-1>;
// generators/relations
## Precalculus (6th Edition) Blitzer
The solution set of the equations is $\left\{ \left( -2,1 \right),\left( -1,2 \right) \right\}$.
Consider the pair of equations,
$$\left\{ \begin{align} & x-y=-3 \\ & {{x}^{2}}+{{y}^{2}}=5 \end{align} \right.$$
Solve $x-y=-3$ for $x$ in terms of $y$: $x=y-3$.
Substitute this into ${{x}^{2}}+{{y}^{2}}=5$:
$${{\left( y-3 \right)}^{2}}+{{y}^{2}}=5$$
Expand and simplify:
\begin{align} & {{y}^{2}}-6y+9+{{y}^{2}}=5 \\ & 2{{y}^{2}}-6y+4=0 \\ & {{y}^{2}}-3y+2=0 \\ & \left( y-2 \right)\left( y-1 \right)=0 \end{align}
Setting each factor equal to zero gives $y=2$ or $y=1$.
Substituting back into $x=y-3$: for $y=2$, $x=2-3=-1$; for $y=1$, $x=1-3=-2$. So the candidate solutions are $\left( -1,2 \right)$ and $\left( -2,1 \right)$.
Check $\left( -2,1 \right)$: $-2-1=-3$ and ${{\left( -2 \right)}^{2}}+{{1}^{2}}=4+1=5$, so it satisfies both equations.
Check $\left( -1,2 \right)$: $-1-2=-3$ and ${{\left( -1 \right)}^{2}}+{{2}^{2}}=1+4=5$, so it satisfies both equations.
Therefore, the solution set of the system is $\left\{ \left( -2,1 \right),\left( -1,2 \right) \right\}$.
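The substitution checks can also be mechanized; a short Python sketch (the helper name `satisfies` is mine) that verifies both pairs against the original system:

```python
def satisfies(x, y):
    """Check a candidate (x, y) against both equations of the system."""
    return x - y == -3 and x**2 + y**2 == 5

solutions = [(-2, 1), (-1, 2)]
assert all(satisfies(x, y) for x, y in solutions)

# Of the eight integer points on the circle x^2 + y^2 = 5, exactly six
# fail the linear equation, leaving only the two solutions above:
others = [(x, y) for x in range(-3, 4) for y in range(-3, 4)
          if x**2 + y**2 == 5 and not satisfies(x, y)]
assert len(others) == 6
```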
# Grade 7 Metrobank – MTAP Math Challenge Sample Problems Part 4
1. In teacher Ella’s class, a student receives a final grade of A if the student garners an average of at least 92% in the five long tests. After four long tests, Jonathan got an average of 91%. At least how much should he get in the last long test to get a final grade of A?
2. A class of 47 students took examinations in Algebra and in Geometry. If 20 passed Algebra, 26 passed Geometry, and four failed both subjects, how many passed both subjects?
3. A runner started a course at a steady rate of 8 kph. Five minutes later, a second runner started the same course at 10 kph. How long did it take for the second runner to overtake the first?
4. A rectangle has sides (2x+3) cm and (4x+5) cm. How many squares of side x cm can be cut from it?
5. Let ABCDE be a regular pentagon. What is the measure of $\angle{CAD}$?
6. The average of five numbers is 20. If the sum of two of the numbers is 23, what is the average of the other 3 numbers?
7. A long steel bar is to be cut in the ratio of 2:3:5. If the middle piece is 7, how long is the steel bar?
8. Marvin is 10% taller than Homer and Homer is 10% taller than August. How much (in percent) is Marvin taller than August?
9. Which is the largest? $a=2^{48}, b=3^{28}, c=5^{24}$
10. When 3n is divided by 7 the remainder is 4. What is the remainder when 2n is divided by 7?
11. Which is smaller, $A=(2015)(2014)(2013)(2012)(2011)$ or $B=2013^5$?
12. If $\displaystyle\frac{-12}{5}\leq x\leq\frac{-1}{2}$ and $3\leq y\leq\displaystyle \frac{9}{2}$, what is the largest possible value of $\displaystyle\frac{x-y}{x+y}$?
13. If $x^2-3x+1=0$, find the value of $x^2+\displaystyle\frac{1}{x^2}$
14. If $(x+3)(x-3)(x+1)=(x+2)Q(x)+(x+3)(x-2)$, what is $Q(x)$?
15. All faces of a 4-inch cube have been painted. If the cube is cut into 1-inch smaller cubes, how many of them have no paint on all their faces?
16. If $x=-1$ find the value of $2013x^{2013}+2012x^{2012}+2011x^{2011}+\ldots+2x^2+x$?
17. The sum of two numbers is 20 and their product is 15. Find the sum of their cubes?
18. How many positive factors does $62(63^3+63^2+63+1)+1$ have?
19. If the sides of a cube are tripled, what percent of the original volume is the new volume?
20. If the number 100 is expressed as a sum of 100 consecutive positive odd integers, what is the largest among all numbers?
### Dan
Blogger and a Math enthusiast. He had no interest in Mathematics until the MMC came along. Aside from doing math, he also loves to travel and watch movies.
# Amalthea (Jupiter V, 木卫五)
Escape velocity: ≈ 0.058 km/s[a]
Surface temperature: 120 K – 165 K[7]
## References
1. Calculated on the basis of other parameters.
2. ^ Calculated on the basis of known distances, sizes, periods and visual magnitudes as visible from the Earth. Visual magnitudes as seen from Jupiter mj are calculated from visual magnitudes on Earth mv using the formula mj=mv−log2.512(Ij/Iv), where Ij and Iv are respective brightnesses (see visual magnitude), which scale according to the inverse square law. For visual magnitudes see http://www.oarval.org/ClasSaten.htm and Jupiter (planet).
3. ^ Calculated from the known sizes and distances of the bodies, using the formula 2*arcsin(Rb/Ro), where Rb is the radius of the body and Ro is the radius of Amalthea's orbit or distance from the Jovian surface to Amalthea.
1. ^ Basil Montagu (1848) The works of Francis Bacon, vol. 1, p. 303
2. ^ Isaac Asimov (1969) "Dance of the Satellites", The Magazine of Fantasy and Science Fiction, vol. 36, p. 105–115
3. Cooper Murray et al. 2006.
4. Thomas Burns et al. 1998.
5. Anderson Johnson et al. 2005.
6. ^ Simonelli Rossier et al. 2000.
7. ^ Simonelli 1983.
8. ^ Observatorio ARVAL.
9. ^ Lick Observatory. A Brief Account of the Lick Observatory of the University of California. The University Press. 1894: 7– [2022-06-06]. (Archived from the original on 2021-11-14).
## Biofuel that's better than carbon neutral
The race is on to create a biofuel that sucks carbon out of the sky and locks it away where it can't warm the planet
THE green sludge burbles away quietly in its tangle of tubes in the Spanish desert. Soaking up sunshine and carbon dioxide from a nearby factory, it grows quickly. Every day, workers skim off some sludge and take it away to be transformed into oil. People do in a single day what it took geology 400 million years to accomplish.
Indeed, this is no ordinary oil. It belongs to a magical class of "carbon negative" fuels, ones that take carbon out of the atmosphere and lock it away for good. The basic idea is fairly simple. You grow plants, in this case algae, which naturally draw CO2 from the atmosphere. After you extract the oil, you're left with a residue that holds a substantial portion of the carbon. This residue is the key to carbon negativity. If you can store the carbon where it won't decompose and return to the air, more CO2 is taken out of the atmosphere than the fuel emits.
Such carbon negative fuels are no accounting sleight of hand - they could be the most realistic short-term solution we have to curb climate change. And although it is still early days, companies like General Electric, BP and Google are putting their money behind the idea.
Every time you drive your car or hop on a plane to somewhere sunny you're adding a little more carbon to the atmosphere and bringing a global warming crisis just a little bit closer. Biofuels are one way of reducing the problem, as plants draw CO2 from the atmosphere as they grow, thereby not adding to the carbon footprint. Today, the most popular biofuel is ethanol made from corn.
In theory, such a fuel should be carbon neutral: that's to say, for every 100 carbon atoms it draws from the atmosphere, it returns exactly 100 when burned. Unfortunately, however, it's not that simple. By the time farmers have tilled the soil, poured on fertiliser and harvested the crop - not to mention the natural gas and coal burned to run the ethanol plant itself - they've used an awful lot of fossil fuel, leaving them well short of carbon neutral.
You might think the problem could be simply solved by capturing the carbon emitted during the biofuels production process. The fermentation process used to produce ethanol, for example, generates an almost pure stream of CO2 as a by-product. So, earlier this year, agricultural giant Archer Daniels Midland (ADM) started building the US's first large-scale carbon capture and storage project in Decatur, Illinois. It will siphon CO2 from the company's ethanol plant, compress it and store it underground nearby. It plans to store over 1 million tonnes of CO2 annually (see diagram).
However, ADM's ethanol still isn't carbon neutral: instead, thanks to all the energy costs of making the ethanol, it's likely to reduce emissions by only about 20 or 30 per cent compared with fossil fuel.
You might be able to solve the problem if you replaced all the fossil fuels used to run the ethanol plant with renewable energy. But that doesn't solve the other major issue for crop-based biofuels: they compete with food crops for land. In 2010, corn-based ethanol accounted for 8 per cent of US transport fuel, but consumed almost 40 per cent of the country's corn. If ethanol replaced all fossil fuels, it would either push food prices into the stratosphere or force farmers onto new land - most likely both. To make a dent in the amount of greenhouse gas in the atmosphere, we need to find ways around this. "The question is how many of these situations we can find without infringing on other services that the biomass or the land is supplying," says Johannes Lehmann, a soil scientist at Cornell University in Ithaca, New York.
This is exactly why algae is so promising, notably the single-celled, blue-green variant now referred to as cyanobacteria. They grow much faster than terrestrial crops, potentially yielding 20 times more biomass per day than soybeans; their oil production is easy to ramp up through genetic engineering; best of all, they can grow in seawater or brackish groundwater on non-arable land, so they don't take land away from food production or forest (Science, vol 314, p 1598).
These qualities were especially appealing to Bio Fuel Systems (BFS), a small company in Alicante, Spain, that uses cyanobacteria to make its "Blue Petroleum". The company's prototype plant, in the Spanish coastal desert, is piggybacked on a cement factory, which emits the CO2 the algae need to grow.
### Blue petroleum
The numbers given to New Scientist by BFS president Bernard Stroiazzo illustrate the fraction of carbon that can be trapped by the process. To make a single barrel of oil, the algae suck a little over 2 tonnes of CO2 from the smokestack of the cement works. Not all of that stays out of the atmosphere, though. The algal cultures need regular mixing, which takes energy, as does supplying fertiliser and creating the oil through a patented process involving high heat and pressure. All the fossil fuels needed for these processes release about 700 kilogrammes of CO2. Burning the oil itself - in car engines, say - emits another 450 kg. The rest of the carbon - the equivalent of about 900 kg of CO2 - stays in the leftovers, an inorganic carbonate sludge that can be buried or mixed into concrete. "That will never go back in the atmosphere," says Stroiazzo.
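Those per-barrel figures can be checked for internal consistency; a rough balance in Python, using only the numbers quoted above (the variable names are mine, and the check mirrors the article's own accounting, in which the captured smokestack CO2 counts as removed from the atmosphere):

```python
# Per barrel of "Blue Petroleum" (kg of CO2, figures quoted in the article)
captured    = 2000   # "a little over 2 tonnes" drawn from the cement works
process     = 700    # fossil energy for mixing, fertiliser, heat and pressure
combustion  = 450    # released when the oil itself is burned
sequestered = 900    # locked into the inorganic carbonate sludge

# The three fates roughly account for the captured CO2 ...
assert abs(captured - (process + combustion + sequestered)) <= 100

# ... and on this accounting the net effect per barrel is carbon negative:
net = process + combustion - captured
assert net == -850 and net < 0
```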
BFS's pilot plant produces about 2.5 barrels of crude oil per hectare of algae each day. At that rate, Stroiazzo says, a system like BFS's could replace the world's entire crude oil consumption, using an area just a quarter the size of the Libyan desert. Thirty-five million hectares is a lot of land, to be sure, but not overwhelming if it replaces the 90 million barrels of oil we use each day. It is also about 1 per cent of the world's pasture area; spread over many plants worldwide it quickly becomes feasible.
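The land-area arithmetic is easy to reproduce from the quoted yield (the global pasture figure of roughly 3.4 billion hectares is my own assumption, not the article's):

```python
barrels_per_ha_day = 2.5    # BFS pilot-plant yield quoted above
world_demand = 90e6         # barrels of oil per day, from the article

hectares = world_demand / barrels_per_ha_day
# close to the article's "thirty-five million hectares"
assert abs(hectares - 36e6) < 1

# And about 1 per cent of global pasture (~3.4e9 ha -- my figure)
pasture = 3.4e9
assert round(100 * hectares / pasture) == 1
```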
But there are a few more factors to consider. Though they are not selling the oil yet, cost will likely be an issue: BFS's equipment is by no means cheap. The polycarbonate tubes that house the cultures cost upwards of $1 million per hectare, and stirring the algae requires large amounts of electricity. This is likely to push the cost of algal biofuel to at least $5 per litre, according to a 2010 International Energy Agency report.
To stay solvent, BFS sells its high-value algal by-products as nutritional supplements, such as omega-3 fatty acids. While this may work in a nascent biofuels industry, demand for nutritional supplements will falter when the products flood the market, and anyway it doesn't get to the heart of the problem.
Other companies are trying to do that, though. Algae Systems, near San Francisco, suggests cutting costs by culturing its algae in the ocean, in 25-metre plastic bags floating near the shore. The bags keep the algae at the surface, where the light is most intense, and natural wave action does the mixing. The firm plans to pipe in nitrogen-rich wastewater to fertilise the algal growth.
Algae Systems is now constructing a pilot plant covering several hectares in Mobile Bay, off the coast of Alabama, which should be operational early next year. If all the component processes work as well as they have in the research lab, the result should be carbon-negative fuels, says company president Matthew Atwood. This fuel should be able to undercut fossil petroleum prices within three or four years, he adds.
However, they will need to solve another problem for algal biofuels: fertiliser. Algae gorge on expensive nutrients like nitrogen and phosphorus. At relatively small scales, wastewater from cities and croplands can easily supply these, as in Algae Systems's design. But scale up and there simply isn't enough wastewater to go around. "Human nutrient loading is simply not sufficient," says Stefan Unnasch, an energy analyst and engineer at California consultancy Life Cycle Associates. "You put more in your car every day than into your toilet." Indeed, producing even a tenth of the US's liquid fuel from algae would consume more than the entire US supply of both nitrogen and phosphorus, according to calculations by Ronald Pate, an algal biofuels specialist at Sandia National Laboratories in New Mexico (Applied Energy, vol 88, p 3377).
Researchers may some day find a way to solve the nutrient problem by extracting and reusing nitrogen and phosphorus from the algal residue, but the biggest difficulty to scaling up is more intractable: how do you get your hands on all that CO2? Even if algae-growers could tap every last smokestack in the US, that would only be enough to produce about 75 billion litres of algal biofuel per year, according to Pate's calculations. That's less than 10 per cent of the world's current transport fuel needs. Moreover, tying biofuel production to fossil-fuel-burning industrial smokestacks merely wrings a second round of energy out of CO2. "This just postpones emissions," says Jonas Helseth, director of Bellona Europa, an environmental foundation based in Brussels, Belgium.
As yet, this problem has no robust solution. A few companies are developing technologies to extract and concentrate CO2 from the air. Global Thermostat, based in New York, has patented a process that uses chemicals and low-temperature waste heat - about 90 °C - to capture CO2 from a stream of air. Its pilot plant has been operating near San Francisco for more than a year, and a second is on the way, says co-founder Graciela Chichilnisky. The company has already signed an agreement to supply its technology to Algae Systems and is in talks with several other algal biofuel companies, she says.
### Biofuels franchise
Solve these problems, and algae may yet be vindicated as the most promising path to carbon negative biofuels. But until then, a less glamorous method is poised to take off.
The cheapest, most low-maintenance feedstock for biofuels is waste biomass, such as the cobs and straw left over after corn harvest, perennial grasses such as giant miscanthus, or dead trees. This raw material has been used to make ethanol, but its efficiency has been stymied by the difficulty of breaking down the materials. Cool Planet Energy Systems, based just north of Los Angeles in Camarillo, California, has found a better way to process it. It has developed a variant of a process called pyrolysis, in which heat, pressure and catalysts convert the biomass directly into the hydrocarbons found in gasoline, diesel oil and jet fuel. This means the company's fuel can be mixed into regular gasoline to reduce the overall amount of fossil fuel, or in other words, it lowers the carbon intensity of the gasoline.
Earlier this year, researchers at Google - one of the company's investors - road-tested a blend of 5 per cent Cool Planet fuel and 95 per cent gasoline in its GRide cars at its headquarters in Mountain View, California. The mix reduced the carbon intensity of gasoline by 10 per cent, says vice-president Mike Rocke, meeting California's 2020 Low Carbon Fuel Standard eight years early.
Better yet, carbon gets sequestered. Along with fuel, Cool Planet's pyrolysis process yields large amounts of biochar, a carbon-rich compound that resembles charcoal. Instead of burying this residue deep underground like ADM or mixing it into cement, however, Cool Planet returns the biochar to the soil.
This has several advantages. It does not depend on the presence of suitable geological formations, and it is easier to transport. Best of all, the biochar enriches the soil and enhances crop yields because its high surface area helps hold water and nutrients. "It's like a molecular sponge," says Rocke. Lehmann, a biochar expert, says the stuff can persist in the soil for centuries, which qualifies as carbon sequestration as set by the Intergovernmental Panel on Climate Change.
That's not the only trick that makes the biofuel carbon negative. Instead of wasting fossil fuel on transporting the biomass to a centralised factory to be made into fuel, Cool Planet will build 400 modular units, each capable of producing between 40 and 200 million litres of gasoline per year. These will use whatever biomass is available within about a 50-kilometre radius. "Wherever the biomass is, we're going to roll out these plants," says Rocke. "They're like a Starbucks."
Cool Planet's process only returns half the carbon to the atmosphere and stores the other half as biochar, making the fuel what Rocke terms "100 per cent carbon-negative". To break into the market, however, the company plans to make a version that is 60 per cent carbon-negative, storing only about a third of the carbon in the plant matter. At this sweet spot, Rocke reckons the company should be able to sell its fuel for about 40 cents a litre.
To date, the research facility has produced only a few thousand litres of fuel. However, a pilot plant - bankrolled by investors including Google, BP and GE - will start operation near Los Angeles this month, producing nearly a million litres per year. And within 20 years, they intend to build 2000 of their modules, enough to supply about 10 per cent of the world's current liquid fuel needs.
Cool Planet's results are encouraging. In 2007, the IPCC reported that for the world to escape catastrophic climate change, carbon emissions would have to begin declining by 2015, with an 85 per cent reduction by 2050. We haven't even started.
Since we can't seem to keep the CO2 from entering the atmosphere, we're left with only two ways to avoid trouble. We could embark on grand geoengineering schemes to cool the planet, all of which bring huge risks of unintended consequences (New Scientist, 22 September, p 30). Or we could try to pull some of the CO2 back out of the atmosphere, one car trip at a time. "Even if carbon-negative biofuels turns out to be just a bit player, they will have done at least a little to reduce carbon emissions," says Lehmann. "It's a no-regret strategy."
Bob Holmes is a consultant for New Scientist
### 4 Responses to “Biofuel that's better than carbon neutral”
1. Illusiwind says:
I'd guess that the carbon buried down there may well be bought back at a high price by someone later on.
# How do I operate on a spin state with a sigma operator?
For any arbitrary spin state $|s\rangle$. How do I operate on it with the Pauli spin matrix, $\hat{\sigma_z}$? Does this have something to do with a Bloch sphere?
Application 1:
It's the $z$-component of the vector valued angular momentum observable for a spin $\frac{1}{2}$ particle, when the basis states are the $z$-component angular momentum eigenstates. If this sounds a bit circular and tautological, it is the reason why $\sigma_z$ is diagonal.
So the $n^{th}$ moment of the probability distribution of an angular momentum measurement in the $z$ direction is $\left(\frac{\hbar}{2}\right)^n\langle s|\sigma_z^n| s\rangle$.
Not surprisingly, the $n^{th}$ moments of the probability distributions of an angular momentum measurement in the $x$ and $y$ directions are $\left(\frac{\hbar}{2}\right)^n\langle s|\sigma_x^n| s\rangle$ and $\left(\frac{\hbar}{2}\right)^n\langle s|\sigma_y^n| s\rangle$, respectively.
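Concretely, for a normalised state $|s\rangle=\alpha|+\rangle+\beta|-\rangle$ an $S_z$ measurement yields $\pm\hbar/2$ with probabilities $|\alpha|^2$ and $|\beta|^2$, so the moments follow directly; a numpy sketch (the state amplitudes are arbitrary illustration values):

```python
import numpy as np

hbar = 1.0                                  # work in units where hbar = 1
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# arbitrary normalised spin state |s> = a|+> + b|->
a, b = 0.6, 0.8j
s = np.array([a, b])
assert np.isclose(np.vdot(s, s), 1)

def moment(n):
    """n-th moment of an S_z measurement: (hbar/2)^n <s|sigma_z^n|s>."""
    return (hbar / 2) ** n * np.vdot(s, np.linalg.matrix_power(sz, n) @ s).real

# first moment: (hbar/2)(|a|^2 - |b|^2)
assert np.isclose(moment(1), 0.5 * (abs(a)**2 - abs(b)**2))
# sigma_z^2 = I, so every even moment is just (hbar/2)^n
assert np.isclose(moment(2), 0.25)
```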
Application 2:
When the basis of Minkowski (or Euclidean) space is rotated, our spatial co-ordinates $\vec{x}$ transform according to the rule $\vec{x}\mapsto R(\theta,\,\gamma_x,\,\gamma_y,\,\gamma_z)\,\vec{x}$, where the rotation matrix is:
$$R(\theta,\,\gamma_x,\,\gamma_y,\,\gamma_z)=\exp\left(\theta\left(i\,\gamma_x\,S_x+i\,\gamma_y\,S_y+i\,\gamma_z\,S_z\right)\right)\tag{1}$$
where $\theta$ is the rotation angle and $\gamma_i$ the direction cosines of the rotation axis. The group acting on the spatial vectors is $SO(3)$ and the basis vectors of its Lie algebra are:
$$i\,S_x = \left(\begin{array}{ccc}0&0&0\\0&0&-1\\0&1&0\end{array}\right);\quad i\,S_y = \left(\begin{array}{ccc}0&0&1\\0&0&0\\-1&0&0\end{array}\right);\quad i\,S_z = \left(\begin{array}{ccc}0&-1&0\\1&0&0\\0&0&0\end{array}\right)\tag{2}$$
As we do this, the quantum spin state $\psi$, when it is expressed as a $2\times 1$ column vector in the $z$-component angular momentum eigenstate basis as we did in Application 1, transforms by the image of $R$ under a projective or spinor representation (discussed in my answer here) as $\psi\mapsto\Sigma(\theta,\,\gamma_x,\,\gamma_y,\,\gamma_z)\,\psi$ where:
$$\Sigma(\theta,\,\gamma_x,\,\gamma_y,\,\gamma_z) = \exp\left(i\,\frac{\theta}{2}\left(\gamma_x\,\sigma_x+\gamma_y\,\sigma_y+\gamma_z\,\sigma_z\right)\right)\tag{3}$$
Application 3:
If a spin $\frac{1}{2}$ particle with a magnetic moment is steeped in a classical magnetic field with induction components $B_j$, then the time evolution operator on the quantum state discussed in Application 1 is defined by:
$$\psi(t) = \exp\left(i\,g\,\left(B_x\,\sigma_x+B_y\,\sigma_y+B_z\,\sigma_z\right)\,t\right)\,\psi(0)\tag{4}$$
where $g$ is the particle's gyromagnetic ratio. So the Hamiltonian here is $\hat{H}=-\hbar\,g\,\left(B_x\,\sigma_x+B_y\,\sigma_y+B_z\,\sigma_z\right)$
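A numerical sketch of (4): because $(\hat{n}\cdot\vec{\sigma})^2=I$ for a unit vector $\hat{n}$, the exponential reduces to $\cos\theta\,I+i\sin\theta\,(\hat{n}\cdot\vec{\sigma})$, so no general matrix exponential is needed ($g$, the field and the time below are arbitrary illustration values):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g, t = 0.7, 1.3                 # illustrative gyromagnetic ratio and time
B = np.array([0.0, 0.0, 1.0])   # field along z (arbitrary choice)

# exp(i*theta*(n.sigma)) = cos(theta) I + i sin(theta) (n.sigma) for unit n
theta = g * np.linalg.norm(B) * t
n = B / np.linalg.norm(B)
n_sigma = n[0] * sx + n[1] * sy + n[2] * sz
U = np.cos(theta) * I2 + 1j * np.sin(theta) * n_sigma   # equation (4)

psi0 = np.array([1.0, 0.0], dtype=complex)   # |+> at t = 0
psi_t = U @ psi0

assert np.isclose(np.vdot(psi_t, psi_t).real, 1.0)   # unitary: norm preserved
# for a field along z, the eigenstate |+> only acquires a phase exp(i*g*B*t)
assert np.isclose(abs(psi_t[0]), 1.0) and np.isclose(abs(psi_t[1]), 0.0)
```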
Bloch Sphere
The Bloch Sphere is a non-injective (here losing common phase factor information) representation of our quantum state $\psi$ in everyday Euclidean 3-space. Operators of the form in either (3) or (4) live in the group $SU(2)$. A neat way to do vector analysis in 3 dimensions is to represent a vector with Cartesian co-ordinates $x,\,y,\,z$ as a matrix in the Lie algebra of all $2\times 2$ skew-Hermitian matrices:
$$X=\left(\begin{array}{c}x\\y\\z\end{array}\right)\mapsto \tilde{X}=-i\,(x\,\sigma_x+y\,\sigma_y+z\,\sigma_z)\tag{5}$$
Then the action of a rotation can be equivalently described by $X\mapsto\,R\,X$ or by the so-called spinor map $\tilde{X}\mapsto\Sigma\,\tilde{X}\,\Sigma^{-1}=\Sigma\,\tilde{X}\,\Sigma^\dagger$ where $R$ and $\Sigma$ are the operators in (1) and (3), respectively. One advantage of this is that the vector cross product becomes the Lie bracket and the inner product is simply (proportional to) the trace (Frobenius) inner product. Going back the other way: if you are willing to ignore constant phase terms in the quantum state in Application 1, then the pure quantum state can be represented by its $2\times 2$ density matrix $\rho=\psi\,\psi^\dagger$. When quantum states transform unitarily as in (3) or (4), the density matrix undergoes the spinor map $\rho\mapsto\Sigma\,\rho\,\Sigma^\dagger$, so, thinking of (5) backwards, we can represent the density matrix as a three-Cartesian-component vector and it will undergo a corresponding rigid rotation. So the space of pure-state density matrices is mapped onto the unit 2-sphere by this process. In fact, you calculate the Cartesian components of the point on the sphere through:
$$x_j=\psi^\dagger\,\sigma_j\,\psi=\mathrm{tr}(\sigma_j\,\rho)\tag{6}$$
You can see from (6) that any common phase factor $e^{i\,\phi}$ multiplying $\psi$ will not change the point on the Bloch sphere. I talk more about the Bloch sphere, called the Poincaré sphere in Optics, in this answer here.
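A short numpy illustration (mine, not from the answer) of the Bloch-sphere map: the Cartesian components $x_j = \psi^\dagger\sigma_j\psi$ are unchanged by a global phase, and for any normalised pure state they land on the unit sphere:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(psi):
    """Cartesian Bloch-sphere coordinates x_j = psi^dag sigma_j psi."""
    return np.real([psi.conj() @ s @ psi for s in (sx, sy, sz)])

psi = np.array([np.cos(0.3), np.exp(1j*0.7)*np.sin(0.3)])
p1 = bloch(psi)
p2 = bloch(np.exp(1j*1.23) * psi)   # same state up to a global phase factor
print(np.allclose(p1, p2), np.isclose(np.linalg.norm(p1), 1.0))  # True True
```

The unit norm follows from $x^2+y^2+z^2 = (|\alpha|^2+|\beta|^2)^2 = 1$ for a normalised state.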
$\vert+\rangle$ and $\vert-\rangle$ are really just shorthand notations for the two eigenvectors of the diagonal spin operator $\sigma_z$. This means concretely:
$$\vert+\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
$$\vert-\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
Therefore the action of the sigma operator gives you simply the corresponding eigenvalue:
$$\sigma_z \vert+\rangle = \begin{pmatrix} 1& 0 \\0 &-1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} =+1 \vert+\rangle$$
$$\sigma_z \vert-\rangle = \begin{pmatrix} 1& 0 \\0 &-1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} =-1 \vert-\rangle$$
This is a general result in Quantum Mechanics, independent of any applications in the context of solid state physics or likewise.
$\newcommand{\ket}[1]{| #1 \rangle}$ An arbitrary spin state $\ket{s}$ can be broken down as a sum of the $\ket{+}$ and $\ket{-}$ eigenstates: $$\ket{s} = \alpha \ket{+} + \beta \ket{-}$$ Where $\alpha$ and $\beta$ are complex numbers. We'll write the overall vector as: $$\ket{s} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}$$ Where we remember that the first element means the $\ket{+}$ component and the second means the $\ket{-}$. In this basis, the Pauli matrices have a standard form: $$\sigma_z = \begin{pmatrix} 1& 0 \\0 &-1 \end{pmatrix}$$ The product $\sigma_z \ket{s}$ is now just a matter of matrix-vector multiplication.
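Concretely, that matrix-vector multiplication looks like this in numpy (a trivial sketch with an arbitrary normalised state):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]])
plus, minus = np.array([1, 0]), np.array([0, 1])

s = 0.6*plus + 0.8j*minus    # arbitrary normalised spin state, alpha = 0.6, beta = 0.8i

print(sz @ plus)             # = +1 * |+>
print(sz @ minus)            # = -1 * |->
print(sz @ s)                # components scaled by the eigenvalues: (0.6, -0.8i)
```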
• Just a little nitpicking on the terminology: the Pauli matrices do not depend on the choice of a basis, they are what they are; the spin matrices, instead, can be written as proportional to the Pauli matrices when represented on that basis (the $S_z$ axis, in the case at hand). – gented May 16 '17 at 13:34
|
|
# Asset turnover
Asset turnover is a financial ratio that measures the efficiency of a company's use of its assets in generating sales revenue or sales income to the company.[1]
Companies with low profit margins tend to have high asset turnover, while those with high profit margins have low asset turnover. Companies in the retail industry tend to have a very high turnover ratio due mainly to cutthroat and competitive pricing.
${\displaystyle {\mbox{Asset Turnover}}={\frac {\mbox{Net Sales Revenue}}{\mbox{Average Total Assets}}}}$
• "Sales" is the value of "Net Sales" or "Sales" from the company's income statement
• "Average Total Assets" is the average of the values of "Total assets" from the company's balance sheet in the beginning and the end of the fiscal period. It is calculated by adding up the assets at the beginning of the period and the assets at the end of the period, then dividing that number by two.
• Alternatively, "Average Total Assets" can be ending total assets.
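As a sketch of the ratio (the dollar figures below are made up purely for illustration):

```python
def asset_turnover(net_sales, assets_begin, assets_end):
    """Asset turnover = net sales revenue / average total assets for the period."""
    average_total_assets = (assets_begin + assets_end) / 2
    return net_sales / average_total_assets

# Hypothetical company: $500,000 net sales, total assets growing
# from $180,000 at the start of the fiscal period to $220,000 at the end.
print(asset_turnover(500_000, 180_000, 220_000))  # 2.5
```

A result of 2.5 means the company generated $2.50 of sales per dollar of assets held, on average, over the period.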
## References
1. ^ Bodie, Zane; Alex Kane; Alan J. Marcus (2004). Essentials of Investments, 5th ed. McGraw-Hill Irwin. p. 459. ISBN 0-07-251077-3.
|
|
# 8.(viii) Three coins are tossed once. Find the probability of getting (viii) no tail
Harsh Kankaria
Sample space when three coins are tossed: [Same as a coin tossed thrice!]
S = {HHH, HHT, HTH, HTT, THH, TTH, THT, TTT}
Number of possible outcomes, n(S) = 8 [Note: 2×2×2 = 8]
Let E be the event of getting no tail = Event of getting only heads = {HHH}
$\therefore$ n(E) = 1
$\therefore$ $P(E) = \frac{n(E)}{n(S)} = \frac{1}{8}$

The required probability of getting no tail is $\frac{1}{8}$.
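The count can be verified by enumerating the sample space (a quick sketch):

```python
from itertools import product

sample_space = list(product("HT", repeat=3))   # 2 x 2 x 2 = 8 outcomes
no_tail = [s for s in sample_space if "T" not in s]   # only {('H','H','H')}
print(len(no_tail), "/", len(sample_space))    # 1 / 8
```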
|
|
Gravitational Infall in a Protostellar Cluster
# Discovery of Large-Scale Gravitational Infall in a Massive Protostellar Cluster
Peter J. Barnes (1,2), Yoshinori Yonekura (3,4), Stuart D. Ryder (5), Andrew M. Hopkins (1,5), Yosuke Miyamoto (6), Naoko Furukawa (6), and Yasuo Fukui (6)

(1) School of Physics A28, University of Sydney, NSW 2006, Australia
(2) Astronomy Department, University of Florida, Gainesville, FL 32611, USA
(3) Department of Physical Science, Osaka Prefecture University, 1-1 Gakuen-cho, Sakai, Osaka 599-8531, Japan
(4) Center for Astronomy, Ibaraki University, 2-1-1 Bunkyo, Mito, Ibaraki 310-8512, Japan
(5) Anglo-Australian Observatory, PO Box 296, Epping, NSW 1710, Australia
(6) Department of Astrophysics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602, Japan
###### Abstract
We report Mopra (ATNF), Anglo-Australian Telescope, and Atacama Submillimeter Telescope Experiment observations of a molecular clump in Carina, BYF73 = G286.21+0.17, which give evidence of large-scale gravitational infall in the dense gas. From the millimetre and far-infrared data, the clump has mass ∼2×10⁴ M⊙, luminosity ∼2–3×10⁴ L⊙, and diameter ∼0.9 pc. From radiative transfer modelling, we derive a mass infall rate of 3.4×10⁻² M⊙ yr⁻¹. If confirmed, this rate for gravitational infall in a molecular core or clump may be the highest yet seen. The near-infrared K-band imaging shows an adjacent compact HII region and IR cluster surrounded by a shell-like photodissociation region showing H₂ emission. At the molecular infall peak, the imaging also reveals a deeply embedded group of stars with associated H₂ emission. The combination of these features is very unusual and we suggest they indicate the ongoing formation of a massive star cluster. We discuss the implications of these data for competing theories of massive star formation.
astrochemistry — infrared: ISM — ISM: kinematics and dynamics — ISM: molecules — radio lines: ISM — stars: formation
## 1 Introduction
Many details of massive star formation in dense molecular clouds are still unclear (Churchwell, 2002), despite much recent progress (e.g., Sridharan et al., 2002; Fuller et al., 2005; Longmore et al., 2007). For example, it is still debated whether massive stars can form by a scaled-up version of the accretion thought to occur with low-mass protostars (e.g. McKee & Tan, 2003), or rather form by collective processes in a clustered environment (e.g. Bonnell et al., 2003). Consequently, examples of massive star formation showing evidence of either behaviour can be informative to this debate, especially since there are still relatively few examples known of true massive protostars.
As part of the Census of High- and Medium-mass Protostars (CHaMP, §2.1), we identified the massive dense clump G286.21+0.17 as showing striking evidence of large-scale gravitational infall, which we report here. [Here we use the Williams et al. (2000) terms for “core” (that part of a molecular cloud which will collapse to form an individual star or binary) and “clump” (which will collapse and fragment, via many cores, to form a star cluster).] This source (hereafter referred to as BYF73, from the master CHaMP source list; Barnes et al., 2009) has been included in some previous surveys (Bronfman et al., 1996; Dutra et al., 2003; Faundez et al., 2004; Yonekura et al., 2005) and is an Infrared Astronomical Satellite (IRAS) point source, but has not previously been shown to be remarkable. The precise location is (l,b) = (286°.208, +0°.169) or (α,δ) = (…), about 1°.5 northwest of η Carinae and 12′ north of the rim of the 15′-diameter HII region/bubble NGC 3324/IC 2599, at an assumed distance of 2.5 kpc.
## 2 Observations
### 2.1 Survey Strategy
The motivation for CHaMP is to make a complete and unbiased census of higher-mass star formation at many different wavelengths over a large portion of the Milky Way (Barnes et al., 2006), in order to systematically characterise the processes in massive star formation in a uniform way. The first step was to identify 209 dense clumps from C¹⁸O and HCO⁺ maps made with the 4m Nanten telescope (Yonekura et al., 2005, 2009) of a region of the Galactic Plane in Vela, Carina, and Centaurus (specifically … and …). A higher-resolution follow-up campaign was then begun to map these clumps in a number of 3-millimetre wavelength (3mm) molecular transitions with the 22m-diameter Mopra dish of the Australia Telescope National Facility (Barnes et al., 2009). [The Mopra telescope is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The University of New South Wales Digital Filter Bank used for the observations with the Mopra telescope was provided with support from the Australian Research Council.] The Mopra antenna’s performance has been described by Ladd et al. (2005). Since that study, an on-the-fly (OTF) mapping capability has been implemented in the control software (in 2004), new 3mm MMIC receivers were installed (in 2005) which were at least as sensitive as the previous SIS mixers and much more efficient to operate, and the MOPS wideband digital filterbank was commissioned (in 2006; Wilson et al., 2006). These innovations, when combined with the Nanten maps as finder charts, make an ambitious survey like CHaMP possible.
Mopra’s OTF mapping mode has been described by T. Wong (2005, unpublished), which we briefly summarise here. The telescope is driven in a raster pattern (which can be in any of the ±l or ±b directions) at a rate such that the data dump interval (usually every 2 s) from the correlator to mass storage is consistent with Nyquist- (or better) sampling of the sky, given the telescope beam and observing frequency. At 90 GHz this drive rate across the sky equates to approximately …″/sec. Each raster row is then offset by a similar amount (i.e., …″ at 90 GHz) from the previous row, until a square map with a size of the user’s choosing is built up. The user also selects whether a reference position (which can be specified in either relative or absolute coordinates) is observed at the beginning of each row, or only once every 2 rows. Additionally, the user can choose from which corner of the square map the raster pattern is begun, i.e. the NE, NW, SE, or SW (in the respective coordinate system being used). Finally the frequency of hot-cold load measurements of Tsys needs to be specified; this is typically every 10–30 min, depending on the stability of sky conditions. In the 2007 season, however, a noise-diode calibration system was introduced into the data stream, effectively giving continuous Tsys measurements and making separate hot-cold load scans somewhat redundant. With the addition of a calibration spectrum of a known source such as Orion-KL, skydip measurements of the atmospheric opacity were not found to be necessary.
In this way a typical 5′×5′ map can be built up over a period of about 70 min at 90 GHz. In order to minimise rastering artifacts, however, a second map is usually made of the same field, but in an orthogonal rastering direction. Including time (∼10 min) for pointing checks between each map, such a 5′×5′ field is “complete” in about 2.5 hr. Further rasters can be done of the same field, and this not only improves the S/N in the usual way, but under variable sky conditions will also minimise noise variations across a map, which might otherwise give erratic sensitivity coverage of the user’s field. After just 2 raster maps, however, the noise variations are usually acceptable (≲20%) in all but the worst conditions.
The MOPS backend can be employed in either “broadband” or “zoom” mode. With the former, the full 8 GHz available bandwidth is observed with 65536 125-kHz-wide channels in each polarisation, corresponding to a velocity resolution of 0.45 km s⁻¹ at 90 GHz. In contrast the latter allows up to sixteen independently selectable 138-MHz-wide “zoom IFs” to be observed simultaneously from within the filterbank’s 8 GHz total instantaneous bandwidth. Each zoom mode is correlated with 4096 channels in each of two orthogonal polarisations, resulting in a spectral resolution of 33 kHz, or 0.11 km s⁻¹ at 90 GHz. In the 2005–07 austral winter seasons we mapped the brightest 118 Nanten clumps, simultaneously covering many spectral lines in the 85–93 GHz range, among them the J=1→0 transitions of HCO⁺, HCN, N₂H⁺, H¹³CO⁺, and H¹³CN. At these frequencies, Mopra has a beam FWHM of 36″, an inner error beam which extends to ∼80″, and a coupling efficiency of 0.64 to sources of this size (Ladd et al., 2005).
While CHaMP’s 3mm molecular maps reveal the location of dense gas, complementary near-IR imaging of the same clumps can show where star formation has evolved further. By compiling these statistics uniformly we will be in an excellent position to identify demographic trends in the massive star formation process. Thus, an equally important part of CHaMP is a near-IR survey of the Nanten clumps using the IRIS2 imager (Tinney et al., 2004) on the Anglo-Australian Telescope (AAT). With this instrument we have begun acquiring images of each clump in K-band continuum, Brackett-γ (a recombination line tracing HII regions), and H₂ v=1→0 S(1) & v=2→1 S(1) (vibrational quadrupole lines tracing molecular gas heated to a few 1000 K) to delineate the relationship between formed and forming massive stars, and report here results of such imaging.
A third major component of CHaMP will be a deep imaging survey of 1.2mm continuum and spectral-line emission with the Atacama Submillimeter Telescope Experiment (ASTE), the 10m submillimeter telescope of the Nobeyama Radio Observatory at Pampa la Bola in Chile (Kohno et al., 2004; Ezawa et al., 2004). [The ASTE project is led by Nobeyama Radio Observatory (NRO), a branch of National Astronomical Observatory of Japan (NAOJ), in collaboration with the University of Chile and Japanese institutes including the University of Tokyo, Nagoya University, Osaka Prefecture University, Ibaraki University, Kobe University, and Hokkaido University.] The 1mm continuum is important in characterising the spectral energy distributions (SEDs) of embedded protostars as well as starless cores or clumps, and in correlating this with the phenomenology seen in spectral lines and at other wavelengths. We report here as well some of the first data from this facility, namely HCO⁺ and H¹³CO⁺ J=4→3 spectra, confirming the evidence of infall from our Mopra data.
### 2.2 Observational Details and Data Reduction
The evidence for infall in BYF73 was first seen in the Mopra HCO⁺ and H¹³CO⁺ J=1→0 data, presented in Figures 1–4. These were obtained on 2006 Oct 27–29 and 2007 Sep 5–9, when observing conditions were good (Tsys of … K or better). The images were formed by coadding OTF maps which abut each other to cover larger areas. The reference position used for sky-subtraction during all mapping was (l,b) = (285°.7, −0°.3), which shows no emission in the Nanten CO map. Each area was scanned twice or three times in each of l and b in order to minimise rastering artifacts and noise variations. The raw OTF data were processed with the Livedata-Gridzilla package (Barnes et al., 2001) by bandpass division and baseline subtraction. The 2s-long OTF samples were then regridded onto a regular grid of 12″ pixels, where the samples were weighted by Tsys⁻² before averaging them into each gridded pixel. Weighting by the rms of the spectra was not an option provided by Gridzilla; however as described above, since 2007 the continuously-measured Tsys has effectively given the same information for each 2 s sample. For all Mopra maps in Figures 1–7, the effective telescope HPBW has been smoothed at the gridding stage to 40″ from the intrinsic 36″, in order to reduce noise artifacts. The resulting spectral line data cubes have low but, due to variations in weather and coverage, somewhat variable rms noise levels, ranging from 0.17 K in the southern portions of Figure 2 up to 0.22 K in the north, per 0.11 km s⁻¹ channel; the average across the map is 0.2 K per channel. Although the pointing (checked on the SiO maser source R Carinae every hour or two) was typically good to 10″ or better (1 pixel on the scale of our maps), because of the simultaneity of the spectral line mapping afforded by MOPS, the relative registration of features between these lines is perfect.
Observations of HCO⁺ and H¹³CO⁺ J=4→3 were made using ASTE on 2006 Dec 1–2, when the typical system temperature (double-sideband) ranged from 220 K to 580 K at 345 GHz, including the atmosphere. The half-power beamwidth of the telescope is 22″ at 345 GHz and the front end is a 4 K cooled SIS mixer receiver; at this frequency the beam efficiency is 0.65 (Kohno, 2005; Kohno et al., 2008). We used a digital correlator with a bandwidth of 128 MHz and 1024 channels (Sorai et al., 2000). The effective spectral resolution was 151.25 kHz, corresponding to a velocity resolution of 0.13 km s⁻¹ at 345 GHz. The data were obtained in position-switched mode centred on (l,b) = (286°.2071, +0°.1692); the “off” position used for sky-subtraction was (286°.0407, +0°.4612), which is also devoid of Nanten CO emission. Observations were made remotely from an ASTE operation room in San Pedro de Atacama, Chile, using a network observation system, N-COSMOS3, developed by NAOJ (Kamazaki et al., 2005). For HCO⁺ and H¹³CO⁺, the total integration times of the spectra were 180 s and 620 s, and the rms noise levels 0.57 and 0.21 K per channel, respectively. The intensity was calibrated by using a room-temperature chopper wheel. The absolute intensity was calibrated by observing Orion-KL and assuming T(HCO⁺) = 47 K and T(H¹³CO⁺) = 2 K (Schilke et al., 1997). The pointing accuracy was measured to be reliable within 5″ as checked by optical observations of a star with a CCD camera attached to the telescope, as well as by CO J=3→2 observations of IRC+10216.
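The quoted velocity resolutions follow from the channel widths via the Doppler relation Δv = c Δν/ν; a quick cross-check (mine, not from the paper):

```python
C_KMS = 2.998e5   # speed of light, km/s

def vel_res(df_hz, f_hz):
    """Velocity resolution dv = c * df / f for channel width df at sky frequency f."""
    return C_KMS * df_hz / f_hz

print(round(vel_res(33e3, 90e9), 2))       # 0.11 km/s  (MOPS zoom channels at 90 GHz)
print(round(vel_res(151.25e3, 345e9), 2))  # 0.13 km/s  (ASTE correlator at 345 GHz)
```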
IRIS2 observations were made at the AAT in service mode on 2006 May 13 in the K-band continuum (2.25–2.29 μm) and 3 spectral line filters as described above, Br-γ and H₂ 1→0 S(1) & 2→1 S(1). For each filter, nine 60s images (dithered by ∼1′) were obtained of the instrument’s 7.7′×7.7′ field under 0.9″ seeing, and reduced using the ORAC-DR data reduction pipeline (Cavanagh et al., 2003). However the conditions during the observations were non-photometric, there being a fair amount of bushfire haze present. Subsequent image processing was performed with the IRAF package. [IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the U.S. National Science Foundation.] The images for each field were registered using astrometry derived from SuperCOSMOS I-band images [This research has made use of data obtained from the SuperCOSMOS Science Archive, prepared and hosted by the Wide Field Astronomy Unit, Institute for Astronomy, University of Edinburgh, which is funded by the UK Science and Technology Facilities Council.]; we estimate the resulting rms positional accuracy in the IRIS2 images to be 0″.3. Next, we linearly scaled the spectral-line images to the same relative brightness scale as the K-band continuum by matching the integrated fluxes of several stars in each filter, assuming they were of similar colour. We then subtracted the continuum from the spectral-line images before transforming each image to Galactic coordinates. Finally, a three-colour image (shown in Fig. 7a) was formed from the continuum-subtracted Br-γ and H₂ images: Br-γ is shown as red, and H₂ 1→0 S(1) & 2→1 S(1) are shown as green & blue, respectively.
Long-slit spectroscopy with IRIS2 was obtained on 2007 Oct 18. The 7.7′-long slit was set to a position angle of 131°.2, with the stellar cluster and nebulosity of BYF73 spanning most of one half of the slit. Four exposures of 300s were obtained in the K-band, with BYF73 nodded by 3.8′ along the slit between each exposure. Similar nodded exposures of the nearby A0V star HD 95534 were obtained to assist with telluric correction. All frames were flatfielded using quartz lamp exposures, then nodded pairs were subtracted to remove sky emission. After two-dimensional wavelength calibration and straightening with Xe lamp exposures, the “off” beam data were inverted, aligned, and co-added to the “on” beam data. Each spectral row of the data was divided by an extracted spectrum of HD 95534 (from which intrinsic Br-γ absorption had been removed), then multiplied by a blackbody spectrum of T_eff = 9520 K.
## 3 Analysis and Discussion
### 3.1 Mopra Maps
The Mopra HCO⁺ J=1→0 maps, being of high signal-to-noise ratio, reveal a number of interesting features which we describe here. Figure 1 shows the HCO⁺ emission from BYF73 across its full velocity range, where we have averaged four velocity channels into each displayed panel for ease of viewing. However all analysis below rests upon the full-resolution data. At the central velocities (−22 to −18 km s⁻¹) the emission is quite widespread; redward of the line centre (−20 to −18 km s⁻¹) this extended emission is quite clumpy, while to the blue (−22 to −20 km s⁻¹) the emission is strongly centrally concentrated. At both the reddest and bluest velocities, the emission is fairly centrally concentrated; in particular there is little obvious evidence for an extended, high-velocity outflow, which would tend to have emission well away from the centre at the highest relative velocities. In fact to a casual inspection there seems to be little systematic kinematic structure to these channel maps at all, and it is only upon inspection of the spectra that the infall profiles are revealed.
This is also reflected in the integrated intensity image of Figure 2, where the extended envelope of BYF73 shows very little evidence of being structurally disturbed by (for example) its proximity on the sky to η Carinae or to NGC 3324. The only morphological feature of note in the envelope is a bay to the NW, which as we shall see is intrinsic to the source. The inner region of BYF73 also appears fairly bland: this area is elongated somewhat in the NW–SE direction in both the HCO⁺ and H¹³CO⁺ emission, and there is a small but significant offset in the peak positions of these two molecules, with the H¹³CO⁺ emission centred slightly to the northwest of the brightest HCO⁺.
In Figure 3 we give the higher-order HCO⁺ moment images, overlaid by the moment-0 contours from Figure 2. In contrast to the latter, the intensity-weighted mean velocity (Fig. 3) reveals a striking velocity gradient across the clump; the axis of this gradient is rotated ∼30° anticlockwise from the long axis of the clump, as seen in the moment-0 contours. The spectral line is most strongly blue-shifted to the north and east of the peak HCO⁺ emission, reaching its minimum value 0′.5 east of the peak. This blueshift gradually changes to a redshift to the western side of the clump, reaching its maximum value 2′ west of the peak, inside the bay of the envelope where the emission is weaker. The weaker emission which wraps around the western side of the bay (the “western arm”) is also redshifted with respect to the clump. As a whole the clump’s velocity is significantly blueshifted with respect to the H¹³CO⁺ line centre (see below), which is approximately at the green colour in this image.
The HCO⁺ velocity dispersion σ (where the line FWHM = 2.355σ) is shown in Figure 3. [Note that because the line shape is strongly non-Gaussian, one should not confuse this moment-2 measurement with an actual linewidth; nevertheless its variation does indicate true changes in the line profile across the source.] Here again we see a strong gradient in this parameter: the bulk of the clump, and to its north and east down to an integrated intensity level ∼2.0 K km s⁻¹, has a large σ ≈ 1.5–1.8 km s⁻¹. Below this intensity to the west and SW, σ drops to 1.1–1.4 km s⁻¹, reaching minimum values ∼1 km s⁻¹ exactly where the velocity field is most redshifted. One therefore suspects that these redshifted features are due mostly to individual, narrower-line substructures in the envelope, and that apart from these features, the clump’s overall blueshift with respect to the optically thin line centre is even more complete.
These images can be compared to the equivalent moment images of the H¹³CO⁺ cube (not shown here, but see Fig. 4 for a spectrum). From Gaussian fits to this cube, the peak brightness is ∼0.3–0.4 K, and the line is centred near −20 to −19.5 km s⁻¹ across most of the emission, but shifts to −19 to −18.5 km s⁻¹ along the clump’s SW edge. The line FWHM varies from 1–3 km s⁻¹ toward the SE end of the clump, rising to 2–5 km s⁻¹ toward its NW end. Such fits to the H¹³CO⁺ have rms residuals ∼0.17 K per 0.12 km s⁻¹ channel, with typical uncertainties 0.3 and 0.7 km s⁻¹ in the line centre and linewidth, respectively.
### 3.2 Distance Determinations
Typical Galactic rotation curves (Burton & Gordon, 1978; Clemens, 1985) and standard values of R₀ and Θ₀ (8.5 kpc/220 km s⁻¹, IAU 1978 values; 8 kpc/200 km s⁻¹, Merrifield 1992; 8.4 kpc/254 km s⁻¹, Reid et al., 2009) indicate that the central V_LSR = −19.7 km s⁻¹ for the molecular clump (see §3.4) is formally forbidden at the longitude of BYF73 (meaning that this velocity is inconsistent with such rotation curves for objects at any distance along this line of sight). In fact the minimum allowed velocity for the tangent point at this longitude is … km s⁻¹, which is ∼2σ more positive than V_LSR (where σ ≈ … km s⁻¹ is the cloud-to-cloud velocity dispersion of GMCs; Burton & Gordon, 1978). In spite of this disparity, any other location for BYF73 is even less kinematically favoured than the tangent-point distance. For example, BYF73 and the η Car GMC may be sharing in a non-circular streaming motion of the order of 10 km s⁻¹ associated with this part of the Carina Arm. In any case, at such tangent points small uncertainties in the rotation curve or the values of R₀ and Θ₀ can translate into large line-of-sight distance uncertainties, up to 50% or more. Therefore while a tangent-point distance of R₀|cos l| = 2.35±1.5 kpc (using Reid et al., 2009) is favoured with the kinematic method, a more robust determination is preferred for the analysis in the sections following, especially in light of the large power with which the distance to BYF73 enters some of the formulae below.
Fortunately, a number of studies have yielded distances to the massive clusters in and near the η Car GMC (e.g., see the summary by Yonekura et al., 2005), and to NGC 3324 (Haynes et al., 1978). These range from 2.2–2.8 kpc, reassuringly close to the tangent-point distance. Therefore if we adopt a mean value of d = 2.5 kpc we would likely only need to attach a 12% uncertainty to it. Although their association in velocity and on the sky is strong circumstantial evidence, it is not certain, however, that BYF73 is actually associated with the η Car GMC complex or NGC 3324. In particular, Fig. 2 shows no evidence that the low-density molecular envelope of BYF73 has been at all disturbed by the vigorous star-formation activity closer to η Car or by the bubble of NGC 3324. Nevertheless, further evidence that the tangent-point distance is reasonable follows from analysis of the cm-continuum emission of the small HII region adjacent to BYF73 (see Fig. 7 and §3.6). Using the MGPS-2 (Murphy et al., 2007) and SGPS (Haverkorn et al., 2006) flux densities at 843 and 1420 MHz of 62±5 and 85±11 mJy respectively, and assuming an electron temperature in the HII region of … K (Shaver et al., 1983), standard analysis (Mezger et al., 1967; Barnes, 1985) gives a distance-independent emission measure EM ∼ … pc cm⁻⁶, typical of compact HII regions (Habing & Israel, 1979). Such HII regions have diameters 0.1–1 pc, bracketing that for BYF73 (from measurement of the Br-γ nebula in Fig. 7, its FWHM = 0.25 pc), and so yielding a most likely location for it at the tangent point.
In summary, various lines of reasoning make a good case for BYF73 lying close to the tangent-point distance for its longitude. We therefore assign a distance of 2.5±0.3 kpc, based on the direct measurements listed by Yonekura et al. (2005).
### 3.3 Evidence for Gravitational Infall
The dense molecular clump, centred at (l,b) = (286°.208, +0°.169) and easily visible in the Mopra maps, has HCO⁺ spectral line profiles that fit the canonical pattern of Zhou et al. (1993) indicating gravitational infall onto a protostar (see Fig. 4). For the optically thick HCO⁺ emission, this includes a self-absorbed profile with predominantly stronger blue wings at most positions (the “blue asymmetry” or inverse P-Cygni profile, seen in panels a and c of Fig. 4), together with more Gaussian line profiles for the optically thin transitions of H¹³CO⁺, which are centred in velocity on the HCO⁺ self-absorption (panels b and d). Further, the J=4→3 lines (panels c and d) are brighter than the corresponding J=1→0 lines (panels a and b), the self-absorption in the HCO⁺ is deeper in the J=1→0 than the J=4→3, the velocity difference (see below) between the blue and red peaks of the HCO⁺ lines is slightly greater in the J=1→0 than the J=4→3, and the blue and red peaks in the J=4→3 line are both slightly redward of the respective peaks in the J=1→0 line. All of these details are completely consistent with the Zhou et al. (1993) and Myers et al. (1996) picture of a dense core undergoing gravitational infall, where the velocity of the infall and the temperature both increase towards the centre, producing the respective line profiles and ratios.
However, the mass scale of the infall appears to be unusual. Myers et al. (1996) developed a simple but useful two-layer model to evaluate basic parameters from spectra of molecular cores which are undergoing gravitational infall, while De Vries & Myers (2005) extended this analytic model and provide a general code for robustly determining these parameters and their uncertainties. Although these models were developed in the context of low-mass protostars, the results we derive here satisfy the assumptions made in their treatment of the radiative transfer. The key qualification is that the infall speed be neither much greater nor much smaller than the velocity dispersion in the dense gas, a condition we verify below. Here we use the Myers et al. (1996) formalism and the HCO⁺ line profiles to estimate the characteristic gas infall speed and motivate further discussion. In §3.4 we use the De Vries & Myers (2005) code to more rigorously evaluate the model fits.
From Myers et al. (1996)’s eq. (9) and using the parameters as listed in Table 1 from the sample spectra in Figure 4, we obtain the infall speeds V_in also shown in Table 1, where the quoted errors are obtained by propagating the measured uncertainties through the formula. Continuing to follow Myers et al., we need an estimate for the radius over which the infall profile is seen, in order to allow calculation of a kinematic mass infall rate. This profile is widespread in the HCO⁺ data, but its intensity drops only slowly into the background, showing no hard edge. To indicate a radius we consider the emission FWHMs (suitably deconvolved). In the HCO⁺ and H¹³CO⁺ J=1→0 data, the diameters are … and … respectively, taking a geometric mean of the major and minor axes in each case. At 2.5 kpc these respectively give clump radii … pc and … pc, where we have now also added in quadrature the uncertainty due to the distance. Although the HCO⁺ infall profiles are clearly more widespread than the H¹³CO⁺ radius, we conservatively take the latter as an optically thin tracer and therefore more representative of the true column distribution, understanding that this may in fact be a lower limit to the clump radius. This gives
\dot{M}_k = 4\pi R^2 \mu_{\rm mol} m_{\rm H} n_{\rm cr} V_{\rm in} \sim (2.9\pm1.5)\times10^{-2}\ M_\odot\,{\rm yr^{-1}} \qquad (1)
for BYF73’s mass infall rate, where μ_mol = 2.30 is the mean molecular mass in the gas and n_cr ≈ 3×10^5 cm^-3 is the critical density for the HCO+ J=1→0 transition (see §3.5). This should be compared to the gravitational mass infall rate for the self-similar singular isothermal sphere (SIS) solution (Shu, 1977)
\dot{M}_g = \sigma^3/G \sim 0.080\times10^{-2}\ M_\odot\,{\rm yr^{-1}}, \qquad (2)
where instead of the sound speed of Shu, we have substituted, as suggested by Banerjee & Pudritz (2007), the supersonic velocity dispersion indicated by the HCO+ linewidth from Table 1 (see also §3.4). Even so, we see that for BYF73, Shu’s solution cannot give us the observed infall rate. Instead, Banerjee & Pudritz show that a magnetised core can collapse supersonically with an effective speed ∼Mc_s, where M is the Mach number in the flow. For BYF73, then, the observed infall only requires collapse with M ∼ 3. This Mach number and the infall speed above are consistent with (for example) the MHD simulations of Banerjee & Pudritz (2007) or the McKee & Tan (2003) massive turbulent core model; however, our mass infall rate is still more than an order of magnitude higher than in such models, mainly because of the large extent of the infall asymmetry in our maps.
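The two rates in eqs. (1) and (2) can be reproduced with a short cgs calculation. This is a sketch, not the paper's code: the Hill5 infall speed of 1.0 km s⁻¹ (§3.4) and the critical density 3×10⁵ cm⁻³ are taken from the text, while the dispersion σ = 1.5 km s⁻¹ is an assumption chosen to match the quoted SIS rate.

```python
import math

# cgs constants
G     = 6.674e-8       # gravitational constant
M_H   = 1.6726e-24     # hydrogen mass, g
M_SUN = 1.989e33       # solar mass, g
PC    = 3.086e18       # parsec, cm
YR    = 3.156e7        # year, s

MU_MOL = 2.30          # mean molecular mass (from the text)
N_CR   = 3e5           # HCO+ J=1-0 critical density, cm^-3 (as in eq. 4)
R      = 0.40 * PC     # adopted H13CO+ clump radius, cm
V_IN   = 1.0e5         # infall speed, cm/s (Hill5 value, Sec. 3.4)

# Eq. (1): kinematic mass infall rate through a shell of radius R
mdot_k = 4 * math.pi * R**2 * MU_MOL * M_H * N_CR * V_IN   # g/s
mdot_k_msun_yr = mdot_k * YR / M_SUN

# Eq. (2): SIS rate, with the supersonic dispersion substituted for the
# sound speed (sigma here is an assumed value consistent with the quoted rate)
SIGMA = 1.5e5          # cm/s
mdot_g_msun_yr = SIGMA**3 / G * YR / M_SUN

# For supersonic collapse the rate scales roughly as M^3, so the ratio
# of the two rates gives the implied Mach number of the flow:
mach = (mdot_k_msun_yr / mdot_g_msun_yr) ** (1.0 / 3.0)
```

With these inputs the kinematic rate comes out near 3×10⁻² M⊙ yr⁻¹ and the implied Mach number near 3, matching the discussion above.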
Myers et al. (1996) suggested that, for their low-mass protostars, the agreement of the inferred and theoretical rates indicates the derived inward motions are consistent with gravitational infall. Under this interpretation BYF73 also gives a much larger infall rate than is typical of low-mass protostars (∼10^-6 M⊙ yr^-1, increasing to 10^-5 to 10^-4 M⊙ yr^-1 during FU Orionis-type outbursts; Lada, 1999), again stemming mainly from the parsec-scale extent of the asymmetric HCO+ profile, and also from the unusually large value for V_in. This mass infall rate is also larger than any seen so far in any similar massive star-forming region (e.g. Fuller et al., 2005; Beltrán et al., 2006). Given the linear size of this region and the near-IR appearance of peculiar emission-line nebulosity at the centre of the clump, we suspect that the entire BYF73 cloud is undergoing a global gravitational collapse. Verification of this suggestion awaits additional supporting evidence, including interferometric observations and more detailed modelling. However, all of the Mopra spectral line maps of BYF73 (e.g. HCN, N2H+, etc.), as well as the CS J=2→1 data reported by Bronfman et al. (1996), show similar emission distributions and/or line profiles, with differences as expected from the species’ different relative abundances. This is not surprising considering that they all require high densities (Spitzer, 1978) to be collisionally excited and thermalised to the gas kinetic temperature, and so should reflect the same dynamical state as seen in the HCO+.
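The clump radius adopted in this section (a geometric-mean FWHM, Gaussian-deconvolved from the beam and scaled to 2.5 kpc) can be sketched as follows. The source FWHMs and the 36″ beam below are illustrative assumptions, not the measured values from Table 1; only the distance is taken from the text.

```python
import math

D_KPC     = 2.5    # adopted distance to BYF73 (from the text), kpc
BEAM_FWHM = 36.0   # assumed telescope FWHM beam, arcsec

def deconvolved_radius_pc(fwhm_maj, fwhm_min, beam=BEAM_FWHM, d_kpc=D_KPC):
    """Geometric-mean FWHM, Gaussian-deconvolved from the beam, as a radius in pc."""
    theta = math.sqrt(fwhm_maj * fwhm_min)        # geometric mean, arcsec
    theta_dec = math.sqrt(theta**2 - beam**2)     # Gaussian beam deconvolution
    radius_arcsec = theta_dec / 2.0
    # 1 arcsec subtends 4.848e-3 pc at 1 kpc
    return radius_arcsec * 4.848e-3 * d_kpc

# Purely illustrative source sizes, chosen to land near the ~0.40 pc scale:
r = deconvolved_radius_pc(80.0, 70.0)
```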
### 3.4 Radiative Transfer Modelling
De Vries & Myers (2005) compared a number of analytic radiative transfer models of infall in a low-mass dense core to a full Monte Carlo model. They found that their “Hill5” model gave the most accurate simulation of the Monte Carlo solution and, of all the analytic models they examined, was the most robust against various numerical and instrumental uncertainties. We have used their HILL5 code to analyse our Mopra HCO+ J=1→0 data cube pixel-by-pixel, and present the results here.
The parameters fitted by the Hill5 model to an infall spectrum are the peak line excitation temperature and optical depth, the systemic velocity, the intrinsic (i.e., equivalent optically-thin) velocity dispersion, and the infall speed V_in. These fits, in order to be considered reliable by De Vries & Myers (2005), should be to spectra with S/N ≳ 30. Our Mopra HCO+ J=1→0 cube does not formally satisfy this requirement per 0.11 km s^-1 channel (peaking at S/N ∼ 12), but since the fits are over a large velocity range (up to 8 km s^-1 to zero-power) we suggest that the figure of merit should rather be the peak S/N in the integrated intensity map (Fig. 2), which is ∼50. Put another way, by fitting 5 parameters to spectra with ∼70 independent resolution elements, the problem is more strongly constrained than the per-channel S/N would suggest. Our claim of reliability is bolstered by the solutions themselves, which show little statistical noise in the output parameters above an integrated intensity ∼1.5 K km s^-1, unless the optical depth is too low to give a good infall solution (i.e., everywhere except toward the SW edge of the clump). In such areas, the fitted parameters, especially the infall speed, are poorly constrained and not physical.
In Figure 6 we show the parameter maps of the Hill5 model fits to the HCO+ J=1→0 line. The lowest reliable excitation temperature (panel a) is 3–4 K around the northern and eastern perimeter of the clump, and reaches a maximum ∼6 K near the peak emission. These values are clearly less than the T_ex discussed below, but this is partly attributable to the data cube being on the T_A* scale, for which the beam efficiency is 0.64. The only effect this choice of temperature scale has on the model fits is on the scaling of the fitted T_ex. Over the same areas, the optical depth (panel b) ranges from 1.5 to 6. The areas where the infall solution is unphysical are clearly visible in these two panels as the red or black pixels, predominantly to the south and/or west of the clump. In the brighter areas where we believe the solutions are reasonable, we note that the highest optical depth lies NE of the brightest HCO+ emission, precisely where the brightest H13CO+ ridge lies. Furthermore, the HCO+/H13CO+ line ratios (ranging from 6–10; see Fig. 2) are entirely consistent with these optical depths and normal abundance ratios of these molecules. This is actually remarkable since the Hill5 model only fits parameters to the HCO+ cube: the fact that the H13CO+ data are consistent with the model results gives us further confidence in the Hill5 solutions. Likewise, the highest T_ex is to the SW of the HCO+ peak, approximately facing the HII region (cf. §§3.2, 3.6), and entirely consistent with that geometry.
The various velocity parameters of the Hill5 models are also remarkably well-behaved. The systemic velocity V_LSR map (panel c) looks grossly similar to the moment-1 map (Fig. 3), but in fact is slightly redshifted where the infall profile is most prominent. This is made clear in panel d, which shows the velocity difference (moment-1)–(V_LSR). This quantity should be close to zero in most places, but skewed to negative values where the infall is strong and the moment-1 values reflect the blue asymmetry of the spectra. Indeed, the colours in panel d show exactly this: away from the robust infall solutions, the average colour is orange, corresponding to a mean difference ∼0 km s^-1. Where panels a and b have good solutions, the mean velocity difference is consistently negative, indicating the extent of the spectral asymmetry.
The last two panels of Figure 6 show the velocity dispersion (panel e) and infall speed (panel f). In the area of good fits, the former is 1.0±0.2 km s^-1, while the latter is 1.0±0.4 km s^-1. Once again, we see these HCO+-derived dispersions are consistent with the actual linewidth measurements of the H13CO+ (§3.1). Furthermore, as predicted by De Vries & Myers (2005), the Hill5 solutions do indeed scale to higher-mass regions than they examined, since for BYF73 the criterion that the intrinsic dispersion be comparable to the infall speed is satisfied.
In panel f we note an interesting structure in the velocity field of the infall. Along the long axis of the clump (i.e., to the NW and SE), the infall speeds are consistently lower than 1 km s^-1, whereas across the short axis (to the NE and SW) the infall speeds are consistently higher than 1 km s^-1. It is tempting to interpret this pattern as due to a partially rotationally-supported oblate clump. In this scenario, the infall is somewhat centrifugally hindered in the equatorial plane (which is roughly parallel to the major axis of the emission) by a rotational speed which may be 0.5–1 km s^-1, but is unimpeded along the presumed rotational axis (roughly parallel to the minor axis).
Equally, we note that the V_LSR field in panel c is purely kinematic, since any radiative transfer effects would have been filtered by the model into just T_ex and τ (panels a and b). Thus panel c may be a better indicator of rotation in BYF73, with the rotation axis being roughly aligned with the emission’s major axis instead, suggesting a more prolate geometry for BYF73. However, we also note that the most redshifted portion of panel c, centred near (286.195, 0.157), has already been attributed to a separate, non-infalling component in the data cube (§3.1). Both of these rotation interpretations are thus quite speculative: the noise in both the data and the model may dominate the features we are trying to interpret, and these alternatives need to be explored with higher-resolution, more sensitive data in order to discern between them.
An important feature of the Hill5 treatment is that its infall speed is consistent with the value obtained in §3.3 based on the Myers et al. (1996) work, which was for a two-layer radiative transfer model. De Vries & Myers (2005) similarly found that their two-layer models often gave solutions for V_in which were mostly consistent with the Hill models; however, the Hill5 model was the most robust to errors. In the calculations below we use the Hill5 result V_in = 1.0±0.4 km s^-1. For the mass infall rate from §3.3, we now have a somewhat larger value
\dot{M}_k \sim (3.4\pm1.7)\times10^{-2}\ M_\odot\,{\rm yr^{-1}}, \qquad (3)
recalling that by evaluating this with the H13CO+ radius we are likely obtaining a lower limit to the global infall rate. We conclude that the radiative transfer modelling of the HCO+ data gives results which are surprising but highly self-consistent, and consistent with other features of our data.
### 3.5 Clump Mass
Despite the satisfactory results of the modelling, in order to make a strong case for the formation of a massive cluster, we also need to establish that the molecular clump has sufficient mass to qualify for this status, and that other possible interpretations of our data can be discounted. Since the gas density will probably be at least the HCO+ J=1→0 transition’s critical density (Haese & Woods, 1979; Barnes & Crutcher, 1990) where the bright molecular emission is seen, the cloud mass is given approximately by
M > \mu_{\rm mol} m_{\rm H} n_{\rm cr} (\pi/\ln 2)^{3/2} R^3 \sim 1.0\times10^{4}\ M_\odot\ \left(\frac{n_{\rm H_2}}{3\times10^{5}\,{\rm cm^{-3}}}\right)\left(\frac{R}{0.40\,{\rm pc}}\right)^{3} \sim 6.4\times10^{4}\ M_\odot\ \left(\frac{n_{\rm H_2}}{3\times10^{5}\,{\rm cm^{-3}}}\right)\left(\frac{R}{0.73\,{\rm pc}}\right)^{3} \qquad (4)
using the volume integral for a 3D Gaussian. Here we give two values for the mass, based on which size we take for the HCO+-emitting region. With ASTE’s detection of the J=4→3 line, even higher-density gas (above that transition’s critical density) must exist in the clump, and if widespread this would give a much higher mass estimate; therefore the first value for the mass is almost certainly a lower limit. However, eq. (4) assumes that the dense gas giving rise to the emission fills our beam, whereas the filling factor is unknown and possibly <1; this may indeed be the case in the outer envelope of the clump, thus the second value is probably an upper limit.
A formally more rigorous, but not necessarily more precise, mass estimate is made (and we obtain an estimate for n_H2 as well) if we calculate the HCO+ column density first. We use the full expression, without assumptions about optical depth or approximations to the stimulated emission correction in the denominator (e.g., Rohlfs & Wilson, 2006). Assuming LTE applies, and with quantities in cgs units, we obtain a column density for each line of sight from
N({\rm HCO^+}) = \frac{3h\,Q(T_{\rm ex})}{8\pi^{3}\mu_D^{2}J_u}\, \frac{e^{E_u/kT_{\rm ex}}}{1-e^{-h\nu/kT_{\rm ex}}} \int\tau\,dV, \qquad (5a)

where Q is the partition function for HCO+ at the excitation temperature T_ex, E_u is the energy of the upper state J_u of the transition, μ_D is the molecule’s electric dipole moment, and the line optical depth τ (peak value ∼6 from the previous section) is integrated over the velocity, here taken over the range –23.2 to –16.6 km s^-1 (as in Fig. 2). Determining the excitation temperature is a little more complicated, however. Faundez et al. (2004) derive T_d = 30 K for the continuum dust emission from BYF73, but found it necessary to fit two temperature components to the spectral energy distributions (SEDs) of most of their sources. They do not give an explicit value for the warm component in BYF73, but their average warm component has T_d ∼ 140 K. From the HCO+ J=4→3/1→0 brightness ratio (2.20±0.14 from Fig. 4, suitably corrected for the beam efficiencies) at the peak of BYF73, we fit T_ex = 125±26 K for the dense gas. But without a spatially-resolved HCO+ J=4→3 map, we are limited to saying that the gas T_ex probably takes a range of values from 30–125 K. Because HCO+ is a linear molecule, its partition function is straightforward to calculate (Rohlfs & Wilson, 2006). At these temperatures Q ∼ 14–59, giving

N({\rm HCO^+}) = 4.84\times10^{11}\,Q(T_{\rm ex})\,\frac{e^{E_u/kT_{\rm ex}}}{1-e^{-h\nu/kT_{\rm ex}}}\int\tau\,dV\ {\rm cm^{-2}} \sim (0.92-13)\times10^{15}\ {\rm cm^{-2}}, \qquad (5b)
where the velocity is in km s^-1, and we have taken a Gaussian line profile, with dispersion as before, for the integral. Combining the column density with the size measurement (assuming that the physical depth of the source is comparable to the projected size) gives a central density estimate
n_{\rm H_2} = \sqrt{\ln 2/\pi}\,\frac{N}{RX} \sim (3.5-51)\times10^{5}\ {\rm cm^{-3}}\ \left(\frac{R}{0.40\,{\rm pc}}\right)^{-1}\left(\frac{X}{10^{-9}}\right)^{-1} \qquad (6)
over the same temperature range, which shows that the central density in BYF73 almost certainly exceeds the critical density for thermalising the HCO+ J=1→0 line, vindicating this assumption in eq. (4). Similarly, integrating the column density (eq. 5b) over the emission region yields a total cloud mass
M_{\rm LTE} = \frac{N}{X}\,\mu_{\rm mol} m_{\rm H}\,\frac{\pi R^{2}}{\ln 2} \sim (1.2-17)\times10^{4}\ M_\odot\ \left(\frac{R}{0.40\,{\rm pc}}\right)^{2}\left(\frac{X}{10^{-9}}\right)^{-1}. \qquad (7)
The lower limits for both eqs. (6) & (7) are probably too low, since they do not include the contribution to the mass from the warmer component; moreover, we have used the smaller H13CO+ radius in both. With the larger radius, eq. (6) gives a density ∼1.9×10^5 cm^-3, lower than before but still near the critical density, and eq. (7) a mass ∼4.1×10^4 M⊙. Both low-temperature mass estimates are close to the values in eq. (4). This is surprising given the approximate nature of these calculations, but reassuring. We conclude that an intermediate value, ∼2.0×10^4 M⊙, is probably reasonable for BYF73.
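The chain of estimates in eqs. (5b)–(7) can be reproduced at the cool end of the temperature range with a short script. This is a sketch under stated assumptions: the HCO⁺ J=1→0 constants (E_u/k ≈ 4.28 K, rotational constant hB/k ≈ 2.14 K) are standard catalog values assumed here, the abundance X = 10⁻⁹ and the Hill5 line parameters are from the text, and Q ≈ T/(hB/k) is the usual linear-molecule approximation.

```python
import math

# cgs constants
M_H   = 1.6726e-24     # hydrogen mass, g
M_SUN = 1.989e33       # solar mass, g
PC    = 3.086e18       # parsec, cm

MU_MOL = 2.30          # mean molecular mass (from the text)
TAU0, SIGMA_V = 6.0, 1.0   # peak optical depth and dispersion (km/s), from Hill5
EU_K, HNU_K = 4.28, 4.28   # E_u/k and h*nu/k for HCO+ J=1-0, K (assumed catalog values)
B0_K = 2.14                # rotational constant h*B/k, K; Q ~ T/B0_K for a linear molecule
R, X = 0.40 * PC, 1e-9     # H13CO+ radius and assumed HCO+ abundance

def n_hcop(tex):
    """Eq. (5b): HCO+ column density, cm^-2, for a Gaussian line profile."""
    q = tex / B0_K                                      # partition function
    tau_int = TAU0 * math.sqrt(2 * math.pi) * SIGMA_V   # integral of tau dV, km/s
    return 4.84e11 * q * math.exp(EU_K / tex) / (1 - math.exp(-HNU_K / tex)) * tau_int

def n_h2(tex):
    """Eq. (6): central H2 density, cm^-3."""
    return math.sqrt(math.log(2) / math.pi) * n_hcop(tex) / (R * X)

def m_lte(tex):
    """Eq. (7): LTE clump mass, Msun."""
    return n_hcop(tex) / X * MU_MOL * M_H * math.pi * R**2 / math.log(2) / M_SUN
```

Evaluating at T_ex = 30 K and 125 K recovers, to rounding, the quoted ranges N ∼ (0.92–13)×10¹⁵ cm⁻², n_H2 from a few ×10⁵ cm⁻³, and M_LTE from ∼1.2×10⁴ M⊙.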
However, the upper limits from eqs. (5–7), based on the higher excitation temperature being widespread, are certainly too high, since it is unlikely that such a warm temperature would be typical of the whole parsec-wide dense clump. One would need a map of the HCO+ J=4→3 emission at a resolution at least as good as our HCO+ J=1→0 map, in order to derive a reliable map of T_ex across the source and so address how much larger the clump’s mass might be due to this warmer gas.
In eqs. (6–7) we have used an abundance X = 10^-9 for HCO+ relative to H2, which is a strong upper limit from some recent models of massive core chemistry (e.g., Garrod et al., 2008). These models show X is a strong function of time, and that HCO+ is not necessarily the main charge carrier in such regions. Thus X in massive cores may be an order of magnitude lower than the more typical low-mass core value (e.g., Loren et al., 1990; Caselli et al., 2002; Lee et al., 2003) used here. On the other hand, Zinchenko et al. (2009) obtain X ∼ (2.3–12)×10^-10 from observations of a sample of massive clumps, although they cautioned that their values are probably overestimates. This means that parameters derived here that depend on X are quite uncertain. Nevertheless, eqs. (4) and (7) suggest that the beam filling factor may be close to unity, and that X is large.
Indeed, BYF73 seems to be quite extreme in this regard too. For example, the mass surface density corresponding to the column density from eq. (5b) is ∼35 kg m^-2, which is near the largest value of the massive Galactic clusters considered by McKee & Tan (2003). Therefore BYF73 is interesting as a likely environment in which massive protostellar cores may form, and then form massive stars.
Are there alternatives to gravitational infall for the dynamical state of this clump? To answer this, we evaluate a number of terms from the Virial Theorem. If the linewidths seen in the HCO+ (∼2 km s^-1 relative to the line centre, counting emission out to the half-power level) were due to rotational support against self-gravity (an interpretation we do not favour, due to the self-consistency of the infall modelling and the effective limit of ∼1 km s^-1 to any rotational contribution to the spectral lines), then
M_{\rm rot} = v^{2}R/G \sim 370\ M_\odot\ \left(\frac{v}{2\,{\rm km\,s^{-1}}}\right)^{2}\left(\frac{R}{0.40\,{\rm pc}}\right).

However, thermal and magnetic pressure must also contribute to the support of the cloud; the corresponding virial relations give

M_{\rm th} = 5kT_{\rm ex}R/(m_{\rm H_2}G) \sim 240\ M_\odot\ \left(\frac{T_{\rm ex}}{125\,{\rm K}}\right)\left(\frac{R}{0.40\,{\rm pc}}\right)

and (in cgs units only)

M_{\rm mag} = (5B^{2}R^{4}/18G)^{1/2} \sim 940\ M_\odot,
where we have taken an appropriate value for the magnetic field from studies of similar regions, up to twice the typical Zeeman-derived field strength at a density ∼3×10^5 cm^-3 (see Fig. 1 of Crutcher, 1999). Such a large value is further supported by Falgarone et al. (2008), who obtained a median B = 560 μG from CN observations of clouds with a mean density ∼4×10^5 cm^-3. These terms, even in combination (1550 M⊙), are much too small to provide the necessary support against gravity, unless (for example) the magnetic field strength were at least ten times the value assumed here, and/or we take linewidths out to the zero-power level (∼4 km s^-1). While such values for rotation and the magnetic field are not entirely ruled out as a means of supporting BYF73 against collapse, they would be quite extreme. We infer that virial equilibrium does not apply in this case, despite the HCO+ abundance and excitation uncertainties.
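The three support terms can be evaluated together in cgs units. This is a sketch: the field strength B = 600 μG is an assumption here (roughly twice typical Zeeman values at this density; the text's exact adopted field is not restated), chosen so that the three terms sum near the quoted combined value.

```python
# cgs constants
G     = 6.674e-8
K_B   = 1.381e-16
M_H   = 1.6726e-24
M_SUN = 1.989e33
PC    = 3.086e18

R = 0.40 * PC          # adopted H13CO+ clump radius, cm

# Rotational support, v ~ 2 km/s at the half-power level
v = 2.0e5              # cm/s
m_rot = v**2 * R / G / M_SUN

# Thermal support at the warm end of the T_ex range
tex = 125.0            # K
m_th = 5 * K_B * tex * R / (2 * M_H * G) / M_SUN   # m_H2 = 2 m_H

# Magnetic support; B is an assumed value (see lead-in)
B = 600e-6             # gauss
m_mag = (5 * B**2 * R**4 / (18 * G))**0.5 / M_SUN

total = m_rot + m_th + m_mag   # combined virial support, Msun
```

The combined support lands near 1550 M⊙, one order of magnitude short of the ∼2×10⁴ M⊙ clump mass, which is the basis of the non-equilibrium inference above.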
There is also the possibility that the velocity pattern in BYF73 represents a massive outflow rather than infall. Besides the detailed spectroscopic arguments for infall, we discount the outflow interpretation because maps of the HCO+ line wings (not shown here) do not reveal any particular geometric pattern, such as a bipolar separation of the line wings. Nevertheless, sensitive CO observations of BYF73 should be made, since they might be better able to find any outflow, if present.
The conclusion that BYF73 is indeed a massive dense clump undergoing contraction at least (if not collapse) seems fairly reliable, the strongest evidence being the line profiles, the mass calculations, and the IR appearance (see §3.6). We can compare our mass estimate for BYF73 with others in the literature. From Nanten CO mapping and IRAS fluxes, Yonekura et al. (2005) obtained LTE and virial cloud masses of 1900 and 3600 M⊙ (respectively) and a luminosity ∼3×10^4 L⊙. Our LTE mass estimates are significantly larger than theirs, likely due to our use of a tracer of denser gas, but our total virial mass is smaller than theirs, as might be expected with a smaller observed linear size and linewidth. Our LTE estimates would be smaller if we assumed a smaller effective density for HCO+ and/or a larger HCO+ abundance. Either of these might reduce our best estimate above by a factor of 3 or so, to ∼10^4 M⊙, bringing it more into line with the Yonekura et al. (2005) number, although we do not favour this value. From 1.2 mm mapping and SED fitting, Faundez et al. (2004) obtained a clump mass of 470 M⊙, density n = 1.4×10^5 cm^-3, dust temperature 30 K, and luminosity ∼1.9×10^4 L⊙. Their mass value seems quite low, but could be as much as five times higher with a lower assumed dust opacity, as they point out. This would bring it into closer agreement with the Yonekura et al. (2005) mass estimate, and suggests that such lower dust opacities may be required to explain the higher molecular masses. At any rate, our HCO+ data strongly suggest the presence of a large amount of dense gas that may not be fully sensed in CO lines or in the mm continuum.
### 3.6 Infrared Features
Our K-band imaging (Fig. 7) has much higher angular resolution than our mm data, or archival centimetre-wave (cm) and far-infrared (FIR) data, and shows some striking structures and correlations. Near the molecular clump, there is a compact HII region (visible as a Brγ emission nebula) and IR cluster (visible also in the K-continuum image). The Brγ is exactly coincident with a centimetre-continuum point source from both the Molonglo Galactic Plane Survey-2 (the MGPS-2 has a similar beamsize to our Mopra data; Murphy et al., 2007) and the slightly lower-resolution Southern Galactic Plane Survey (Haverkorn et al., 2006). Such features have been seen before around similar massive star-forming dense clumps, e.g. NGC 2024 (Barnes et al., 1989) or AFGL 5179 (Tej et al., 2006), but the example of BYF73 is interesting in the rather clean separation of the ionised and molecular components, and the distinct “cocooning” of the excited H2 emission around the very symmetric Brγ and cm-continuum. Indeed, this is actually reminiscent of planetary nebulae (e.g., Fig. 9 of Ryder et al., 1998) or the classic picture of a Strömgren sphere. From the pseudocolour composite image in Fig. 7a, we see that the shell of excited H2 appears to be traced much better by the v=2→1 than the v=1→0 emission. This is surprising, since a [1–0]/[2–1] ratio less than unity would be at odds with our understanding of H2 excitation. Instead, this ratio is likely either an artifact of differential reddening between the various filters used, or due to non-photometric imaging conditions, or both.
To confirm this we obtained a K-band long-slit spectrum aligned between the mm and Brγ peaks (Fig. 8), which shows that the S(1) [1–0]/[2–1] ratio is actually 2.14±0.10 at the molecular–ionised interface, typical of photodissociation regions (PDRs) or shock-excited jets (e.g., Allers et al., 2005; Caratti o Garatti et al., 2006). From the measured S(1) [1–0]/[2–1] ratio and the tabulation of T. Geballe (1995; quoted by T. Kerr 2004, www.jach.hawaii.edu/UKIRT/astronomy/calib/spec_cal/h2_s.html), the gas kinetic temperature at the PDR interface is constrained to be ∼4000 K; including the S(0) and S(3) lines also visible in Figure 8 (with respective ratios of the 1–0 S(1) to these lines of 1.65±0.07 and 4.5±0.3) suggests a temperature as high as ∼5000 K. This is comparable to, but somewhat higher than, H2 temperatures seen in other star-formation PDRs (Allers et al., 2005) or low-mass H2 jets (Caratti o Garatti et al., 2006), approaching the typical ∼7000 K for Galactic HII regions at this galactocentric radius (∼8 kpc; Shaver et al., 1983), and is perhaps indicative of the relative youth of the HII region in BYF73, and/or the strength of the shock excitation from the young stars in the HII region.
Another remarkable feature of the IR imaging is the apparent deficit in line emission at the exact peak of the mm-molecular emission (within the pointing uncertainty). Unfortunately a spectrum for this position is not available, since it coincides with emission in the reference beam (the horizontal black line in Fig. 8 at pixel coordinate 420). So while we cannot obtain any IR line ratios here with the current data, it appears as if the H2 v=2→1 and Brγ lines are both seen in absorption at this peak, creating an apparent “absorption nebula”. This nebula can be seen in Figure 7a as a green patch to the left (Galactic east) of the HII region, since there the blue (H2 v=2→1) and red (Brγ) appear more strongly “absorbed”, while the green (H2 v=1→0) is only weakly “absorbed”. While much of this appearance may be due to the construction of the RGB image, at the very least it is likely that there is either unusual, highly localised IR emission/absorption at the molecular peak, or that the deeply embedded stars at that position have very unusual IR colours. Moreover, this positional coincidence is highly suggestive. Further east of the peak of the absorption nebula, there seems to be a weaker, comma-shaped extension, as well as a highly reddened cluster of stars; this shape is also seen in some of the HCO+ channel maps. Furthermore, this extension is also aligned with the supposed equatorial plane of the oblate clump scenario from §3.4.
There is evidence in other archival data for an unusual source at the molecular peak. A 3-colour (i.e., JHKs) 2MASS image shows that the HII region exhibits some moderate reddening, but that the knot of K-band emission visible in Figure 7 at the molecular peak is virtually invisible at shorter wavelengths, confirming its highly embedded nature. A similar 3-colour MSX image (i.e., 8, 15, 21 μm) further shows that this embedded source dominates the luminosity at MIR and longer wavelengths. We therefore have the rather unusual situation that, while a more evolved source is adjacent to our molecular clump, the apparently less evolved source(s) within the clump are more luminous than the revealed exciting stars. Indeed, the luminosity of the deeply embedded IR source(s) may be at least partly derived from the release of gravitational potential energy. Taking the mass inflow rate from §3.4, which is itself probably a lower limit as described there, we have
L_{\rm grav} = \frac{G\dot{M}M}{R} \sim 1200\ L_\odot\ \left(\frac{M}{20{,}000\,M_\odot}\right)\left(\frac{\dot{M}}{0.034\,M_\odot\,{\rm yr^{-1}}}\right)\left(\frac{R}{0.40\,{\rm pc}}\right)^{-1}.
This result, that L_grav ∼ 1200 L⊙, may be even larger if higher-resolution observations of the infall and central MIR/NIR sources reveal that the inflow continues deeper into the central regions.
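The accretion-luminosity estimate above follows directly from the mass, infall rate, and radius quoted in §§3.4–3.5; a minimal cgs check:

```python
# cgs constants
G     = 6.674e-8
M_SUN = 1.989e33
L_SUN = 3.828e33
PC    = 3.086e18
YR    = 3.156e7

M    = 2.0e4 * M_SUN        # adopted clump mass (Sec. 3.5), g
MDOT = 0.034 * M_SUN / YR   # mass infall rate (eq. 3), g/s
R    = 0.40 * PC            # adopted H13CO+ radius, cm

# L_grav = G * Mdot * M / R, in solar luminosities
l_grav = G * M * MDOT / R / L_SUN
```

This recovers ∼1200 L⊙, a few percent of the ∼10⁴ L⊙ total luminosities quoted for this source, so gravitational energy release can plausibly contribute to, but not dominate, the embedded sources' output.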
If the NIR “absorption” at this position were real and not an imaging artifact, then it would imply very large columns of gas, ∼10^24 cm^-2. The HCO+ self-absorption and spatial distribution require the same condition, and so the implication of very high column density at the molecular peak would seem to be strong. This is further supported when we note that the continuum emission of the three stars closest to the “absorption nebula” position in Figure 8a (i.e., the bright horizontal lines at pixels 400, 435, and 460) shows a very strong attenuation at the shorter K-band wavelengths, presumably due to severe reddening in the molecular clump. Such reddening does not appear to affect the stars in the HII region (e.g., those labelled A, B, & C in Fig. 8) to the same degree.
The K-continuum image reveals details of the clustering in BYF73. Compared to the surrounding sky away from any HCO+ emission, within the HII region there is clearly an overabundance of stars. In addition, the bright H2 nebulosity immediately to the east of the HII region contains an even more compact clustering of brighter stars, and there is another tight grouping around the molecular peak. To the north and south of the molecular peak, the star density is actually lower than in the surroundings, suggesting that here the dust column density is still so high that background stars are being extinguished even at 2 μm. It is clear that many young stars, some massive enough to form an HII region, have already formed in BYF73, and that further star formation appears to be proceeding vigorously to the east of this HII region.
### 3.7 Theoretical Considerations
We note that the typical projected nearest-neighbour separation of stars in these IR groups (i.e., at the molecular and Brγ peaks, and the molecular–ionised interface), ∼2″ or ∼5000 AU by inspection of Figure 7b, is less than the Jeans length
R_{\rm Jeans} \sim \left(\frac{kT_{\rm kin}}{G(\mu_{\rm mol}m_{\rm H})^{2}n_{\rm H_2}}\right)^{0.5} \sim 7900\ {\rm AU}\ \left(\frac{T_{\rm kin}}{30\,{\rm K}}\right)^{0.5}\left(\frac{n_{\rm H_2}}{3\times10^{5}\,{\rm cm^{-3}}}\right)^{-0.5} \qquad (10)
in the dense clump; and although higher densities, especially near the centre, may make these scales more commensurate, our estimate from eq. (10) is probably a strong lower limit considering that there is warmer gas in the clump (§3.5), and that the 1.2 mm-derived density (1.4×10^5 cm^-3; Faundez et al., 2004) is lower than that used above. This disparity is typical of massive clusters (e.g. Churchwell, 2002) and is a well-known feature of such regions that models must reproduce. Currently, theories attempt to model this structure using either competitive processes (such as coalescence, e.g. Bonnell et al., 2003) or a scaled-up accretion disk/turbulent core scenario (e.g., McKee & Tan, 2003). BYF73 promises to be a useful test case in this debate but, as suggested by the IR imagery, will require mm-interferometric observations that approach the IR resolution. At this level (1″ or better) we begin to match, at the distance of BYF73, the spatial resolution (0.01 pc) of the simulations of Banerjee & Pudritz (2007). High-resolution maps of the gas velocity field and linewidth will then help to discriminate between the competing theories.
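The Jeans-length comparison in eq. (10) is a one-line evaluation with the parameters already quoted in the text:

```python
# cgs constants
K_B = 1.381e-16
G   = 6.674e-8
M_H = 1.6726e-24
AU  = 1.496e13   # astronomical unit, cm

MU_MOL = 2.30    # mean molecular mass (from the text)
T_KIN  = 30.0    # K, cool dust/gas component (Sec. 3.5)
N_H2   = 3e5     # cm^-3, HCO+ J=1-0 critical density

# Eq. (10): Jeans length in AU
r_jeans = (K_B * T_KIN / (G * (MU_MOL * M_H)**2 * N_H2))**0.5 / AU
```

This gives ∼7900 AU, comfortably above the ∼5000 AU projected stellar separations, which is the disparity discussed above.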
From the current mm data we can say that the line emission pattern and derived velocity field in BYF73 are consistent with the detailed MHD simulations of Banerjee & Pudritz (2007) and the radiative transfer treatment of Zhou et al. (1993) for protostars, as well as with the treatment of McKee & Tan (2003). However, in this case (a) the mass infall rate and the mass & size scales are much larger than in any of these models, (b) there are multiple protostellar objects within the collapse zone, rather than a single, more massive one, and (c) the canonical spectral energy distribution of low-mass Class 0 protostars (André et al., 2000) has the flux dropping to undetectable levels at wavelengths shortward of ∼10 μm, although this is under the assumption of spherical symmetry. In BYF73 there are a number of near-IR sources visible at the centre of the infalling clump, so it is likely that orientation effects play a role in the emergent SED for massive protostars, and/or that the SED evolution is different in the massive protostar case. Again, higher-resolution mm- and FIR-continuum images of the cluster sources will help delineate SED evolution in these massive protostars.
Furthermore, this ongoing star formation is happening within a large-scale (∼1 pc) infall region, and to our knowledge this is the first time that such a coincidence of phenomena has been seen. With so much gas still infalling, it is possible that BYF73 could form many more stars before the supply of material is consumed. Dividing the clump’s mass by the infall rate gives a maximum lifetime ∼6×10^5 yr for the supply of raw material for new stars, although if a protostar massive enough to develop its own HII region ionises the gas and arrests the infall, the cluster’s formation may be complete in much less time. This is very long compared to a dynamical timescale, and tends to support longer-timescale models such as the Tan et al. (2006) “Equilibrium Cluster Formation” model, rather than (e.g.) the rapid star cluster formation models of Elmegreen (2000, 2007). However, the most embedded stars in the IR “absorption nebula” seem to be arranged in a filamentary geometry; being non-symmetric, such distributions tend to support a rapid formation scenario, although this appearance may be affected by the strong extinction in the area.
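The two timescales being compared can be sketched as below. The accretion lifetime uses the clump mass and infall rate from the text; the free-fall time is used here as one convenient dynamical estimate, with a mean mass of 2.8 m_H per H₂ molecule (to include helium) as an assumption not stated in the text.

```python
import math

# cgs constants
G   = 6.674e-8
M_H = 1.6726e-24
YR  = 3.156e7

M_CLUMP = 2.0e4    # Msun (Sec. 3.5)
MDOT    = 3.4e-2   # Msun/yr (eq. 3)

# Time to consume the clump at the current infall rate, yr
t_accrete = M_CLUMP / MDOT

# Free-fall time at the HCO+-thermalising density, as a dynamical estimate
n_h2 = 3e5                  # cm^-3
rho = 2.8 * M_H * n_h2      # g/cm^3, assumed 2.8 m_H per H2 including He
t_ff = math.sqrt(3 * math.pi / (32 * G * rho)) / YR   # yr
```

The supply lifetime comes out near 6×10⁵ yr, roughly an order of magnitude longer than the free-fall estimate, consistent with the preference for longer-timescale cluster-formation models noted above.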
In summary, we claim that the molecular and IR observations of BYF73 indicate the existence of a dense clump undergoing global gravitational infall, similar to the case of NGC 2264C (Peretto et al., 2006). We further suggest that, on the basis of its size, mass, luminosity, rate of infall, and near-IR appearance, BYF73 is in the process of forming a massive protocluster. The global mass infall rate as determined from the Mopra mm observations is very high even for a massive protostar, ∼3.4×10^-2 M⊙ yr^-1 or more. To our knowledge, the upper end of this range would be unprecedented, if confirmed.
## 4 Conclusions
From Mopra and ASTE HCO+ observations, the Galactic source G286.21+0.17 (which we also call BYF73 from the CHaMP survey master list) has been found to be a massive dense molecular clump exhibiting clear signs of gravitational infall. The size and scale of this infall, ∼3.4×10^-2 M⊙ yr^-1 over ∼1 pc, is either a record or close to it, and may indicate the global formation of a massive protocluster. AAT near-IR imaging confirms the existence of unusual spectral signatures and a deeply embedded cluster of stars in the infall zone, as well as an adjacent compact HII region and young star cluster. Higher-resolution mm-wave and FIR/MIR observations of this source are encouraged, since it appears to be an exemplary test case for confronting competing theories of massive star formation.
Users of the Mopra telescope have benefited immensely from the efforts of many people over the last several years, including the talented and dedicated engineers and scientists at the ATNF, and the staff and students of the “Star Formers” group within the School of Physics at the University of New South Wales. Because of these efforts, use of this facility has changed over this period from being a difficult exercise to a real pleasure. We would also like to acknowledge the members of the ASTE team for the operation and ceaseless efforts to improve ASTE. This work was financially supported in part by Grant-in-Aid for Scientific Research (KAKENHI) on Priority Areas from the Ministry of Education, Culture, Sports, Science, and Technology of Japan (MEXT), No. 15071205. PJB gratefully acknowledges support from the School of Physics at the University of Sydney and through NSF grant AST-0645412 at the University of Florida. AMH acknowledges support provided by the Australian Research Council (ARC) in the form of a QEII Fellowship (DP0557850). YM acknowledges financial support by the research promotion scholarship from Nagoya University and research assistantships from the 21st Century COE Program “ORIUM” (The Origin of the Universe and Matter: Physical Elucidation of the Cosmic History) and the Global COE program “Quest for Fundamental Principles in the Universe: from Particles to the Solar System and the Cosmos”, MEXT, Japan. We also thank the referee for numerous helpful suggestions and comments which led to several improvements in the paper, and J. Tan and C. De Vries for additional helpful comments. Facilities: Mopra (MOPS), AAT (IRIS2), ASTE
|
|
# 0.3 Signal processing in processing: sampling and quantization
Fundamentals of sampling, reconstruction, and quantization of 1D (sounds) and 2D (images) signals, especially oriented at the Processing language.
## Sampling
Both sounds and images can be considered as signals, in one or two dimensions, respectively. Sound can be described as a fluctuation of the acoustic pressure in time, while images are spatial distributions of values of luminance or color, the latter being described in its RGB or HSB components. Any signal, in order to be processed by numerical computing devices, has to be reduced to a sequence of discrete samples, and each sample must be represented using a finite number of bits. The first operation is called sampling, and the second operation, applied to the amplitude values, is called quantization.
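To make the second operation concrete, here is a minimal sketch (ours, not from the original module) of uniform quantization in Python; the function name and the [-1, 1] input range are our own choices:

```python
def quantize(x, bits, x_min=-1.0, x_max=1.0):
    """Uniformly quantize x in [x_min, x_max] onto 2**bits levels."""
    levels = 2 ** bits
    step = (x_max - x_min) / levels
    # Clip to a valid level index, then map back to the level's midpoint.
    index = min(levels - 1, max(0, int((x - x_min) / step)))
    return x_min + (index + 0.5) * step

# With 8 bits the quantization error is at most half a step.
step = 2.0 / 2 ** 8
assert abs(quantize(0.3, 8) - 0.3) <= step / 2
```

Each extra bit halves the step size, which is why quantization error is usually quoted as roughly 6 dB per bit.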
## 1-d: sounds
Sampling is, for one-dimensional signals, the operation that transforms a continuous-time signal (such as, for instance, the air pressure fluctuation at the entrance of the ear canal) into a discrete-time signal, that is, a sequence of numbers. The discrete-time signal gives the values of the continuous-time signal read at intervals of $T$ seconds. The reciprocal of the sampling interval is called the sampling rate ${F}_{s}=\frac{1}{T}$ . In this module we do not explain the theory of sampling, but we rather describe its manifestations. For a more extensive yet accessible treatment, we point to the Introduction to Sound Processing . For our purposes, the process of sampling a 1-D signal can be reduced to three facts and a theorem.
• The Fourier Transform of a discrete-time signal is a function (called spectrum ) of the continuous variable $\omega$ , and it is periodic with period $2\pi$ . Given a value of $\omega$ , the Fourier transform gives back a complex number that can be interpreted as magnitude and phase(translation in time) of the sinusoidal component at that frequency.
• Sampling the continuous-time signal $x(t)$ with interval $T$ we get the discrete-time signal $x(n)=x(nT)$ , which is a function of the discrete variable $n$ .
• Sampling a continuous-time signal with sampling rate ${F}_{s}$ produces a discrete-time signal whose frequency spectrum is the periodic replication of the spectrum of the original signal, and the replication period is ${F}_{s}$ . The Fourier variable $\omega$ for functions of a discrete variable is converted into the frequency variable $f$ (in Hertz) by means of $f=\frac{\omega }{2\pi T}$ .
The figure ([link]) shows an example of the frequency spectrum of a signal sampled with sampling rate ${F}_{s}$ . In the example, the continuous-time signal had all and only the frequency components between $-{F}_{b}$ and ${F}_{b}$ . The replicas of the original spectrum are sometimes called images .
Given these facts, we can have an intuitive understanding of the Sampling Theorem, historically attributed to Nyquist and Shannon.
## Sampling theorem
A continuous-time signal $x(t)$ , whose spectral content is limited to frequencies smaller than ${F}_{b}$ (i.e., it is band-limited to ${F}_{b}$ ), can be recovered from its sampled version $x(n)$ if the sampling rate is larger than twice the bandwidth (i.e., if ${F}_{s}> 2{F}_{b}$ ).
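The theorem can be checked numerically. The sketch below (our own illustration, not part of the module) samples a 900 Hz sinusoid at 1000 Hz, which violates the condition, and shows that its samples coincide with those of a 100 Hz alias up to a sign flip:

```python
import math

def sample_sine(freq_hz, fs_hz, n_samples):
    """Samples of a unit sine of frequency freq_hz taken at rate fs_hz."""
    return [math.sin(2 * math.pi * freq_hz * n / fs_hz) for n in range(n_samples)]

fs = 1000.0                        # Fs = 1000 Hz, so the Nyquist limit is 500 Hz
low = sample_sine(100.0, fs, 50)   # respects Fs > 2*Fb
high = sample_sine(900.0, fs, 50)  # violates it: 900 Hz > 500 Hz

# sin(2*pi*0.9*n) = sin(-2*pi*0.1*n), so the 900 Hz samples are exactly the
# negated 100 Hz samples: after sampling, the two tones are indistinguishable.
assert all(abs(h + l) < 1e-9 for h, l in zip(high, low))
```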
|
|
# Just look carefully! 2
Calculus Level 5
A function $$f$$ is such that:
$\large \begin{cases} yf(x) + x^2f(y) = f(xy) \\ \displaystyle \lim_{x \to 0} \frac{f(x)}{x} = 1 \end{cases}$
Prove that $$f$$ is differentiable infinitely many times.
Submit your answer as the value of $$f'(10) - 10 f''(20)$$.
|
|
## Newton maps of complex exponential functions and parabolic surgery
Fundamenta Mathematicae. 2018. Vol. 241. No. 3. P. 265-290.
The paper deals with Newton maps of complex exponential functions and a surgery tool developed by P. Haissinsky. The concept of "postcritically minimal" Newton maps of complex exponential functions is introduced, analogous to postcritically finite Newton maps of polynomials. A dynamics-preserving mapping is constructed between the space of postcritically finite Newton maps of polynomials and the space of postcritically minimal Newton maps of complex exponential functions.
|
|
# Differential form
1. Oct 15, 2007
### daishin
Let M be a smooth manifold. Locally we can choose 1-forms $$\omega^{1}, \omega^{2}, \ldots, \omega^{n}$$ which span $$M^{*}_{q}$$ for each q. Then are there vector fields $$X_{1}, X_{2}, \ldots, X_{n}$$ with $$\omega^{i}(X_{j})=\delta^{i}_{j}$$? Here $$\delta^{i}_{j}$$ is the Kronecker delta.
By vector fields, I meant vector fields on M.
I think there are such vector fields on a small neighborhood B in M (since M* is locally
trivial, we can think of M* restricted to B as B × R^n, and we can find such 1-forms w_1, w_2, ..., w_n which span M* at each p in B). And of course we can find vector fields $$X_{1}, X_{2}, \ldots, X_{n}$$ on B such that
$$\omega^{i}(X_{j})=\delta^{i}_{j}$$.
But I am wondering if we can extend these vector fields to the whole of M.
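(An aside, not part of the original post.) At a single point q this is plain linear algebra: write the components of the ω^i as the rows of a matrix W; the components of the dual vectors X_j are then the columns of W⁻¹. A small pure-Python sketch with an invented 2×2 example:

```python
# Components of omega^1, omega^2 at a point, as the rows of W.
W = [[2.0, 1.0],
     [1.0, 1.0]]

# The dual vectors X_1, X_2 are the columns of the inverse of W
# (2x2 inverse written out by hand; det happens to be 1 here).
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
W_inv = [[ W[1][1] / det, -W[0][1] / det],
         [-W[1][0] / det,  W[0][0] / det]]

def omega(i, j):
    """omega^i applied to X_j: row i of W dotted with column j of W_inv."""
    return sum(W[i][k] * W_inv[k][j] for k in range(2))

# omega^i(X_j) equals the Kronecker delta.
assert [[omega(i, j) for j in range(2)] for i in range(2)] == [[1.0, 0.0], [0.0, 1.0]]
```

The global question is exactly whether this pointwise inverse can be chosen smoothly over all of M, which is what the rest of the thread addresses.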
Last edited: Oct 15, 2007
2. Oct 15, 2007
### timur
You started with 1-forms which were chosen locally. So before answering your latter question, you should think about whether you can choose the 1-forms globally on M.
3. Oct 16, 2007
### daishin
I think we can always find globally defined 1-forms w_1, w_2, ..., w_n on M which, in some small neighborhood B, span M* at each p in B. If not, please correct me.
My question came from the proof of the Frobenius integrability theorem in Spivak, Volume 1.
It is Chapter 7, Theorem 14. He starts the proof with locally defined 1-forms w_1, w_2, ..., w_n. But in the proof he says: let X_1, X_2, ..., X_n be the vector fields with
w_i(X_j) = delta^i_j. Here, I think he is referring to vector fields on M.
4. Oct 16, 2007
### timur
In general such X does not exist, since e.g. on the sphere you cannot construct a nowhere-vanishing vector field. So I think the proof refers to locally defined fields.
|
|
# File formats
In the previous pages, we have pointed out that rlist is designed to deal with non-tabular data, that is, data that does not fit well into a tabular form. To stress the difference between them, we recall the examples we used.
The following table represents a tabular data:
Name Gender Age Major
Ken Male 24 Finance
Ashley Female 25 Statistics
Jennifer Female 23 Computer Science
The table can be easily stored into either text-based file or relational database. The most commonly used text-based file format to store such type of data is CSV which often uses comma (,) to divide columns. In this format, the data can be written in the following form:
Name,Gender,Age,Major
Ken,Male,24,Finance
Ashley,Female,25,Statistics
Jennifer,Female,23,Computer Science
It is obvious that each line represents a record and columns are separated by commas. In R, reading a CSV file is simple: read.csv() handles it easily.
However, when it comes to non-tabular data, the standard CSV format and the reader functions do not handle it as well as they do tabular data. Recall the following table representing a non-tabular dataset:
Name Age Interests Expertise
Ken 24 reading, music, movies R:2, C#:4, Python:3
James 25 sports, music R:3, Java:2, C++:5
Penny 24 movies, reading R:1, C++:4, Python:2
You may try to write a CSV file to represent the data, but the outcome would not be satisfactory: the number of values in the Interests column is not fixed, and the values in the Expertise column also differ in their names.
Alternatively, you may also try to build a relational database to contain the data. The structure of the database, however, would be a bit tricky: more than one table would have to be created, each restricted to one type of structure. To query the data with flexibility, one has to work with multiple tables by joining them.
JSON is a powerful format to represent such flexible data. It certainly has more notations but does not make the representation too complex. The following text is the JSON format of the table above.
[
{
"Name" : "Ken",
"Age" : 24,
"Interests" : [
"reading",
"music",
"movies"
],
"Expertise" : {
"R": 2,
"CSharp": 4,
"Python" : 3
}
},
{
"Name" : "James",
"Age" : 25,
"Interests" : [
"sports",
"music"
],
"Expertise" : {
"R" : 3,
"Java" : 2,
"Cpp" : 5
}
},
{
"Name" : "Penny",
"Age" : 24,
"Interests" : [
"movies",
"reading"
],
"Expertise" : {
"R" : 1,
"Cpp" : 4,
"Python" : 2
}
}
]
You may find that the JSON text above fully replicates the information in the table but using notations such as [], {} and "key" : value. Here is a simplified introduction to these notations:
• [] creates a unnamed node array.
• {} creates a named node list.
• "key" : value creates a key-value pair where value can be a number, a string, a [] array, or a {} list.
These notations allow the use of nested lists or arrays, just as list objects in R can be nested. This similarity bridges the use of JSON and R: the rlist package imports the jsonlite package to read/write JSON data.
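The same nesting appears in any language with lists and maps. As a quick illustration outside of R (this Python snippet is ours, not part of the rlist documentation), the standard json module parses one record of the data above into nested structures:

```python
import json

text = """
[
  {"Name": "Ken", "Age": 24,
   "Interests": ["reading", "music", "movies"],
   "Expertise": {"R": 2, "CSharp": 4, "Python": 3}}
]
"""
records = json.loads(text)

# [] became a list, {} became a dict, and scalar values kept their types.
assert records[0]["Name"] == "Ken"
assert records[0]["Expertise"]["Python"] == 3
assert "music" in records[0]["Interests"]
```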
Another file format that is also widely used is YAML. The following text is a YAML representation of the same non-tabular data:
- Name: Ken
Age: 24
Interests:
- music
- movies
Expertise:
R: 2
CSharp: 4
Python: 3
- Name: James
Age: 25
Interests:
- sports
- music
Expertise:
R: 3
Java: 2
Cpp: 5
- Name: Penny
Age: 24
Interests:
- movies
- reading
Expertise:
R: 1
Cpp: 4
Python: 2
|
|
# Zero divided by zero
1. Dec 23, 2006
### mubashirmansoor
Some days ago I read a fallacious algebraic argument which was quite interesting and made me think about such cases. Last night I came up with a technique to make sense of all those fallacies which involve dividing by zero. The technique is as follows:
lets say:
[tex]a/b=A[/tex]
[tex]a=bA[/tex]
If we take 'b' as zero, "a = 0" as well and 'A' can be anything.
As a result: [tex]0/0=A[/tex] where 'A' can be anything.
Concludes to two points:
1) Nothing other than zero is divisible by zero; it's only zero itself.
2) Zero divided by zero can be anything.
Whats the use of these points?
________________________________
[tex]x^2-x^2=x^2-x^2[/tex]
[tex](x-x)(x+x)=x(x-x)[/tex]
[tex]((x-x)(x+x))/(x-x)=x(x-x)/(x-x)[/tex]
which results in 1 = 2.
Using the points above and repeating the third step of the fallacy, we have:
[tex](0/0)(2x)=(0/0)(x)[/tex]
which means:
[tex]v2x=wx[/tex]
(where v is A#1 & w is A#2)
As we are to keep the balance between the right- and left-hand sides of the equation, the relation between v & w is obvious:
[tex]w=2v[/tex]
By substituting:
[tex]v2x=2vx[/tex]
[tex](v2x)/(2v)=(2vx)/(2v)[/tex]
which means x = x, and it is no longer a fallacy.
____________________________________________
Even if we look at it from the other point of view: multiplication is the inverse process of division, and anything multiplied by zero is zero,
so logically zero divided by zero can be anything.
I'd be glad for further comments. I know it's forbidden to divide something by zero, but it's fun.
Why can't we do the process mentioned above?
Last edited: Dec 23, 2006
2. Dec 23, 2006
### matt grime
Because it's nonsense?
There are two superficial mistakes I can see.
1. Why would anyone want an algebraic operation that resulted in 'anything' as the outcome? This is precisely the reason why it is undefined in any extension of the reals.
2. You have something backwards. In a suitable extension of the reals we can divide anything by zero except zero.
3. Dec 23, 2006
### mubashirmansoor
I really would like to know the problem with my statement, & I couldn't really get the 2nd point of yours;
how can we divide something other than zero by zero?
1)which number when multiplied by zero gives us a real number except zero??
2)which number when multiplied by zero gives us zero?
Well I think the logical outcome of these two questions leads to what I had thought...
I'm sure that there is something behind this way of thinking which makes it all wrong, but where is it?
One might like to have such an operation which results in anything, to give a sense to the known fallacious algebraic equations.
I'll be really thankful for further responses.
4. Dec 23, 2006
You should look at the field axioms of the real numbers, and at the algebraic properties which can be derived from these axioms.
5. Dec 23, 2006
### HallsofIvy
Staff Emeritus
You are correct that "zero divided by zero can be anything", but since "zero divided by zero" is not one specific value, it is incorrect to say that "zero can be divided by zero" at all. If you accept "anything" as the result of the calculation, you have no right to say "v2x = wx, so w = 2v". That's only saying "in order to get a specific result, we have to force 0/0 to be a specific thing, which we have no right to do".
6. Dec 23, 2006
### matt grime
If the operation may result in *any* answer, then how do you know which is the correct one in any given instance?
7. Dec 23, 2006
### Hurkyl
Staff Emeritus
Recall that the thing that makes a function a function is that for any particular set of inputs, there is exactly one output.
If one so desired, one could define a ternary relation _ @ _ = _ defined by
x @ y = z if and only if yz = x
but one cannot interpret this as defining @ as a function on pairs of real numbers because, as you know, 0@0=x for every x.
Generally one would not use this infix notation for a relation like this, precisely because it looks like @ is being used as a function.
(Of course, if we restricted y to be nonzero, then this does define a function. In fact, @ would be the same as / in that case)
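This relation view can be made concrete (a sketch of our own, not from the thread): over a small finite set of integers, collect every z with y*z == x and see how many candidate answers each pair has.

```python
def at(x, y, domain):
    """All z in domain satisfying y*z == x, i.e. the candidates for x @ y."""
    return {z for z in domain if y * z == x}

nums = range(-5, 6)
assert at(6, 3, nums) == {2}        # ordinary division: exactly one answer
assert at(1, 0, nums) == set()      # 1/0: no answer at all
assert at(0, 0, nums) == set(nums)  # 0/0: every z works
```

Only the restriction to nonzero y makes the answer set a single value, i.e. makes @ a function.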
|
|
#### Revision Model Question Paper 2
11th Standard
Reg.No. :
Chemistry
Time : 03:00:00 Hrs
Total Marks : 70
Part I
Choose the most suitable answer from the given four alternatives and write the option code with the corresponding answer.
15 x 1 = 15
1. The oxidation number of fluorine in all its compounds is equal to
(a)
-1
(b)
+1
(c)
-2
(d)
+2
2. Which of the following does not represent the mathematical expression for the Heisenberg uncertainty principle?
(a)
$\triangle x.\triangle p\ge \frac { h }{ 4\pi }$
(b)
$\triangle x.\triangle v\ge \frac { h }{ 4\pi m }$
(c)
$\triangle E.\triangle t\ge \frac { h }{ 4\pi }$
(d)
$\triangle E.\triangle x\ge \frac { h }{ 4\pi }$
3. Match the list-I and list-II using the correct code given below the list.
List-I: A. Jewels, B. Bolts and cot, C. Table salt, D. Utensils
List-II: 1. Sodium chloride, 2. Copper, 3. Gold, 4. Iron
(a)
A-3, B-4, C-1, D-2
(b)
A-4, B-1, C-3, D-2
(c)
A-1, B-4, C-2, D-3
(d)
A-2, B-3, C-4, D-1
4. Non-stoichiometric hydrides are formed by
(a)
(b)
carbon, nickel
(c)
manganese, lithium
(d)
nitrogen, chlorine
5. Consider the following statements.
(i) Alkali metals exhibit high chemical reactivity due to their low ionization energy.
(ii) Lithium is a very soft metal and even it can be cut with a knife.
(iii) Francium is a radioactive element in group 1 elements
Which of the above statements is/are not correct?
(a)
(i) only
(b)
(ii) only
(c)
(i) and (iii)
(d)
(i), (ii) and (iii)
6. Pressure of a gas is equal to __________.
(a)
$\frac{F}{a}$
(b)
F x a
(c)
$\frac{a}{F}$
(d)
F - a
7. The enthalpies of formation of Al2O3 and Cr2O3 are -1596 kJ and -1134 kJ, respectively. ΔH for the reaction 2Al + Cr2O3 ⟶ 2Cr + Al2O3 is
(a)
- 1365 kJ
(b)
2730 kJ
(c)
- 2730 kJ
(d)
- 462 kJ
8. [Co(H2O)6]2+ (aq) (pink) + 4Cl- (aq) ⇌ [CoCl4]2- (aq) (blue) + 6H2O (l)
In the above reaction at equilibrium, the reaction mixture is blue in colour at room temperature. On cooling this mixture, it becomes pink in colour. On the basis of this information, which one of the following is true?
(a)
ΔH > 0 for the forward reaction
(b)
ΔH = 0 for the reverse reaction
(c)
ΔH < 0 for the forward reaction
(d)
Sign of the ΔH cannot be predicted based on this information
9. The KH for the solution of oxygen dissolved in water is 4 × 10^4 atm at a given temperature. If the partial pressure of oxygen in air is 0.4 atm, the mole fraction of oxygen in solution is
(a)
4.6 x 10^3
(b)
1.6 x 10^4
(c)
1 x 10^-5
(d)
1 x 10^5
10. Which of the following has see saw shape?
(a)
PCl5
(b)
IO2F-2
(c)
SOF4
(d)
ClO-3
11. The isomer of ethanol is
(a)
acetaldehyde
(b)
dimethylether
(c)
acetone
(d)
methyl carbinol
12. Which one of the following has least acidic character?
(a)
HCOOH
(b)
CH3COOH
(c)
CH2ClCOOH
(d)
CCl3COOH
13. Molecular formula of benzene is ________.
(a)
C6H6
(b)
C6H5
(c)
C7H8
(d)
CH4
14. Identify the correct order of boiling point of halo alkanes?
(a)
CH3-CH2-CH2-CH2Cl > (CH3)3C-Cl > CH3-CH2-$\underset { \overset { | }{ Cl } }{ CH }$-CH3
(b)
CH3-CH2-CH2-CH2Cl > CH3-CH2-$\underset { \overset { | }{ Cl } }{ CH }$-CH3 < (CH3)3C-Cl
(c)
(d)
15. Assertion (A): Excessive use of chlorinated pesticide causes soil and water pollution.
Reason (R) : Such pesticides are non-biodegradable.
(a)
Both (A) and R are correct and (R) is the correct explanation of (A)
(b)
Both (A) and R are correct and (R) is not the correct explanation of (A)
(c)
Both (A) and R are not correct
(d)
(A) is correct but( R) is not correct
Part II
Answer any 6 questions. Question no. 24 is compulsory.
6 x 2 = 12
17. Calculate the equivalent mass of the following - Sodium Hydroxide
18. Why is the stabilisation of a half-filled d-orbital more pronounced than that of a half-filled p-orbital?
19. Explain what is meant by efflorescence.
20. Why are the airplane cabins artificially pressurized?
21. The bond dissociation energies of gaseous chlorine, hydrogen, and hydrogen chloride are 104, 58, and 103 kcal mol-1 respectively. Calculate the enthalpy of formation of HCl(g). Predict in which of the following entropy increases/decreases: 2NaHCO3(s) $\rightarrow$ Na2CO3(s) + CO2(g) + H2O(g)
22. For a given reaction at a particular temperature, the equilibrium constant has constant value. Is the value of Q also constant? Explain.
23. Indicate the $\sigma$ and $\pi$ bonds in the following molecules.
C6H6, CH2Cl2, CH3NO2, CH2=C=CH2
24. 0.24g of an organic compound gave 0.287 g of silver chloride in the carius method. Calculate the percentage of chlorine in the compound.
25. Discuss the aromatic nucleophilic substitutions reaction of chlorobenzene.
Part III
Answer any 6 questions. Question no.33 is compulsory.
6 x 3 = 18
27. Balance the following reaction:
S2O32- + I2 $\rightarrow$ S4O62- + I-
28. State and explain pauli's exclusion principle
29. Justify that the fifth period of the periodic table should have 18 elements on the basis of quantum numbers.
30. Give a brief account of covalent hydrides.
31. Why is sodium hydroxide much more soluble in water than sodium chloride?
32. Explain graphical representation of Gay Lussac's law.
33. The equilibrium constant of a reaction is 10, what will be the sign of ΔG? Will this reaction be spontaneous?
34. 0.24 g of a gas dissolves in 1 L of water at 1.5 atm pressure. Calculate the amount of dissolved gas when the pressure is raised to 6.0 atm at constant temperature.
35. Carry over the following reaction mechanisms.
(i) Bromination of alkene
(ii) Addition of HCN to CH3CHO
(iii) Formation of alkyl bromide with benzoyl peroxide as radical initiator.
Part IV
5 x 5 = 25
1. Explain about sp hybridisation with suitable example.
2. Mention the standards prescribed by BIS for quality of drinking water
1. An isotope of hydrogen (A) reacts with a diatomic molecule of the element which occupies group number 16 and period number 2 to give compound (B), which is used as a moderator in nuclear reactors. (A) adds on to a compound (C), which has the molecular formula C3H6, to give (D). Identify A, B, C and D.
2. Starting from methyl magnesium iodide, how would you prepare
(i) Ethyl methyl ether
(ii) methyl cyanide
(iii) methane
1. Calculate the heat of combustion of glucose and its calorific value from the following data:
(i) C(graphite)+O2(g) ➝ CO2(g); ΔH= -395 KJ
(ii) H2(g)+$\frac{1}{2}$O2 ➝ H2O(l); ΔH= -269.4 KJ
(iii) C+6H2(g)+3O2(g) ➝ C6H12O6(s); ΔH= -1169.8 KJ
2. Give IUPAC names for the following compounds
1) CH3 – CH = CH – CH = CH – C ≡ C – CH3
2)
3) (CH3)3 C – C ≡ C – CH (CH3)2
4) ethyl isopropyl acetylene
5) CH ≡ C – C ≡ C – C ≡ CH
1. If an electron is moving with a velocity 600 m s^-1 which is accurate up to 0.005%, then calculate the uncertainty in its position. (h = 6.63 x 10^-34 Js, mass of electron = 9.1 x 10^-31 kg)
2. An alkali metal (A) belongs to period number II and group number I react with oxygen to form (B). (A) reacts with water to form (C) with liberation of hydrogen compound (D).Identify A, B, C and D.
1. Balance the following equations by oxidation number method.
K2Cr2O7 + HI ⟶ KI + CrI3 + H2O + I2
2. Calculate the effective nuclear charge experienced by the 4s electron in potassium atom.
|
|
# Invariants of twist-wise flow equivalence
• Flow equivalence of irreducible nontrivial square nonnegative integer matrices is completely determined by two computable invariants, the Parry-Sullivan number and the Bowen-Franks group. Twist-wise flow equivalence is a natural generalization that takes account of twisting in the local stable manifold of the orbits of a flow. Two new invariants in this category are established.
Mathematics Subject Classification: Primary: 58F25, 58F13; Secondary: 58F20, 58F03.
|
|
# Math Help - [SOLVED] Proving trig identities
1. ## [SOLVED] Proving trig identities
Prove the following trig identities starting with the left side.
1) sin^2x + cos^2x = 1
2) (cosx - sinx)^2 + (cosx + sinx)^2 = 2
3) cotx + tanx = secx*cscx
Thank you for any help!
2. Originally Posted by live_laugh_luv27
Prove the following trig identities starting with the left side.
1) sin^2x + cos^2x = 1
2) (cosx - sinx)^2 + (cosx + sinx)^2 = 2
3) cotx + tanx = secx*cscx
Thank you for any help!
1. This seems very weird, it's pretty much the basic proof.
$sin(x) = opp/hyp$
$cos(x) = adj/hyp$
Therefore if we square both:
$sin^2(x) = (opp)^2/(hyp)^2$
$cos^2(x) = (adj)^2/(hyp)^2$
$sin^2(x) + cos^2(x) = \frac{(opp)^2 + (adj)^2}{(hyp)^2}$
By the definition of a right-angled triangle, Pythagoras gives $(opp)^2 + (adj)^2 = (hyp)^2$, so the right side reduces to 1.
----------------
2. Expand to give:
$cos^2(x) - 2sin(x)cos(x) + sin^2(x) + cos^2(x) + 2sin(x)cos(x) + sin^2(x)$
Remember what $cos^2(x)+sin^2(x)$ is equal to
------------------
3. Rewrite in terms of sin and cos:
$\frac{cos(x)}{sin(x)} + \frac{sin(x)}{cos(x)}$
Give them the same denominator by cross multiplying
$\frac{cos^2(x) + sin^2(x)}{sin(x)cos(x)}$
and cancel to give the rhs
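As a sanity check (not part of the proofs themselves), the three identities can be spot-checked numerically; a sketch in Java:

```java
// Numerically spot-check the three identities at several angles.
public class TrigIdentityCheck {
    public static void main(String[] args) {
        boolean ok = true;
        for (double x = 0.1; x < 3.0; x += 0.37) {
            double s = Math.sin(x), c = Math.cos(x);
            ok &= Math.abs(s * s + c * c - 1.0) < 1e-12;                          // 1)
            ok &= Math.abs((c - s) * (c - s) + (c + s) * (c + s) - 2.0) < 1e-12;  // 2)
            ok &= Math.abs((c / s + s / c) - 1.0 / (s * c)) < 1e-12;              // 3)
        }
        System.out.println(ok); // true
    }
}
```

A numerical check like this can catch a typo in an identity before you spend time on the algebra.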
3. I know...here is the exact problem.
What is wrong with the following proof?
sin^2x + cos^2x = 1,
sin^2x + cos^2x = sin^2x + (1 - sin^2x),
sin^2x + (1 - sin^2x) = sin^2x - sin^2x + 1,
= 1.
Is there anything wrong with this?
4. Originally Posted by live_laugh_luv27
I know...here is the exact problem.
What is wrong with the following proof?
sin^2x + cos^2x = 1,
sin^2x + cos^2x = sin^2x + (1 - sin^2x),
sin^2x + (1 - sin^2x) = sin^2x - sin^2x + 1,
= 1.
Is there anything wrong with this?
Yeah, you're using the identity you're trying to prove as part of the proof (the bit in red). This shouldn't be done but I can't remember the name for it
5. Originally Posted by e^(i*pi)
Yeah, you're using the identity you're trying to prove as part of the proof (the bit in red). This shouldn't be done but I can't remember the name for it
Oh, ok. So you can't use that part, because then you would be assuming you already proved the identity, which you didn't.
|
|
### Polycircles
Show that for any triangle it is always possible to construct 3 touching circles with centres at the vertices. Is it possible to construct touching circles centred at the vertices of any polygon?
### Nim
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The loser is the player who takes the last counter.
### Loopy
Investigate sequences given by $a_n = \frac{1+a_{n-1}}{a_{n-2}}$ for different choices of the first two terms. Make a conjecture about the behaviour of these sequences. Can you prove your conjecture?
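A quick numerical experiment (a hedged sketch; the starting values 2 and 7 are arbitrary) suggests the conjecture that, for generic positive starting terms, the sequence repeats with period 5:

```java
// Iterate a_n = (1 + a_{n-1}) / a_{n-2} and compare terms five apart.
public class Loopy {
    public static void main(String[] args) {
        double[] a = new double[12];
        a[0] = 2.0; a[1] = 7.0; // arbitrary positive starting terms
        for (int n = 2; n < 12; n++) {
            a[n] = (1 + a[n - 1]) / a[n - 2];
        }
        System.out.println(Math.abs(a[5] - a[0]) < 1e-12
                && Math.abs(a[6] - a[1]) < 1e-12); // true: period 5
    }
}
```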
# Multiplication Arithmagons
### Why do this problem?
This problem offers students the opportunity to explore numerical relationships algebraically, and use their insights to make generalisations that can then be proved.
Relating to this month's theme, think of the process of putting numbers in the vertices and then calculating the edge numbers as an action. Is it possible to undo that action uniquely, that is, to 'solve' the arithmagon?
### Possible approach
This problem could follow on from work on Arithmagons.
If a computer room is available, students could use the interactivity to explore multiplication arithmagons and come up with a strategy for deducing the vertex numbers from the edge numbers.
Alternatively, students could create their own multiplication arithmagons and then give their partner the edge numbers to see if they can deduce the vertex numbers.
Start with vertex numbers in the range 1-12, then move on to 20-100, and finally simple fractions or decimals.
Once students have had time to explore a range of different arithmagons, bring the class together to discuss the strategies they have found to work out the vertex numbers.
"Can you see a relationship between the product of the three edge numbers and the product of the three vertex numbers?"
If students haven't given this any thought, give them time to try a few examples, and then encourage them to use algebra to explain any generalisations they make.
When students have devised an efficient method for solving any multiplication arithmagon, return to the more challenging arithmagons that may have taken them some time to solve before, to show the power of general thinking in solving problems.
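The edge-vertex relationship students discover can be sketched as follows (a hypothetical helper, not from the original materials): writing the edge numbers as the products $ab$, $bc$, $ca$ of the vertex numbers, the product of all three edges is $(abc)^2$, so each vertex can be recovered directly.

```java
// Recover the vertex numbers of a multiplication arithmagon from its edges.
// Edges: ab = a*b, bc = b*c, ca = c*a. Since (ab)(bc)(ca) = (abc)^2,
// a = sqrt(ab * ca / bc), and then b = ab / a, c = ca / a.
public class Arithmagon {
    static double[] solve(double ab, double bc, double ca) {
        double a = Math.sqrt(ab * ca / bc);
        return new double[] {a, ab / a, ca / a};
    }
    public static void main(String[] args) {
        double[] v = solve(6, 12, 8);                        // vertices 2, 3, 4
        System.out.printf("%.0f %.0f %.0f%n", v[0], v[1], v[2]); // 2 3 4
    }
}
```

The same square-root formula explains the key question below: the product of the edge numbers must be a perfect square of a product of whole numbers for the vertices to be whole.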
Finally, the insights offered by algebraic thinking and general methods can be used to tackle these questions:
• What must be true about the edge numbers for the vertex numbers to be whole numbers?
• How does the strategy for finding a vertex number given the edges on an addition arithmagon relate to the strategy for a multiplication arithmagon?
• What happens to the numbers at the vertices if you double (or treble, or quadruple...) one or more of the numbers on the edges?
• Can you create a multiplication arithmagon with fractions at some or all of the vertices and whole numbers on the edges?
### Key questions
Is it always possible to find numbers to go at the vertices given any three numbers on the edges?
What is the relationship between the product of the edge numbers and the product of the vertex numbers?
### Possible extension
For solving the simpler multiplication arithmagons, finding the factors of each number is a useful method. Why is there no analogous method for addition Arithmagons?
Can students create a multiplication arithmagon where the numbers at the vertices are all irrational but the numbers on the edges are all rational?
What about where just one or two numbers at the vertices are irrational but the numbers on the edges are rational?
The stage 5 problem Irrational Arithmagons takes some of these ideas further.
### Possible support
Begin by spending some time looking closely at the structure of addition Arithmagons.
|
|
# Lamellae
| Name | Description | Category | Upload Date | Author | Score |
|---|---|---|---|---|---|
| Lamellar Stack Caille | Provides the scattering intensity, $I(q) = P(q) S(q)$, for a lamellar phase where a random distribution in solution is assumed. Here a Caille $S(q)$ is used for the lamellar stacks. | Lamellae | 07 Sep 2017 | sasview | 0 |
| Lamellar | The scattering intensity $I(q)$ for dilute, randomly oriented, "infinitely large" sheets or lamellae. Polydispersity in the bilayer thickness can be applied from the GUI. | Lamellae | 07 Sep 2017 | sasview | 0 |
| Lamellar Hg | Provides the scattering intensity, $I(q)$, for a lyotropic lamellar phase where a random distribution in solution is assumed. | Lamellae | 07 Sep 2017 | sasview | 0 |
| Lamellar Hg Stack Caille | Provides the scattering intensity, $I(q) = P(q)S(q)$, for a lamellar phase where a random distribution in solution is assumed. | Lamellae | 07 Sep 2017 | sasview | 0 |
| Lamellar Stack Paracrystal | Calculates the scattering from a stack of repeating lamellar structures (infinite in lateral dimension). | Lamellae | 07 Sep 2017 | sasview | 0 |
|
|
# Tag Info
8
There are two that I know of that are pretty simple. I'll first start with one that requires a "Trusted Initializer" where we assume that there is a party Ted which is trusted by both Alice and Bob and only needs to be present for the initialization stage. This is an extension of a quantum protocol and was proposed by Rivest in Section 7. Alice holds $m_0,...
7
The other answers are good but I thought I would systemize the differences with a single example. Say Bob has a database with 10 entries of the form {name, salary} and Alice would like to query it. With PIR, Alice can retrieve any entry or entries of her choosing (say the 8th entry) without Bob learning which one. The trivial PIR is Alice just retrieves ...
7
The term "stand alone" in secure computation typically refers to the case of a protocol being run once, and not to assumptions. In any case, what I assume you are really asking is what assumptions are needed for OT. You are indeed correct that OT is built from asymmetric assumptions, and this is actually inherent for black-box constructions. This was studied ...
6
Yes. The easiest way is if $K$ is an RSA private key, and Bob has the public key. Then, here's how it works; we'll call the ciphertext that Bob has $C$: Bob selects a random number $r$, and computes both $C \cdot r^e \bmod N$ and $r^{-1} \bmod N$ (where $e$ and $N$ are the public exponent and the modulus from the public key). Bob sends $C \cdot r^e \bmod N$ ...
5
Yes, if you take an instance out of the function family (e.g. $F_{K_1}$), then the evaluation of this function at $x$ always yields the same result. You can think of it like that: If you fix a key $K$, then your PRF is basically a look-up-table. For every possible input $x \in \{0,1,\cdots,2^m-1\}$ there is an entry in the look-up-table for the output $F_{...
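The RSA blinding trick described in one of the answers above can be sketched with `java.math.BigInteger` (toy, insecure key sizes, for illustration only):

```java
import java.math.BigInteger;

// Bob re-randomizes a ciphertext C as C * r^e mod N, has it decrypted,
// then unblinds the result with r^{-1} mod N.
public class RsaBlinding {
    public static void main(String[] args) {
        BigInteger N = BigInteger.valueOf(61 * 53);          // 3233
        BigInteger phi = BigInteger.valueOf(60 * 52);        // 3120
        BigInteger e = BigInteger.valueOf(17);
        BigInteger d = e.modInverse(phi);                    // private exponent
        BigInteger m = BigInteger.valueOf(42);
        BigInteger C = m.modPow(e, N);                       // ciphertext Bob holds
        BigInteger r = BigInteger.valueOf(7);                // blinding factor
        BigInteger blinded = C.multiply(r.modPow(e, N)).mod(N); // C * r^e mod N
        BigInteger decrypted = blinded.modPow(d, N);            // = m * r mod N
        BigInteger recovered = decrypted.multiply(r.modInverse(N)).mod(N);
        System.out.println(recovered); // 42
    }
}
```

The decryptor sees only $m \cdot r \bmod N$, which is uniformly distributed when $r$ is random, so it learns nothing about $m$.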
5
What you are seeking is a special case of secure multiparty computation, namely secure function evaluation, also called secure two-party computation. However, general solutions to this problem require interaction, meaning that the parties performing the computation need to exchange more than two messages. You write: To compute some arbitrary function ...
5
Here is a concrete example of how the receiver could extract information about the senders input: Assume the circuit to be evaluated is the simple circuit computing $(x \oplus (y \wedge z)) = w$, where $x, y$ is the input of the sender and $z$ the input of the receiver. Note, that $w$ and $z$ alone does not reveal the value of $y$ (you can write down the ...
5
The problem is because sender has provided the receiver with a garbled circuit in which the sender's inputs are hard coded (or has provided keys for those inputs, which is morally the same). If the receiver has both keys for each input wire then it is trivial to narrow down the possible values of the sender's input. Consider a concrete example, the ...
5
They could use 1 out of 2 oblivious transfer. Alice offers the messages $0$ and $a$ and Bob uses $b$ as his choice bit (I.e., choosing the first message if $b = 0$ and the second if $b = 1$.). It should be easy to see that Bob now receives $a \land b$ (if in doubt write down the truth-table). Now Bob can send the result to Alice (or they can do the protocol ...
5
You can take a look at LibOT, which is a C++ implementation of several OT extension protocols. In the Readme you can find a list with many base and extension Oblivious Transfer protocols. A protocol that people use a lot is the Simplest OT (although it was announced that the security proof has a bug by one of the authors at the TPMPC2018 workshop). ...
5
As in the linked question, what you are missing is that the simulated view for a given input pair must be indistinguishable from the real view for the same input pair. So if $A$'s input $s$ is $0$, then the real view of $B$ will be $0$ with probability $1$. On the other hand, if your simulator just chooses a uniform bit, the simulated view of $B$ will ...
4
Not a real answer, but some hints: Single DB PIR schemes (ones that don't need several non-colluding DB) have had serious efficiency problems for a long time. See paper 'on the computational practicality of private information retrieval' by Sion and Carbunar arguing that all schemes at that time (2007) were less efficient than downloading the whole DB (most ...
4
Approach 1 The simplest way of doing this is for the receiver, with choice $j \in \{1,\dots,n\}$, to input $1$ in the $j$-th 1-out-of-2 OT and $0$ elsewhere. The sender, with input $(x_1, \dots, x_n)$, inputs $(0,x_i)$ in the $i$-th OT. Approach 2 An alternative protocol (that just came out of a discussion with a colleague, and seems to be actively secure)...
4
Recall the ElGamal encryption scheme: The secret key is some random $r \in \mathbb{Z}_q$, the public key is $h := g^r$ , together with the group order $q$ and the generator $g$ of the group $\mathcal{G}$. To encrypt a message $m \in \mathcal{G}$, one chooses a random $s \in \mathbb{Z}_q$ and computes the ciphertext $(c_1, c_2) := (g^s, m \cdot g^{rs})$. To ...
4
More generally, any encryption that is commutative can be used because then: $$(D_k \circ D_K \circ E_k \circ E_K)(m) = m$$ I.e. Bob can encrypt the ciphertext $E_K(m)$ with a new key $k$, then gives that to Alice for decoding with $K$ and finally decodes it himself with $k$. Stream ciphers are commutative, as is exponentiation modulo $n$ (used in RSA) ...
4
Are there any Oblivious Transfer (OT) protocols that don’t rely on asymmetrical encryption, public-key encryption or key-exchange? Surprisingly, there are indeed OT protocols which don't rely on public-key encryption. In Precomputing Oblivious Transfer, Beaver showed that if Alice and Bob are each given some correlated randomness by a trusted third party ...
4
It is impossible to achieve (fully) information theoretic oblivious transfer (OT), since OT is complete (and so can compute all functions). Since many (most) functions cannot be securely computed information theoretically with two parties, this means that it's impossible. Having said that, we do have OT protocols that provide information-theoretic security ...
3
The problem is known in the literature as private function evaluation (PFE). A sender has input (a function) $f$; a receiver has input $x$, and only the receiver learns $f(x)$. If you are willing to leak the topology of a circuit that computes $f$ (but not the identity of the gates), then using classical garbled circuits / Yao's protocol will work. These ...
3
The sender chooses log n pairs of secret keys (say, for encryption). Then, each number between 1 and n is naturally associated with a subset of exactly log n keys. The protocol then works by running log n 1-out-of-2 OTs where the receiver asks for the keys that are associated with its input (number between 1 and n). Finally, the sender encrypts each of the n ...
3
The simplest way to do this would be to have the sender randomly shuffle the elements. The receiver chooses a random element to request. That way the receiver has no idea which of the original (before the shuffle) elements he got.
3
OT is typically not used as an application in its own right. In the context of access control, OT limits the number of messages received by B but not which messages. I don't know of any real applications for this (you could talk about a subscription where B has purchased the right to read any $k$ articles, but this is pretty artificial in my opinion). ...
3
You can use Oblivious transfer protocol for the answers: https://en.wikipedia.org/wiki/Oblivious_transfer Here is an example with only 2 answers ($m0$ and $m1$) and uses RSA ($e,d,N$) : In your case Alice would have to send $x_0 \ldots x_9$ and Bob would have to pick $b \in \{0,\ldots,9\}$ where $b$ is the number of his question. The operation $m + k$ can ...
3
There's a new really simple OT protocol based on DH. It's even practical. Watch this video. For the paper and source code, go here.
3
In differential privacy the concern is to protect the privacy of a single row of the database. Informally, the DP concept says that everything that can be learned from the database could be learned without access to that row. In a more technical sense, a mechanism respects this property if the distribution of the answers is almost identical (in a very strict ...
3
Post-quantum oblivious transfer protocols are possible. If you base the security of the OT in a post-quantum assumption, this should give you an OT conjectured to be robust to quantum attackers. Besides the already mentioned OT based on supersingular isogeny (in comments), I can give you some other examples: Code-based: https://eprint.iacr.org/2008/138.pdf ,...
3
I worry that the first problem is harder than your instructor suspects. We had to work a little hard to get a multi party PSI protocol based on efficient OT in our paper Practical Multi-party Private Set Intersection from Symmetric-Key Techniques, by Vladimir Kolesnikov, Naor Matania, Benny Pinkas, Mike Rosulek, Ni Trieu If I remember correctly, we may ...
3
The short answer is: the algorithm that is trying to distinguish real from ideal interaction already knows the "correct" inputs. So it can easily distinguish in this case. More precisely, let's take the security definition from Hazay-Lindell (p21): $$\{ S_2(1^n, y, f(x,y) \}_{x,y,n} \overset{c}\equiv \{ \textsf{view}_2^\pi(x,y,n) \}_{x,y,n}$$ The ...
2
There is a slight distinction between PIR and OT. From Wikipedia: PIR is a weaker version of 1-out-of-n oblivious transfer, where it is also required that the user should not get information about other database items. In other words, OT is stronger in that the receiver only gets what is requested. Differential privacy is new to me, so I'll read up on ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
|
# union of two subsets
by EV33
Tags: subsets, union
P: 197
1. The problem statement, all variables and given/known data
What does it mean to have a union of two subsets? Could someone provide me with an example? Thank you.
Sci Advisor HW Helper Thanks PF Gold P: 26,105 Hi EV33! It's the subset consisting of everything in either subset. For example, the union of red cars and white cars is all cars which are red or white. And the union of red cars and fast cars is all cars which are red or fast (or both).
P: 197 What does this mean geometrically though? I don't see how one subset could be in another subset if they are independent of each other.
P: 352
## union of two subsets
Can you give a little more context? The words "geometrically" and "independent" suggest that you may be thinking of something else.
Mentor P: 20,415 If the two subsets have no members in common, their intersection will be empty. For example, the union of O = {1, 3, 5, 7, ...} and E = {0, 2, 4, 6, 8, ...} is the set {0, 1, 2, 3, 4, ...}. They have no members in common. If A = {0, 4, 8, 12, ...}, A U E = E. In this case, set A is a subset of E, so every member of A is automatically a member of E, but not vice versa; there are members of E that aren't also members of A. Because A is a subset of E, their intersection is not empty.
P: 197
Thank you. I think I know what you mean now. But just to make sure... do I have the correct idea?
1. The problem statement, all variables and given/known data
So here is my actual problem. Let U and V be the subspaces of R^3 defined by U = {x : a^(T)x = 0} and V = {x : b^(T)x = 0}, where a = (1, 1, 0)^T and b = (0, 1, -1)^T. Demonstrate that the union of U and V is not a subspace of R^3.
2. Relevant equations
To be a subspace...
1. it needs to contain the zero vector;
2. x + y is in W whenever x and y are in W;
3. ax is in W whenever x is in W and a is any scalar.
3. The attempt at a solution
1. They both contain the zero vector because x = 0 is a solution to both a^(T)x = 0 and b^(T)x = 0.
2. An arbitrary vector in U = {x : a^(T)x = 0} would be any vector with zeros in the first two rows, and an arbitrary vector in V = {x : b^(T)x = 0} would be any vector with zeros in the bottom two rows. Because U and V are unioned, I can choose one vector from each of U and V, and for the union to be a subspace their sum would have to be in it. Take U = (0, 0, S)^T and V = (T, 0, 0)^T. If you add these two vectors together you get (T, 0, S)^T, which is in neither U nor V; therefore the union of U and V is not a subspace.
P: 352 This is essentially correct. To be concrete, you could exhibit specific elements $$u \in U$$ and $$v \in V$$ such that $$u + v \notin U \cup V$$.
|
|
I have recently come across a user (who I shall not name) who has posted $150+$ answers and has accumulated a reasonable amount of reputation, but who uses no LaTeX / MathJax formatting at all in any of these answers. A quick look at this user's top 15 answers shows:
• All $15$ answers were originally posted with no formatting;
• $11$ out of $15$ answers were later edited (by other users) to include proper formatting;
• $5$ answers have comments to the effect of "please use LaTeX to format your answers on this site." The user has responded to a couple of these in a potentially positive way but has made no effort to start formatting in MathJax.
My question is: is this acceptable behavior?
I think we should be lenient with users who are learning LaTeX syntax, as it can be somewhat daunting. In particular, I agree with this answer with regards to questions lacking formatting. I am also happy to help users out who don't get all the LaTeX formatting right, by adding \left and \right, changing sin to \sin, etc.
But if the user is unwilling to try formatting at all, and just takes advantage of the fact that others format his/her answers for him/her, I am not as inclined to be lenient. Even a simple attempt to add $ signs around formulas can go a long way, and I would greatly appreciate such an attempt. What can / should be done about this?
• On the one hand, I'd rather have these people on the site, posting badly formatted (but useful) answers, than piss them off by "punishing" their behavior in some way or another. On the other hand, I don't like that they're repeatedly making use of others to format their answers. So I really don't know. – goblin GONE Mar 6 '14 at 13:03
• I don't know who you're talking about, but there's a user who claims to have eyesight problems and for that reason it's easier for him to use ASCII rather than $\LaTeX$. Even though I don't really understand how it can be easier, I choose to believe him. – Git Gud Mar 6 '14 at 13:22
• The strange issue here is that other people are formatting the answers. It may be too gracious to do so. IMHO, if the formatting is poor enough that it requires editing, it would be perfectly acceptable to just downvote the answer and leave a comment that the answerer should improve it. In general, we should all have a very strong hesitation about editing someone else's answer. I think most people agree that it's reasonable to edit a question by a newish user who may not know how to do it. But I think we can expect answerers to format their own answers in the way they choose. – Carl Mummert Mar 6 '14 at 23:22
• For questions, I generally look at the user's profile before adding LaTeX. If it's their first question and I doubt they'll be coming back much, I edit it. If it's their third or fourth question and it's clear they'll be returning to the site for more, I edit it and ask that they learn LaTeX in future. If they show a history of asking unformatted questions, in spite of comments asking them to start formatting, I don't edit (and I don't attempt to answer the question). – Jack M Mar 8 '14 at 11:18
• @JackM: That's an excellent idea. If everyone did the same, instead of picking up behind the slacker, a stream of urgent suggestions might have more of an effect – MPW Mar 8 '14 at 19:16
|
|
# Isaac Newton What Did He Invent?
Isaac Newton published three books and invented the Newtonian reflecting telescope, a significant improvement on previous reflecting telescopes. He formulated the laws of universal gravitation and motion, and developed differential calculus at the same time as, and independently of, the German mathematician Gottfried Wilhelm Leibniz.
|
|
# How does cooling scale with volume?
What equation would answer the question, "If I have a cup of water at, say, boiling temperature, how long would that cup of water take to cool off compared with a cup half that size?" So the volume is halved. It's a general question; I am just looking for where to start.
-
Sean, google for "heat transfer", to learn about basics of a very complicated field, and that this is not a part of thermodynamics. – Georg Dec 4 '11 at 19:29
An actual cup is slightly complicated because there are 2–3 distinct types of surfaces. Let's deal with free-floating cubes of water instead, both at an initial temperature Ti in an environment with temperature Te .
A cube with half the volume will have 50% of the thermal mass C, but 63% of the surface area A. Newton's Law of Cooling implies $\frac{dT}{dt}=-h\frac{A}{C}(T-T_e\!)$ , where h is a property of the environment. So the smaller cube will cool 26% faster initially, when both cubes are at Ti .
If you want to know the temperature of a cube at any given time, the solution to the differential equation above is $\frac{T-T_e}{T_i-T_e}=\exp\left(-ht\frac{A}C\right)$. If you solve for t, it follows that when the small cube is at a given temperature, it will take the large cube 26% more time to reach it.
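A minimal numerical sketch of the comparison above (assuming a constant heat-transfer coefficient h, i.e. ignoring the evaporation caveat raised in the comments):

```java
// Compare the cooling of a unit cube of water with a half-volume cube,
// using Newton's law: fraction of excess temperature = exp(-h*(A/C)*t).
public class CoolingCubes {
    static double excessFraction(double h, double aOverC, double t) {
        return Math.exp(-h * aOverC * t);
    }
    public static void main(String[] args) {
        double h = 1.0;                        // illustrative coefficient
        double bigAoverC = 6.0;                // unit cube: area 6, thermal mass 1
        double smallAoverC = bigAoverC * Math.cbrt(2.0); // half volume: A/C larger by 2^(1/3), ~26%
        // the smaller cube is always closer to the environment temperature
        System.out.println(excessFraction(h, smallAoverC, 0.5)
                < excessFraction(h, bigAoverC, 0.5)); // true
    }
}
```

The factor $2^{1/3} \approx 1.26$ in A/C is exactly the "26% faster" figure quoted above: area scales as $(1/2)^{2/3}$ while thermal mass scales as $1/2$.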
-
This is not wrong, but utterly misleading, because vaporizing is 99 % of heat transfer in the case of boiling water (and well below 100°C still), as asked for! – Georg Dec 5 '11 at 10:28
@Georg, vaporization and convection are both rolled up into the heat transfer coefficient. True, I assume that h is constant with temperature, but generally people aren't so anal about obvious back-of-the-envelope calculations. What's with the chip on your shoulder? – rdhs Dec 5 '11 at 15:28
Vaporisation and convection (especially when not forced, but by Grashoff) are terribly nonlinear. To roll them up in a linear coefficient is misleading. (Especially to beginners!) – Georg Dec 5 '11 at 15:34
|
|
# Is there a way to estimate or calculate the tidal range induced on a water-bearing planet?
Consider a system in which a central star is orbited by a planet with liquid water oceans, which is itself orbited by a moon.
Given the masses and distances between these three objects, is there some formula that outputs the minimum and maximum tide heights the planet's oceans cycle through for every orbit of the moon?
For simplicity, the effects of local topography on the tides are being ignored.
Also, if there is such a formula, could it be applied to solar systems in which there is more than one central star and/or more than one moon orbiting the planet?
The equilibrium tidal range is given approximately by
$$\frac{15}{8}\frac{mA^4}{Mr^3}$$
Where $$m$$ and $$M$$ are the masses of the moon and planet, respectively; $$r$$ is the orbit radius of the moon and $$A$$ is the radius of the planet. For Earth this is a little less than a metre.
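Plugging in Earth-Moon values (a sketch; the mass, radius and distance figures are standard approximate constants, not taken from the answer):

```java
// Evaluate (15/8) * m * A^4 / (M * r^3) for the Earth-Moon system.
public class TidalRange {
    public static void main(String[] args) {
        double m = 7.35e22;    // lunar mass, kg
        double M = 5.97e24;    // Earth mass, kg
        double A = 6.371e6;    // Earth radius, m
        double r = 3.844e8;    // Earth-Moon distance, m
        double range = (15.0 / 8.0) * m * Math.pow(A, 4) / (M * Math.pow(r, 3));
        System.out.printf("%.2f m%n", range); // 0.67 m, a little less than a metre
    }
}
```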
|
|
# Verifying Mathematical Expressions
This is a part of my calculator program. I have to check if the input entered by the user is a valid mathematical expression. The first check is performed by another program, where the program checks if the numbers and symbols are valid and it converts the string into tokens for further verification. The implementation of that program is trivial and thus not covered here.
The following program processes these tokens and outputs whether the expression is valid or not. The tokens are of two types:
1. Numbers: {"10", "5.27", "-91.22"}
2. Symbols: {"+", "-", "*", "/", "^", "!", "(", ")"}
The program handles parentheses differently and does not care about factorial as it's an unary operator. First, I'll describe the Expression class as it defines what an expression is and how its state is calculated.
A valid expression is a number or an unchanged state. So, if the whole expression contains a number then it's valid. If it contains a number, an operator and another number, it makes the entire expression a number. It also makes sure that parentheses are balanced (if present).
Valid expressions: {"1", "1+2", "1+2*3", "1*(2+3)", "(1+2)"}
Invalid expressions: {"1*(2", "1+", "+", "1)*2"}
The state of the expression is considered true if it's valid and false otherwise. The state of the expression is changed when a number is added or parenthesis count is changed. Exceptions are thrown whenever an invalid combination is found.
Note: The program doesn't calculate the expression it just verifies it.
Expression.java
/**
* Tracks the state of the expression.
*/
class Expression {
private boolean numberPresent;
private boolean operatorPresent;
private boolean state;
private int openParCount;
private boolean stateChanged;
private boolean parStateChanged;
void addNumber(String aNumber) throws ExpressionFormatException {
if (!numberPresent) {
numberPresent = true;
} else if (operatorPresent) {
// if operator is present, make the entire expression a number
operatorPresent = false;
} else {
throw new ExpressionFormatException();
}
stateChanged = true;
}
void addOperator(String aOperator) throws ExpressionFormatException {
if (numberPresent && !operatorPresent) {
operatorPresent = true;
} else {
throw new ExpressionFormatException();
}
}
/**
* Modify open parenthesis count.
*/
void modParCount(int n) {
openParCount += n;
parStateChanged = true;
stateChanged = true;
}
boolean hasStateChanged() {
return stateChanged;
}
/**
* Get the current state of the expression.
* @return a boolean value.
*/
boolean getState() {
if (!numberPresent && !operatorPresent) {
// if parenthesis state is changed a number needs to be present
state = !parStateChanged;
} else if (numberPresent && operatorPresent) {
state = false;
} else if (numberPresent) {
state = true;
}
// return true only if the state is true and parentheses are balanced
return state && openParCount == 0;
}
}
Now, I'll describe the Verifier class, mainly its isComputable method. The verify method starts the ball rolling: it passes isComputable the token array and the starting offset. The isComputable method makes a localExpression and adds a number or an operator whenever it encounters one. The method recurses when it encounters a "(" and the localExpression's state has changed. It returns when the control flow drops out of the loop or it encounters a ")". The method also changes the parenthesis state of the localExpression. The method returns immediately if the subState (the state of the expression nested inside it) is not valid or an ExpressionFormatException is caught.
Verifier.java
import java.util.*;
class Verifier {
private static Set<String> validOperators = Set.of("+", "-", "*", "/", "^");
/**
* Passes tokens and the starting offset to isComputable to verify the expression
*
* @param tokens contains valid numbers and symbols.
* @return a boolean value returned by isComputable.
*/
static boolean verify(ArrayList<String> tokens) {
return isComputable(tokens, new int[] {0});
}
/**
* Verifies the state of the expression and its nested expressions recursively.
*
* @param tokens a list containing valid numbers and symbols.
* @param offset keeps track of the current position in the list.
* @return a boolean value denoting whether the expression is valid.
*/
private static boolean isComputable(ArrayList<String> tokens, int[] offset) {
var localExpression = new Expression();
var subState = true;
while(offset[0] < tokens.size()) {
String token = tokens.get(offset[0]);
try {
if (Evaluator.isNumber(token)) {
localExpression.addNumber(token);
} else if (validOperators.contains(token)) {
localExpression.addOperator(token);
} else if (token.equals("(")) {
if (localExpression.hasStateChanged()) {
// recurse if the state of the current expression has changed
subState = isComputable(tokens, offset);
if (!subState) {
return false;
} else {
// the verified sub-expression acts as a number here
localExpression.addNumber(token);
}
} else {
// if state is unchanged increase open parenthesis count
localExpression.modParCount(1);
}
} else if (token.equals(")")) {
// decrease open parenthesis count
localExpression.modParCount(-1);
return localExpression.getState();
}
} catch (ExpressionFormatException e) {
return false;
}
offset[0]++;
}
return localExpression.getState();
}
}
Evaluator.isNumber and ExpressionFormatException are not a part of this program and are too trivial to include here.
Below are some tests:
Output Input
true -> "2+1"
true -> "2+2*(5+6)"
true -> "2+3+4+5+6"
true -> "2!+3^5"
true -> "-5*3+-1"
true -> "4+2*(6+2)/(4-2)/(2*(65+(3*4)))"
true -> "3"
false -> "(2+2"
false -> "2!(3)"
false -> "2**2"
false -> "5+()"
false -> "--1"
false -> "+3"
false -> ")"
Note: The input is shown in Strings to aid readability.
Issues:
1. Expression like "1/0" are considered valid as the Expression object does not care what kind of number it's receiving. Similarly, negative factorials are also allowed.
2. Unary minus is supported but unary plus is not (not yet).
3. The index is kept as the first element of the int[] offset that is passed recursively through the calls.
Any help would be appreciated!
• Just a sidenote: if unary minus is supported, --1 should be ok, just as ---1, ----1 and so on. – mtj Oct 12 '19 at 4:34
• _The index is the first element in the int[] offset that is passed recursively to keep track of index.- — I don't understand what you are trying to say. What index? – 200_success Oct 12 '19 at 5:16
• @mtj Thanks for pointing that out. I'll fix it. – sg7610 Oct 12 '19 at 18:56
• @200_success while(offset[0] < tokens.size()) as we don't have mutable integers in Java, I am passing an int array recursively to keep track of the current index. It's more of a hack, which is why it's one of the issues. There were many alternatives but this is easy to implement and doesn't lead to bloated code. You can read more about it here: stackoverflow.com/a/4520163. – sg7610 Oct 12 '19 at 19:08
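As a concrete illustration of that workaround (a minimal sketch; OffsetDemo and consumeTwo are illustrative names), a single-element int[] acts as a mutable index shared between caller and callee:

```java
public class OffsetDemo {
    // The callee advances the shared cursor; the caller sees the change
    // because both hold a reference to the same array object.
    static void consumeTwo(String[] tokens, int[] offset) {
        offset[0] += 2;
    }

    public static void main(String[] args) {
        int[] offset = {0};
        consumeTwo(new String[] {"2", "+", "1"}, offset);
        System.out.println(offset[0]); // prints 2
    }
}
```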
## State nomenclature
stateChanged vaguely makes sense, but state doesn't. The word "state" is a fairly vague descriptor. What does state as a boolean actually mean? This should probably be renamed to isValid.
## Boolean factorization
if (!numberPresent && !operatorPresent) {
// if parenthesis state is changed a number needs to be present
state = !parStateChanged;
} else if (numberPresent && operatorPresent) {
state = false;
} else if (numberPresent) {
state = true;
}
can be
if (numberPresent)
isValid = !operatorPresent;
else
isValid = !parStateChanged;
## else after return
This:
if (!subState) {
return false;
} else {
doesn't need the else, due to the return stopping the function beforehand.
## Surprise mutation
getState has a problem. One would assume, reading only the function signature and not the source, that it doesn't change the class - and only computes a value to return it. However, that's not the case - a member is changed. There are several different ways to deal with this depending on your intent:
• Rename the function to describe what it actually does (checkValidity?)
• Separate the check function from the isValid function
• Don't store a state as a member at all, and only have an isValid function
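A sketch of the second option (names are illustrative; the three boolean fields stand in for the real Expression state): the mutation lives in one clearly named method, and the accessor stays pure.

```java
class Expression {
    private boolean numberPresent;
    private boolean operatorPresent;
    private boolean parStateChanged;
    private boolean valid = true;

    // Recomputes and stores validity; the name makes the mutation obvious.
    void updateValidity() {
        valid = numberPresent ? !operatorPresent : !parStateChanged;
    }

    // Pure accessor: reading it never changes the object.
    boolean isValid() {
        return valid;
    }
}
```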
|
|
fhsst-physics
## [Fhsst-physics] Re: Questions for Spencer
From: Spencer Wheaton Subject: [Fhsst-physics] Re: Questions for Spencer Date: Tue, 29 Mar 2005 07:56:39 +0200 (SAST)
On Mon, 28 Mar 2005, Mark Horner wrote:
> Hi Spence
>
> We've been having a chat here in the STAR control room about our FHSST
> convention for units
> and vector labels. Markus is doing an edit of Vectors and some things
> have come up that I'd like
> your input on as you got the feedback from the PGCE students which led
> to some of the changes.
>
> 1) In journals units are not italicised but we changed to not \rm'ing
> all the units. This was a PGCE thing
> if I remember correctly - was there a good argument for this?
Actually the PGCE guys had no comment here. It was a personal preference I
had at the time but I have since changed my opinion...
>
> 2) Then in the units the "." as in "m.s^2" - they insisted on this
> notation - is that right? Or can we
> leave out the .'s?
>
Again no PGCE comment here but I think the use of the "." is pretty common
at schools.
> Cheers,
>
> Mark
>
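For reference, the two conventions under discussion look like this in LaTeX (a minimal sketch; the "." separator is kept as in the thread):

```latex
% Units left in math italics, i.e. without \rm / \mathrm:
$g = 9.8\, m.s^{-2}$

% Units set upright, as journals typeset them:
$g = 9.8\, \mathrm{m.s^{-2}}$
```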
|
|
Given an object, using the destructuring syntax you can extract just some values and put them into named variables:
const person = {
  firstName: 'Tom',
  lastName: 'Cruise',
  actor: true,
  age: 54,
}
const { firstName: name, age } = person // name: 'Tom', age: 54
name and age contain the desired values.
The syntax also works on arrays:
const a = [1, 2, 3, 4, 5]
const [first, second] = a
The following statement creates 3 new variables by getting the items with index 0, 1 and 4 from the array a, skipping the positions in between:
const [first, second, , , fifth] = a
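Destructuring also supports default values and rest collection, both worth knowing alongside the skipping syntax above (a quick sketch):

```javascript
// A default value kicks in when the property is missing or undefined.
const settings = { theme: 'dark' }
const { theme, fontSize = 14 } = settings

// Rest syntax gathers the remaining items into a new array.
const [head, ...tail] = [1, 2, 3, 4]

console.log(theme, fontSize) // dark 14
console.log(head, tail)      // 1 [ 2, 3, 4 ]
```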
|
|
# Changes
## Lagrange mean value theorem
20:15, 20 October 2011
Proof
! Step no. !! Assertion/construction !! Facts used !! Given data used !! Previous steps used !! Explanation
|-
| 1 || Consider the function <br>$h(x) := \frac{f(b) - f(a)}{b - a}\cdot x + \frac{bf(a) - af(b)}{b - a}$. <br> Then, $h$ is a linear (and hence a continuous and differentiable) function with $h(a) = f(a)$ and $h(b) = f(b)$|| || || ||Just plug in and check. Secretly, we obtained $h$ by trying to write the equation of the line joining the points $(a,f(a))$ and $(b,f(b))$.
|-
| 2 || Define $g = f - h$ on $[a,b]$, i.e., $g(x) := f(x) - h(x)$. || || || ||
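The "just plug in and check" of step 1 works out as follows (shown for $h(a)$; the computation for $h(b)$ is symmetric):

```latex
h(a) = \frac{f(b)-f(a)}{b-a}\,a + \frac{bf(a)-af(b)}{b-a}
     = \frac{af(b)-af(a)+bf(a)-af(b)}{b-a}
     = \frac{(b-a)f(a)}{b-a} = f(a)
```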
|
|
# Seeking closed form for $\sum\sum\left\lfloor\frac{kr}{n-1} \right\rfloor$
I'm trying to get a closed form for $$f(n) = \sum_{k=0}^{n-1}\sum_{r=0}^{k-1}\left\lfloor\frac{kr}{n-1} \right\rfloor$$
It is fairly obvious that for large $n$ this grows like $\frac{n^2}{6}$, but for a step in a problem I am working on I need this in some sort of closed form, or at least a simpler form or a single sum.
When I try it, totients come into the picture, but I seem to get ugly sums of totients, and that is not much better than the original double sum.
It doesn't seem like this should be so tough.
• I can't reproduce the growth you mention. What are the first few values of $f(n)$? – lhf Dec 3 '14 at 0:38
• I get $0,0,1,3,7,14,25,38,58,83,116,152,202,254,\ldots$. – lhf Dec 3 '14 at 0:47
• The reason it should go as $n^2/6$ can be seen if you drop the floor function and replace the sums by integrals. – Mark Fischler Dec 3 '14 at 14:56
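A brute-force evaluation (a quick Python sketch) reproduces lhf's values and makes it easy to experiment with the growth rate:

```python
def f(n):
    # Direct evaluation of the double sum; floor division gives the floor.
    return sum(k * r // (n - 1) for k in range(n) for r in range(k))

print([f(n) for n in range(1, 15)])
# [0, 0, 1, 3, 7, 14, 25, 38, 58, 83, 116, 152, 202, 254]
```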
|
|
A simple text editor made with tkinter.
## Project description
An open-source text editor written in Python.
It supports editing text files and binary files with various encodings,
which can be automatically detected.
When you edit a binary file, the contents of the file are displayed as escape sequences.
You can find and replace words. You're also able to choose themes you prefer.
In addition, code highlighting is supported when editing Python code files, like in IDLE.
What's more, dragging and dropping files into the editor window is now supported.
|
|
# How to split a floating point number into individual digits?
I am helping a friend with a small electronics project using a PIC microcontroller 16F877 (A) which I am programming with mikroC.
I have run into a problem: I have a floating point number in a variable, let's say for example 1234.123456, and I need to split it out into variables holding each individual digit, so I get Char1 = 1, Char2 = 2, etc., for display on an LCD. The number will always be rounded to 3 or 4 decimal places, so there will also be a need to track the location of the decimal point.
Any advice on how to get this split would be greatly appreciated.
There's numerous ways of doing it. You may find your compiler has a library function to do it for you. It may be possible with:
• sprintf() / snprintf()
• dtostrf()
• dtoa()
Alternatively, it's not too hard to write your own routine to do it. It's just a case of first working out how many digits before the decimal point there are, dividing it by 10 that many times, then taking the integer portion repeatedly while multiplying by 10, making sure you add the decimal point in at the right place.
So in pseudo-code it may look something like:
If the value < 0.0:
Insert - into string
Subtract value from 0.0 to make it positive.
While the value >= 10.0:
Divide by 10
Increment decimal counter
For each digit of required precision:
Take the integer portion of the value and place it in the string
Subtract the integer portion from the value
Decrement decimal counter
If decimal counter is 0:
Insert decimal point
Multiply the value by 10.
Use sprintf(). It works just like printf(), but "prints" to a string (also known as an array of char).
do: char stringvar[10]; sprintf(stringvar, "%9.4f", floatvar);
• Not all libraries have the floating point portion of vsprintf compiled in to save space. Especially on 8-bit compilers. – Majenko Mar 7 '15 at 0:14
• @Majenko: That may be true, but if he's handling floating point numbers, there's a good chance that he's already using some floating point libraries. – Peter Bennett Mar 7 '15 at 0:17
• The floating point mathematics libraries are completely separate from the printf floating point handling. For example, take a look at the Arduino's avr-libc. Has full floating point support, but can't printf floats because that code has been specifically excluded as it's incredibly huge. – Majenko Mar 7 '15 at 0:22
• If he's using the floating point libraries, he probably has plenty of code space and can also include the printf library. – Austin Mar 7 '15 at 0:25
• @Austin It's not a case of "including" the "printf library", but a question of if the built-in C library has floating point support in the vsprintf function. There's nothing you can "include" to change that, other than using a different C library. – Majenko Mar 7 '15 at 16:03
On a small MCU without hardware floating point support we should do as little floating point math as possible, and unless you really need the printf family of functions, try to avoid it because it bloats and slows down the code a bunch.
I suggest converting the float to an integer by first multiplying the float by 1,000.0 (assuming you want three decimal places) and then convert it to a long integer, round-off as appropriate. If you will be displaying the result on a 7-segment or dot matrix LCD then I think this format is ideal.
Let's assume that the float can be in the range 0 to 999.999 (negate if negative, save sign for display later.) The corresponding long int then has the range 0 to 999999. We will convert the number starting with the most significant digit.
Pseudo code:
dig6 = -1 // Init MSDigit
while number >= 0
number = number - 100,000
dig6 = dig6 + 1
number = number + 100,000 // Restore number, Dig 6 is done
dig5 = -1
while number >= 0
number = number - 10,000
dig5 = dig5 + 1
number = number + 10,000 // number can be switched to 16-bit here for speed
... dig4 and 3 in similar fashion
dig2 = -1
while number >= 0
number = number - 10
dig2 = dig2 + 1
dig1 = number + 10
At this point you have all six digits stored in a byte each and the minus sign saved. If you are using a 7-segment LCD, pass the digits to a 7-segment encoder function before writing to the LCD. If you are using dot-matrix display with serial interface, add 0x30 to each digit for ASCII encoding. We also need to remember the decimal point between dig4 and dig3.
This algorithm is quite fast since there is no multiplication and division involved. I have used it on tiny 4-bit MCUs with good results.
|
|
### Residence time in presence of moving defects and obstacles
We discuss the properties of the residence time in the presence of moving defects or obstacles for a particle performing a one dimensional random walk. More precisely, for a particle conditioned to exit through the right endpoint, we measure the typical time needed to cross the entire lattice in the presence of defects. We find explicit formulae for the residence time and discuss several models of moving obstacles. The presence of a stochastic updating rule for the motion of the obstacle smoothens the local residence time profiles found in the case of a static obstacle. We finally discuss connections with applicative problems, such as pedestrian motion in the presence of queues and the residence time of water flows in runoff ponds.
### Lie symmetry structure of nonlinear wave equations
We study the Lie point symmetry structure of generalized nonlinear wave equations in the $1+n$-dimensional space-time.
### Renormalization in string-localized field theories: a microlocal analysis
Using methods of microlocal analysis, we prove that renormalization stays a pure ultraviolet problem in string-localized field theories, despite the weaker localization. Thus, power counting does not lose its significance as an indicator for renormalizability. Our proof puts the conjecture that the good ultraviolet behavior of string-localized fields improves renormalizability on safe mathematical ground. It also follows that the standard renormalization methods can be employed for string-localized field theories without any major adjustments.
### Frobenius manifolds on orbit spaces of non-reflection representations
We prove that the orbit spaces of some non-reflection representations of finite groups possess Frobenius manifold structures.
### Resistance distance distribution in large sparse random graphs
We consider an Erdős-Rényi random graph consisting of N vertices connected by randomly and independently drawing an edge between every pair of them with probability c/N, so that at N->infinity one obtains a graph of finite mean degree c. In this regime, we study the distribution of resistance distances between the vertices of this graph and develop an auxiliary field representation for this quantity in the spirit of statistical field theory. Using this representation, a saddle point evaluation of the resistance distance distribution is possible at N->infinity in terms of a 1/c expansion. The leading order of this expansion captures the results of numerical simulations very well down to rather small values of c; for example, it recovers the empirical distribution at c=4 or 6 with an overlap of around 90%. At large values of c, the distribution tends to a Gaussian of mean 2/c and standard deviation sqrt(2/c^3). At small values of c, the distribution is skewed toward larger values, as captured by our saddle point analysis, and many fine features appear in addition to the main peak, including subleading peaks that can be traced back to resistance distances between vertices of specific low degrees and the rest of the graph. We develop a more refined saddle point scheme that extracts the corresponding degree-differentiated resistance distance distributions. We then use this approach to recover analytically the most apparent of the subleading peaks that originates from vertices of degree 1. Rather intuitively, this subleading peak turns out to be a copy of the main peak, shifted by one unit of resistance distance and scaled down by the probability for a vertex to have degree 1. We comment on a possible lack of smoothness in the true N->infinity distribution suggested by the numerics.
### Conformal invariance of double random currents and the XOR-Ising model II: tightness and properties in the discrete
This is the second of two papers devoted to the proof of conformal invariance of the critical double random current and the XOR-Ising model on the square lattice. More precisely, we show the convergence of loop ensembles obtained by taking the cluster boundaries in the sum of two independent currents both with free or wired boundary conditions, and in the XOR-Ising models with free and plus/plus boundary conditions. Therefore we establish Wilson's conjecture on the XOR-Ising model. The strategy, which to the best of our knowledge is different from previous proofs of conformal invariance, is based on the characterization of the scaling limit of these loop ensembles as certain local sets of the continuum Gaussian Free Field. In this paper, we derive crossing properties of the discrete models required to prove this characterization.
### High-dimensional near-critical percolation and the torus plateau
We consider percolation on $\mathbb{Z}^d$ and on the $d$-dimensional discrete torus, in dimensions $d \ge 11$ for the nearest-neighbour model and in dimensions $d>6$ for spread-out models. For $\mathbb{Z}^d$, we employ a wide range of techniques and previous results to prove that there exist positive constants $c$ and $C$ such that the slightly subcritical two-point function and one-arm probabilities satisfy $\mathbb{P}_{p_c-\varepsilon}(0 \leftrightarrow x) \leq \frac{C}{\|x\|^{d-2}} e^{-c\varepsilon^{1/2} \|x\|} \quad \text{ and } \quad \frac{c}{r^{2}} e^{-C \varepsilon^{1/2}r} \leq \mathbb{P}_{p_c-\varepsilon}\Bigl(0 \leftrightarrow \partial [-r,r]^d \Bigr) \leq \frac{C}{r^2} e^{-c \varepsilon^{1/2}r}.$ Using this, we prove that throughout the critical window the torus two-point function has a "plateau," meaning that it decays for small $x$ as $\|x\|^{-(d-2)}$ but for large $x$ is essentially constant and of order $V^{-2/3}$ where $V$ is the volume of the torus. The plateau for the two-point function leads immediately to a proof of the torus triangle condition, which is known to have many implications for the critical behaviour on the torus, and also leads to a proof that the critical values on the torus and on $\mathbb{Z}^d$ are separated by a multiple of $V^{-1/3}$. The torus triangle condition and the size of the separation of critical points have been proved previously, but our proofs are different and are direct consequences of the bound on the $\mathbb{Z}^d$ two-point function. In particular, we use results derived from the lace expansion on $\mathbb{Z}^d$, but in contrast to previous work on high-dimensional torus percolation we do not need or use a separate torus lace expansion.
### Conformal invariance of double random currents and the XOR-Ising model I: identification of the limit
This is the first of two papers devoted to the proof of conformal invariance of the critical double random current and the XOR-Ising models on the square lattice. More precisely, we show the convergence of loop ensembles obtained by taking the cluster boundaries in the sum of two independent currents with free and wired boundary conditions, and in the XOR-Ising models with free and plus/plus boundary conditions. Therefore we establish Wilson's conjecture on the XOR-Ising model. The strategy, which to the best of our knowledge is different from previous proofs of conformal invariance, is based on the characterization of the scaling limit of these loop ensembles as certain local sets of the Gaussian Free Field. In this paper, we identify uniquely the possible subsequential limits of the loop ensembles. Combined with the second paper, this completes the proof of conformal invariance.
|