A&A, Volume 510 (February 2010), article A98 (7 pages). Interstellar and circumstellar matter. https://doi.org/10.1051/0004-6361/200913567. Published 17 February 2010.
A&A 510, A98 (2010)
O18O and C18O observations of ρ Ophiuchi A
R. Liseau1 - B. Larsson2 - P. Bergman1 - L. Pagani3 - J. H. Black1 - Å. Hjalmarson1 - K. Justtanont1
1 - Department of Radio and Space Science, Chalmers University of Technology, Onsala Space Observatory, 439 92 Onsala, Sweden
2 - Department of Astronomy, Stockholm University, AlbaNova, 106 91 Stockholm, Sweden
3 - LERMA, L'Observatoire de Paris, 61 avenue de l'Observatoire, 75014 Paris, France
Received 29 October 2009 / Accepted 5 December 2009
Abstract
Context. Contrary to theoretical expectation, surprisingly low concentrations of molecular oxygen, O2, have been found in the interstellar medium. Telluric absorption makes ground-based O2 observations essentially impossible, so observations had to be done from space. Millimetre-wave telescopes on space platforms were necessarily small, which resulted in large beam patterns, several arcminutes wide. Observations of the (NJ = 11-10) ground state transition of O2 with the Odin satellite resulted in a detection toward the dense core ρ Oph A. At the frequency of the line, 119 GHz, the Odin telescope has a beam width of 10′, larger than the size of the dense core.
Aims. The precise nature of the emitting source and its exact location and extent are therefore unknown. The current investigation is intended to remedy this.
Methods. Although the Earth's atmosphere is entirely opaque to the low-lying O2 transitions, it allows ground-based observations of the much rarer O18O in favourable conditions and at much higher angular resolution with larger telescopes. In addition, ρ Oph A exhibits both multiple radial velocity systems and considerable velocity gradients. Extensive mapping of the region in the C18O (J=3-2) proxy line can be expected to help identify the O2 source on the basis of its line shape and Doppler velocity. Line opacities were determined from observations of the optically thin 13C18O (J=3-2) line. During several observing periods, two intensity maxima in ρ Oph A were searched for in the O18O (21-01) line at 234 GHz with the 12 m APEX telescope. These positions are also associated with peaks in the mm-continuum emission from dust.
Results. Our observations resulted in an upper limit on the integrated O18O intensity of < 0.01 K km s-1 into the 26″.5 beam. Together with the C18O data, this leads to a limit on the ratio N(O18O)/N(C18O). Combining Odin's O2 data with the present observations, we infer an O2 abundance.
Conclusions. Examining the evidence, which is based primarily on observations in lines of C18O and 13C18O, leads us to conclude that the source of the observed O2 emission is most likely confined to the central regions of ρ Oph A. In this limited area, implied O2 abundances could thus be higher than inferred on the basis of the Odin observations by up to two orders of magnitude.
Key words: ISM: abundances - ISM: molecules - ISM: lines and bands - ISM: clouds - ISM: individual objects: SM 1 - ISM: individual objects: SM 1N
1 Introduction
Oxygen is the most abundant of the astronomical metals (e.g., Asplund et al. 2009, and references therein). Consequently, in its molecular form, it was also expected to be very abundant in the UV-shielded regions inside molecular clouds (e.g., Bergin et al. 2000; Charnley et al. 2001; Black & Smith 1984; Viti et al. 2001; Willacy et al. 2002; Quan et al. 2008; Roberts & Herbst 2002; Spaans & van Dishoeck 2001) and to contribute significantly to the cooling, hence the energy balance, of dense clouds (Goldsmith & Langer 1978).
Because of the high O2 content in the Earth's atmosphere, astronomical O2 sources cannot be observed from the ground. Dedicated space missions came into operation near the beginning of the new millennium. Their unsuccessful searches (Pagani et al. 2003; Goldsmith et al. 2000) were highly disappointing, and it was hard to understand why, in the interstellar medium (ISM), O2 is such an elusive species (see references cited above).
Eventually, after more than 20 days of Odin observing during three different runs, came a real breakthrough: for the very first time, O2 was detected in the ISM (Larsson et al. 2007). The O2-emitting object, ρ Oph A, is a dense clump (Loren et al. 1990) in a region of active star formation (L 1688). On the basis of theoretical model calculations, the detectability of this kind of source had earlier been predicted by Black & Smith (1984) and Maréchal et al. (1997a), where the latter authors made their specific prediction with regard to Odin.
Odin carries a 1.1 m telescope which is designed for observations in the submillimetre regime, between roughly 480 and 580 GHz (0.5-0.6 mm). However, the O2 discovery was made with a dedicated 119 GHz (2.5 mm) receiver aboard Odin, fix-tuned to the frequency of the ground state O2 (NJ = 11-10) transition at 118 750.343 MHz. At this frequency, the telescope beam size is 10′, larger than the angular dimension of the dense core, which is about 4′ (FWHM of the deconvolved CS core, Liseau et al. 1995).
It follows that the true O2 source is likely under-resolved, the consequence of which directly affects estimates of the abundance of O2, i.e. N(O2)/N(H2): depending on the adopted model, the Odin observations imply an abundance which is currently uncertain by two orders of magnitude (Liseau et al. 2005).
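The two quantities that drive this uncertainty, the diffraction-limited beam width and the beam-filling (dilution) factor, can be sanity-checked with a short sketch. The 1.22 diffraction factor and the Gaussian dilution formula below are illustrative textbook assumptions, not the mission's actual beam calibration:

```python
import math

C_LIGHT = 2.99792458e8  # m/s

def beam_fwhm_arcmin(freq_hz, dish_m, factor=1.22):
    """Diffraction-limited beam width ~ factor * lambda / D, in arcmin."""
    lam = C_LIGHT / freq_hz
    return math.degrees(factor * lam / dish_m) * 60.0

def dilution(theta_source, theta_beam):
    """Beam-filling factor for a Gaussian source in a Gaussian beam."""
    return theta_source**2 / (theta_source**2 + theta_beam**2)

odin = beam_fwhm_arcmin(118.75e9, 1.1)  # ~10 arcmin, as quoted for Odin at 119 GHz
ff = dilution(1.0, odin)                # ~0.01 for a 1-arcmin source
```

With a source an order of magnitude smaller than the beam, the dilution factor scales as the squared size ratio, which is why the inferred abundance is so sensitive to the (unknown) source extent.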
In Fig. 2 of Larsson et al. (2007), the Odin O2 line is compared to transitions of other molecular species in ρ Oph A. Whereas lines of H2O and CO are optically very thick over large parts of the cloud and have self-absorbed profiles, the optically thin O2 line displays a simple, Gaussian shape. This line shape is similar to that of the line (Pankonin & Walmsley 1978) displayed at the top of the figure, which most likely originates in the PDR. If the PDR were also the main source of the O2 emission, the abundance would indeed be very low.
However, the O2 line shape is also similar to that of the C18O (3-2) line, also shown in the figure. This suggests that the C18O line can be used as a tracer of the molecular oxygen emission, and we set out to map the 10′ Odin beam in the (3-2) transition of C18O with the APEX beam of size 19″. It was expected that a detailed comparison of the line centre velocity with that of the O2 line would help to narrow down the exact location of the O2 emission, since two distinct velocity components are known to be present in ρ Oph A. This information is needed to understand where, i.e. under what physical conditions, the majority of the O2 molecules are excited: in the cold and dense dark cores (Di Francesco et al. 2004), in the extended warm Photon Dominated Region (PDR; Hollenbach et al. 2009), or in the hot shocked gas of the outflow from VLA 1623 (Liseau & Justtanont 2009)? With C18O as the proxy for O2 emission, probable emission regions were identified, which were then observed in O18O.
Earlier work exists for this line toward ρ Oph A. Goldsmith et al. (1985) observed in the same transition and with comparable beam size (26″), albeit at an offset of 11″ E and 61″ N relative to the position of SM 1N. They obtained <120 mK over 0.34 km s-1. At similar channel resolution (0.32 km s-1) and toward essentially the same position, Liszt & Vanden Bout (1985) obtained an rms-noise value of <17.5 mK with the 12 m NRAO telescope (34″). These papers also present energy level diagrams. Observations made with the 10 m telescope of the Caltech Submillimeter Observatory (CSO) in July 1991, for the position 16 23 25, -24 15 49 (B1950), resulted in an rms noise temperature of 16 mK in a 0.25 km s-1 velocity bin and of 12 mK after binning to 0.50 km s-1 (van Dishoeck, Keene, & Phillips, private communication).
The derivation of molecular abundances requires knowledge of the H2 column density. One of the widely exploited techniques to estimate N(H2) is to use observations of C18O, the transitions of which in many cases can be shown to be optically thin. We discovered, however, that in the dense core regions of ρ Oph A this is not the case everywhere, and that appropriate opacity corrections using the 13C18O line needed to be made.
This paper is organised as follows: in Sect. 2, our APEX observations of ρ Oph A in transitions of O18O, C18O and 13C18O are described. Section 3 presents our results, which are discussed in Sect. 4. Finally, in Sect. 5 our main conclusions are briefly summarised.
2 Observations and data reductions
All observations have been made with the SIS receivers and spectrometers at the Atacama Pathfinder EXperiment (APEX). The 12 m APEX telescope is located at an altitude of about 5100 m on the Llano de Chajnantor in northern Chile. The telescope pointing is accurate to 3″ (rms).
The Fast Fourier Transform Spectrometer (FFTS) was configured to have 8192 channels, which over a bandwidth of 1 GHz provides a resolution of 122 kHz, corresponding to 0.16 km s-1 and 0.11 km s-1 at 234 GHz and 329 GHz, respectively. As frontends for these frequencies, we used APEX 1 of the Swedish Heterodyne Facility Instrument (SHFI, Vassilev et al. 2008) and APEX 2A (Risacher et al. 2006).
2.1 O18O observations
The O18O data have been collected during three different observing runs in 2008 and 2009. The frequency of the (21-01) line can be derived from the data given by Steinbach & Gordy (1975) as 233 946.179 MHz. At 234 GHz, the APEX beam has a half power beam width of about 26″ and the main beam efficiency is about 0.75. The telescope was pointed toward a position (J2000) which was initially chosen on the basis of, as it turned out, insufficiently sampled data (see Sect. 3.2). In addition, the strongest peak of doubly deuterated formaldehyde emission in the cloud (Bergman et al., in preparation), which is situated 30″ south of these coordinates, was also observed. These positions are close to the location of intense mm-dust emission (cf. Fig. 3), i.e. the dense core SM 1 (Motte et al. 1998). For the primary position, the total on-source integration time was 4.9 h and the average system temperature was 220 K, whereas for the 30″-south position, these values were 6.5 h and 210 K, respectively.
2.2 C18O and 13C18O observations
The C18O observations were collected during two observing runs in 2006 and 2007 at the APEX telescope. The observing mode was position-switched raster mapping, and the data were sampled according to the Nyquist criterion on a rectangular 10″ grid, aligned with the equatorial coordinate system. At 329 GHz, the beam width is about 19″ and the average system temperature was 200 K. Different efficiencies were used for point source and extended source calibrations.
In addition, an extended raster map of the outer regions of ρ Oph A was obtained on a coarser grid with 20″ (full beam) spacings, making the entire observed region correspondingly large.
The origin of the map is the same as that of the Odin observations, i.e. the (0, 0) position is at Dec = -24°23′54″ (J2000). The same reference position as for the Odin observations (Larsson et al. 2007), viz. 15′ N relative to the map centre, was used here for calibration purposes. In addition to the C18O map, five positions were also observed in the (3-2) transition of the even rarer isotopologue 13C18O (Table 2). Klapper et al. (2003) provide lab frequencies for the (3-2) rotational transition of C18O and 13C18O, i.e., 329 330.552 MHz and 314 119.660 MHz, respectively, where the latter is a weighted mean value, with the 13C hyperfine structure being ignored.
3 Results
3.1 O18O
The O18O line was not detected toward any of the observed positions. Toward the position associated with P 2 (see Fig. 3 and Table 1), the noise level is σ = 6.5 mK in a 0.62 km s-1 bin. The result is similar for the observation of the position 30″ south (P 3), i.e., σ = 8.2 mK in a 0.16 km s-1 bin (Fig. 1).
Figure 1: The central part of the 1 GHz wide APEX spectrum centered on the frequency of the O18O (21-01) transition, 233.946179 GHz, obtained in ρ Oph A. The sampling is in 122 kHz wide channels (0.16 km s-1).
Figure 2: (J=3-2) spectra of, from top to bottom, CO (black), C18O (red) and 13C18O (blue) toward two positions in ρ Oph A (cf. Fig. 3). For clarity, two of the spectra are offset vertically and the 13C18O spectra have been multiplied by a factor of twenty.
Figure 3: C18O (3-2) integrated intensity of the dark core ρ Oph A. The map was obtained with APEX and observed positions are shown as crosses. The beam size at 329 GHz is shown in the lower right corner. Offsets are with respect to the origin (J2000). The position of the outflow-driving Class 0 source VLA 1623 is shown by the star symbol. P 1-P 4 designate the clumps discussed in the text (Table 1). The beam size at 234 GHz is indicated by the dotted circles at the observed positions.
Table 1: C18O peaks of integrated intensity.
Figure 4: A mosaic of maps of integrated C18O (3-2) line intensity over 1.0 km s-1 wide velocity intervals, from +1.0 to +5.0 km s-1. The extended maps with 20″ spacing are shown. Two velocity components are identified in ρ Oph A, falling into the [2, 3] and [3, 4] km s-1 bins, respectively. The lowest contour level corresponds to 4 K km s-1 and increments are by this same amount.
3.2 C18O and 13C18O
Example spectra in three isotopologues of CO (3-2) are shown in Fig. 2 toward two positions in the central region of ρ Oph A. Further, Fig. 3 shows the inner, high-resolution map of integrated intensity of the C18O (3-2) line. Within a range of RA offsets +30″ to +50″, four distinct intensity peaks are discernible. In Table 1, these are designated P 1 through P 4 and their J2000 coordinates are given. The C18O line is very narrow, e.g. merely 1.0 km s-1 (FWHM) at the inconspicuous (0, 0) position.
Examination of the entire C18O (3-2) data set reveals that, within the mapped region, maximum emission occurs at LSR velocities +2.7 to +3.7 km s-1. This velocity interval corresponds to that of the O2 119 GHz emission, viz. +2.5 to +3.5 km s-1 (Larsson et al. 2007, and Fig. 6 here). ρ Oph A displays a complex velocity field, and two distinct velocity components can be identified, giving rise to spectral line blending. These components are essentially confined within the LSR velocity bins [+2, +3] and [+3, +4] (in km s-1). Figure 4 presents a mosaic of the integrated line intensity in 1.0 km s-1 wide bins. Experimenting with different binnings also demonstrates quite convincingly that the location of the O2 emitting gas is most likely associated with the central core region of ρ Oph A.
4 Discussion
4.1 The dense clumps of ρ Oph A
The intensity maxima in Fig. 3 seem comparable in size with the APEX beam, which could indicate that the diameter of these clumps does not exceed 20″. From the comparison of their locations with those observed in the emission of the dust at 1.3 mm (Motte et al. 1998) and 850 μm (Johnstone et al. 2000) and of the quiescent gas in the N2H+ (1-0) line (Di Francesco et al. 2004), it becomes evident that P 4 lacks correspondence with features at 1.3 mm and in N2H+ emission, but shows up weakly at 850 μm. P 1 is likely N 1 (which is not seen in the dust maps), P 2 corresponds to N 5 (also prominent in the dust as SM 1N), and P 3 seems associated with N 4 and SM 1 (also 16264-2423 of Johnstone et al. 2000). Derived temperatures and densities for these clumps are of the order of 15-30 K and in excess of 105 cm-3, respectively (e.g., Motte et al. 1998; Johnstone et al. 2000; André et al. 1993).
In summary, the evidence suggests that O2, too, is concentrated in the dense dark core regions, where the molecules would be protected against photo-dissociation by the intense UV field (G0 of the order of 102) generated by the two B stars east and west of the cores (Liseau et al. 1999). The size of the O2 emitting regions appears not to exceed one arcminute, so that a conservative estimate of the Odin beam filling would be about 0.01. If the emission originates in a core of size 20″ or smaller, the Odin beam filling factor would be reduced by yet another order of magnitude. The O2 abundance would scale accordingly and could in this case be locally as high as a few times 10-5, which would be comparable to the total abundance of oxygen in the gaseous phase (e.g., Liseau & Justtanont 2009).
4.1.1 Line optical depths
The ratio of the C18O and 13C18O line intensities can be used to estimate the line optical depth, via T(C18O)/T(13C18O) = (1 - e^(-τ))/(1 - e^(-τ/R)), where τ is the C18O opacity and R the [C18O]/[13C18O] abundance ratio. From the data presented in Table 2, it is clear that the C18O (3-2) line could have significant opacity along several lines of sight, unless the abundance ratio R or the excitation temperatures of these species differ substantially from the assumed values.
Federman et al. (2003) determined a column density ratio toward a line of sight designated by them. However, their coordinates refer to the star, one degree north-north-west of the dense core discussed in our paper. In the associated nebula, the physical conditions are different from those in the dense core, possibly leading to different isotopic abundances. In the shielded regions of the dense cores, chemical isotopic fractionation can be expected to be of minor importance. It is worth noting that in the nearby core, a lower isotopic ratio has been derived by Bensch et al. (2001).
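The opacity estimate from the isotopologue intensity ratio can be sketched numerically: given an observed C18O/13C18O ratio and an assumed abundance ratio x (the value 65 below is purely illustrative, not the paper's adopted ratio), the opacity follows from inverting (1 - e^(-τ))/(1 - e^(-τ/x)), which decreases monotonically from x (thin limit) to 1 (thick limit):

```python
import math

def tau_from_ratio(r_obs, x=65.0):
    """Solve (1 - exp(-tau)) / (1 - exp(-tau/x)) = r_obs for tau by bisection.

    r_obs : observed intensity ratio of the common to the rare isotopologue
    x     : assumed abundance ratio (illustrative default)
    """
    f = lambda tau: (1.0 - math.exp(-tau)) / (1.0 - math.exp(-tau / x)) - r_obs
    lo, hi = 1e-6, 50.0  # ratio ~ x at lo, ~ 1 at hi; f is monotone decreasing
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

An observed ratio well below the abundance ratio immediately signals a non-negligible opacity in the more common line, which is the basis for the corrections discussed above.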
Table 2: Observed positions of 13C18O (3-2) and line opacities.
4.2 Column densities
If local thermodynamic equilibrium (LTE) is a good approximation for the level populations, the column density of all molecules of the species, N(mol) in cm-2, can be estimated from the observed intensity of an optically thin line, viz.

$N(\mathrm{mol}) = \frac{C}{\eta_{\rm bf}}\,\frac{Q(T_{\rm ex})}{g_u}\,e^{E_u/T_{\rm ex}}\,\frac{J_\nu(T_{\rm ex})}{J_\nu(T_{\rm ex})-J_\nu(T_{\rm bg})}\int T_{\rm mb}\,d\upsilon$   (1)

where

$C = \frac{8\pi k\nu^{2}}{h c^{3} A_{ul}}$   (2)

with cgs-units of K-1 cm-3 s. Here, η_bf is the beam filling factor for a source which may be smaller than the beam, and T_mb = T_A*/η_mb, where η_mb is the main beam efficiency. T_0 = hν/k is the transition temperature, T_bg = 2.725 K is the temperature of the background radiation field, J_ν(T) = T_0/(exp(T_0/T) - 1) is the quasi-Planck function, E_u is the upper level energy in K, Q(T_ex) is the partition function and g_u is the statistical weight of the upper level; the other symbols have their usual meaning.
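The optically thin LTE column density estimate can be sketched numerically as follows. The C18O(3-2)-like constants (frequency, A-coefficient, upper level energy, rotational constant) are approximate, illustrative values, not necessarily the paper's adopted ones:

```python
import math

# physical constants in cgs
h = 6.62607e-27    # erg s
k = 1.38065e-16    # erg / K
c = 2.99792458e10  # cm / s

def J(T, T0):
    """Quasi-Planck brightness temperature T0 / (exp(T0/T) - 1)."""
    return T0 / math.expm1(T0 / T)

def lte_column_density(int_Tdv, nu, A_ul, g_u, E_u, Q, T_ex, T_bg=2.725):
    """Optically thin LTE column density in cm^-2.

    int_Tdv : integrated main-beam intensity in K km/s (beam filling unity)
    E_u     : upper level energy in K
    """
    T0 = h * nu / k
    C = 8.0 * math.pi * k * nu**2 / (h * c**3 * A_ul)  # K^-1 cm^-3 s
    return (C * (Q / g_u) * math.exp(E_u / T_ex)
            * J(T_ex, T0) / (J(T_ex, T0) - J(T_bg, T0))
            * int_Tdv * 1.0e5)                          # K km/s -> K cm/s

# Illustrative, assumed C18O(3-2)-like numbers:
T_ex = 30.0
Q = k * T_ex / (h * 54.89e9)  # linear-rotor approximation, B ~ 54.89 GHz
N = lte_column_density(20.0, 329.33e9, 2.2e-6, 7, 31.6, Q, T_ex)
```

With a 20 K km s-1 line this yields a column density of order 10^16 cm-2, which for a canonical C18O abundance of a few times 10^-7 corresponds to N(H2) of a few times 10^22 cm-2, in line with the values discussed in this section.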
4.2.1 C18O and H2 column densities
We limit the discussion to the central core region, where the observed C18O (3-2) integrated line intensities of the +3 km s-1 component are about 20 K km s-1. The upper level energy lies nearly 32 K above ground. The spontaneous transition probability for the transition is A_ul = 2.2 x 10-6 s-1, the transition temperature is T_0 = 15.8 K and the statistical weight of the upper level is g_u = 7. Using the collisional rate coefficients of Schinke et al. (1985) for collisions with para-H2 yields critical densities of a few times 104 cm-3 over the range T = 10 K to 300 K (Table 3). Therefore, except perhaps for the very lowest temperatures, the condition of LTE should be fulfilled for the C18O (3-2) transition (cf. Sect. 4.1).
The sizes of the clumps are comparable to the beam size, so the main beam efficiency is used for the intensity calibration, and we assume a beam filling factor of unity. For the broad range of temperatures of 10 to 300 K, the corresponding column densities of C18O are listed in Table 3. For an adopted X(C18O), the derived H2 column densities refer to the 20″ scale (2400 AU). These results are in general agreement with those reported by others (Loren et al. 1990; Motte et al. 1998). Possible opacity corrections to the C18O (3-2) intensity, of the order of 2, would increase the column density accordingly. The column densities presented in Table 3 are therefore likely lower limits.
Table 3: Column densities of C18O and O18O.
4.2.2 O18O column density and O18O abundance
The NJ=21-01 transition has the largest Einstein coefficient of the low-lying O18O transitions (Maréchal et al. 1997b). We adopt coefficients for collisional de-excitation based on the work by Bergman (1995), which were derived for collisions with He. For collisions with H2, these were multiplied by 1.4. Values for temperatures other than 300 K were obtained by scaling with the square root of the temperature. From Table 3, it can be seen that critical densities for the 21-01 transition are rather low for a wide range of temperatures (<1500 cm-3 above 10 K). In particular, for the dense core conditions of ρ Oph A, where densities are in excess of 105 cm-3 (Sect. 4.1), LTE is certainly a valid assumption (see also Black & Smith 1984; Maréchal et al. 1997b). The temperature of the transition is T_0 = 11.2 K and the statistical weight of the upper level is g_u = 3.
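The square-root temperature scaling of the collision rates and the resulting critical density can be sketched as follows. The A-coefficient and the 300 K rate coefficient below are assumed, order-of-magnitude placeholders, not the tabulated values:

```python
import math

def scaled_rate(c300, T):
    """Scale a 300 K collisional rate coefficient by sqrt(T/300), as described in the text."""
    return c300 * math.sqrt(T / 300.0)

def critical_density(A_ul, c_ul):
    """Two-level critical density n_crit = A_ul / C_ul (cm^-3)."""
    return A_ul / c_ul

# Illustrative, assumed numbers: A_ul ~ 1e-8 s^-1 for the 234 GHz line,
# C_ul(300 K) ~ 3e-11 cm^3 s^-1 (He rates scaled by 1.4 for H2, per the text)
n30 = critical_density(1e-8, scaled_rate(3e-11, 30.0))  # ~1e3 cm^-3 at 30 K
```

Even with these rough inputs the critical density stays around 10^3 cm-3, well below the core densities quoted above, which is why the LTE assumption is safe for this transition.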
In Table 3, the results for C18O and O18O are compared. The ratio N(C18O)/N(O18O) exceeds unity and increases with decreasing temperature. This ratio could correspond to about half the value of the CO/O2 ratio (Black & Smith 1984). For three cold cores (10 and 15 K), Fuente et al. (1993) determined CO/O2 > 3-7, limits consistent with, but considerably smaller than, the values displayed in Table 3. The effects of a varying C/O ratio in the ISM at column densities (visual extinctions A_V) as high as those found in ρ Oph A were explicitly considered in the models by Maréchal et al. (1997a, see their Fig. 10). For the O2 119 GHz line, the integrated intensity is >10 K km s-1 for C/O < 0.4 when A_V > 20 mag. In contrast, for similar extinction, the intensity is <100 mK km s-1 for C/O > 1. Future observations will likely be able to follow any variation of this ratio in different regions of the ISM (see below and also Black & Smith 1984).
In the dense cold (T < 100 K) regions of ρ Oph A, the column density of O18O is lower than 1015 cm-2 (Table 3) and, hence, the abundance relative to H2 is X(O18O) < 10-8. Consequently, for the temperature range of 10 to 40 K and a standard elemental isotopic ratio (Wannier 1980), the abundance of the primary species of molecular oxygen, X(O2), should be limited accordingly, consistent with the Odin result (Larsson et al. 2007). We can conclude, therefore, that in ρ Oph A the molecular oxygen abundance is bounded, where the beam averaged O2 column density is 1015 cm-2. If reflecting the fraction of the Odin beam that is filled by the O2 source, the implied source size could be well matched to the 3.5 m telescope of the Herschel Space Observatory, the beam widths of which are 44″ at 487 GHz, the frequency of the O2 (NJ=33-12) transition, and 28″ at 773 GHz for the (54-34) line (Fig. 5).
Figure 5: O2 line ratio diagram for the two strongest transitions accessible to HIFI aboard the Herschel Space Observatory. In these multi-transition calculations, radiation from dust was included and LTE was not assumed. Intensity ratios are relative to the (11-10) 119 GHz line, which was detected by Odin (in a 10′ beam; Larsson et al. 2007). These transitions are the (33-12) 487 GHz line (44″) and the (54-34) 773 GHz line (28″), respectively. Labels along the graph refer to the density, n(H2) in cm-3, and the gas temperature in K.
Figure 6: The line profile of the C18O (3-2) map, after convolution with a 600″ beam, is shown in black and compared to the scaled O2 line of Larsson et al. (2007), shown in red. The higher S/N APEX data are spectrally sampled at a higher rate and also have higher resolution. The line intensity is dominated by an extended cloud component at an LSR velocity seemingly different from that of the P 3 core and the O2 emission (see also Table 4).
4.3 Nature, location and extent of the O2 source
4.3.1 Oxygen in the cold ISM
The capital letter designation of the cores was introduced by Loren et al. (1990) for the locations of emission peaks in lines of DCO+ in ρ Oph. Depending on the details of the considered models of the deuteration process, they derived kinetic gas temperatures inside the cores which were always low, in the range 18-23 K, whereas temperatures in the outer layers were considerably higher.
In the interstellar medium, it is expected that most of the molecular oxygen is formed by the reaction

O + OH → O2 + H.

The thermal rate coefficient of this reaction has recently been measured in the laboratory down to temperatures of 39 K (Carty et al. 2006), where it remains rapid. In subsequent ab initio theoretical calculations, Xu et al. (2007) found a much smaller rate at temperatures below 30 K and suggested that this might solve the problem of missing O2 in cold interstellar clouds. The low-temperature behaviour of the reaction is also of interest for ultra-cold collisions. Quéméner et al. (2009) have determined that the reaction still proceeds in the limit of zero temperature with a finite rate coefficient. Quan et al. (2008) re-examined the sensitivity of the interstellar O2 abundance to the low-temperature behaviour of the source reaction.
A widely favoured explanation for the generally observed paucity of molecular oxygen in the gas phase is the depletion of atomic oxygen, with subsequent hydrogenation on cold grain surfaces. This scenario seems supported by several observed molecules. For instance, the hydrogenation of CO is predicted to lead to H2CO, CH3OH and subsequently also the deuterated forms of these species (Fuchs et al. 2009; Matar et al. 2008). In order to become observable, these species have to be returned to the gas phase. Indeed, enhanced emission in methanol and doubly deuterated formaldehyde has been observed toward the centre of ρ Oph A by Liseau et al. (2003) and Bergman et al. (in preparation), respectively. In addition, widespread emission of gas phase H2O in ρ Oph A is also observed (Larsson et al., in preparation), a fraction of which may have been similarly produced by the hydrogenation of O2 on cold grain surfaces (Ioppolo et al. 2008). The equilibrium between adsorption and desorption of molecules would then naturally lead to low gas-phase levels of both species (in contrast to pure gas phase chemistry).
Table 4: Gaussian parameters of the C18O (3-2) and O2 (11-10) lines.
4.3.2 Site and size of the O2 source
Figure 6 shows the C18O (3-2) spectrum after convolution of the observed map with a 10′ beam. A Gaussian profile provides a good fit to the observed line, with parameters T_mb = 2.9 K, v_LSR = 3.20 km s-1 and Δv = 1.45 km s-1 (Table 4). This velocity is offset from that of the core P 3 (3.62 km s-1, cf. also Table 2) and the intensity is dominated by an extended component. The integrated value is 4.5 K km s-1. Using the temperature assumed by Larsson et al. (2007), i.e. 30 K, we obtain the beam averaged column density. The comparison with the Odin result would indicate the corresponding column density ratio. An abundance larger than that of O2 would be difficult to explain and would speak against an extended O2 emission region.
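As a consistency check, the integrated intensity of a Gaussian line follows directly from its fitted peak and width, area = sqrt(pi / (4 ln 2)) * T_peak * FWHM ≈ 1.0645 * T_peak * FWHM, and with the fitted values quoted above this reproduces the stated 4.5 K km s-1:

```python
import math

def gaussian_integrated_intensity(t_peak, fwhm):
    """Area under a Gaussian line profile: sqrt(pi/(4 ln 2)) * T_peak * FWHM."""
    return math.sqrt(math.pi / (4.0 * math.log(2.0))) * t_peak * fwhm

area = gaussian_integrated_intensity(2.9, 1.45)  # K km/s, from the fit parameters above
```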
The O2 119 GHz Odin line shares its LSR velocity with that of the intensity maxima in C18O and N2H+ (this paper and Di Francesco et al. 2004, 2009). It seems therefore reasonable to identify the location of the predominant O2 emission with the central parts of the cold core ρ Oph A, i.e. the region including P 2 and P 3 (SM 1N and SM 1, respectively), with a probable extent on the 30″ to 1′ scale.
5 Conclusions
Summarising, we briefly conclude the following:
C18O (3-2) mapping observations with APEX of a region in ρ Oph A have revealed a complex radial velocity field. The central region has been spatially sampled at the Nyquist frequency.
The source of the O2 119 GHz line emission appears confined to a particular region (SM 1), which is also a prime emitter in C18O and N2H+.
The observation of O18O toward SM 1 (P 3) and SM 1N (P 2) resulted in upper limits. Combined with the C18O data and for temperatures below 30 K, this leads to a limit on the ratio N(O18O)/N(C18O).
From the O2 and O18O observations we infer a limit on the O2 abundance.
The O2 source is likely relatively compact, on the arcminute or smaller scale, and should become readily detectable by upcoming Herschel HIFI observations.
Acknowledgements
We wish to thank Cathy Horellou and Daniel Johansson for making part of the APEX observations.
Footnotes
... A
Based on observations with APEX, Llano Chajnantor, Chile.
...
Data cubes of Figs. 3 and 4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/510/A98
... missions
SWAS in 1998, see http://cfa-www.harvard.edu/swas/, and Odin in 2001, see http://www.snsb.se/eng_odin_intro.shtml
... (B1950)
This corresponds to 16 26 26.4, -24 22 33 in J2000 coordinates and is at (+25″, +80″) relative to the origin of the map (Fig. 3).
... Chile
http://www.apex-telescope.org/
... preparation)
The 234 GHz spectra also covered lines of deuterated formaldehyde. Mapping observations revealed this peak position.
... them
In addition, for their line of sight, Federman et al. (2003) also give .
... beam
Odin observations resulted in a column density of oxygen cm-2 (Larsson et al. 2007). If , the beam filling of the O2 source is 10-3 to 10-2, i.e. the beam corrected cm-2. If / , then the expected column density of isotopic oxygen is likely within cm-2 cm-2 for a source of size about 20 to 60 .
... Observatory
http://herschel.esac.esa.int/
... density
This column density, which represents an average over ten arcminutes, implies an H2 column density, a value which has been derived also by other means (Larsson et al. 2007, and references therein). At the adopted distance of 120 pc, this translates into an H2 mass of the cloud of 30 solar masses. Not totally unexpectedly, most of the mass would be contributed on larger scales (cf., e.g., Maruta et al. 2009; Motte et al. 1998).
Copyright ESO 2010
Dear Professor Mean, I know you told me that when one of the row probabilities in a two by two table is 0% or when one of the row probabilities is 100%, then the odds ratio is either 0 or infinity. But how do I tell which?
It depends on how you define odds and what you place in the numerator versus the denominator. Here’s a simple example to orient yourself.
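To make the convention concrete, here is a small Python sketch (the 2×2 counts are invented for illustration; row 1's odds go in the numerator):

```python
import math

def odds_ratio(a, b, c, d):
    # 2x2 table: row 1 = (a events, b non-events), row 2 = (c, d).
    # By this convention, row 1's odds go in the numerator.
    odds1 = a / b if b else math.inf   # a 100% row probability gives infinite odds
    odds2 = c / d if d else math.inf
    if odds2 == 0:
        return math.inf
    return odds1 / odds2

print(odds_ratio(0, 10, 5, 5))   # row 1 probability is 0%  -> odds ratio 0.0
print(odds_ratio(10, 0, 5, 5))   # row 1 probability is 100% -> odds ratio inf
```

Swap which row goes in the numerator and the 0 becomes infinity, which is exactly why the convention matters.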
F. Berland Beauty
time limit per test
3 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
There are $n$ railway stations in Berland. They are connected to each other by $n-1$ railway sections. The railway network is connected, i.e. can be represented as an undirected tree.
You have a map of that network, so for each railway section you know which stations it connects.
Each of the $n-1$ sections has some integer value of the scenery beauty. However, these values are not marked on the map and you don't know them. All these values are from $1$ to $10^6$ inclusive.
You asked $m$ passengers some questions: the $j$-th one told you three values:
• his departure station $a_j$;
• his arrival station $b_j$;
• minimum scenery beauty along the path from $a_j$ to $b_j$ (the train is moving along the shortest path from $a_j$ to $b_j$).
You are planning to update the map and set some value $f_i$ on each railway section — the scenery beauty. The passengers' answers should be consistent with these values.
Print any valid set of values $f_1, f_2, \dots, f_{n-1}$ with which the passengers' answers are consistent, or report that no such set exists.
Input
The first line contains a single integer $n$ ($2 \le n \le 5000$) — the number of railway stations in Berland.
The next $n-1$ lines contain descriptions of the railway sections: the $i$-th section description is two integers $x_i$ and $y_i$ ($1 \le x_i, y_i \le n, x_i \ne y_i$), where $x_i$ and $y_i$ are the indices of the stations connected by the $i$-th railway section. All railway sections are bidirectional. Each station can be reached from any other station by railway.
The next line contains a single integer $m$ ($1 \le m \le 5000$) — the number of passengers which were asked questions. Then $m$ lines follow, the $j$-th line contains three integers $a_j$, $b_j$ and $g_j$ ($1 \le a_j, b_j \le n$; $a_j \ne b_j$; $1 \le g_j \le 10^6$) — the departure station, the arrival station and the minimum scenery beauty along his path.
Output
If there is no answer then print a single integer -1.
Otherwise, print $n-1$ integers $f_1, f_2, \dots, f_{n-1}$ ($1 \le f_i \le 10^6$), where $f_i$ is some valid scenery beauty along the $i$-th railway section.
If there are multiple answers, you can print any of them.
Examples
Input
4
1 2
3 2
3 4
2
1 2 5
1 3 3
Output
5 3 5
Input
6
1 2
1 6
3 1
1 5
4 1
4
6 1 3
3 4 1
6 5 2
1 2 5
Output
5 3 1 2 1
Input
6
1 2
1 6
3 1
1 5
4 1
4
6 1 1
3 4 3
6 5 3
1 2 4
Output
-1
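One standard approach, sketched here in Python for illustration (not an official editorial): every edge on passenger $j$'s path must be at least $g_j$, so set each edge to the largest such lower bound (defaulting to 1) and then verify each query. With $n, m \le 5000$, walking each path explicitly is fast enough. Since any valid assignment is accepted, the sketch's output may differ from the samples.

```python
from collections import deque

def solve(n, edges, queries):
    # adjacency list: station -> list of (neighbour, edge index)
    adj = [[] for _ in range(n + 1)]
    for i, (x, y) in enumerate(edges):
        adj[x].append((y, i))
        adj[y].append((x, i))

    def path_edges(a, b):
        # BFS from a over the tree, then walk parents back from b
        parent = [None] * (n + 1)
        parent[a] = (a, -1)
        q = deque([a])
        while q:
            u = q.popleft()
            for v, ei in adj[u]:
                if parent[v] is None:
                    parent[v] = (u, ei)
                    q.append(v)
        out = []
        while b != a:
            b, ei = parent[b]
            out.append(ei)
        return out

    f = [1] * (n - 1)
    paths = []
    for a, b, g in queries:
        p = path_edges(a, b)
        paths.append((p, g))
        for ei in p:               # every edge on the path must be >= g
            f[ei] = max(f[ei], g)
    for p, g in paths:             # check that the path minimum is exactly g
        if min(f[ei] for ei in p) != g:
            return None
    return f
```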
Time limit: 1 s · Memory limit: 512 MB · Submissions: 0 · Accepted: 0 · Solvers: 0 · Acceptance rate: 0.000%
## Problem
Given a guest graph G and a host graph H with the same number of vertices, the graph embedding problem is to find a one-to-one correspondence σ from the vertex set V(G) of G to the vertex set V(H) of H, along with a mapping from each edge in the edge set E(G) of G to a path in H. Many applications can be modeled as graph embedding. In particular, graph embedding has long been used to model the problem of arranging a parallel algorithm in a parallel architecture.
The quality of an embedding can be measured by certain cost criteria. Among others, the dilation is the maximum of the lengths of the paths to which the edges of G are mapped. If the host graph H is a tree, in which any two vertices are joined by a unique path, an edge (u, v) of G is necessarily mapped to the unique path in H joining σ(u) and σ(v). So the dilation of an embedding σ of graph G into tree H can simply be represented as max_{(u,v)∈E(G)} d_H(σ(u), σ(v)), where d_H(σ(u), σ(v)) denotes the distance between σ(u) and σ(v) in H. The dilation of the embedding shown in Figure H.1 below, for example, is three.
Figure H.1: An embedding σ of a path graph into a tree both with 12 vertices, where σ can be written in two-line notation $\begin{pmatrix} 1 & 2 &3&4&5&6&7&8&9&10&11&12 \\ 7&6&5&4&1&2&3&8&9&11&12&10 \end{pmatrix}$, meaning σ(1) = 7, σ(2) = 6, σ(3) = 5, …, and σ(12) = 10.
We are concerned with the problem of embedding of a path graph into a tree, where a path graph is a tree that has at most two leaves. Given an embedding of a path graph into a tree, your job is to write an efficient program that finds the dilation of the embedding.
## Input
Your program is to read from standard input. The first line contains an integer, n, representing the number of vertices of the host graph H which forms a tree, where 2 ≤ n ≤ 100,000. It is followed by n − 1 lines, each contains two positive integers u and v that represent an edge between vertex u and vertex v of H. It is assumed that the vertices are indexed from 1 to n. The last line contains an ordering σ(1), σ(2), …, σ(n) of the vertices of H, which represents the embedding of a path graph G into H, where V(G) = {1, 2, … , n} and E(G) = {(u, v) ∶ v = u + 1}.
## Output
Your program is to write to standard output. Print exactly one line which contains an integer. The integer should be the dilation of the given embedding if the dilation is three or less; the integer should be 99 otherwise.
## Sample Input 1
12
4 1
4 2
4 3
4 7
5 6
6 7
7 8
8 9
9 10
11 10
12 10
7 6 5 4 1 2 3 8 9 11 12 10
## Sample Output 1
3
## Sample Input 2
4
1 2
4 3
2 3
4 2 3 1
## Sample Output 2
2
## Sample Input 3
7
1 2
4 3
2 3
5 7
6 5
4 5
7 6 1 2 3 4 5
## Sample Output 3
99
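A direct way to compute the answer (a Python sketch, simple rather than worst-case optimal): run a BFS truncated at depth 3 for each consecutive pair of the embedding, stopping with 99 as soon as any pair is farther apart than 3.

```python
def dilation(n, edges, sigma):
    # build the adjacency list of the host tree H
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def dist_capped(src, dst, cap=3):
        # breadth-first search truncated at depth cap;
        # returns cap + 1 if dst is farther than cap
        if src == dst:
            return 0
        seen = {src}
        frontier = [src]
        for d in range(1, cap + 1):
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w == dst:
                        return d
                    if w not in seen:
                        seen.add(w)
                        nxt.append(w)
            frontier = nxt
        return cap + 1

    best = 0
    for i in range(n - 1):                       # edges (i, i+1) of the path graph G
        d = dist_capped(sigma[i], sigma[i + 1])
        if d > 3:
            return 99                            # dilation exceeds three: stop early
        best = max(best, d)
    return best
```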
# Bottom-Up Form of Top-Down Grammar defines same Formal Language
## Theorem
Let $\mathcal L$ be a formal language.
Let $\mathcal T$ be a top-down grammar for $\mathcal L$.
Let $\mathcal B$ be the bottom-up form of $\mathcal T$.
Then $\mathcal B$ is also a formal grammar for $\mathcal L$.
## Proof
Let $\phi$ be a $\mathcal B$-WFF.
If $\phi$ is a letter, then it is clearly a $\mathcal T$-WFF.
For, it may be formed by replacing the starting metasymbol of $\mathcal T$ by $\phi$.
Suppose that $\phi$ is formed from WFFs $\phi_1, \ldots, \phi_n$ by the rule of formation $\mathbf R_{\mathcal B}$ of $\mathcal B$.
Suppose also that each of $\phi_1, \ldots, \phi_n$ is also a $\mathcal T$-WFF.
Then by applying the corresponding rule of formation $\mathbf R$ of $\mathcal T$, we obtain a collation with metasymbols $\psi_1, \ldots, \psi_n$.
By assumption, we can apply rules of formation of $\mathcal T$ to each $\psi_i$ to yield the corresponding $\mathcal T$-WFF $\phi_i$.
It follows that $\phi$ is also a $\mathcal T$-WFF.
By the Principle of Structural Induction, each $\mathcal B$-WFF is also a $\mathcal T$-WFF.
Conversely, suppose that $\phi$ is a $\mathcal T$-WFF.
Let it be formed by applying the rules of formation $\mathbf R_1, \ldots, \mathbf R_n$, in succession.
We will prove that each metasymbol in the inputs of these rules of formation will be replaced by a $\mathcal B$-WFF.
In particular, then, $\phi$ will be a $\mathcal B$-WFF.
Since $\mathbf R_n$ is the last rule of formation applied, it must replace a metasymbol by a letter.
Since letters are $\mathcal B$-WFFs, $\mathbf R_n$ satisfies the assertion.
Suppose that we have established that all metasymbols in the result of $\mathbf R_i$ will be replaced by $\mathcal B$-WFFs.
Then also the metasymbol $\psi$ that $\mathbf R_i$ replaces by a new collation will be replaced by a WFF.
This is because the collation resulting from replacing $\psi$ according to $\mathbf R_i$ contains metasymbols $\psi_i$, which by assumption will be replaced by corresponding $\mathcal B$-WFFs $\phi_i$.
Now the rule of formation $\left({\mathbf R_i}\right)_{\mathcal B}$ ensures that $\psi$ will be replaced by a $\mathcal B$-WFF.
So each metasymbol $\psi$ in the input of $\mathbf R_i$ will be replaced by a $\mathcal B$-WFF.
Hence $\phi$ is a $\mathcal B$-WFF.
$\blacksquare$
## Remark
This theorem establishes that any formal language has a bottom-up grammar.
We may therefore assume any formal language to be given by a bottom-up grammar, which provides conceptual simplicity.
# Implementing Dynamic memory networks
The Allen Institute for Artificial Intelligence has organized a four-month contest on Kaggle on question answering. The aim is to create a system that can correctly answer questions from the 8th grade science exams of US schools (biology, chemistry, physics, etc.). DeepHack Lab organized a scientific school plus hackathon devoted to this contest in Moscow. Our team decided to use this opportunity to explore deep learning techniques for question answering (although they seem to be far behind traditional systems). We tried to implement Dynamic Memory Networks, described in a paper by A. Kumar et al. Here we report some preliminary results. In the next blog post we will describe the techniques we used to get into the top 5% of the contest.
## Contents
The questions of this contest are quite hard: they not only require lots of knowledge in the natural sciences, but also the ability to make inferences, generalize concepts, apply general ideas to examples, and so on. Methods based on deep learning do not seem to be mature enough to handle all of these difficulties. On the other hand, each question comes with 4 answer candidates. That is why, as Dr. Vorontsov noted, a simple search engine indexed over a large document collection will perform better as a question answering system than any “intelligent” system.
But there is already some work on creating question answering / reasoning systems using neural approaches. As another lecturer of the DeepHack event, Tomas Mikolov, told us, we should start from easy, even synthetic questions and try to gradually increase the difficulty. This roadmap towards building intelligent question answering systems is described in a paper by Facebook researchers Weston, Bordes, Chopra, Rush, Merriënboer and Mikolov, where the authors introduce a benchmark of toy questions called bAbI tasks which test several basic reasoning capabilities of a QA system.
Questions in the bAbI dataset are grouped into 20 types, each of them has 1000 samples for training and another 1000 samples for testing. A system is said to have passed a given task, if it correctly answers at least 95% of the questions in the test set. There is also a version with 10K samples, but as Mikolov told during the lecture, deep learning is not necessarily about large datasets, and in this setting it is more interesting to see if the systems can learn answering questions by looking at a few training samples.
Some of the bAbI tasks. More examples can be found in the paper.
## Memory networks
bAbI tasks were first evaluated on an LSTM-based system, which achieves about 50% accuracy on average and does not pass any task. The authors of the paper then try Memory Networks (MemNN) by Weston et al.: a recurrent network with a long-term memory component to which it can learn to write data (the input sentences) and from which it can read them later.
bAbI tasks include not only the answers to the questions but also the numbers of the sentences that help answer each question. This information is taken into account when training MemNN: the networks get not only the correct answers but also information about which input sentences affect the answer. Under this so-called strongly supervised setting, “plain” Memory Networks pass 7 of the 20 tasks. The authors then apply some modifications and pass 16 tasks.
The structure of MemN2N from the paper.
We are mostly interested in weakly supervised setting, because the additional information on important sentences is not available in many real scenarios. This was investigated in a paper by Sukhbaatar, Szlam, Weston and Fergus (from New York University and Facebook AI Research) where they introduce End-to-end memory networks (MemN2N). They investigate many different configurations of these systems and the best version passes 9 tasks out of 20. Facebook’s MemN2N repository on GitHub lists some implementations of MemN2N.
## Dynamic memory networks
Another advance in the direction of memory networks was made by Kumar, Irsoy, Ondruska, Iyyer, Bradbury, Gulrajani and Socher from MetaMind. By the way, Richard Socher is the author of an excellent course on deep learning and NLP at Stanford, which helped us a lot to get into the topic. Their paper introduces a new system called Dynamic Memory Networks (DMN), which passes 18 bAbI tasks in the strongly supervised setting. The paper does not discuss the weakly supervised setting, so we decided to implement DMN from scratch in Theano.
High-level structure of DMN from the paper.
### Semantic memory
The input of the DMN is the sequence of word vectors of the input sentences. We followed the paper, used pretrained GloVe vectors, and added the dimensionality of the word vectors to the list of hyperparameters (controlled by the command line argument --word_vector_size). The DMN architecture treats these vectors as part of a so-called semantic memory (in contrast to the episodic memory), which may contain other knowledge as well. Our implementation uses only word vectors and does not fine-tune them during training, so we don't consider them part of the neural network.
### Input module
The first module of DMN is an input module that is a gated recurrent unit (GRU) running on the sequence of word vectors. GRU is a recurrent unit with 2 gates that control when its content is updated and when its content is erased. The hidden state of the input module is meant to represent the input processed so far in a vector. Input module outputs its hidden states either after every word (--input_mask word) or after every sentence (--input_mask sentence). These outputs are called facts.
Formal definition of the GRU: z is the update gate and r is the reset gate.
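The gate equations in the caption can be written out in a few lines of NumPy (a sketch; the parameter names in `p` are our own):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, p):
    # z: update gate, r: reset gate, h_tilde: candidate hidden state
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev + p["bz"])
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev + p["br"])
    h_tilde = np.tanh(p["W"] @ x + p["U"] @ (r * h_prev) + p["b"])
    return (1 - z) * h_prev + z * h_tilde   # interpolate old state and candidate
```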
Then there is a question module that processes the question word by word and outputs one vector at the end. This is done by the same GRU as in the input module, with the same weights.
### Episodic memory
The fact and question vectors extracted from the input enter the episodic memory module. Episodic memory is basically a composition of two nested GRUs. The outer GRU generates the final memory vector working over a sequence of so called episodes. This GRU state is initialized by the question vector. The inner GRU generates the episodes.
Details of DMN architecture from the paper.
The inner GRU generates the episodes by passing over the facts from the input module. When updating its inner state, the GRU takes into account the output of an attention function on the current fact. The attention function gives a score (between 0 and 1) to each fact, and the GRU (softly) ignores the facts with low scores. The attention function is a simple two-layer neural network depending on the question vector, the current fact, and the current state of the memory. After each full pass over all facts, the inner GRU outputs an episode, which is fed into the outer GRU, which in its turn updates the memory. Because of the updated memory, the attention may then give different scores to the facts, so new episodes can be created. The number of steps of the outer GRU, that is, the number of episodes, can be determined dynamically, but we fix it to simplify the implementation. It is configured by the --memory_hops setting.
All facts, episodes and memories are in the same n-dimensional space, which is controlled by the command line argument --dim. Inner and outer GRUs share their weights.
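The attention-gated pass described above can be sketched as follows (our own simplification: a plain tanh cell stands in for the inner GRU, and the interaction features are one common choice, not necessarily the paper's exact ones):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def episode_pass(facts, q, m, p):
    # One inner pass over the facts; the attention score g in (0, 1) gates
    # how much each fact updates the episode state.
    h = np.zeros_like(q)
    for f in facts:
        # interaction features between fact f, question q, and current memory m
        z = np.concatenate([f * q, f * m, np.abs(f - q), np.abs(f - m)])
        g = sigmoid(p["w2"] @ np.tanh(p["W1"] @ z + p["b1"]) + p["b2"])
        h_cand = np.tanh(p["Wf"] @ f + p["Uh"] @ h)
        h = g * h_cand + (1 - g) * h   # low-scoring facts are softly ignored
    return h
```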
### Answer module
The final state of the memory is fed into the answer module, which produces the answer. We have implemented two kinds of answer modules. The first is a simple linear layer on top of the memory vector with softmax activation (--answer_module feedforward). This is useful if each answer is just one word (as in the bAbI dataset). The second kind of answer module is another GRU that can produce multiple words (--answer_module recurrent). Its implementation is half-baked for now, as we didn't need it for bAbI.
The whole system is end-to-end differentiable and is trained using stochastic gradient descent. We use adadelta by default. More formulas and details of architecture can be found in the original paper. But the paper does not contain many implementation details, so we may have diverged from the original implementation.
## Initial experiments
We have tested this system on bAbI tasks with a few randomly selected hyperparameters. We initialized the word vectors by using 50-dimensional GloVe vectors trained on Wikipedia. Answer module is a simple feedforward classifier over the vocabulary (which is very limited in bAbI tasks). Here are the results.
First two columns are for strongly supervised systems MemNN and DMN. Third column is the best results of MemN2N. The last 3 columns are our results with different dimensions of the memory.
The first basic observation is that weakly supervised systems are generally worse than strongly supervised ones. Compared to MemN2N, our system performs much worse on tasks 2, 3 and 16. As a result we pass only 7 tasks out of 20. On the other hand, our results on tasks 5, 6, 8, 9, 10 and 18 are better than MemN2N's. Surprisingly, our result on the 17th task is even better than the strongly supervised systems'!
Our system converges very fast on some of the tasks (like the first one), overfits on many other tasks and does not converge on tasks 2, 3 and 19.
19th task (path finding) is not solved by any of these systems. Wojciech Zaremba from OpenAI informed us during his lecture about one system which managed to solve it using 10K training samples. This remains a very interesting challenge for us. We need to carefully experiment with various parameters to reach some meaningful conclusions.
We have also tried training on the full shuffled set of 20,000 bAbI training samples (all 20 tasks together). We couldn't reach 60% average accuracy after 50 hours of training on an Amazon instance, while the MemN2N authors report 87.6% accuracy.
This implementation of DMN is available on GitHub. We would really appreciate feedback on this code.
## Next steps
• We need a good way to visualize the attention in the episodic memory. This will help us understand what is exactly going on inside the system. Many papers now include such visualizations on some examples.
• Our model overfits on many of the tasks even with 25-dimensional memory. We briefly experimented with L2 regularization but it didn’t help much (--l2).
• Currently we are working on a slightly modified architecture which will be optimized for multiple choice questions. Basically it will include one more input module which will read the answer choices and will provide another input for the attention mechanism.
• Then we will be able to evaluate our code on more complex QA datasets like MCTest.
• Training with batches is not properly implemented yet. There are several technical challenges related to the variable length of input sequences, and this kind of bug in Theano makes it much harder to keep things under control.
We would like to thank the organizers of DeepHack.Q&A for the really amazing atmosphere here in PhysTech.
# Paula Corporation was authorized to issue 28,000 shares of common stock. Record the journal entry...
Paula Corporation was authorized to issue 28,000 shares of common stock. Record the journal entry for each of the following independent situations, assuming Paula issues 5,600 shares at $9 on July 20, 201X: a. Common stock has an $8 par value. b. Common stock has no par and no stated value. c. Common stock is no-par stock with a stated value of $7.
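A quick arithmetic sketch of the amounts involved (the standard treatment for par, no-par, and stated-value issues; account titles are the usual ones, not taken from a textbook solution):

```python
shares, issue_price = 5_600, 9
cash = shares * issue_price     # Dr Cash 50,400 in all three cases

# a) $8 par value: credit Common Stock at par, excess to paid-in capital
par_cs = shares * 8             # Cr Common Stock                              44,800
par_pic = cash - par_cs         # Cr Paid-in Capital in Excess of Par           5,600

# b) no-par, no stated value: the full proceeds go to Common Stock
nopar_cs = cash                 # Cr Common Stock                              50,400

# c) no-par with $7 stated value: credit at stated value, excess to paid-in capital
stated_cs = shares * 7          # Cr Common Stock                              39,200
stated_pic = cash - stated_cs   # Cr Paid-in Capital in Excess of Stated Value 11,200
```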
## Tuesday, February 12, 2008
### Two interesting documents from the IMF
The IMF released two documents on 4 February: India: 2007 Article IV Consultation, and India: Selected Issues.
### Selected Issues
While a quick search on the IMF website shows many "Selected Issues" documents, I had not noticed this product type earlier. It turns out to be a small edited book containing seven papers about India:
1. Competitiveness and Exchange Rate Policy by Hiroko Oura, Petia Topalova, Andrea Richter-Hume, and Charles Kramer;
2. Challenges to Monetary Policy from Financial Globalization: The Case of India by Charles F. Kramer, Helene K. Poirson and A. Prasad;
3. Monetary Policy Communication and Transparency by Helene K. Poirson;
4. Financial Development and Growth in India: A Growing Tiger in a Cage? by Hiroko Oura and Renu Kohli;
5. Developing the Foreign Exchange Derivatives Market by Andreas Jobst;
6. Inclusive Growth by Petia Topalova;
7. India's Social Protection Framework by Andrea Richter Hume.
Of these, three stood out for me.
It has been previously noted that the RBI fares very badly in international comparisons of central bank transparency. Further, while central banks worldwide have improved over the last decade, RBI has stagnated. The paper by Helene K. Poirson is an outstanding how-to manual on how RBI's transparency can be improved. It is sensible, well written and immediately actionable. It reviews the recent difficulties of monetary policy from the viewpoint of communication strategy, and draws on these episodes to propose solutions. I hope RBI is able to implement all this right away. Everyone interested in Indian monetary economics should read this article.
The paper by Hiroko Oura and Renu Kohli was also most interesting to me. There is a vast literature based on the CMIE firm-level database; this one stands out as obtaining interesting answers to interesting questions. Specifically, it sheds light on the areas where the Indian financial sector does or does not deliver the goods in terms of financing of firms.
The paper by Jobst on currency derivatives is an excellent policy paper, one that is particularly timely given that RBI is presently engaged in trying to ensure that a currency futures market does not succeed.
### Article IV Consultation document
I am generally cynical about Article IV documents. Too often, they are suffused with bureaucratic triumphalism, with sentences of the form "Under the steady guidance of the great leader, the peasants and workers reaped a glorious harvest". The IMF is forced to praise India's deft handling of macroeconomic policy in every alternate paragraph. If your tastes run to "ruthless truth-telling", the result of the Article IV process is often not interesting.
However, this time, the document is well worth reading, particularly if you're able to ignore the platitudes. It gives the reader a good grip of the overall macroeconomic situation, and a sound perspective on the difficulties of both fiscal and monetary policy. It struck me that there isn't an Indian effort of this genre out there.
# If a line is drawn parallel to the y-axis through the point (4,2), then what would its equation be?
Nov 21, 2016
$x = 4$
#### Explanation:
A line parallel to the y-axis passes through all points in the plane with the same x-coordinate. For this reason its equation is
$\textcolor{red}{\overline{\underline{| \textcolor{w h i t e}{\frac{2}{2}} \textcolor{b l a c k}{x = c} \textcolor{w h i t e}{\frac{2}{2}} |}}}$
where c is the value of the x-coordinate of the points it passes through.
The line passes through the point $\left(\textcolor{red}{4} , 2\right)$
$\Rightarrow x = 4 \text{ is the equation}$
graph{y-1000x+4000=0 [-10, 10, -5, 5]}
Vuforia On Unity3D: Usage of Camera Focus modes?
I'm fidgeting with AR a bit, and reading through Vuforia's documentation I've found about the Camera Focus Modes.
I'm using a DSLR through SparkoCam to do some tests, and have found that using this on my "CameraController" script:
if(!CameraDevice.Instance.SetFocusMode(CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO))
{
Debug.LogError("Device does not support AF.");
}
always reports that the device supports AF (the error branch is never hit), even if I disable AF on the camera, because it seems SparkoCam has no way to tell the computer that AF is off.
So, I guess I'm a bit lost here. I can't query the CameraDevice for which Focus Mode is being used, and the article does not talk in depth about its differences (besides that "you have to touch the screen to refocus", for instance). Should be considered a good practice something along the lines of (Pseudocode incoming):
if (!setFocusModeContinuous()) {
if (!setFocusModeNotContinuous()) {
/* ... */
Debug.LogError("Try to plug something that is not a pinhole camera, please");
closeProgram();
}
}
or does Vuforia already handle this, so that I can effectively ignore it unless I need some specific focus mode for whatever reason?
Next: Spectral factorization Up: The square root of Previous: Root-finding recursions
## The convergence rate
We can now analyze which of the particular choices of is more appropriate as far as the convergence rate is concerned.
If we consider the general form of the square root iteration
we can estimate the convergence rate by the difference between the actual estimation at step (n+1) and the analytical value . For the general case, we obtain
or
(7)
Figure 2: Convergence plots for the different recursive algorithms shown in Table 1.
The possible selections of the parameter from Table 1 clearly show that the recursions described in the preceding subsection generally have a linear convergence rate (that is, the error at step n+1 is proportional to the error at step n), but can converge quadratically for an appropriate selection of the parameter, as shown in (7). Furthermore, the convergence is faster when the parameter is closer to the true square root.
# Is the product of two positive semidefinite matrices positive semidefinite?
If $X$ and $W$ are real, square, symmetric, positive semidefinite matrices of the same dimension, does $XW + WX$ have to be positive semidefinite?
This is not homework.
-
The above comment is to an earlier version of the question and no longer applies. – Stefan Smith Oct 28 '12 at 17:06
To answer the second part of your question, the matrix $XW+WX$ need not be positive semidefinite. Let $$X = \left(\begin{array}{rr} 4 & 2 \\ 2 & 1 \end{array} \right).$$ Let $$W = \left(\begin{array}{rr} 4 & -2 \\ -2 & 1 \end{array} \right).$$ Let $$v = \left(\begin{array}{r}0\\1\end{array}\right).$$
Then $$v^T XW v + v^T W X v = \big( \ 2 \ 1 \ \big) \left(\begin{array}{rc}-2 \\1\end{array}\right) + \big(\,-\!2 \ 1 \ \big) \left(\begin{array}{c} 2 \\1\end{array}\right) = -6 < 0.$$
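The counterexample is easy to verify numerically; a NumPy sketch:

```python
import numpy as np

X = np.array([[4.0, 2.0], [2.0, 1.0]])
W = np.array([[4.0, -2.0], [-2.0, 1.0]])

# Both matrices are PSD: all eigenvalues are nonnegative
assert np.all(np.linalg.eigvalsh(X) >= -1e-12)
assert np.all(np.linalg.eigvalsh(W) >= -1e-12)

S = X @ W + W @ X
v = np.array([0.0, 1.0])
print(v @ S @ v)   # -6.0, so S has a negative eigenvalue and is not PSD
```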
-
Prof. Shor: I am honored that you took the time to answer my humble question. – Stefan Smith Oct 28 '12 at 14:40
@navigetor23 : My question was $tr(XW) \geq 0$, and I also wondered if $XW + WX$ had to be positive semidefinite. I discovered the answer to my first question online after I posted. You posted a better reason why $tr(XW) \geq 0$, but someone else proved that $XW + WX$ need not be positive semidefinite and I accepted their answer. Unfortunately I can only accept one answer. – Stefan Smith Oct 28 '12 at 16:54
@navigetor23 : I should have split it up into two questions. I didn't because I was expecting $XW + WX$ to be positive semidefinite. – Stefan Smith Oct 28 '12 at 17:02
@navigetor23 : I edited my question so it asked if $XW + WX$ had to be positive semidefinite. I will re-ask why $XW$ has to have nonnegative trace and accept your answer if you give the same answer as before. – Stefan Smith Oct 28 '12 at 17:08
If $A$ and $B$ are real, symmetric, and positive semidefinite, then $A=A^{1/2}A^{1/2}$, and the trace of $AB$ is the trace of $A^{1/2}A^{1/2}B$, which is the trace of $A^{1/2}BA^{1/2}$, which is a positive semidefinite matrix. Thus the trace of $AB$ is nonnegative.
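A numerical sanity check of this argument (a NumPy sketch with random PSD matrices; the symmetric square root is computed via eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A = A @ A.T                      # random symmetric PSD matrix
B = rng.standard_normal((3, 3))
B = B @ B.T

# Symmetric PSD square root of A via eigendecomposition
w, V = np.linalg.eigh(A)
Ah = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

t1 = np.trace(A @ B)
t2 = np.trace(Ah @ B @ Ah)   # cyclic property of the trace: same value
print(t1, t2)                # equal, and nonnegative
```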
-
How do you know $A^\frac{1}{2} A^\frac{1}{2}B$ and $A^\frac{1}{2} B A^\frac{1}{2}$ have the same trace, unless $A^\frac{1}{2}B$ is symmetric? – Stefan Smith Oct 28 '12 at 15:07
The cyclic property of trace: $\mathrm{Tr} ABC$ = $\mathrm{Tr} BCA$. – Peter Shor Oct 28 '12 at 15:09
Thanks again, Prof. Shor. – Stefan Smith Oct 28 '12 at 15:12
$$XW \sim ZX\underbrace{Z^{-1}Z}_IWZ^{-1} = DZWZ^{-1}$$
Since $X$ is symmetric, square, and positive semi-definite, $D = ZXZ^{-1}$ is diagonal for some $Z$ and has non-negative elements.
$ZWZ^{-1} \sim W$ has nonnegative trace (trace is invariant under similarity, and $W$ is square, symmetric, and positive semi-definite). There can be no negative values on the diagonal of $W$, and the following shows why.
Let $\mathbf{e}_i$ be the unit vector for the index $i$ that is the same index as the supposed negative diagonal value in $W$ (or any similarity to $W$). Then $\mathbf{e}_i^\top W \mathbf{e}_i = W_{ii} < 0$ which contradicts that $W$ is positive semidefinite.
-
The control signal functions of a 4-bit binary counter are given below (where X is “don't care”):
Clear | Clock | Load | Count | Function
1 | X | X | X | Clear to 0
0 | X | 0 | 0 | No change
0 | $\uparrow$ | 1 | X | Load input
0 | $\uparrow$ | 0 | 1 | Count next
The counter is connected as follows:
Assume that the counter and gate delays are negligible. If the counter starts at 0, then it cycles through the following sequence:
1. 0, 3, 4
2. 0, 3, 4, 5
3. 0, 1, 2, 3, 4
4. 0, 1, 2, 3, 4, 5
0
This question is actually an ambiguous question. How can we decide if it is synchronous counter or ripple counter ? In question they didn't mentioned anything about this..
Now come to these 2 questions asked in 2015 and 2016 , where they clearly mention synchronous counter https://gateoverflow.in/8054/gate2015-2_7 and https://gateoverflow.in/39670/gate-2016-1-8 and for this 2016 question Both 3 and 4 were given the correct answer in GATE that year by IISC .
This 2007 question should have both C and D as correct answer..
0
What is the significance of input "0011" ? Is it the initial state?
0
Yes, 0011 is given input , it is considered as initial state from the diagram .
0
Then why are incrementing a4a3a2a1 from 0000 ? It should be from 0011 right ? Or is the value given just to identify the initial value of clear and then we reset a3a2a1a0 back to 0000 ?
0
I think , yes , 0011 value given to identify the initial value of clear and then we reset a3a2a1a0 back to 0000 .
+6
@Bikram sir
Using the following 2 arguments can't we conclude that the counter is Synchronous::
1)
Assume that the counter and gate delays are negligible
If the counter delay is assumed to be negligible, then this counter is definitely a synchronous counter, because for an N-bit asynchronous counter the total propagation delay is N·Tp, where Tp is the individual delay of each flip-flop in the counter.
Also, since this is a synchronous counter, as soon as the output reaches 0101 the clear logic is activated and, assuming no delays, the output immediately becomes 0000, so we never get to see 5 at the counter output.
To see 5 in the output, we would either need the clear logic to trigger at 6, or need to add a delay to the clear logic equal to one clock period, so that the counter is cleared just before the next clock pulse arrives.
2)
the output 0101 of the counter isn't stable, i.e. the output lines will momentarily show 0101 but are immediately reset to 0. I have drawn this conclusion from the table, which states that when clear is 1 the counter ignores the clock and every other input and clears to 0.
I am under the impression that only the stable outputs of a counter can be considered as valid sequences.
+2
@VS
yes, i agree with both of your points.
It is synchronous counter and answer is C.. question is unambiguous too.
0
@VS
If possible, please write a separate answer with all these arguments for this question . Your all points are valid.
0
@bikram sir, @vs
what is the significance of input 0011 here . ??
0
0011 value given to identify the initial value of clear and then we reset a3a2a1a0 back to 0000
0
@ Bikram sir,
but Sir, it is not going to affect clr, because clr is connected to the outputs, not the inputs. According to me, at clk = 0 the counter value was 0011, at clk = 1 it resets to 0000, and then
when it reaches 0101 it clears.
0
The given i/p "0011" doesn't have any effect here.
0
@bikram sir .
. yes sir
thank you
0
Can someone please explain this question a little bit more?
0
@VS
If it were asynchronous (i.e. gate delays were not negligible), then would 5 be a part of the output?
Or would it still not be, because it is unstable,
i.e. even though 5 is displayed in the output it will be cleared after the delay, so the answer would still be option C?
Remark:
1. If clear = 1, the counter is reset to 0000 without any delay, and the counter does not count 5.
2. If load = 1, the counter is loaded with the input 0011; note that the counter does count 5 in this case, unlike with the clear input.
3. The counter counts from 0 to 4.
4. Clear and Load are direct inputs, meaning they can be applied to the counter without any clock pulse.
Whenever A4 A3 A2 A1 = 0101, the clear line is enabled, since A3 and A1 are set.
The given table says that whenever the clear control signal is set, the counter clears to 0000 before the current clock cycle completes.
So 5 is cleared to 0 in the same clock cycle,
and the counter sequence is 0, 1, 2, 3, 4.
Hence option C.
0
Are you using the line "Assume that the counter and gate delays are negligible" to decide that the output is cleared in the same clock cycle?
+3
Yes, as the counter is synchronous, it will count 5 too.
An asynchronous counter would count only till 4.
0
@praveen sir,
I agree with you in the general case.
But please check the given function table: here clear has the highest priority, and it is applied to the counter irrespective of the clock signal. Observe the don't-cares when clear is set to 1.
0
But the question is different; the condition on which we get clear = 1
arises only when we reach state 0101.
+1
We do get clear when we reach 0101, true, but that state won't survive until the next clock: before the next clock pulse the value is reset to zero. When the next clock is applied, the counter goes to 0001, because in the previous cycle itself the clear wiped the counter output to zero.
+9
Praveen sir, I think C should be the answer, as the output 0101 from the counter isn't stable, i.e. the output lines will show 0101 but will immediately be reset to 0. I have drawn this conclusion from the table, which states that when clear is 1, the counter ignores the clock and every other input and clears to 0 (as also stated by pramod).
I am under the impression that only the stable outputs of a counter can be considered valid sequence states. Is this assumption wrong? Please advise.
–1
Since the input state 0011 is initially given in the question, can't we start from there?
Like:
0011 (3)
0100 (4)
0101 (5) --> clear set, so 0000 (0)
So can't we take the sequence as 3, 4, 0?
+1
"Assume that the counter and gate delays are negligible"
If the counter delay is assumed to be negligible, then this counter is definitely a synchronous counter, because for an N-bit asynchronous counter the total propagation delay is N*Tp, where Tp is the individual delay of each flip-flop in the counter.
Also, since this is a synchronous counter, as soon as the output reaches 0101 the clear logic is activated and, assuming no delays, this quickly makes the output 0000, so we never get to see 5 as an output of the counter.
To see 5 as the output, we would either need to make the clear logic trigger at 6, or add a delay to the clear logic equal to 1 clock period; then, just before the next clock pulse arrives, the clear logic switches on and resets the counter to 0.
+1
What is the significance of inputs "0011" shown in the question ? Is it the initial state ?
0
What is the significance of inputs 0011?
+1
@rahul, when Load is set to $1$, the input $0011$ is loaded to the outputs. But you can see that Load is set to 0 here and never changes throughout the operation, so this input is not used at all.
Using the following 2 arguments, we can conclude that the counter is synchronous:
1)
"Assume that the counter and gate delays are negligible"
If the counter delay is assumed to be negligible, then this counter is definitely a synchronous counter, because for an N-bit asynchronous counter the total propagation delay is N*Tp, where Tp is the individual delay of each flip-flop in the counter.
Also, since this is a synchronous counter, as soon as the output reaches 0101 the clear logic is activated and, assuming no delays, this quickly makes the output 0000, so we never get to see 5 as an output of the counter.
To see 5 as the output, we would either need to make the clear logic trigger at 6, or add a delay to the clear logic equal to 1 clock period; then, just before the next clock pulse arrives, the clear logic switches on and resets the counter to 0.
2)
The output 0101 from the counter isn't stable, i.e. the output lines will show 0101 but will immediately be reset to 0. I have drawn this conclusion from the table, which states that when clear is 1, the counter ignores the clock and every other input and clears to 0.
I am under the impression that only the stable outputs of a counter can be considered valid sequence states.
So, 5 is cleared to 0 in the same clock cycle,
and the counter sequence is 0, 1, 2, 3, 4.
Hence option C.
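The whole argument can be checked with a tiny simulation. This is a hypothetical sketch (the function names are mine), assuming the clear line is driven by A3 AND A1 and, per the function table, acts within the same clock cycle:

```python
def clear_line(state):
    # Clear is A3 AND A1 (A1 = LSB); counting up from 0, the first state
    # that activates it is 0101 = 5.
    return (state & 1) and ((state >> 2) & 1)

def visible_sequence(n_clocks, state=0):
    """States actually observable at the outputs over n_clocks clock pulses."""
    seen = []
    for _ in range(n_clocks):
        seen.append(state)
        state = (state + 1) & 0xF      # count up on the clock edge
        if clear_line(state):          # clear acts in the same cycle (no delay)
            state = 0
    return seen

print(visible_sequence(10))            # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]
```

The state 5 exists only transiently inside a cycle and never appears in the visible sequence, which is the point of the arguments above.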
0
What is the significance of 0011 at input?
0
No significance
0
@VS, can you explain the statement "we need to add delay to clear logic equal to 1 time period of clock pulse"?
Ans is D:
The given input is 0011.
A1 and A3 are 0, so clear = 0. Now, as per the last row of the table, the count advances; the counter was initially 0, so it becomes 1.
Now the output is 1, i.e. 0001. Here A1 AND A3 is 0, so clear is 0, and the counter advances to 2.
Now the output is 2, i.e. 0010. Here A1 AND A3 is 0, so clear is 0, and the counter advances to 3.
Now the output is 3, i.e. 0011. Here too A1 AND A3 is 0, so the counter becomes 4.
Now the output is 4, i.e. 0100. Here too A1 AND A3 is 0, so the counter becomes 5.
Now the output is 5, i.e. 0101. Here A1 AND A3 is 1, so clear becomes 1, and as per the first row of the table the counter resets to 0 again.
Hence the ans is D.
+3
Why does the given i/p "0011" not matter here? Why haven't we considered 0, 4, 5 as an answer? (Ignore the options for some time.)
Why?
0
"If the counter starts at 0, then it cycles through the following sequence:"is given in the question, i think in the diagram they just want to illustrate some state
## Odds and Probability: Commonly Misused Terms in Statistics – An Illustrative Example in Baseball
Yesterday, all 15 home teams in Major League Baseball won on the same day – the first such occurrence in history. CTV News published an article written by Mike Fitzpatrick from The Associated Press that reported on this event. The article states, “Viewing every game as a 50-50 proposition independent of all others, STATS figured the odds of a home sweep on a night with a full major league schedule was 1 in 32,768.” (Emphases added)
Screenshot captured at 5:35 pm Vancouver time on Wednesday, August 12, 2015.
Out of curiosity, I wanted to reproduce this result. This event is the intersection of 15 independent Bernoulli events, each with probability 0.5 of the home team winning.
$P[(\text{Winner}_1 = \text{Home Team}_1) \cap (\text{Winner}_2 = \text{Home Team}_2) \cap \ldots \cap (\text{Winner}_{15}= \text{Home Team}_{15})]$
Since all 15 games are assumed to be mutually independent, the probability of all 15 home teams winning is just
$P(\text{All 15 Home Teams Win}) = \prod_{i = 1}^{15} P(\text{Winner}_i = \text{Home Team}_i)$
$P(\text{All 15 Home Teams Win}) = 0.5^{15} = 0.00003051757$
Now, let’s connect this probability to odds.
It is important to note that
• odds is only applicable to Bernoulli random variables (i.e. binary events)
• odds is the ratio of the probability of success to the probability of failure
For our example,
$\text{Odds}(\text{All 15 Home Teams Win}) = P(\text{All 15 Home Teams Win}) \ \div \ P(\text{At least 1 Home Team Loses})$
$\text{Odds}(\text{All 15 Home Teams Win}) = 0.00003051757 \div (1 - 0.00003051757)$
$\text{Odds}(\text{All 15 Home Teams Win}) = 0.0000305185$
The above article states that the odds is 1 in 32,768. The fraction 1/32768 is equal to 0.00003051757, which is NOT the odds as I just calculated. Instead, 0.00003051757 is the probability of all 15 home teams winning. Thus, the article incorrectly states 0.00003051757 as the odds rather than the probability.
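The calculation above takes two lines in Python, and it also shows the neat fact that a probability of 1/32,768 corresponds to odds of exactly 1 to 32,767:

```python
p = 0.5 ** 15            # probability that all 15 home teams win
odds = p / (1 - p)       # odds = P(success) / P(failure)
print(p)                 # 3.0517578125e-05, i.e. exactly 1/32768
print(odds)              # about 3.05185e-05, i.e. 1/32767 - close, but not the probability
```

The two numbers are nearly equal only because the probability is tiny; for events that are not rare, probability and odds diverge substantially.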
This is an example of a common confusion between probability and odds that the media and the general public often make. Probability and odds are two different concepts and are calculated differently, and my calculations above illustrate their differences. Thus, exercise caution when reading statements about probability and odds, and make sure that the communicator of such statements knows exactly how they are calculated and which one is more applicable.
## Mathematical Statistics Lesson of the Day – Basu’s Theorem
Today’s Statistics Lesson of the Day will discuss Basu’s theorem, which connects the previously discussed concepts of minimally sufficient statistics, complete statistics and ancillary statistics. As before, I will begin with the following set-up.
Suppose that you collected data
$\mathbf{X} = X_1, X_2, ..., X_n$
in order to estimate a parameter $\theta$. Let $f_\theta(x)$ be the probability density function (PDF) or probability mass function (PMF) for $X_1, X_2, ..., X_n$.
Let
$t = T(\mathbf{X})$
be a statistic based on $\textbf{X}$.
Basu’s theorem states that, if $T(\textbf{X})$ is a complete and minimal sufficient statistic, then $T(\textbf{X})$ is independent of every ancillary statistic.
Establishing the independence between 2 random variables can be very difficult if their joint distribution is hard to obtain. This theorem allows the independence between a minimally sufficient statistic and every ancillary statistic to be established without their joint distribution – and this is the great utility of Basu’s theorem.
However, establishing that a statistic is complete can be a difficult task. In a later lesson, I will discuss another theorem that will make this task easier for certain cases.
## Mathematics and Applied Statistics Lesson of the Day – Contrasts
A contrast is a linear combination of a set of variables such that the sum of the coefficients is equal to zero. Notationally, consider a set of variables
$\mu_1, \mu_2, ..., \mu_n$.
Then the linear combination
$c_1 \mu_1 + c_2 \mu_2 + ... + c_n \mu_n$
is a contrast if
$c_1 + c_2 + ... + c_n = 0$.
There is a reason for why I chose to use $\mu$ as the symbol for the variables in the above notation – in statistics, contrasts provide a very useful framework for comparing multiple population means in hypothesis testing. In a later Statistics Lesson of the Day, I will illustrate some examples of contrasts, especially in the context of experimental design.
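In the meantime, here is a minimal numerical illustration (with hypothetical group means of my own choosing): the contrast that compares the first population mean with the average of the other two has coefficients $(1, -\tfrac{1}{2}, -\tfrac{1}{2})$, which sum to zero.

```python
mu = [10.0, 12.0, 14.0]          # hypothetical population means
c = [1.0, -0.5, -0.5]            # compare mu_1 with the average of mu_2 and mu_3
assert sum(c) == 0               # the defining property of a contrast
value = sum(ci * mi for ci, mi in zip(c, mu))
print(value)                     # 10 - (12 + 14)/2 = -3.0
```

A non-zero contrast value here suggests that the first mean differs from the average of the other two, which is exactly the kind of comparison used in hypothesis testing.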
## Mathematical Statistics Lesson of the Day – Complete Statistics
The set-up for today’s post mirrors my earlier Statistics Lesson of the Day on sufficient statistics.
Suppose that you collected data
$\mathbf{X} = X_1, X_2, ..., X_n$
in order to estimate a parameter $\theta$. Let $f_\theta(x)$ be the probability density function (PDF)* for $X_1, X_2, ..., X_n$.
Let
$t = T(\mathbf{X})$
be a statistic based on $\mathbf{X}$.
If
$E_\theta \{g[T(\mathbf{X})]\} = 0, \ \ \forall \ \theta,$
implies that
$P\{g[T(\mathbf{X})] = 0\} = 1,$
then $T(\mathbf{X})$ is said to be complete. To deconstruct this esoteric mathematical statement,
1. let $g(t)$ be a measurable function
2. if you want to use $g[T(\mathbf{X})]$ to form an unbiased estimator of the zero function,
3. and if the only such function is almost surely equal to the zero function,
4. then $T(\mathbf{X})$ is a complete statistic.
I will discuss the intuition behind this bizarre definition in a later Statistics Lesson of the Day.
*This above definition holds for discrete and continuous random variables.
## Christian Robert Shows that the Sample Median Cannot Be a Sufficient Statistic
I am grateful to Christian Robert (Xi’an) for commenting on my recent Mathematical Statistics Lessons of the Day on sufficient statistics and minimally sufficient statistics.
In one of my earlier posts, he wisely commented that the sample median cannot be a sufficient statistic. He has supplemented this by writing on his own blog to show that the median cannot be a sufficient statistic.
Thank you, Christian, for your continuing readership and contribution. It’s a pleasure to learn from you!
## Mathematical Statistics Lesson of the Day – Minimally Sufficient Statistics
In using a statistic to estimate a parameter in a probability distribution, it is important to remember that there can be multiple sufficient statistics for the same parameter. Indeed, the entire data set, $X_1, X_2, ..., X_n$, can be a sufficient statistic – it certainly contains all of the information that is needed to estimate the parameter. However, using all $n$ variables is not very satisfying as a sufficient statistic, because it doesn’t reduce the information in any meaningful way – and a more compact, concise statistic is better than a complicated, multi-dimensional statistic. If we can use a lower-dimensional statistic that still contains all necessary information for estimating the parameter, then we have truly reduced our data set without stripping any value from it.
Our saviour for this problem is a minimally sufficient statistic. This is defined as a statistic, $T(\textbf{X})$, such that
1. $T(\textbf{X})$ is a sufficient statistic
2. if $U(\textbf{X})$ is any other sufficient statistic, then there exists a function $g$ such that
$T(\textbf{X}) = g[U(\textbf{X})].$
Note that, if there exists a one-to-one function $h$ such that
$T(\textbf{X}) = h[U(\textbf{X})],$
then $T(\textbf{X})$ and $U(\textbf{X})$ are equivalent.
## Mathematical Statistics Lesson of the Day – Sufficient Statistics
*Update on 2014-11-06: Thanks to Christian Robert’s comment, I have removed the sample median as an example of a sufficient statistic.
Suppose that you collected data
$\mathbf{X} = X_1, X_2, ..., X_n$
in order to estimate a parameter $\theta$. Let $f_\theta(x)$ be the probability density function (PDF)* for $X_1, X_2, ..., X_n$.
Let
$t = T(\mathbf{X})$
be a statistic based on $\mathbf{X}$. Let $g_\theta(t)$ be the PDF of $T(\mathbf{X})$.
If the conditional PDF
$h_\theta(\mathbf{X}) = f_\theta(x) \div g_\theta[T(\mathbf{X})]$
is independent of $\theta$, then $T(\mathbf{X})$ is a sufficient statistic for $\theta$. In other words,
$h_\theta(\mathbf{X}) = h(\mathbf{X})$,
and $\theta$ does not appear in $h(\mathbf{X})$.
Intuitively, this means that $T(\mathbf{X})$ contains everything you need to estimate $\theta$, so knowing $T(\mathbf{X})$ (i.e. conditioning $f_\theta(x)$ on $T(\mathbf{X})$) is sufficient for estimating $\theta$.
Often, the sufficient statistic for $\theta$ is a summary statistic of $X_1, X_2, ..., X_n$, such as their
• sample mean
• sample median – removed thanks to comment by Christian Robert (Xi’an)
• sample minimum
• sample maximum
If such a summary statistic is sufficient for $\theta$, then knowing this one statistic is just as useful as knowing all $n$ data for estimating $\theta$.
*This above definition holds for discrete and continuous random variables.
## Mathematics and Mathematical Statistics Lesson of the Day – Convex Functions and Jensen’s Inequality
Consider a real-valued function $f(x)$ that is continuous on the interval $[x_1, x_2]$, where $x_1$ and $x_2$ are any 2 points in the domain of $f(x)$. Let
$x_m = 0.5x_1 + 0.5x_2$
be the midpoint of $x_1$ and $x_2$. Then, if
$f(x_m) \leq 0.5f(x_1) + 0.5f(x_2),$
then $f(x)$ is defined to be midpoint convex.
More generally, let’s consider any point within the interval $[x_1, x_2]$. We can denote this arbitrary point as
$x_\lambda = \lambda x_1 + (1 - \lambda)x_2,$ where $0 < \lambda < 1$.
Then, if
$f(x_\lambda) \leq \lambda f(x_1) + (1 - \lambda) f(x_2),$
then $f(x)$ is defined to be convex. If
$f(x_\lambda) < \lambda f(x_1) + (1 - \lambda) f(x_2),$
then $f(x)$ is defined to be strictly convex.
There is a very elegant and powerful relationship about convex functions in mathematics and in mathematical statistics called Jensen’s inequality. It states that, for any random variable $Y$ with a finite expected value and for any convex function $g(y)$,
$E[g(Y)] \geq g[E(Y)]$.
A function $f(x)$ is defined to be concave if $-f(x)$ is convex. Thus, Jensen’s inequality can also be stated for concave functions. For any random variable $Z$ with a finite expected value and for any concave function $h(z)$,
$E[h(Z)] \leq h[E(Z)]$.
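A quick Monte Carlo sanity check of the inequality (a sketch, not a proof), using the convex function $g(y) = y^2$ and exponentially distributed draws with $E(Y) = 1$ and $E(Y^2) = 2$:

```python
import random

random.seed(0)
ys = [random.expovariate(1.0) for _ in range(100_000)]   # E(Y) = 1, E(Y^2) = 2
mean_y = sum(ys) / len(ys)
lhs = sum(y * y for y in ys) / len(ys)   # estimates E[g(Y)]
rhs = mean_y ** 2                        # g[E(Y)]
print(lhs, ">=", rhs)                    # Jensen: E[g(Y)] >= g[E(Y)]
```

Here the gap between the two sides is roughly the variance of $Y$, since $E(Y^2) - [E(Y)]^2 = V(Y)$.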
In future Statistics Lessons of the Day, I will prove Jensen’s inequality and discuss some of its implications in mathematical statistics.
## Mathematical Statistics Lesson of the Day – The Glivenko-Cantelli Theorem
In 2 earlier tutorials that focused on exploratory data analysis in statistics, I introduced
There is actually an elegant theorem that provides a rigorous basis for using empirical CDFs to estimate the true CDF – and this is true for any probability distribution. It is called the Glivenko-Cantelli theorem, and here is what it states:
Given a sequence of $n$ independent and identically distributed random variables, $X_1, X_2, ..., X_n$,
$P[\lim_{n \to \infty} \sup_{x \in \mathbb{R}} |\hat{F}_n(x) - F_X(x)| = 0] = 1.$
In other words, the empirical CDF of $X_1, X_2, ..., X_n$ converges uniformly to the true CDF.
My mathematical statistics professor at the University of Toronto, Keith Knight, told my class that this is often referred to as “The First Theorem of Statistics” or the “The Fundamental Theorem of Statistics”. I think that this is a rather subjective title – the central limit theorem is likely more useful and important – but Page 261 of John Taylor’s An introduction to measure and probability (Springer, 1997) recognizes this attribution to the Glivenko-Cantelli theorem, too.
## Mathematical and Applied Statistics Lesson of the Day – The Motivation and Intuition Behind Chebyshev’s Inequality
In 2 recent Statistics Lessons of the Day, I
Chebyshev’s inequality is just a special version of Markov’s inequality; thus, their motivations and intuitions are similar.
$P[|X - \mu| \geq k \sigma] \leq 1 \div k^2$
Markov’s inequality roughly says that a random variable $X$ is most frequently observed near its expected value, $\mu$. Remarkably, it quantifies just how often $X$ is far away from $\mu$. Chebyshev’s inequality goes one step further and quantifies that distance between $X$ and $\mu$ in terms of the number of standard deviations away from $\mu$. It roughly says that the probability of $X$ being $k$ standard deviations away from $\mu$ is at most $k^{-2}$. Notice that this upper bound decreases as $k$ increases – confirming our intuition that it is highly improbable for $X$ to be far away from $\mu$.
As with Markov’s inequality, Chebyshev’s inequality applies to any random variable $X$, as long as $E(X)$ and $V(X)$ are finite. (Markov’s inequality requires only $E(X)$ to be finite.) This is quite a marvelous result!
## Mathematical Statistics Lesson of the Day – Chebyshev’s Inequality
The variance of a random variable $X$ is just an expected value of a function of $X$. Specifically,
$V(X) = E[(X - \mu)^2], \ \text{where} \ \mu = E(X)$.
Let’s substitute $(X - \mu)^2$ into Markov’s inequality and see what happens. For convenience and without loss of generality, I will replace the constant $c$ with another constant, $b^2$.
$\text{Let} \ b^2 = c, \ b > 0. \ \ \text{Then,}$
$P[(X - \mu)^2 \geq b^2] \leq E[(X - \mu)^2] \div b^2$
$P[ (X - \mu) \leq -b \ \ \text{or} \ \ (X - \mu) \geq b] \leq V(X) \div b^2$
$P[|X - \mu| \geq b] \leq V(X) \div b^2$
Now, let’s substitute $b$ with $k \sigma$, where $\sigma$ is the standard deviation of $X$. (I can make this substitution, because $\sigma$ is just another constant.)
$\text{Let} \ k \sigma = b. \ \ \text{Then,}$
$P[|X - \mu| \geq k \sigma] \leq V(X) \div k^2 \sigma^2$
$P[|X - \mu| \geq k \sigma] \leq 1 \div k^2$
This last inequality is known as Chebyshev’s inequality, and it is just a special version of Markov’s inequality. In a later Statistics Lesson of the Day, I will discuss the motivation and intuition behind it. (Hint: Read my earlier lesson on the motivation and intuition behind Markov’s inequality.)
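A simulation (a sketch, using normally distributed draws and the sample estimates of $\mu$ and $\sigma$) shows how conservative the bound is in practice:

```python
import random
import statistics

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(200_000)]
mu = statistics.fmean(xs)
sigma = statistics.pstdev(xs)
for k in (2, 3, 4):
    tail = sum(abs(x - mu) >= k * sigma for x in xs) / len(xs)
    print(k, tail, "<=", 1 / k**2)   # empirical tail probability vs Chebyshev's bound
```

For the normal distribution the true tail probabilities (about 0.046, 0.003 and 0.00006) sit far below the bounds $1/4$, $1/9$ and $1/16$; the bound is loose here precisely because it must hold for every distribution with finite variance.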
## Mathematical and Applied Statistics Lesson of the Day – The Motivation and Intuition Behind Markov’s Inequality
Markov’s inequality may seem like a rather arbitrary pair of mathematical expressions that are coincidentally related to each other by an inequality sign:
$P(X \geq c) \leq E(X) \div c,$ where $c > 0$.
However, there is a practical motivation behind Markov’s inequality, and it can be posed in the form of a simple question: How often is the random variable $X$ “far” away from its “centre” or “central value”?
Intuitively, the “central value” of $X$ is the value of $X$ that is most commonly (or most frequently) observed. Thus, as $X$ deviates further and further from its “central value”, we would expect those distant-from-the-centre values to be less frequently observed.
Recall that the expected value, $E(X)$, is a measure of the “centre” of $X$. Thus, we would expect that the probability of $X$ being very far away from $E(X)$ is very low. Indeed, Markov’s inequality rigorously confirms this intuition; here is its rough translation:
As $c$ becomes really far away from $E(X)$, the event $X \geq c$ becomes less probable.
You can confirm this by substituting several key values of $c$.
• If $c = E(X)$, then $P[X \geq E(X)] \leq 1$; this is the highest upper bound that $P(X \geq c)$ can get. This makes intuitive sense; $X$ is going to be frequently observed near its own expected value.
• If $c \rightarrow \infty$, then $P(X \geq \infty) \leq 0$. By Kolmogorov’s axioms of probability, any probability must be inclusively between $0$ and $1$, so $P(X \geq \infty) = 0$. This makes intuitive sense; there is no possible way that $X$ can be bigger than positive infinity.
## Mathematical Statistics Lesson of the Day – Markov’s Inequality
Markov’s inequality is an elegant and very useful inequality that relates the probability of an event concerning a non-negative random variable, $X$, with the expected value of $X$. It states that
$P(X \geq c) \leq E(X) \div c,$
where $c > 0$.
I find Markov’s inequality to be beautiful for 2 reasons:
1. It applies to both continuous and discrete random variables.
2. It applies to any non-negative random variable from any distribution with a finite expected value.
In a later lesson, I will discuss the motivation and intuition behind Markov’s inequality, which has useful implications for understanding a data set.
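The inequality is easy to check numerically (a sketch, using exponentially distributed draws, which are non-negative with $E(X) = 1$):

```python
import random

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100_000)]   # non-negative, E(X) = 1
ex = sum(xs) / len(xs)
for c in (1, 2, 5, 10):
    tail = sum(x >= c for x in xs) / len(xs)
    print(c, tail, "<=", ex / c)    # Markov: P(X >= c) <= E(X)/c
```

For this distribution the true tail is $e^{-c}$, which decays much faster than the bound $1/c$; as with Chebyshev's inequality, the bound trades tightness for complete generality.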
## Mathematics and Applied Statistics Lesson of the Day – The Geometric Mean
Suppose that you invested in a stock 3 years ago, and the annual rates of return for each of the 3 years were
• 5% in the 1st year
• 10% in the 2nd year
• 15% in the 3rd year
What is the average rate of return in those 3 years?
It’s tempting to use the arithmetic mean, since we are so used to using it when trying to estimate the “centre” of our data. However, the arithmetic mean is not appropriate in this case, because the annual rate of return implies a multiplicative growth of your investment by a factor of $1 + r$, where $r$ is the rate of return in each year. In contrast, the arithmetic mean is appropriate for quantities that are additive in nature; for example, your average annual salary over the past 3 years is the sum of the last 3 annual salaries divided by 3.
If the arithmetic mean is not appropriate, then what can we use instead? Our saviour is the geometric mean, $G$. The average factor of growth from the 3 years is
$G = [(1 + r_1)(1 + r_2) ... (1 + r_n)]^{1/n}$,
where $r_i$ is the rate of return in year $i$, $i = 1, 2, ..., n$. The average annual rate of return is $G - 1$. Note that the geometric mean is applied NOT to the annual rates of return, but to the annual factors of growth.
Returning to our example, our average factor of growth is
$G = [(1 + 0.05) \times (1 + 0.10) \times (1 + 0.15)]^{1/3} = 1.099242$.
Thus, our average annual rate of return is $G - 1 = 1.099242 - 1 = 0.099242 = 9.9242\%$.
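The calculation above fits in a short function (a sketch; the function name is mine):

```python
def average_annual_return(rates):
    """Geometric-mean-based average rate of return: apply the geometric
    mean to the growth factors (1 + r), not to the rates themselves."""
    g = 1.0
    for r in rates:
        g *= 1.0 + r
    return g ** (1.0 / len(rates)) - 1.0

print(round(average_annual_return([0.05, 0.10, 0.15]), 6))   # 0.099242
```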
Here is a good way to think about the difference between the arithmetic mean and the geometric mean. Suppose that there are 2 sets of numbers.
1. The first set, $S_1$, consists of your data $x_1, x_2, ..., x_n$, and this set has a sample size of $n$.
2. The second set, $S_2$, also has a sample size of $n$, but all $n$ values are the same – let’s call this common value $y$.
• What number must $y$ be such that the sums in $S_1$ and $S_2$ are equal? This value of $y$ is the arithmetic mean of the first set.
• What number must $y$ be such that the products in $S_1$ and $S_2$ are equal? This value of $y$ is the geometric mean of the first set.
Note that the geometric mean is only applicable to positive numbers.
## Mathematics and Applied Statistics Lesson of the Day – The Weighted Harmonic Mean
In a previous Statistics Lesson of the Day on the harmonic mean, I used an example of a car travelling at 2 different speeds – 60 km/hr and 40 km/hr. In that example, the car travelled 120 km at both speeds, so the 2 speeds had equal weight in calculating the harmonic mean of the speeds.
What if the car travelled different distances at those speeds? In that case, we can modify the calculation to allow the weight of each datum to be different. This results in the weighted harmonic mean, which has the formula
$H = \sum_{i = 1}^{n} w_i \ \ \div \ \ \sum_{i = 1}^{n}(w_i \ \div \ x_i)$.
For example, consider a car travelling for 240 kilometres at 2 different speeds and for 2 different distances:
1. 60 km/hr for 100 km
2. 40 km/hr for another 140 km
Then the weighted harmonic mean of the speeds (i.e. the average speed of the whole trip) is
$(100 \text{ km} \ + \ 140 \text{ km}) \ \div \ [(100 \text{ km} \ \div \ 60 \text{ km/hr}) \ + \ (140 \text{ km} \ \div \ 40 \text{ km/hr})]$
$= 46.45 \text{ km/hr}$
Notice that this is exactly the same calculation that we would use if we wanted to calculate the average speed of the whole trip by the formula from kinematics:
$\text{Average Speed} = \Delta \text{Distance} \div \Delta \text{Time}$
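The formula translates directly into code (a sketch; the function name is mine). With equal weights it reduces to the ordinary harmonic mean:

```python
def weighted_harmonic_mean(weights, xs):
    """H = (sum of weights) / (sum of w_i / x_i)."""
    return sum(weights) / sum(w / x for w, x in zip(weights, xs))

# 100 km at 60 km/hr plus 140 km at 40 km/hr:
print(round(weighted_harmonic_mean([100, 140], [60, 40]), 2))   # 46.45

# Equal distances recover the unweighted harmonic mean:
print(weighted_harmonic_mean([120, 120], [60, 40]))             # 48.0
```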
## Mathematics and Applied Statistics Lesson of the Day – The Harmonic Mean
The harmonic mean, $H$, for $n$ positive real numbers $x_1, x_2, ..., x_n$ is defined as
$H = n \div (1/x_1 + 1/x_2 + ... + 1/x_n) = n \div \sum_{i = 1}^{n}x_i^{-1}$.
This type of mean is useful for measuring the average of rates. For example, consider a car travelling for 240 kilometres at 2 different speeds:
1. 60 km/hr for 120 km
2. 40 km/hr for another 120 km
Then its average speed for this trip is
$S_{avg} = 2 \div (1/60 + 1/40) = 48 \text{ km/hr}$
Notice that the speeds for the 2 legs have equal weight in the calculation of the harmonic mean – this is valid because of the equal distance travelled at the 2 speeds. If the distances were not equal, then use a weighted harmonic mean instead – I will cover this in a later lesson.
To confirm the formulaic calculation above, let’s use the definition of average speed from physics. The average speed is defined as
$S_{avg} = \Delta \text{distance} \div \Delta \text{time}$
We already have the elapsed distance – it’s 240 km. Let’s find the time elapsed for this trip.
$\Delta \text{ time} = 120 \text{ km} \times (1 \text{ hr}/60 \text{ km}) + 120 \text{ km} \times (1 \text{ hr}/40 \text{ km})$
$\Delta \text{time} = 5 \text{ hours}$
Thus,
$S_{avg} = 240 \text{ km} \div 5 \text{ hours} = 48 \text { km/hr}$
Notice that this explicit calculation of the average speed by the definition from kinematics is the same as the average speed that we calculated from the harmonic mean!
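The definition is a one-liner in code, and Python's standard library even ships a `statistics.harmonic_mean` function (available since Python 3.6) that agrees with it:

```python
import statistics

def harmonic_mean(xs):
    """n divided by the sum of reciprocals."""
    return len(xs) / sum(1.0 / x for x in xs)

print(round(harmonic_mean([60, 40]), 6))       # 48.0
print(statistics.harmonic_mean([60, 40]))      # the stdlib agrees (up to rounding)
```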
## Mathematical and Applied Statistics Lesson of the Day – The Central Limit Theorem Can Apply to the Sum
The central limit theorem (CLT) is often stated in terms of the sample mean of independent and identically distributed random variables. An often unnoticed or forgotten aspect of the CLT is its applicability to the sample sum of those variables. Since $n$, the sample size, is just a constant, it can be multiplied by $\bar{X}$ to obtain $\sum_{i = 1}^{n} X_i$. For a sufficiently large $n$, this new statistic still has an approximately normal distribution, just with a new expected value and a new variance.
$\sum_{i = 1}^{n} X_i \overset{approx.}{\sim} \text{Normal} (n\mu, n\sigma^2)$
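A simulation (a sketch, with uniform summands, whose mean is $0.5$ and variance is $1/12$) confirms the new expected value and variance:

```python
import random
import statistics

random.seed(0)
n, trials = 50, 20_000
# Each X_i ~ Uniform(0, 1): mu = 0.5, sigma^2 = 1/12.
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]
print(statistics.fmean(sums))       # close to n*mu      = 25
print(statistics.pvariance(sums))   # close to n*sigma^2 = 50/12, about 4.17
```

A histogram of `sums` would look bell-shaped even though the underlying uniform distribution is flat – that is the CLT at work on the sum.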
## Video Tutorial – Useful Relationships Between Any Pair of h(t), f(t) and S(t)
I first started my video tutorial series on survival analysis by defining the hazard function. I then explained how this definition leads to the elegant relationship of
$h(t) = f(t) \div S(t)$.
In my new video, I derive 6 useful mathematical relationships that exist between any 2 of the 3 quantities in the above equation. Each relationship allows one quantity to be written as a function of the other.
I am excited to continue adding to my Youtube channel‘s collection of video tutorials. Please stay tuned for more!
You can also watch this new video below the fold!
## Mathematical and Applied Statistics Lesson of the Day – The Central Limit Theorem Applies to the Sample Mean
Having taught and tutored introductory statistics numerous times, I often hear students misinterpret the Central Limit Theorem by saying that, as the sample size gets bigger, the distribution of the data approaches a normal distribution. This is not true. If your data come from a non-normal distribution, their distribution stays the same regardless of the sample size.
Remember: The Central Limit Theorem says that, if $X_1, X_2, ..., X_n$ is an independent and identically distributed sample of random variables, then the distribution of their sample mean is approximately normal, and this approximation gets better as the sample size gets bigger.
## Video Tutorial – The Hazard Function is the Probability Density Function Divided by the Survival Function
In an earlier video, I introduced the definition of the hazard function and broke it down into its mathematical components. Recall that the definition of the hazard function for events defined on a continuous time scale is
$h(t) = \lim_{\Delta t \rightarrow 0} [P(t < X \leq t + \Delta t \ | \ X > t) \ \div \ \Delta t]$.
Did you know that the hazard function can be expressed as the probability density function (PDF) divided by the survival function?
$h(t) = f(t) \div S(t)$
In my new Youtube video, I prove how this relationship can be obtained from the definition of the hazard function! I am very excited to post this second video in my new Youtube channel. You can also view the video below the fold!
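As a quick numerical check of the relationship $h(t) = f(t) \div S(t)$, consider the exponential distribution, whose hazard is famously constant (a sketch; the rate $\lambda = 0.7$ is an arbitrary choice of mine):

```python
import math

lam = 0.7                            # an arbitrary rate parameter

def f(t):
    """PDF of the Exponential(lam) distribution."""
    return lam * math.exp(-lam * t)

def S(t):
    """Survival function of the Exponential(lam) distribution."""
    return math.exp(-lam * t)

for t in (0.1, 1.0, 5.0):
    print(t, f(t) / S(t))            # h(t) = f(t)/S(t) = lam for every t
```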
# Induced connections on induced bundles
Start with a connection on a vector bundle over a smooth manifold.
This bundle has a morphism into a universal bundle with connection and any two such morphisms are smoothly homotopic.
What is the relation between the two induced connections? Is it that there is a bundle isomorphism of the two induced bundles so that the one induced connection is induced from the other?
Continuous extension of a real function
Related;
Open set in $\mathbb{R}$ is a union of at most countable collection of disjoint segments
This is the theorem I need to prove:
"Let $E(\subset \mathbb{R})$ be a closed subset and $f:E\rightarrow \mathbb{R}$ be a continuous function. Then there exists a continuous function $g:\mathbb{R} \rightarrow \mathbb{R}$ such that $g(x)=f(x), \forall x\in E$."
I have tried for hours to prove this, but couldn't. I found some solutions, but ridiculously, all of them are wrong. Every solution states that "If $x\in E$ and $x$ is not an interior point of $E$, then $x$ is an endpoint of a segment of the at most countable collection of disjoint segments." However, this is indeed false! (Check Arthur's argument in the link above.)
Wrong solution, Q4.5:
http://www.math.ust.hk/~majhu/Math203/Rudin/Homework15.pdf
Just like the argument in this solution, I can see that $g$ is continuous on $E^c$ and $Int(E)$. But how do I show that $g$ is continuous on $E$?
I don't know how to formulate this rigorously, but it seems, all you need to do is take the points $f(\partial E)$ and show that they can be connected by a continuous function in $\mathbb{R} \setminus Int(E)$, isn't it? – Karolis Juodelė Oct 23 '12 at 17:07
A constructive and explicit proof proceeds as follows. Since $E$ is closed, $U=\mathbb{R}\setminus E$ is a countable union of disjoint open intervals, say, $U=\bigcup (a_n,b_n)$. Necessarily, we must have that $a_n,b_n\in E$. Define $f(x)$ as follows. $$f(x) = \begin{cases} g(x) &\text{if }x\in E \\ \frac{x-a_n}{b_n-a_n}g(b_n)+\frac{b_n-x}{b_n-a_n}g(a_n) & \text{if }x\in[a_n,b_n] \end{cases}$$
Notice first that $f(x)$ is well-defined and also, for all $x\in(a_n,b_n)$, either $g(a_n)\le f(x)\le g(b_n)$ or $g(b_n)\le f(x)\le g(a_n)$ depending on whether $g(a_n)\le g(b_n)$ or otherwise. Clearly, $f$ is continuous on $U$. Now suppose that $x\in E$ and $\epsilon>0$. Then there are a few cases.
Case 1: Suppose that for every $\eta>0$, $(x-\eta,x)\cap E\not=\emptyset$ and $(x,x+\eta)\cap E\not=\emptyset$. Then since $f\vert_E=g$, there is some $\delta>0$ such that if $y\in E$ and $\vert x-y\vert<\delta$ then $\vert f(x)-f(y)\vert<\epsilon$. Because of the condition we have for Case 1, we may choose some $x_1,x_2\in E$ with $x-\delta<x_1<x<x_2<x+\delta$. Choose $\delta'=\min\{x-x_1,x_2-x\}$. If $\vert y-x\vert<\delta'$, then if $y\in E$, we're done. If $y\in U$, then $y\in(a_m,b_m)$ for some $m\in\mathbb{N}$. Furthermore, $a_m,b_m\in E$ and are within $\delta$ of $x$. Also, $f(y)$ lies between $g(a_m)$ and $g(b_m)$. Thus $f(y)$ is within $\epsilon$ of $f(x)$ since $f(a_m)=g(a_m)$ and $f(b_m)=g(b_m)$ are within $\epsilon$ of $f(x)$.
Case 2: There is some $\eta>0$ for which $(x-\eta,x)\cap E=\emptyset$ or $(x,x+\eta)\cap E=\emptyset$. In this case, $x$ is an endpoint of one of the intervals of $U$. Thus $f$ is linear on either $[x,x+\eta)$ or $(x-\eta,x]$ (maybe both). Certainly, we can get a $\delta>0$ corresponding to $\epsilon$ on this side of $x$. For the other side of $x$, use the argument from Case 1 to get some $\delta'$. Choosing $\delta''=\min\{\delta,\delta'\}$ proves the result.
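The linear-interpolation construction is easy to try numerically. Below is a minimal Python sketch (the helper name `extend` and the concrete set `E` are mine): the given continuous function is kept on the closed set, interpolated linearly across each gap $(a_n, b_n)$, and, following Brian's comment, held constant beyond the ends of a bounded set.

```python
def extend(f, intervals, x):
    """Extend f (continuous on a union of closed intervals) to all of R:
    linear interpolation across gaps, constant beyond the ends."""
    lo, hi = intervals[0][0], intervals[-1][1]
    if x <= lo:
        return f(lo)
    if x >= hi:
        return f(hi)
    for a, b in intervals:
        if a <= x <= b:
            return f(x)
    # x lies in a gap (b1, a2): interpolate between the endpoint values
    for (_, b1), (a2, _) in zip(intervals, intervals[1:]):
        if b1 < x < a2:
            t = (x - b1) / (a2 - b1)
            return (1 - t) * f(b1) + t * f(a2)

f = lambda x: x * x          # continuous on E = [0,1] ∪ [2,3]
E = [(0.0, 1.0), (2.0, 3.0)]
print(extend(f, E, 0.5))     # inside E: f(0.5) = 0.25
print(extend(f, E, 1.5))     # in the gap: halfway between f(1)=1 and f(2)=4, so 2.5
print(extend(f, E, 10.0))    # beyond sup E: held constant at f(3) = 9
```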
I'm afraid your proof is wrong. In case 2, $x$ need not be an endpoint. You can check this in the link in my post. – Katlus Oct 23 '12 at 18:07
@Katlus, my apologies, I switched $\not=$ and $=$. It's fixed now. – J. Loreaux Oct 23 '12 at 18:37
@Katlus: The argument is incomplete, though the hole is easily patched. As it stands, it works only if $E$ is unbounded in both directions. If $E$ is bounded above, let $f(x)=f(\sup E)$ for $x\ge\sup E$, and similarly if $E$ is bounded below. – Brian M. Scott Oct 23 '12 at 18:47
@Brian, that's a valid point. Thanks for mentioning it. – J. Loreaux Oct 23 '12 at 18:51
You’re welcome. – Brian M. Scott Oct 23 '12 at 18:51
This is a special case of the Tietze extension theorem. This is a standard result whose proof can be found in any decent topology text. A rather different proof can be found here.
3000 answers, Prof.! – Pedro Tamaroff Oct 23 '12 at 17:41
@Peter: So it is; I hadn’t any idea that I’d written so many. – Brian M. Scott Oct 23 '12 at 17:54
@Brian Yes, Tietze extension theorem is a generalization of this theorem. I searched for it, but it seems like it's almost impossible to prove that theorem on my level right now, so I wanted to prove it at least for $\mathbb{R}$ which is a special case just for now. – Katlus Oct 23 '12 at 18:01
|
{}
|
In our Computational Logic seminar here at The University of Iowa, we are studying logic programming this semester. We are using the very nice book “Logic, Programming, and Prolog”, freely available online. We were talking today about the existence of a least Herbrand model for a definite program. A definite program is just a set of clauses of the form $A_0 \leftarrow A_1,\ldots,A_m$, where each $A_i$ is an atomic formula (predicate applied to terms). (Free variables in clauses are interpreted universally.) If $m = 0$, then we just have an atomic fact $A_0$ in the definite program. A Herbrand interpretation is a first-order structure where each function symbol $f$ of arity $k$ is interpreted as $\lambda x_1,\ldots,x_k. f(x_1,\ldots,x_k)$, and each predicate is interpreted as a subset of the set of ground (i.e., variable-free) atomic formulas. A Herbrand model of a definite program P is then just a Herbrand interpretation which satisfies every clause in P. It will be convenient below to identify a Herbrand interpretation with a subset of the set of all ground atomic formulas. Such a subset determines the meanings of the predicate symbols by showing for which tuples of ground terms they hold. We will pass tacitly between the view of a Herbrand interpretation as a first-order structure and the view of it as a set of ground atomic formulas. The Herbrand base is the Herbrand interpretation corresponding to the set of all ground atomic formulas. It says that everything is true.
What I want to talk about briefly in this post is the fact that the set of Herbrand models of definite program P forms a complete partial order, where the ordering is the subset relation, the greatest element is the Herbrand base, and the greatest lower bound of a non-empty subset S of Herbrand models of P is the intersection of all the models in S. In a complete partial order, every subset S of elements should have a greatest lower bound (though it need not lie in S). Alternatively — and what I am interested in for this post — we can stipulate that every subset S should have a least upper bound. The two formulations are equivalent, and the proof is written out below. “Logic, Programming, and Prolog” contains a simple elegant proof of the fact that the intersection of a non-empty set of Herbrand models is itself a Herbrand model.
What I want to record here is the proof that in general, if in a partial order $(X,\sqsubseteq)$ every subset $S\subseteq X$ (including the empty set) has a greatest lower bound, then every such $S$ also has a least upper bound. The proof I have seen for this is a one-liner in Crole’s “Categories for Types”. It took me some puzzling to understand, so I am writing it here as much for my own memory as for the possible interest of others, including others from the seminar who watched me fumble with the proof today!
Let $S$ be a subset of $X$. Let $\textit{ub}(S)$ be the set of elements which are upper bounds of $S$ (that is, the set of elements $u$ which are greater than or equal to every element of $S$). The claim is that the greatest lower bound of $\textit{ub}(S)$ is the least upper bound of $S$. By the assumption that every subset of $X$ has a greatest lower bound, we know that there really is some element $q$ which is the greatest lower bound of $\textit{ub}(S)$. As such, $q$ is greater than or equal to every other lower bound of $\textit{ub}(S)$. Now here is a funny thing. Every element $x$ of $S$ is a lower bound of $\textit{ub}(S)$. Because if $y\in \textit{ub}(S)$, this means that $y$ is greater than or equal to every element in $S$. In particular, it is greater than or equal to $x$. Since this is true for every $y\in \textit{ub}(S)$, we see that $x$ is a lower bound of $\textit{ub}(S)$. But $q$ is the greatest of all such lower bounds by construction, so it is greater than or equal to the lower bound $x$. And since this is true for all $x\in S$, we see that $q$ is an upper bound of all those elements, and hence an upper bound of $S$. We just have to prove now that it is the least of all the upper bounds of $S$. Suppose $u'$ is another upper bound of $S$. This means $u'\in\textit{ub}(S)$. Since by construction $q$ is a lower bound of $\textit{ub}(S)$, this means that $q \sqsubseteq u'$, as required to show that $q$ is the least of all the upper bounds of $S$.
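The construction is easy to sanity-check on a small finite example. The Python sketch below (the helper names `glb` and `lub_via_glb` are mine) runs Crole's one-liner on the powerset of $\{0,1,2\}$ ordered by inclusion, where the least upper bound is known independently to be the union:

```python
from itertools import chain, combinations

U = {0, 1, 2}
# All subsets of U, as frozensets: the carrier of the powerset lattice (X, ⊆).
X = [frozenset(s)
     for s in chain.from_iterable(combinations(sorted(U), r)
                                  for r in range(len(U) + 1))]

def glb(subset):
    """Greatest lower bound in (X, ⊆): the largest element below everything
    in `subset`. The glb of the empty collection is the top element U."""
    lowers = [x for x in X if all(x <= s for s in subset)]
    return max(lowers, key=len)

def lub_via_glb(S):
    """Crole's one-liner: lub(S) = glb of the set of upper bounds of S."""
    ub = [x for x in X if all(s <= x for s in S)]
    return glb(ub)

S = [frozenset({0}), frozenset({1})]
print(sorted(lub_via_glb(S)))  # the union [0, 1], as expected in a powerset lattice
```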
The final interesting thing to note about the complete partial order of Herbrand models of a definite program P is that while the greatest lower bound of a non-empty set $S$ of models is their intersection, and while the greatest element is the Herbrand base (a universal Herbrand model), the intuitive duals of these operations are not the least element nor the least upper bound operation. The intuitive dual of a universal Herbrand model would be, presumably, the empty Herbrand interpretation. But this need not be a model at all. For example, the definite program P could contain an atomic fact like $p(a)$, and then the empty Herbrand interpretation would not satisfy that fact. Furthermore, if $S$ is a non-empty set of Herbrand models, $\bigcup S$ is not the least upper bound of $S$. That is because $\bigcup S$ need not be a Herbrand model of P at all. Here is a simple example. Suppose P is the definite program consisting of clauses $\textit{ok}(h(a,b))$ and $\textit{ok}(h(x,y)) \leftarrow \textit{ok}(x),\textit{ok}(y)$. Consider the following two Herbrand models $H_1$ and $H_2$ of this program P. In $H_1$ the interpretation of $\textit{ok}$ contains all the terms built using $h$ from $a$ and $h(a,b)$. In $H_2$, the interpretation of $\textit{ok}$ contains all the terms built using $h$ from $b$ and $h(a,b)$. If we take the intersection of $H_1$ and $H_2$, then it is a Herbrand model, in fact the minimal one: it says that $\textit{ok}(h(a,b))$ is true, as required by the first clause in P; and if two terms $t_1$ and $t_2$ are in the interpretation of $\textit{ok}$, then so is $h(t_1,t_2)$. But if we take the union of $H_1$ and $H_2$, what we get is not a Herbrand model of P at all. Because $H_1 \cup H_2$ contains $\textit{ok}(h(a,a))$ and $\textit{ok}(h(b,b))$, for example, but not $\textit{ok}(h(h(a,a),h(b,b)))$. To get an upper bound of $H_1$ and $H_2$, it is not enough to take their union.
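The counterexample can be checked mechanically. Here is a small Python sketch (the helper names are mine) using nested tuples for ground terms and a finite, depth-bounded approximation of the interpretation of $\textit{ok}$:

```python
def closure(base, rounds):
    """Interpretation of ok: close `base` under h, `rounds` times
    (a finite approximation of the full closure)."""
    terms = set(base)
    for _ in range(rounds):
        terms |= {('h', t1, t2) for t1 in terms for t2 in terms}
    return terms

hab = ('h', 'a', 'b')
H1 = closure({'a', hab}, 2)   # terms built with h from a and h(a,b)
H2 = closure({'b', hab}, 2)   # terms built with h from b and h(a,b)
union = H1 | H2

haa, hbb = ('h', 'a', 'a'), ('h', 'b', 'b')
print(haa in union, hbb in union)             # True True
print(('h', haa, hbb) in union)               # False: the union violates the second clause
print(('h', haa, hbb) in closure(union, 1))   # True once we close under the program again
```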
One must take their union and then close it under the deductive consequences of the program P. That's the intuition, though we would need to formally define closure under deductive consequences — and it would be a bit nicer to be able to apply a model-theoretic notion (since we are working model-theoretically here) rather than a proof-theoretic one. Declaratively, we know we can get the least upper bound of a set $S$ of Herbrand models as the intersection of the set of all Herbrand models which are supersets of every model in $S$. But this is rather a hard definition to work with.
Anyhow, this is a nice example of finding an interesting abstract structure in semantics, as well as a good exercise in reasoning about such structures.
I haven’t yet started repeating myself — though there’s every chance you’ll hear it here twice — but iteration is the sort of thing one can find just one use after another for. I mean, if you’ve seen it once, you’ve seen it a thousand times: iteration delivers repeatedly. How many times have you iterated to good effect? I say again: is iteration great or what?
Ok, got that out of my system. :-) I am working on lambda encodings right now, and with Church-encoded data, every piece of data is its own iterator. So the encoding tends to make one think of algorithms in terms of iteration. We have a function f, and a starting point a, and we wish to apply f to a in a nested fashion, n times: $f^0(a) = a$ and $f^{n+1}(a) = f(f^n(a))$. To multiply numbers N and M, for example, we can iterate the function “add M” on starting point 0, N times. And other natural algorithms have iterative formulations.
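In a conventional language the same idea is just a loop. A minimal Python sketch (helper names are mine) of iteration and of multiplication as iterated addition:

```python
def iterate(f, a, n):
    """Apply f to a in a nested fashion, n times:
    f^0(a) = a and f^(n+1)(a) = f(f^n(a))."""
    for _ in range(n):
        a = f(a)
    return a

def mult(n, m):
    """Multiply by iteration, Church-style: iterate "add m" on 0, n times."""
    return iterate(lambda acc: acc + m, 0, n)

print(mult(6, 7))   # prints 42
```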
What about division? Usually in total type theories (where uniform termination of every function — that is, termination on all inputs — has to be confirmed statically by some termination-checking algorithm or technique), natural-number division is implemented using well-founded recursion. The idea is that to divide x by y, we basically want to see how many times we can subtract y from x until x becomes smaller than y (at which point x is the remainder of the division). So one wants to make a recursive call to division on x – y, and since that quantity is not the predecessor of x (or y), the usual structural decrease demanded by the termination checker is not satisfied. So the usual simple schemes for observing termination statically cannot confirm that division is terminating. And indeed, if y were 0, there would be no structural decrease. So it is not a completely trivial matter. The solution one finds in Coq (Arith/Euclid.v in the standard library for Coq version 8.4) and Agda (Data/Nat/DivMod.agda in the standard library version 0.8) is to use well-founded recursion. This is a somewhat advanced method that uses a generalized inductive type to encode, effectively, all the legal terminating call sequences one could make using a given well-founded ordering. Then we can do structural recursion on an extra argument of this generalized inductive type.
Well-founded recursion is really quite cool, and it's amazing to see the power of the type theory in the fact that well-founded recursion is derivable in, not primitive to, the language. Every student of type theory should try walking through the definitions needed for well-founded recursion over, say, the natural-number ordering <. But as elegant and impressive as it is, it's a pretty heavy hammer to have to get out. For starters, if you want to reason later about the function you defined by well-founded recursion, you are most likely going to have to use well-founded induction in that reasoning. So you find yourself continually setting up these somewhat complicated inductions to prove simple lemmas. A second issue is that at least in Agda, because there is no term erasure explicit in the language, if you write a function by well-founded recursion, you are going to be manipulating these values of the generalized inductive datatype at runtime. I reported earlier on this blog that in my experience this led to a major, major slowdown for running code extracted from Agda. So if you are just doing some formal development to prove a theorem, then well-founded recursion won't cause you serious problems in Agda. But if you want to extract and run code that uses well-founded recursion, you likely will see major performance issues.
In my standard library for Agda, the version of natural-number division defined by well-founded recursion is in nat-division.agda:
{- a div-result for dividend x and divisor d consists of
   the quotient q, remainder r, and a proof that q * d + r = x -}
div-result : ℕ → ℕ → Set
div-result x d = Σ ℕ (λ q → Σ ℕ (λ r → q * d + r ≡ x))

div-helper : ∀ (x : ℕ) → WfStructBool _<_ x → (y : ℕ) → y =ℕ 0 ≡ ff → div-result x y
div-helper x wfx 0 ()
div-helper x (WfStep fx) (suc y) _ with 𝔹-dec (x =ℕ 0)
... | inj₁ u = 0 , 0 , sym (=ℕ-to-≡ u)
... | inj₂ u with 𝔹-dec (x < (suc y))
... | inj₁ v = 0 , (x , refl)
... | inj₂ v with (div-helper (x ∸ (suc y)) (fx (∸< {x} u)) (suc y) refl)
... | q , r , p with <ff {x} v
... | p' with ∸eq-swap{x}{suc y}{q * (suc y) + r} p' p
... | p'' = (suc q) , (r , lem p'')
  where lem : q * (suc y) + r + suc y ≡ x → suc (y + q * suc y + r) ≡ x
        lem p''' rewrite +suc (q * (suc y) + r) y | +comm y (q * (suc y)) | +perm2 (q * (suc y)) r y = p'''

_÷_!_ : (x : ℕ) → (y : ℕ) → y =ℕ 0 ≡ ff → div-result x y
x ÷ y ! p = div-helper x (wf-< x) y p
This code returns a value of type div-result x y, which contains the quotient q, remainder r, and the proof that q * y + r equals x. It is not as simple as one would like, due to the use of well-founded recursion.
But we can avoid well-founded recursion for defining division, if we go back to our old friend iteration (“There he is again!” — sorry, I said I had that out of my system, but apparently not quite). Because we know that we will not possibly iterate subtraction of y from x more than x times, if y is not 0. So we can pass an extra argument in to division which is a counter, that we start out at x. Again we use the div-result type, but this time there is no need for well-founded recursion:
divh : (n : ℕ) → (x : ℕ) → (y : ℕ) → x ≤ n ≡ tt → y =ℕ 0 ≡ ff → div-result x y
divh 0 0 y p1 p2 = 0 , 0 , refl
divh 0 (suc x) y () p2
divh (suc n) x y p1 p2 with keep (x < y)
divh (suc n) x y p1 p2 | tt , pl = 0 , x , refl
divh (suc n) x y p1 p2 | ff , pl with divh n (x ∸ y) y (∸≤2 n x y p1 p2) p2
divh (suc n) x y p1 p2 | ff , pl | q , r , p = suc q , r , lem
  where lem : y + q * y + r ≡ x
        lem rewrite sym (+assoc y (q * y) r) | p | +comm y (x ∸ y) = ∸+2{x}{y} (<ff{x}{y} pl)

_÷_!_ : (x : ℕ) → (y : ℕ) → y =ℕ 0 ≡ ff → div-result x y
x ÷ y ! p = divh x x y (≤-refl x) p
You can find this in nat-division2.agda. The code is also a bit less cluttered with helper lemmas, although we still do need to require that x is less than or equal to n, in order to rule out the case that we run out of counter budget (n) before we are done dividing x.
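The same counter (or "fuel") pattern is easy to state in a conventional language. Here is a Python sketch of divh (a sketch of the idea, not the Agda code; the invariant x ≤ n plays the role of the proof argument):

```python
def divh(n, x, y):
    """Fuel-based division: n is a counter with x <= n, and y > 0.
    Returns (q, r) with q * y + r == x and r < y."""
    assert y != 0 and x <= n
    if n == 0:          # with x <= n, the fuel runs out only when x == 0
        return (0, 0)
    if x < y:
        return (0, x)
    q, r = divh(n - 1, x - y, y)   # x - y <= n - 1, so the fuel still suffices
    return (q + 1, r)

q, r = divh(17, 17, 5)
print(q, r)                        # prints 3 2
assert q * 5 + r == 17 and r < 5
```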
This example shows that sometimes iteration is sufficient for defining functions like division whose natural definition is not structurally recursive. The moral of the story is that we should not forget about iteration. And that is a lesson worth repeating!
Well, I am embarrassed at how late I am in posting the solution to the puzzle I mentioned in my last post. It has been a busy summer with taking care of our sweet new baby at home, and running StarExec development at work. Anyhow, below is a graph with the minimum number of nodes which contains every legal possible combination of properties from termination (aka strong normalization), normalization (aka weak normalization), confluence (aka Church-Rosser), and local confluence (aka Weak Church-Rosser), and their negations. This graph was found by Hans Zantema, whom I asked about this puzzle by email (he agreed to let me share his solution here). Furthermore, he argues that 11 is the minimal number of nodes, as follows. Out of the 16 possible combinations of properties SN, WN, CR, and WCR and their negations, we exclude immediately the combinations with SN and ~WN (since SN implies WN) and CR and ~WCR (since CR implies WCR). So there are three legal possibilities for the values of CR and WCR, and three for the values of SN and WN. These are independent, so there are 9 legal combinations of properties. Now, Hans argues, since there is a node X which is SN and ~WCR, there must be two nodes which are SN and CR. For since X is SN but not WCR, it has two children (which are still SN) which cannot be joined. We may assume these children are themselves CR, otherwise we could repeat this observation and the graph would not be minimal. Similarly, since there is a node which is ~WN and ~WCR, there must be two nodes which are ~WN and CR. So there must be at least 11 nodes. And the graph below has 11 nodes. To test your knowledge, you can try to identify which combination of properties each node has! Fun!
Suppose we have a graph (A,->) consisting of a set of objects A and a binary relation -> on A. This is a simple case of an abstract reduction system, as defined in the Terese book (in the more general case, we have not just one relation ->, but an indexed set of relations). In the theory of abstract reduction systems, an element x is confluent iff whenever there is a path from x to y and a path from x to z, then there exists some element q which is reachable from both y and z. An element x is locally confluent iff whenever there is an edge (not an arbitrary path) from x to y and an edge from x to z, then there is some element q reachable from both y and z. So confluence implies local confluence, but (rather famously) the reverse implication holds only for terminating systems. An element is terminating iff there are no infinite paths from that element. An element is normalizing iff there exists a path from that element to a normal form, which is an element that has no outgoing edges. So terminating implies normalizing.
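These definitions are directly executable on a finite graph. A Python sketch (helper names are mine) that checks them on the classic system 0 ← 1 ↔ 2 → 3, whose node 1 is locally confluent but not confluent:

```python
def reachable(edges, x):
    """All nodes reachable from x (including x itself) via ->*."""
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        for v in edges.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def joinable(edges, y, z):
    """y and z are joinable iff some q is reachable from both."""
    return bool(reachable(edges, y) & reachable(edges, z))

def confluent(edges, x):
    """Any two nodes reachable from x (by arbitrary paths) can be joined."""
    r = reachable(edges, x)
    return all(joinable(edges, y, z) for y in r for z in r)

def locally_confluent(edges, x):
    """Any two one-step successors of x can be joined."""
    succ = edges.get(x, [])
    return all(joinable(edges, y, z) for y in succ for z in succ)

# Edges: 1 -> 0, 1 -> 2, 2 -> 1, 2 -> 3. Node 1 is locally confluent but not
# confluent: 0 and 3 are distinct normal forms reachable from 1.
edges = {1: [0, 2], 2: [1, 3]}
print(locally_confluent(edges, 1), confluent(edges, 1))  # prints True False
```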
We have these four properties: confluence, local confluence, termination (sometimes also called strong normalization), and normalization (sometimes called weak normalization). What is the smallest graph that is property diverse, in the sense that for every consistent combination of properties, the graph contains an element with that combination of properties? (The consistency requirement for the set of properties for an element arises because confluence implies local confluence and termination implies normalization).
I will post the answer to this (with a nice picture) Monday…
It has been a very long time since I posted here — life has been busy, including a new baby at home. But I really want to share about my recent experience tackling several performance problems in Agda. Agda, as I hope you know already, is a very elegant dependently typed pure functional programming language. It supports Unicode, so you can write → instead of -> for function space; and many other cool symbols. It has user-defined mixfix notation, so you can define if_then_else_ (you write underscores in Agda to show where the arguments go) with the expected syntax. It compiles to Haskell, although I get the impression that many people just use Agda as an advanced type checker, and do not bother compiling executables. Agda has very good inference for implicit arguments, which can help make code shorter and more readable. It also features definition of terminating functions by recursive equations. So you can write beautiful definitions like the following for infix vector append:
_++𝕍_ : ∀ {ℓ} {A : Set ℓ}{n m : ℕ} → 𝕍 A n → 𝕍 A m → 𝕍 A (n + m)
[] ++𝕍 ys = ys
(x :: xs) ++𝕍 ys = x :: (xs ++𝕍 ys)
You might object to naming the function _++𝕍_ instead of _++_, but Agda does not support type classes or other approaches to operator overloading, and I prefer never to have to worry about symbols clashing from different included files. This definition is from my standard library (not the Agda standard library; if prompted for username and password, enter “guest” for both). There is a very nice emacs mode, too.
With all these great features, working well together, Agda provides just about the most elegant programming experience I have had in 21 years of coding. I think it is a fantastic language, with much to emulate and learn from. These accolades aside, my purpose in this post is to discuss some grotesque workarounds for performance problems inherent in the implementation and maybe even the language. To be direct: Agda's type checker has abysmal performance. Suppose we create an Agda source file defining test to be a list containing 3000 copies of boolean true. Agda takes 12.5 seconds to type check this file on my laptop. If we give the same example to OCaml, it takes 0.2 seconds; in other words, Agda is heading towards two orders of magnitude slower. Now, is Agda's type checker doing fancier things than OCaml's? Undoubtedly. But not on this example! I am willing to accept some overhead in general for fancier type checking even on code that does not use fancy types. And OCaml has been around for quite some time and is engineered by one of the best language implementors on the planet. Fine. So let Agda be 2 times slower. Let it be 5 times slower. But 60 times slower? That is not good.
I ran into the performance issues with Agda’s type checker while tilting at the windmill I’ve been dueling off and on the past three years: parsing by rewriting. Without going into detail here, let me just say I’ve been working, with very patient colleague Nao Hirokawa, to create a new approach to parsing based on term rewriting. The idea as it stands now is that one runs an automaton approximating the language of your CFG over the input string, to create a formal artifact called a run. Then one applies confluent run-rewriting rules to this run, where those rules are derived from the productions of the grammar and will rewrite every string in the language to a term built from the start symbol and then containing the parse tree as a subterm. I love the approach, because it is inherently parallelizable (because the run-rewriting rules are confluent), and because we can resolve ambiguities in the grammar by adding rewrite rules. The trickiest part is coming up with the right approximating automaton, and this is still not at all ready for prime time (despite the fact that I inflicted it on my undergraduate class this semester).
Anyhow, since I have been teaching undergrad PL this semester using Agda (more on this in a later post), and since Agda does not have a parser generator (perhaps because FPers seem to prefer parser combinators), I decided I would make my parsing-by-rewriting tool, called gratr, target Agda as a backend. After a fair bit of hacking I had this working, only to discover that for even pretty small grammars, I was generating several thousand line Agda source files, which Agda absolutely could not handle. Imagine my disappointment! Beautiful (if not yet ready for big grammars) approach to parsing, targeting Agda, and the Agda type checker could not handle my generated parsers’ source files, except for the tiniest of grammars. I was depressed, and determined to find a way around this problem to get working parsers from the medium-small grammars I wanted to use for class, as well as research.
Enter –use-string-encoding. A little experimentation revealed that while Agda chokes checking even very simple terms when they get even moderately big, it will affirm that double-quoted strings indeed have type string in time seemingly independent of string size. Oh the fiendery. Let us encode all our parsing data structures — that means automaton and run-rewriting rules both — as strings, get them past the Agda type checker, and then decode at runtime to get our parser. It is gross, it is nasty, but it might just work. Of course, no one wants to decode their parsing data structures every time a string is parsed, but that was the price I decided I’d be willing to pay to get my parsers running in Agda.
I spent a month or so — while learning to care, with my wife, for a very fussy newborn — implementing this. I finally had code in gratr to dump all the data structures as strings, and code in Agda to decode those strings and plug them into my existing parsing-by-rewriting infrastructure. Agda could type check the file containing the strings in a second or two, even when the strings were huge (megabytes-long files, due to the unfortunately large automata my approach is currently producing). The moment of truth arrives: let us actually compile the entire shebang to an executable (not just type check that one file). Agda type-checking chokes. I cannot believe what I am seeing. What is happening? I can type check the files containing the string-encoded data structures almost instantly, but type-checking the wrapper file defining the main entry point (which is just based off the way Haskell sets up code for compilation to executables) is running seemingly forever. A little quick experimentation reveals: big strings encoding the data structures makes type checking that file super slow. What gives! Further head scratching leads me to suspect that for some reason, when Agda is instantiating a module somewhere in my setup, it is actually trying to normalize the fields of a record, where those fields are calling the decode functions on the string encodings. This is the step that could take a second or two at runtime, with ghc-optimized executables, but will likely take forever with Agda’s compile-time interpreter. How to make Agda stop doing this?
Here’s a hack for this:
postulate
runtime-identity : ∀{A : Set} → A → A
{-# COMPILED runtime-identity (\ _ x -> x ) #-}
The idea is that we will interpose the postulate called runtime-identity to block the redex of decoder applied to string encoding. At compile time, Agda knows nothing about runtime-identity, and hence will not be able to reduce that redex. But at runtime, runtime-identity will be compiled to the identity function in Haskell, and hence the redex will reduce.
Delightfully, this worked. Compiling the emitted parsers with the string encodings is now very quick, maybe 10 seconds to get all the way through Agda to Haskell to an executable. Awesome! Now let’s just run the executable. Heh heh. There is no way that could not work, right?
Wrong, of course. Running the executable on a tiny input string takes 7 minutes and then I kill it. Oh my gosh, I am thinking. I just spent 5 weeks of precious coding time (in and around other duties, especially new childcare duties, with my patient wife looking on and wondering…) to get this gross hack working, and now the runtime performance is unacceptable. I almost despair.
But hark! Reason calls. It cannot be taking a ghc-optimized executable that long to decode my string-encoded data structures. After all, encoding the data structures to strings from my OCaml gratr implementation is instantaneous. Sure, decoding could be a bit longer, but forever longer? That can’t be right. So how can we figure out what is going on?
Easy: profile the compiled code with ghc’s profiling features. Agda just compiles down to (almost unreadable) Haskell, which is then compiled by ghc, so we can just profile the code with ghc. I had never used ghc’s profiler, but it was very simple to invoke and the results were easily understandable. Where is the time going? Here is the scary line:
d168 MAlonzo.Code.QnatZ45Zthms 322 7928472 10.9 36.0 55.0 53.5
The last numbers are showing that over half the time of the executable is going into function d168 in nat-thms.agda. A function in nat-thms.agda? That contains a bunch of lemmas and theorems about natural-number operations. I hardly expect my parser to be grunting away there. What is d168? Well, it is the Agda-emitted version of this lemma:
<-drop : ∀ {x y : ℕ} → (x < (suc y) ≡ tt) → x ≡ y ∨ x < y ≡ tt
This function looks to take linear time in the size of x, which could be the length of the emitted string encoding in this case. Where on earth is this called from? And why is its evaluation getting forced anyway in Haskell’s lazy evaluation model? <-drop is getting called in
wf-< : ∀ (n : ℕ) → WfStructBool _<_ n
This is the proof that the _<_ ordering on natural numbers is well-founded. The string-decoding functions have to use well-founded recursion for Agda to see they are terminating. You recursively decode some part of the string, and then need to continue on the residual part of the string that has not been decoded yet, which is returned by your recursive call. Agda cannot see that the residual you are recursing on is a subterm of the starting input string, so it cannot confirm the function is structurally terminating. The solution is to use well-founded recursion. And this is taking, as far as I can tell, time quadratic in the size of the input string to be decoded. These strings are long, so a quadratic time operation (with lots of recursion and pattern matching) is going to kill us.
What is the solution? Strip out the well-founded recursion and just disable Agda’s termination checker. I do this, cross my fingers, compile, run, and … it works! Hot diggety.
So those are the three performance problems we tackled here in Agda: slow type checking (just avoid the type checker altogether by encoding big data structures as strings and decoding at runtime), unwanted compile-time evaluation (interpose postulated runtime-identity to block the redexes), and super slow well-founded recursion (punt and disable the termination checker). I am interested in any similar experiences readers may have had….
|
{}
|
27. DEFAULT - Inserting data
## Instruction
Okay. Now, let's take a closer look at how rows are added.
## Exercise
Insert a new game into the table card_game and specify only the following information: name 'Citadels', genre 'historical'. What do you think will happen? Take a look at the resulting row.
### Stuck? Here's a hint!
INSERT INTO card_game (name, genre) VALUES ('Citadels', 'historical');
anonymous one year ago Naomi plotted the graph below to show the relationship between the temperature of her city and the number of popsicles she sold daily:
1. anonymous
2. anonymous
Part A: In your own words, describe the relationship between the temperature of the city and the number of popsicles sold. (2 points) Part B: Describe how you can make the line of best fit. Write the approximate slope and y-intercept of the line of best fit. Show your work. (3 points)
3. anonymous
@jackmullen55
4. anonymous
as the temp increased so did the number of popsicles sold
5. anonymous
A: The hotter the city, the more popsicles sold (do this in your own words love)
6. anonymous
is that A or B
7. anonymous
A
8. anonymous
B is really only drawing a line through the center of the points
9. anonymous
to do B u pick two points that u think show how most of the points are going and then do (y2-y1)/(x2-x1) to get the slope that u need to plot and @jackmullen55 can explain it better
10. anonymous
could u guys explain it better
11. anonymous
bu tif u chose (90,20) and (40,10) you would do $\frac{ 20-10 }{ 90-40 }$ and u get 10/50 which reduces to 1/5 and thats the slope that u would plot to find the line of best fit
12. anonymous
But if**
13. anonymous
@jackmullen55 is that right?
14. anonymous
what do i say??
15. anonymous
Yeah it looks right
16. anonymous
one sec
17. anonymous
ok
18. anonymous
hold on but its (90,20) not (90,40)
19. anonymous
ok
20. anonymous
sry one sec
21. anonymous
Hi I'm back
# The Limit of a Function
## Formal Definition
Let f(x) be a function defined on an interval that contains x=a. This function may, but does not have to, be defined at the input x=a. Then we say that:
[latex latex size=”2″]\lim \limits_{x \to a} f(x) = L[/latex]
if for every arbitrarily small number ε > 0 there exists a number δ > 0 such that
[latex latex size=”2″]|f(x) - L| < \varepsilon[/latex]
whenever
[latex latex size=”2″]0 < |x-a| < \delta [/latex]
So what does this long and complicated definition tell us? Let’s find out.
Take a look at the following graph:
This graph illustrates the definition above.
1. We first pick an ε band around the number L on the y-axis.
2. Then we determine a δ band around the number a on the x-axis so that for all x-values (excluding x=a) inside the δ band, the corresponding y-values lie inside the ε band.
In other words, we first pick a prescribed closeness (ε) to L. Then we get close enough (δ) to a so that all the corresponding y-values fall inside the ε band. If a δ > 0 can be found for each value of ε > 0, then we have proven that L is the correct limit. If there is a single ε > 0 for which this process fails, then the limit L has been incorrectly computed, or the limit does not exist.
Still not clear enough? Don’t worry, it gets better.
## Intuitive definition
Let’s take a look at this function:
[latex latex size=”2″]f(x) = \frac{(x^2-1)}{x-1}[/latex]
What is the value of this function if x=1 ?
[latex latex size=”2″]f(1) = \frac{(1^2-1)}{1-1}[/latex]
[latex latex size=”2″]f(1) = \frac{0}{0}[/latex]
So here we get an indeterminate form. In other words, we can't evaluate f(1) because 0 divided by 0 is not a defined value. What we can do is look at what happens as the input x approaches 1 closely.
For example, for x = 0.5 we have:
[latex latex size=”2″]f(0.5) = \frac{(0.5^2-1)}{0.5-1}[/latex]
[latex latex size=”2″]f(0.5) = 1.5[/latex]
If we approach x=1 closer and closer and evaluate we get these results:
| x    | 0.9 | 0.99 | 0.999 |
|------|-----|------|-------|
| f(x) | 1.9 | 1.99 | 1.999 |
Let’s study the results from the table.
As x gets closer to the value 1 (x approaches 1), the value of the function gets closer to 2 (f(x) approaches 2). This tells us that, although we couldn't evaluate f(x) at x=1, we might expect the value to be 2; however, this guess alone is not a mathematically rigorous answer.
So, what do we do?
Here’s where the limit of a function comes in handy. Using limit, we can say that the limit of f(x), as x approaches 1, is 2. This gives us a more intuitive definition of a limit.
Definition: If the value of a function f(x) approaches L as the input x approaches a, then we say that L is the limit of the function f(x) at the point x=a.
If a function has a limit at the point x=a, we say that the function converges at that point. Otherwise (if the limit does not exist), we say that the function diverges at x=a.
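The numeric approach in the table above can be reproduced with a few lines of Python (a sketch of my own, not part of the original lesson):

```python
def f(x):
    # f is undefined at x = 1 (0/0), but can be evaluated near 1
    return (x**2 - 1) / (x - 1)

# approach x = 1 from below, as in the table
for x in [0.9, 0.99, 0.999]:
    print(x, f(x))  # f(x) approaches 2
```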
## Testing sides, left-hand and right-hand limits
Functions can be defined in many ways. Some functions have graphs where the value y jumps from one point to another. Take a look at these two graphs for example:
In the left graph, as x approaches x=1 from the left side, the value of the function approaches 3. We write this as
[latex latex size=”2″]\lim \limits_{x \to 1-} f(x) = 3[/latex]
Notice the small '-' (minus) sign in the subscript. This is how we denote the left-hand limit (approaching from the left side). For the right-hand limit we write a plus (+) in the subscript. So let's see what happens in this same graph when x approaches x=1 from the right side. We see that the value of the function approaches 1, which means that
[latex latex size=”2″]\lim \limits_{x \to 1+} f(x) = 1[/latex]
So the right-hand limit is 1. The left-hand and right-hand limits are not the same, and when this is the case the limit of the function is not defined, or:
[latex latex size=”2″]\lim \limits_{x \to 1} f(x) = \text{undefined}[/latex]
Now let’s take a look at the other graph (the one on the right).
In this graph, as x approaches x=1 from the left side, f(x) approaches 1, so the left-hand limit is 1. When x approaches x=1 from the right side, f(x) again approaches 1, so the right-hand limit is also 1. When the left-hand and right-hand limits agree, we say that the 'normal' limit (or just the limit) of the function is that common value. So in our case the limit of the function is 1:
Since:
[latex latex size=”2″]\lim \limits_{x \to 1-} f(x) = 1[/latex]
And:
[latex latex size=”2″]\lim \limits_{x \to 1+} f(x) = 1[/latex]
Then:
[latex latex size=”2″]\lim \limits_{x \to 1} f(x) = 1[/latex]
What you need to remember is:
• If left-hand limit and right-hand limit are different at some point then the limit of a function does not exist at that point
• If both left-hand limit and right-hand limit are the same then the limit of a function is that value
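These one-sided checks can also be probed numerically. The sketch below uses a made-up piecewise function standing in for the left graph above (which is not reproduced here), with a jump at x = 1; the function f and the helper one_sided_limit are my own illustrations:

```python
def f(x):
    # hypothetical piecewise function with a jump at x = 1:
    # it approaches 3 from the left and 1 from the right
    return x + 2 if x < 1 else x

def one_sided_limit(g, a, side, eps=1e-6):
    # estimate a one-sided limit by probing g just beside a
    x = a - eps if side == "-" else a + eps
    return g(x)

left = one_sided_limit(f, 1, "-")
right = one_sided_limit(f, 1, "+")
print(left, right)  # left is near 3, right is near 1
```

Since the two estimates disagree, the two-sided limit at x = 1 does not exist.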
## To infinity and beyond…
When dealing with limits we often need to deal with infinite values. There are two cases: first, what happens when the input x approaches infinity; second, what happens when f(x) approaches infinity.
Take a look at this function:
[latex latex size=”2″]f(x)=\frac{1}{x-1}[/latex]
Let’s approach x=1 from both sides:
[latex latex size=”2″]\lim \limits_{x \to 1-} \frac{1}{x-1} = \frac{1}{0-} = – \infty[/latex]
[latex latex size=”2″]\lim \limits_{x \to 1+} \frac{1}{x-1} = \frac{1}{0+} = + \infty[/latex]
We see that when we approach x=1 from the left side the value of f(x) decreases without bound (negative infinity), and if we approach from the right side the value of f(x) increases without bound (positive infinity). This helps us graph the function:
1. First draw a vertical line through x=1 (this is called vertical asymptote).
2. Then, sketch a function such that it approaches that line but never touches it. When approaching from the left go down (-∞), and when approaching from the right go up (+∞).
Now we need to examine what happens when x approaches infinity (x → ±∞). To do this we need to know this simple rule:
[latex latex size=”2″ color=ff0000 ]\lim \limits_{x \to \infty} \frac{k}{x} = 0[/latex]
This rule is true for any real value k. Let's get back to the example and use this rule to investigate what happens as x approaches infinity.
[latex latex size=”2″]\lim \limits_{x \to +\infty} \frac{1}{x-1} = \frac{1}{+\infty} = 0[/latex]
[latex latex size=”2″]\lim \limits_{x \to -\infty} \frac{1}{x-1} = \frac{1}{-\infty} = 0[/latex]
This also helps us sketch the graph:
1. First draw a horizontal line at y=0 (which is the same as x-axis and this is called horizontal asymptote).
2. Then, sketch our function such that when x → +∞ (goes as far right as possible) the function approaches the x-axis. Do the same thing on the left side.
Here is the graph:
Got it? Great! So next time you see a graph of a function try and think about what happens when x approaches infinity values. How do we use limits to determine that behaviour? What if f(x) approaches infinity values? This type of thinking helps you get a better feel for limits and better understanding of them.
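As a quick numerical illustration of both asymptotes of f(x) = 1/(x-1), here is a short Python sketch (my own, not from the lesson):

```python
def f(x):
    # f(x) = 1/(x - 1): vertical asymptote at x = 1,
    # horizontal asymptote y = 0
    return 1 / (x - 1)

print(f(1 - 1e-6))  # large negative value: heading to -infinity
print(f(1 + 1e-6))  # large positive value: heading to +infinity
print(f(1e9))       # tiny value: approaching the x-axis
```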
## Methods to evaluate limits
There are four basic methods to evaluate limits.
#1. Substitute x with the value it approaches.
This is the simplest method but it is rarely applicable. Here’s an example:
[latex latex size=”2″]\lim \limits_{x \to 6} \frac{1}{x-3} = \frac{1}{6-3} = \frac{1}{3}[/latex]
Simple, right? The problem is that with this method you often get an indeterminate form such as 0/0.
#2. Factoring.
Consider this example
[latex latex size=”2″]\lim \limits_{x \to 1} \frac{x^2 -2x + 1}{x-1} [/latex]
If we just put 1 instead of x we get 0/0. So we try factoring the numerator to get:
[latex latex size=”2″]\lim \limits_{x \to 1} \frac{(x-1)(x-1)}{(x-1)} = \lim \limits_{x \to 1} x-1[/latex]
Now we just put the value in to get:
[latex latex size=”2″]\lim \limits_{x \to 1} x - 1 = 1 - 1 = 0[/latex]
So the limit is 0.
#3. Multiplying by conjugate.
The conjugate of an expression is the same expression with the sign in the middle changed; for example, the conjugate of a+b is a-b. This method often helps when we have fractions with radicals:
[latex latex size=”2″]\lim \limits_{x \to 4} \frac{2-\sqrt{x}}{4-x}[/latex]
Multiply both sides of the fraction by conjugate of numerator:
[latex latex size=”2″]\lim \limits_{x \to 4} \frac{2-\sqrt{x}}{4-x} \times \frac{2+\sqrt{x}}{2+\sqrt{x}}[/latex]
Now use difference of squares formula which is a² – b² = (a – b) (a + b) to simplify the numerator:
[latex latex size=”2″]\lim \limits_{x \to 4} \frac{4-x}{(4-x)(2+\sqrt{x})} = \lim \limits_{x \to 4}\frac{1}{2+\sqrt{x}}[/latex]
Put in the value x=4:
[latex latex size=”2″]\lim \limits_{x \to 4} \frac{1}{2+\sqrt{x}} = \frac{1}{2+\sqrt{4}} = \frac{1}{4}[/latex]
#4 Degree of a rational function.
A rational function is of the form
[latex latex size=”2″]f(x) = \frac{P(x)}{Q(x)}[/latex]
By comparing the degrees of the numerator and the denominator we can easily determine whether the limit is 0, +∞ or -∞. Example:
[latex latex size=”2″]\lim \limits_{x \to \infty} \frac{x^3}{x-1}[/latex]
Divide the numerator and the denominator by the largest power of x:
[latex latex size=”2″]\lim \limits_{x \to \infty} \frac{\frac{x^3}{x^3}}{\frac{x}{x^3}-\frac{1}{x^3}}[/latex]
Now the numerator cancels to 1, and the denominator becomes 1/x² - 1/x³, which goes to 0 - 0 = 0 (from the positive side) as x approaches infinity. So what we have in the end is 1/0⁺, and that is +∞, which is our limit.
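A quick numerical check of this conclusion (my own sketch): since the numerator's degree (3) exceeds the denominator's (1), the values should keep growing.

```python
def f(x):
    # degree of numerator (3) exceeds degree of denominator (1),
    # so f(x) grows without bound as x -> +infinity
    return x**3 / (x - 1)

for x in [10, 100, 1000]:
    print(x, f(x))  # values keep increasing
```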
# How does marginalization of variables affect the least squares SLAM energy function?
Consider a simple example of Bundle Adjustment where I have robot and landmark poses $x = \left[ x_p \text{ } x_m\right]^T$ and measurements given by $z$, such that a simple factor graph can be generated with the nodes containing poses and edges containing the measurements. I'll have to solve for a non-linear least squares problem of the form $C(x) = \frac{1}{2}|| r(x) ||^2$ where $r(x)$ denotes the residuals.
I can implement and use any non-linear least squares optimization algorithm such as Gauss-Newton or use a popular library like ceres-solver.
My question is: Now suppose out of the state variables $x = \left[ a \text{ } b\right]^T$, I need to marginalize some variables $b$, while keeping the rest $a$. How do I apply this in terms of Gauss-Newton Algorithm and ceres-solver?
I understand the Gauss-Newton Algorithm and Schur Complement. If the original covariance of the system is $$K = \begin{bmatrix} A & C^T \\ C & D \end{bmatrix}$$
Original information $$K^{-1} = \begin{bmatrix} \Lambda_{aa} & \Lambda_{ab} \\ \Lambda_{ba} & \Lambda_{bb} \end{bmatrix}$$ The marginalized covariance is $K_m = A$ and the marginalized information is $K_m^{-1} = A^{-1}$, where $A^{-1}$ is computed via the Schur complement $A^{-1} = \Lambda_{aa} - \Lambda_{ab}\Lambda_{bb}^{-1}\Lambda_{ba}$.
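As a sanity check of that block identity, here is a small self-contained Python sketch (my own toy example with scalar blocks, not tied to ceres-solver or any SLAM library) showing that the Schur complement of Λ_bb recovers the inverse of the kept covariance block:

```python
# 2x2 symmetric covariance K = [[a, c], [c, d]] with scalar "blocks"
a, c, d = 4.0, 1.0, 3.0

# information matrix Lambda = K^{-1}, via the closed-form 2x2 inverse
det = a * d - c * c
lam_aa, lam_ab = d / det, -c / det
lam_ba, lam_bb = -c / det, a / det

# Schur complement of Lambda_bb: information of 'a' after
# marginalizing out 'b'
marg_info = lam_aa - lam_ab * (1.0 / lam_bb) * lam_ba

# marginalizing b from the covariance keeps the block A = a,
# so the marginal information must equal 1/a
print(marg_info, 1.0 / a)
```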
Now, what do I do when new poses and measurements are added to the system and optimization is done?
Change default folder of Windows Explorer
When you open the Windows Explorer from a shortcut in Windows 7, it always starts in the Libraries folder. You may find this very handy if you are a normal user, but as a power user, you may want to open another folder by default.
To set a different default folder for Windows Explorer, you have to modify the shortcut. The shortcut has a "Target" text box, where you can define any location you want.
To modify the default location in Windows Explorer, do the following:
1. Right click on the Windows Explorer shortcut and click Properties
2. Go to the tab “Shortcut“.
3. In the “Target” field, simply add the location you prefer to open when you click the Windows Explorer shortcut. For example, when you would like to open the C:\itexperience folder, you would set the following target:
%windir%\explorer.exe C:\itexperience
In addition, there may be locations that you can’t specify by a path, like “My Computer”. In order to open Windows Explorer with these special locations as default folder, set the following string as target:
• Computer: %windir%\explorer.exe ::{20D04FE0-3AEA-1069-A2D8-08002B30309D}
ZA
Rion Lerm
### Groundnut Yield Estimation (t.ha-1.season-1; Shelled) Based On Smith's (1994) Climatic Criteria
R.E. Schulze & M. Maharaj
Rich in protein, groundnuts (Arachis hypogaea) can be eaten raw, cooked or roasted. For optimum yields, groundnuts require a MAP > 700 mm of which > 500 mm should fall over their 5 - 6 month summer growing season (Smith, 1998), although yields can be attained with only 300 mm over the growing season (NDA, 2005). Mean daily temperatures should be in the range of 22 – 28 degree Celsius for optimum growth, with yield...
### Extremes (1950 - 2000) Of Maximum Temperatures (C) November
R.E. Schulze & M. Maharaj
When days with very low or very high temperatures are experienced, the question is invariably asked as to whether that extreme minimum or maximum temperature was a record, either for the country or province, or for a specific location or month of the year. It is for that reason that extremes of temperatures, at both ends of the spectrum, were extracted from the daily temperature database generated by Schulze and Maharaj (2004) for the 51...
### Vegetation Status
SANBI-Biodiversity GIS
### Median monthly precipitation (mm) for Near Future (2046-2065)
Christopher Jack
Model Run: Near future (2046 - 2065) (Near future (2046 - 2065)). The Self-Organizing Map Downscaling (SOMD) was developed at the Climate Systems Analysis Group (CSAG)[1], University of Cape Town. This is a leading empirical downscaled technique and provides meteorological station level response to global climate change forcing (See Hewitson and Crane (2006) for methodological details and Wilby et al. (2004) for a review of this and other statistical downscaling methodologies). Downscaling of a General...
### Passion Fruit: Optimum Growth Areas - Criterion 1: > 330 days per year with a minimum temperature exceeding 2 degree Celsius
R.E. Schulze & M. Maharaj
Passion fruit are believed to have originated from southern Brazil (Kenyaweb, 2005).The most commonly commercially grown of 55 edible species of passion fruit is Passiflora edulis, which consists of perennial woody vines with each producing about 100 fruits per year (RBGK, 2005). In South Africa passion fruit ideally requires a cool subtropical climate for optimum production (Smith, 1998). The vines prefer moderate temperatures throughout the year, with monthly means of daily maxima < 29 degree...
### Suitable habitat for Burchell Zebra
SANBI-Biodiversity GIS
The purpose is to provide a preliminary guideline on carrying capacities for game species suited to the Little Karoo. These are expressed as the estimated area (hectares) required per animal for the major habitat types present. It is a guideline for assisting landowners to utilize their veld at sustainable levels. There is no specific combination of game animals that will be suitable for introduction to all of the farms in the Little Karoo. The mix...
### Mean of Maximum Temperature for Near Future (2046-2065)
Christopher Jack
Model Run: Near future (2046 - 2065) (Near future (2046 - 2065)). The Self-Organizing Map Downscaling (SOMD) was developed at the Climate Systems Analysis Group (CSAG)[1], University of Cape Town. This is a leading empirical downscaled technique and provides meteorological station level response to global climate change forcing (See Hewitson and Crane (2006) for methodological details and Wilby et al. (2004) for a review of this and other statistical downscaling methodologies). Downscaling of a General...
### Standard Deviation (C) Of Daily Minimum Temperature March
R.E. Schulze & M. Maharaj
Regarding minimum (night-time) temperatures, there is abundant evidence for the climatic limitation of crop distributions to be described by minimum temperatures, from a perspective of crop survival (see Frost for tolerance levels of crops). Many subtropical crops may be killed at 5 degree Celsius already through chilling injury, even in the absence of frost (Chang, 1968). High minima increase respiration loss, thereby decreasing net photosynthesis. Daily minimum temperature time series were derived from over 970 qualifying...
### Extremes (1950 - 2000) Of Maximum Temperatures (C) October
R.E. Schulze & M. Maharaj
When days with very low or very high temperatures are experienced, the question is invariably asked as to whether that extreme minimum or maximum temperature was a record, either for the country or province, or for a specific location or month of the year. It is for that reason that extremes of temperatures, at both ends of the spectrum, were extracted from the daily temperature database generated by Schulze and Maharaj (2004) for the 51...
### Heat Units (Standard Deviation) - October
R.E. Schulze & M. Maharaj
The concept of the heat unit (or degree day), known since the mid-18th century already, revolves around the development of a plants or organisms being dependent upon the accumulated heat to which it was subjected during its lifetime, or else during a certain developmental stage. This measure of accumulated heat is known as physiological time. In general it holds that the lower the temperature, the slower the rate of growth and development of plants and...
### Monthly Means Of Daily Average Temperature (C) May
R.E. Schulze & M. Maharaj
Under conditions of natural vegetation, mean temperatures are used to distinguish between three broad thermal divisions of plants, viz. mega-thermal plants, which require mean monthly temperatures above 20 degree Celsius for at least 4 months of the year; micro-thermal plants, which grow where 8 months or longer have means below 10 degree Celsius; and the meso-thermal plants of the mid-latitudes, which cover most of South Africa, and whose physiology is adapted to the strong seasonal...
### Above Ground Herbaceous Biomass (gC/sq.m)
CSIR
Above Ground Herbaceous Biomass (AGBherb) is predominantly grasses, but also forbs, restios and sedges. It is based on published relationships between rainfall and yearly grass production, reduced proportionately to take into account competition by trees (TCF). AGBherb varies greatly through the year, reaching a peak near the end of the growing season and declining to near zero by the beginning of spring, especially in the presence of fire and/or herbivory. Units: average gC/m2 within a 1 km x 1 km pixel
### Total precipitation for the month (mm) for Current Climate (1950-1999)
Christopher Jack
Model Run: Control - Current Climate (Observed/current climate as modelled by climate model). The Self-Organizing Map Downscaling (SOMD) was developed at the Climate Systems Analysis Group (CSAG)[1], University of Cape Town. This is a leading empirical downscaled technique and provides meteorological station level response to global climate change forcing (See Hewitson and Crane (2006) for methodological details and Wilby et al. (2004) for a review of this and other statistical downscaling methodologies). Downscaling of a...
### Ruggedness Index - Eastern Cape
SANEDI
Purpose: This data set was created for the WASA project and the Department of Energy, South Africa. The wind resource maps were designed specifically for inclusion in GIS-based strategic environmental assessments (SEA) for wind power in Western Cape and parts of Northern and Eastern Cape. Methodology: Reference is made to the information and documentation available from www.wasaproject.info and www.wasa.csir.co.za. Limitations: The data set is limited by the operational envelopes of the wind atlas methodology and...
### Seasonal mean near-surface (2m) temperature (°C) change from the median projected for 2036-2065, relative to present (1976 - 2005), for SON season under the RCP 4.5 pathway
SMHI
Seasonal (SON) mean near-surface (2m) temperature (°C) change from the median projected for 2036-2065, relative to present (1976 - 2005), under the RCP 4.5 pathway for the southern African region. To generate the image, nine coarse General Circulation Models (GCM) are downscaled to a finer spatial resolution (0.44°x 0.44°) using the Rossby Centre regional model (RCA4) forcing its lateral boundaries. The model simulated daily temperature averages, which are used to generate projections of seasonal change....
### Rainfall days per month for Far Future (2080 - 2100)
Christopher Jack
Model Run: Far future (2080 - 2100) (Far future (2080 - 2100)). The Self-Organizing Map Downscaling (SOMD) was developed at the Climate Systems Analysis Group (CSAG)[1], University of Cape Town. This is a leading empirical downscaled technique and provides meteorological station level response to global climate change forcing (See Hewitson and Crane (2006) for methodological details and Wilby et al. (2004) for a review of this and other statistical downscaling methodologies). Downscaling of a General...
### Position of Agulhas Current at satellite track 020 in AGUHYCOM using technique described in Malan et. al (2018) JGR
Neil Malan
Identification of core position of the Agulhas Current, as plotted in Fig. 3 of Malan et. al. (2018)
### Rainfall days per month (> 2mm) for Near Future (2046-2065)
Christopher Jack
Model Run: Near future (2046 - 2065) (Near future (2046 - 2065)). The Self-Organizing Map Downscaling (SOMD) was developed at the Climate Systems Analysis Group (CSAG)[1], University of Cape Town. This is a leading empirical downscaled technique and provides meteorological station level response to global climate change forcing (See Hewitson and Crane (2006) for methodological details and Wilby et al. (2004) for a review of this and other statistical downscaling methodologies). Downscaling of a General...
### Climatically Suitable Growth Areas for Eucalyptus fraxinoides
R.E. Schulze & M. Maharaj
Fast growing, medium to tall and low maintenance species which is tolerant of low moisture conditions found in areas with MAP as low as 150 mm, Eucalyptus fraxinoides is native to the coastal ranges/escarpments of southeastern New South Wales and Victoria in Australia (CSIRO, 2005). It grows optimally in quite a narrow band of MATs between 14 and 16 degree Celsius (Kunz, 2004). Eucalyptus fraxinoides, the timber of which is used in construction and flooring...
### Seasonal mean near-surface (2m) temperature (°C) change from the 10% percentile projected for 2066 - 2095, relative to present (1976 - 2005), for DJF season under the RCP 4.5 pathway
SMHI
Seasonal (DJF) mean near-surface (2m) temperature (°C) change from the 10% percentile projected for 2066-2095, relative to present (1976 - 2005), under the RCP 4.5 pathway for the southern African region. To generate the image, nine coarse General Circulation Models (GCM) are downscaled to a finer spatial resolution (0.44°x 0.44°) using the Rossby Centre regional model (RCA4) forcing its lateral boundaries. The model simulated daily temperature averages, which are used to generate projections of seasonal...
### Root Mean Square Difference between the nine ensemble member change anomalies of the seasonal mean near-surface (2m) temperature for the 10% percentile for 2036 - 2065 relative to 1976-2005, for the JJA season, under the RCP 4.5 pathway
SMHI
Root Mean Square Difference for seasonal (JJA) mean near-surface (2m) temperature (°C) change from the 10% percentile projected for 2036-2065, relative to present (1976 - 2005), under the RCP 4.5 pathway for the southern African region. To generate the image, nine coarse General Circulation Models (GCM) are downscaled to a finer spatial resolution (0.44°x 0.44°) using the Rossby Centre regional model (RCA4) forcing its lateral boundaries. The model simulated daily temperature averages, which are used...
### Bush Encroachment and Land Cover Change
Kerryn Warren & Wim Hugo
* The dataset shows landcover change between 1990 (lcov_1990) and 2013 (lcov_2013). * Attribute information includes ; pagenumber, mapcode12 (vegetation type), dn_new (aboveground carbon values-DMt/ha), lcov_1990, lcov_2013, parent1990 (general landcover name), parent2013, old_new (landcover change), parent_old_new (landcover change by name), dn_ave_1990 (average for each landcover in 1990), dn_ave_2013, dn_ave_change (average carbon change between 1990 and 2013); and boolean information for the following: bush encroachment, grass encroachment, perturbation, degradation, restoration, habitat loss. * Landcover data entries...
### STEP - Planning Domain
SANBI-Biodiversity GIS
A map of the STEP planning domain, based on the coastline, and cadastral and municipal boundaries where possible. The boundary was defined to include the thicket vegetation found east of the Duiwenhoks River and west of the Great Kei River. Thicket vegetation west of this planning domain (i.e. towards Cape Town) was not incorporated, considering that it has already been taken into account in the Cape Action Plan for the Environment (Cowling et al. 1999)....
### Mean Maize Yield (T/Ha/Season) Long Season Hybrid Plant Date: 15 December
R.E. Schulze & N.J. Walker
This data set covers maize (Zea mays L.) in South Africa, the country's most important field and grain crop. The objective of the study was to simulate maize yields, and their inter-annual variability, at the spatial resolution of Quaternary Catchments for 12 different combinations of hybrid length and plant date, viz. 15 October, 15 November and 15 December. This was done to evaluate which hybrid lengths and plant dates give the highest yields irrespective of plant dates...
Radar plot of model scores. Scores are scaled to [0,1]: each score is inverted and divided by the maximum score value.
plotModelRanking(object, ..., scores = c("MAE", "MSE", "REC", "RROC"),
new.score = NULL, table = TRUE)
## Arguments
- object: An object of class modelAudit.
- ...: Other modelAudit objects to be plotted together.
- scores: Vector of score names to be plotted.
- new.score: A named list of functions that take one argument (an object of class modelAudit) and return a numeric value. The measure calculated by the function should have the property that a lower score value indicates a better model.
- table: Logical. Specifies if a table with score values should be plotted.
## Value
ggplot object
plot.modelAudit
library(auditor)
library(car)
lm_model <- lm(prestige ~ education + women + income, data = Prestige)
lm_au <- audit(lm_model, data = Prestige, y = Prestige$prestige)
library(randomForest)
rf_model <- randomForest(prestige ~ education + women + income, data = Prestige)
rf_au <- audit(rf_model, data = Prestige, y = Prestige$prestige)
plotModelRanking(lm_au, rf_au)
Analyze Promoter Regions In A Vcf File
3
0
9.4 years ago
farah ▴ 30
I have a VCF file. My aim is to check promoter variants within this file. I have a list of all the promoter regions in FASTA format (supposedly, this file has a chromosome number and the start and end positions of the promoter region, which I defined as 2000 bp upstream). How do I proceed? Any help?
promoter vcf • 2.7k views
2
9.4 years ago
fo3c ▴ 450
Assuming the calls in the vcf file have been filtered and are high-confidence calls, extract the intervals from the fasta file (headers) and intersect them with the vcf file to see what mutations, if any, have been called in the promoter regions.
0
9.4 years ago
Rm 8.2k
Use Bedtools intersect or intersectbed or windowBed depending on your input
bedtools intersect -a promotor.vcf -b input.VCF -f 1 -wao
(you can use -wa or -wb etc accordingly)
Info from Bedtools:
Note: When intersecting SNPs, make sure the coordinate conform to the UCSC format. That is, the start position for each SNP should be SNP position - 1 and the end position should be SNP position. E.g. chr7 10000001 10000002 rs123464
Report the base-pair overlap between sequence alignments and genes.
If you have Gene cordinates as bed file: to Report all SNPs that are within 5000 bp upstream (-l) or 1000 bp downstream (-r) of genes. Define upstream and downstream based on strand (-sw).
windowBed -a genes.bed -b snps.bed -l 5000 -r 1000 -sw
0
9.4 years ago
Convert the FASTA-formatted promoter regions to a UCSC-formatted BED file and use a tool like BEDOPS bedmap to map promoter regions that overlap your VCF calls. A discussion follows which explains how you might approach this.
To start, let's say your promoter regions are formatted as follows:
>promoter1 chrN:A-B
ACGTACG...TACAGT
>promoter2 chrN:C-D
...
Convert this to a five-column BED file, where the chromosome, start, stop and ID columns are taken from the FASTA header, and a dummy score value is added for convenience. For example:
#!/usr/bin/env perl
use strict;
use warnings;
while (<>) {
    if ($_ =~ /^>/) {
        chomp;
        my $ln = $_;
        $ln =~ s/^>//s;
        # header is ">id chr:start-stop"; split on whitespace, ':' and '-'
        my ($id, $chr, $start, $stop) = split(/[\s+|:|-]/, $ln);
        my $score = 0;
        print STDOUT join("\t", ($chr, $start, $stop, $id, $score))."\n";
    }
}
Convert the FASTA data to a sorted BED file with a command like:
$ ./fa2bed.pl < myPromoters.fa | sort-bed - > myPromoters.bed
This is a "map" file that contains promoter regions, IDs and (placeholder) scores that we can use as an input to bedmap:
$ more myPromoters.bed
chrN    A    B    promoter1    0
chrN    C    D    promoter2    0
...
We're now ready to answer the question. We'll first convert your VCF calls to 0-indexed BED using the vcf2bed conversion script. We pipe the conversion results into sort-bed to sort them, and then pipe that into bedmap (along with myPromoters.bed) to do the actual mapping:
$ vcf2bed < myCalls.vcf \
    | sort-bed - \
    | bedmap --echo --echo-map-id --delim '\t' - myPromoters.bed \
    > myAnswer.bed
The result (myAnswer.bed) will be a BED-formatted file. The first four columns will be the (0-indexed) positional data and variant ID of the VCF call. The last column (either the eleventh or twelfth column) will be the ID (or IDs) of promoter regions which overlap that call. (If you want the full promoter element — region and ID — use --echo-map in place of --echo-map-id.)
If you need to adjust the boundaries of your definition of a promoter region, you can use BEDOPS bedops to pre-process your myPromoters.bed file. As an example, here is how to use the --range operator to widen the boundaries of each promoter upstream by 10000 bases and downstream by 500 bases, along with the --everything operator to process all elements:
\$ bedops --range 10000:500 --everything myPromoters.bed > myWidenedPromoters.bed
Then use the myWidenedPromoters.bed file with mapping operations, etc.
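If you'd rather not depend on bedops for this step, the asymmetric padding can be approximated in a few lines of Python (a sketch only; it pads by coordinates, ignores strand, and clamps starts at zero so coordinates stay valid):

```python
def widen(start, stop, left=10000, right=500):
    """Pad a BED interval by `left` bases on the left and `right` bases
    on the right (coordinate-based, like bedops --range left:right)."""
    return max(0, start - left), stop + right

print(widen(200000, 201000))  # (190000, 201500)
print(widen(4000, 5000))      # start clamped at zero: (0, 5500)
```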
Thank you very much for your help. I did convert the promoter reference file to BED format, but then I used VCFtools to intersect the patient's VCF file with the promoter reference file. I found VCFtools very easy for beginners.
Of 28 students, 10 buy The Economist, 10 buy Pravda, and 18 students don't buy either magazine. How many students buy both magazines?
Result
n = 10
#### Solution:
$j = 28 - 18 = 10 \\ n = 10 + 10 - j = 10 + 10 - 10 = 10$
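The solution above is an instance of the inclusion–exclusion principle, $n(A \cup B) = n(A) + n(B) - n(A \cap B)$; a quick check in Python (the variable names are ours):

```python
total, neither = 28, 18
economist, pravda = 10, 10

at_least_one = total - neither            # 10 students buy some magazine
both = economist + pravda - at_least_one  # inclusion-exclusion
print(both)  # 10
```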
Our problems were largely sent in or created by pupils and students themselves. We would therefore be grateful if you reported any errors you find, spelling mistakes, or suggested rephrasings of an example. Thank you!
## Next similar math problems:
1. Hotel
The hotel has p floors; each floor has i rooms, of which a third are single and the rest are double. Express the number of beds in the hotel.
2. 80 students
80 students were asked what type of test they preferred. 50 students said they liked multiple choice and 42 liked true or false. If 36 liked both multiple choice and true or false types, how many students preferred multiple choice only?
3. Class
The class has 18 students. Everyone can inline skate or skateboard. 11 students can inline skate and 10 can skateboard. How many can do both?
4. Waste paper
1,300 pupils attend our school. Each pupil collected plastic bottles or paper: 930 pupils collected plastic bottles and 600 collected waste paper. How many pupils collected both plastic bottles and waste paper?
5. Glasses
Consider the set of students in your class (19 students in total) who wear glasses. What are the minimum and maximum numbers of elements this set may contain?
6. Ten pupils
10 pupils went to the store. 6 pupils bought lollipops and 9 pupils bought chewing gum. How many pupils bought both lollipops and chewing gum (if everyone bought something)?
7. Disjoint
How many elements do the union and the intersection of two disjoint sets have, when the first set has 1 element and the second 8 elements?
Add together and write as a decimal number: LXVII + MLXIV
9. Brothers and sisters
There are 35 children in the class, 23 of them have a brother, and 27 of them have a sister. How many children have both a brother and a sister when there are 5 children in the class who have no brother or sister?
10. Rings - intersect
There are 15 pupils in the sports rings. 10 pupils go to football and 8 pupils go to floorball. How many pupils attend both rings at the same time?
11. Two sets
Suppose Set B contains 69 elements and the total number of elements in either Set A or Set B is 124. If Sets A and B have 29 elements in common, how many elements are contained in Set A?
12. Gloves
Petra has ten pairs of gloves in the closet. Six pairs are blue and four pairs are yellow. At least how many individual gloves must she pull out in the dark to be sure of having one complete pair of a single color?
13. Italian writer
There were 17 children in the barracks with an Italian writer. 6 children ordered a book in an Italian original and 12 children in the translation. How many children ordered a book in both languages if the three children did not order it?
14. If-then equation
If 5x - 17 = -x + 7, then x =
15. Valid number
Round 453874528 to 2 significant figures.
16. Pizza 4
Marcus ate half a pizza on Monday night. He then ate one third of the remaining pizza on Tuesday. Which of the following expressions shows how much pizza Marcus ate in total?
17. Flood water
Flooding in a US village meant that 364 people had to be evacuated from their homes. 50 of them stayed at elementary schools, 59 of them slept at friends' houses, and the others went to relatives. How many people went to relatives?
## Abstract
Despite important advancements in stent technology for the treatment of diseased coronary arteries, major complications still affect the postoperative long-term outcome. The stent-induced flow disturbances, and especially the altered wall shear stress (WSS) profile at the strut level, play an important role in the pathophysiological mechanisms leading to stent thrombosis (ST) and in-stent restenosis (ISR). In this context, the analysis of the WSS topological skeleton is attracting growing interest, as it extends the current understanding of the association between local hemodynamics and vascular diseases. This study aims to analyze the impact that a deployed coronary stent has on the WSS topological skeleton. Computational fluid dynamics (CFD) simulations were performed in three stented human coronary artery geometries reconstructed from clinical images. The selected cases presented stents with different designs (i.e., two contemporary drug-eluting stents and one bioresorbable scaffold) and included regions with stent malapposition or overlapping. A recently proposed Eulerian-based approach was applied to analyze the WSS topological skeleton features. The results highlighted that the presence of single or multiple stents within a coronary artery markedly impacts the WSS topological skeleton. In particular, repetitive patterns of WSS divergence were observed at the luminal surface, highlighting a WSS contraction action exerted proximal to the stent struts and a WSS expansion action distal to the stent struts. This WSS action pattern was independent of the stent design. In conclusion, these findings could contribute to a deeper understanding of the hemodynamics-driven processes underlying ST and ISR.
## Introduction
Percutaneous coronary intervention (PCI) with drug-eluting stent implantation is the gold standard endovascular treatment for patients suffering from coronary artery disease [1]. Contemporary stent platforms are made of cobalt–chromium or platinum–chromium, allowing for thinner stent struts (<100 μm) and more flexibility than old-generation devices, while maintaining adequate radial strength [2,3]. These modern devices have reduced the incidence rate of stent thrombosis (ST) below 1% after 1 yr of intervention [4,5]. Conversely, the incidence rate of in-stent restenosis (ISR) still remains at 5–10% [6]. Moreover, the period of ISR presentation is generally longer than that of the old-generation devices and often extends several years beyond the intervention [6]. To avoid late adverse events promoted by the persistence of the metallic platforms in the coronary vessel (including late ST and ISR), bioresorbable scaffolds, either based on a fully resorbable polymeric or metallic backbone, are currently under development and clinical testing [2]. These devices provide temporary mechanical support and drug delivery to the coronary vessel within 1 yr after implantation and completely resorb in the subsequent 1–2 yr, restoring the normal luminal diameter and vasomotor function [2,3]. Bioresorbable scaffolds present the notable advantage of reducing the need of the long-term dual antiplatelet therapy and allow for surgical revascularization, if needed [2]. As a counterpart, they present the disadvantage that thicker struts are necessary to provide radial forces similar to the contemporary drug-eluting stents [2], with the consequent exacerbation of the local flow disturbances. In this regard, in addition to systemic and biologic factors [6,7], numerous studies have identified local blood flow disturbances at the stent strut level as a key contributor for the ISR development [7,8]. 
In particular, a negative association between baseline wall shear stress (WSS) and neointimal thickness at follow-up has emerged in different clinical datasets [8]. However, the role of the altered WSS profile on the pathophysiological mechanisms leading to ISR is still under investigation. Moreover, the lack of endothelial coverage or delayed re-endothelialization observed in consequence of thicker stent struts, such as in bioresorbable scaffolds, increases the risk of thrombosis [9,10].
In this context, the analysis of the WSS topological skeleton, which is contributing to improve and extend the current understanding of the association between local hemodynamics and vascular diseases [11–14], could help to better identify the biomechanical stimuli involved in clinical adverse events eventually leading to failure of stenting procedures. The WSS topological skeleton is composed of fixed points, where the WSS vanishes, and of manifolds connecting fixed points [11,12]. Recently, an approach has been proposed for (i) the identification of WSS manifolds [12,15] and (ii) the quantitative analysis of the contraction/expansion action exerted by the shear forces on the endothelium [15–17], starting from the distribution of the WSS divergence on the arterial luminal surface. Using such a WSS divergence-based approach, evidence has emerged that the variability of the WSS contraction/expansion action along the cardiac cycle is associated with wall degradation in the ascending thoracic aorta aneurysm [17] and is a predictor of the risk of long-term restenosis in the carotid bifurcation after endarterectomy [16]. Furthermore, the findings of a very recent longitudinal study focusing on coronary arteries indicated that high variability in the WSS contraction/expansion action is a predictor of wall thickness change over time (a hallmark of atherosclerosis development), suggesting that both the WSS manifold dynamics along the cardiac cycle and the WSS magnitude contribute to coronary atherosclerosis, acting as different hemodynamic stimuli [15]. Applied in a clinical setting, high variability in the WSS contraction/expansion action added incremental predictive and discriminative capacity to area stenosis, virtual fractional flow reserve, and time-averaged WSS (TAWSS) in identifying intermediate coronary lesion sites of subsequent myocardial infarction at 5-yr follow-up [18].
Based on the above mentioned observations and on the direct link of the WSS topological skeleton features with near-wall flow stagnation, separation, and recirculation, which represent typical flow conditions of stented coronary arteries [11,12], in this study the impact that a deployed coronary stent has on the WSS topological skeleton is investigated. The possible involvement of WSS topological skeleton features in the processes leading to post-PCI complications is discussed. For this purpose, computational fluid dynamics (CFD) simulations are performed in stented human coronary artery models reconstructed from clinical images. The impact of three different stent designs (i.e., two contemporary drug-eluting stents and one bioresorbable scaffold) and of stent positioning (i.e., malapposition and overlapping) on the WSS topological skeleton features is analyzed.
## Methods
### Coronary Artery Geometries.
Three cases (A–C) of pathological left anterior descending coronary artery (LAD) of patients suffering from coronary artery disease who underwent percutaneous coronary stent implantation were investigated (Fig. 1). Cases A and B were treated with cobalt–chromium drug-eluting stents at University Hospital Doctor Peset (Valencia, Spain) [19,20]. In case A, a 3.0 × 28 mm Xience Prime everolimus-eluting stent (Abbott Laboratories, Abbott Park, IL) was implanted in the proximal portion of the LAD at the level of the first diagonal branch and a septal branch by means of the provisional side branch stenting technique concluded by proximal optimization. The Xience Prime stent is characterized by an open-cell peak-to-valley design and struts with square cross section and thickness of 81 μm (Fig. 2) [2]. In case B, two Endeavor Resolute zotarolimus-eluting stents (Medtronic, Dublin, Ireland) (sizes 2.75 × 15 mm and 3.0 × 15 mm) were deployed in the midportion of the LAD at the level of the first and second diagonal branches. The two stents were sequentially deployed using the provisional side branch technique, resulting in a stent overlapping region of ∼20 mm. The Endeavor Resolute stent presents an open-cell peak-to-peak design and struts with circular cross section and diameter of 91 μm (Fig. 2) [2]. Case C was treated with a drug-eluting bioresorbable scaffold at Rivoli Infermi Hospital (Turin, Italy). In particular, a 3.0 × 25 mm resorbable magnesium-based sirolimus-eluting scaffold Magmaris (Biotronik AG, Bülach, Switzerland) was implanted in the midportion of the LAD at the level of the second diagonal branch following a provisional stenting technique concluded by proximal optimization, without the need of final kissing balloon inflation. This bioresorbable scaffold is characterized by an open-cell peak-to-valley design with midstrut connector and struts with rectangular cross section and thickness of 150 μm (Fig. 2) [21].
The study was conducted in accordance with the principles of the Declaration of Helsinki and met the requirement of medical ethics. The study protocol was approved by the institutional review board of the involved hospitals. All patients gave written informed consent.
Fig. 1
Fig. 2
Patient-specific geometries of the three stented coronary arteries were obtained from medical images (Fig. 1). Regarding cases A and B, pre-operative vessel geometries were reconstructed by combining conventional angiography and computed tomography angiography images. Finite element analyses replicating the entire stenting procedure were then performed to obtain the stented lumen configuration (i.e., post-PCI vessel configuration) to be used for CFD simulations. Details on vessel and stent geometry reconstruction, and virtual stenting procedure were extensively described elsewhere [19,20]. Regarding case C, the post-PCI vessel configuration was reconstructed through the fusion of conventional angiography and optical coherence tomography (OCT) images acquired immediately after scaffold implantation. In more detail, the following five-step procedure was applied [22–24]: (i) semi-automatic detection of the lumen contours and stent struts on the OCT frames using an in-house developed algorithm; (ii) extraction of the vessel centerline from two angiographic views; (iii) placement of the lumen contours and stent struts orthogonal to the vessel centerline using the side branches as reference to properly orient the OCT frames; (iv) smooth connection of the lumen contours to obtain the lumen surface; and (v) creation of the three-dimensional stent model by means of a manual morphing procedure that adapts the stent skeleton in its straight free-expanded configuration toward the stent point cloud detected from OCT.
In addition to the three stented coronary artery geometries, the corresponding lumen geometries without stent (i.e., nonstented cases) were considered for comparison purposes. These vessel geometries were obtained by excluding the stents from the domain of interest and by smoothing the lumen surface using the open-source software vmtk (Orobix, Bergamo, Italy) to avoid abrupt local changes of the luminal surface after the stent removal.
### Computational Fluid Dynamics.
The coronary artery geometries were discretized into tetrahedral elements with five layers of prismatic elements at the luminal surface using the commercial software fluentmeshing (Ansys Inc., Canonsburg, PA). The element size was defined based on a mesh independence study, resulting in a mesh cardinality ranging from 5,068,182 to 13,948,023 elements and from 526,492 to 1,548,654 elements for the stented and nonstented vessel models, respectively. A smaller element size (0.02 mm) was defined at the stent struts.
The local hemodynamics of all cases under investigation was analyzed by performing transient CFD simulations. Specifically, the governing equations of the unsteady-state fluid motion were numerically solved using the finite volume-based commercial code fluent (Ansys Inc.). Details on boundary conditions imposed at inlet and outlet sections of each model were extensively described in Ref. [19]. Briefly, a pulsatile flow waveform, distinctive for the LAD [25], was applied as paraboloid-shaped velocity profile at the inlet cross section [26]. The pulsatile flow waveform amplitude was personalized to the specific patient according to an inflow section lumen diameter-based scaling law [27]. In this way, flow rate waveforms with anatomically derived, personalized average flow rate values (42.2 ml/min, 45.1 ml/min, and 34.0 ml/min for cases A, B, and C, respectively) were prescribed as inflow boundary conditions (Fig. S1 available in the Supplemental Materials on the ASME Digital Collection). A flow-split, estimated through a scaling law based on the lumen diameter of the daughter branches [27], was imposed at the outflow boundaries (Fig. S1 available in the Supplemental Materials on the ASME Digital Collection). The no-slip condition was applied at the vessel and stent walls, which were considered as rigid. The blood was modeled as an incompressible, homogeneous, non-Newtonian fluid with density of 1060 kg/m3 and viscosity described through the Carreau formulation [19]. The flow was considered laminar as the maximum Reynolds number at peak flow rate was 195, 260, and 140 for cases A, B, and C, respectively. Details on the numerical settings were exhaustively reported in a previous study [19].
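The patient-specific flow split of Ref. [27] is not reproduced here; purely as an illustration of the general form of such diameter-based scaling laws, the following sketch distributes the inflow among daughter branches in proportion to a power of their diameters (the exponent n is an assumption of this sketch, not the value used in the study; n = 3 corresponds to Murray's law):

```python
def flow_split(diameters, n=3.0):
    """Distribute a unit inflow among daughter branches in proportion to
    d_i**n (the exponent n is a modeling assumption; n=3 is Murray's law)."""
    weights = [d**n for d in diameters]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical 3.0 mm main branch vs 2.0 mm diagonal branch:
fractions = flow_split([3.0, 2.0])
print(fractions)  # fractions sum to 1; the larger branch takes more flow
```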
### Wall Shear Stress Features.
The Eulerian-based method recently proposed for Newtonian fluids [12] and extended to the class of Reiner–Rivlin fluids [15] was applied to analyze the WSS topological skeleton features at the stented region, considered as the region of interest. In more detail, the WSS topological skeleton consists of a collection of fixed points (i.e., points where the WSS vanishes) and associated unstable/stable manifolds [11,12] (Fig. 3(a)). As reported in previous studies [12,15], WSS manifolds can be identified by computing the divergence of the normalized WSS vector field at the stented vessel surface, expressed as
$\mathrm{DIV_{WSS}} = \nabla \cdot \left( \dfrac{\boldsymbol{\tau}}{\left\| \boldsymbol{\tau} \right\|_{2}} \right) = \nabla \cdot \boldsymbol{\tau}_{u}$
(1)
where $\boldsymbol{\tau}$ and $\boldsymbol{\tau}_{u}$ are the WSS vector and its unit vector, respectively. From a physical perspective, vessel surface regions characterized by negative/positive values of $\mathrm{DIV_{WSS}}$ correspond to contraction/expansion regions approximating unstable/stable manifolds (Fig. 3(a)). To complete the WSS topological skeleton analysis, the WSS fixed points were identified at the vessel surface according to the procedure proposed elsewhere, based on the Poincaré index [11,12,28]. Then, the WSS fixed points were classified according to their nature (i.e., saddle point, node, or focus; Fig. 3(a)), using the approach based on the eigenvalues of the WSS Jacobian matrix [11–13,29].
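As an illustration of the eigenvalue-based classification, the following minimal sketch applies the standard rules to a 2 × 2 in-plane WSS Jacobian (a simplified stand-in for the full procedure of Refs. [11–13,29], which also distinguishes stable from unstable points):

```python
import numpy as np

def classify_fixed_point(J, tol=1e-12):
    """Classify a WSS fixed point from the eigenvalues of its 2x2
    in-plane Jacobian: a complex-conjugate pair -> focus, real
    eigenvalues of opposite sign -> saddle, same sign -> node."""
    eig = np.linalg.eigvals(np.asarray(J, dtype=float))
    if np.max(np.abs(np.imag(eig))) > tol:
        return "focus"
    if np.real(eig)[0] * np.real(eig)[1] < 0:
        return "saddle"
    return "node"

print(classify_fixed_point([[1, 0], [0, -1]]))   # saddle
print(classify_fixed_point([[-1, 0], [0, -2]]))  # node (stable)
print(classify_fixed_point([[0, 1], [-1, 0]]))   # focus
```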
Fig. 3
First, the cycle-average WSS vector field $\overline{\boldsymbol{\tau}}$ was analyzed. Subsequently, the instantaneous WSS topological skeleton along the cardiac cycle was characterized. To do that, the amount of variation in the WSS contraction or expansion action along the cardiac cycle (Fig. 3(b)) was quantified by computing the topological shear variation index (TSVI) [15–17], namely, the root-mean-square deviation of the divergence of the normalized WSS with respect to its average along the cardiac cycle
$\mathrm{TSVI} = \left\{ \dfrac{1}{T} \int_{0}^{T} \left[ \nabla \cdot (\boldsymbol{\tau}_{u}) - \overline{\nabla \cdot (\boldsymbol{\tau}_{u})} \right]^{2} \mathrm{d}t \right\}^{1/2}$
(2)
where T is the cardiac cycle period. Moreover, to characterize the unsteady nature of the WSS fixed points along the cardiac cycle, the WSS fixed point weighted residence time along the cardiac cycle was computed [12,15,16]
$\mathrm{RT}_{x_{fp}}^{\nabla}(e) = \dfrac{A_{avg}}{A_{e}} \dfrac{1}{T} \int_{0}^{T} I_{e}(\mathbf{x}_{fp}, t) \left| (\nabla \cdot \boldsymbol{\tau})_{e} \right| \mathrm{d}t$
(3)
where $\mathbf{x}_{fp}(t)$ represents the location of a WSS fixed point at time $t \in [0, T]$, $e$ is a generic element of the mesh with area $A_{e}$, $A_{avg}$ is the average surface area of all elements of the mesh, $I_{e}$ is the indicator function (equal to 1 when element $e$ hosts a fixed point, 0 otherwise), and $(\nabla \cdot \boldsymbol{\tau})_{e}$ is the instantaneous WSS divergence at element $e$. Equation (3) quantifies the fraction of the cardiac cycle in which a generic mesh element $e$ hosts a fixed point, weighting the residence time by the strength of the contraction/expansion action measured by the WSS divergence.
To provide a more complete picture of the near-wall hemodynamics, the well-known WSS-based descriptor TAWSS, namely, the average WSS magnitude value along the cardiac cycle, was computed in addition to the WSS topological skeleton features.
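For a single surface point with uniformly sampled WSS data, TAWSS and the TSVI of Eq. (2) reduce to simple time statistics; the following schematic NumPy sketch (our own simplification, assuming the normalized-WSS divergence time series is already available from the mesh-based computation) makes the definitions concrete:

```python
import numpy as np

def tawss(wss_vectors):
    """Cycle-average WSS magnitude from an (n_t, 3) array of WSS samples."""
    return np.mean(np.linalg.norm(wss_vectors, axis=1))

def tsvi(div_wss_u):
    """Root-mean-square deviation of the normalized-WSS divergence from
    its cycle average (discrete analogue of Eq. (2)), at one point."""
    d = np.asarray(div_wss_u, dtype=float)
    return np.sqrt(np.mean((d - d.mean())**2))

t = np.linspace(0.0, 1.0, 200, endpoint=False)     # one cardiac cycle
wss = np.stack([np.cos(2*np.pi*t), np.sin(2*np.pi*t), 0*t], axis=1)
print(tawss(wss))               # unit-magnitude rotating WSS -> 1.0
print(tsvi(np.sin(2*np.pi*t)))  # RMS of a zero-mean sine -> ~0.707
```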
The exposure to large variations in the WSS contraction or expansion action was quantified by the relative surface area at the stented region exposed to high values of TSVI, considering as threshold value the 80th percentile of the TSVI distribution of each nonstented model. This variable was denoted as topological shear variation area (TSVA). Similarly, the exposure to the action of instantaneous WSS fixed points was quantified by the relative surface area exposed to non-null values of $\mathrm{RT}_{x_{fp}}^{\nabla}(e)$, namely, considering the luminal surface area at the stented region where fixed points occurred along the cardiac cycle. This variable was denoted as weighted fixed point area (wFPA). Finally, the exposure to low TAWSS values was quantified by the relative surface area at the stented region exposed to TAWSS values below a threshold value, corresponding to the 20th percentile of the TAWSS distribution of the nonstented model. This variable was denoted as low shear area (LSA).
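The area-based descriptors defined above share one common pattern, sketched below (assumed inputs: per-element descriptor values and element areas; as described, the percentile threshold is taken from the nonstented model):

```python
import numpy as np

def relative_area(values, areas, threshold, above=True):
    """Fraction of surface area where per-element `values` exceed (or
    fall below) `threshold` -- the pattern behind TSVA (above the
    nonstented 80th percentile) and LSA (below the 20th percentile)."""
    values, areas = np.asarray(values), np.asarray(areas)
    mask = values > threshold if above else values < threshold
    return areas[mask].sum() / areas.sum()

# Toy numbers (not from the study): threshold from a "nonstented" sample
tsvi_nonstented = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
thr = np.percentile(tsvi_nonstented, 80)         # 42.0
tsvi_stented = np.array([45.0, 50.0, 30.0, 60.0])
areas = np.ones(4)                               # equal-area elements
print(relative_area(tsvi_stented, areas, thr))   # 0.75
```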
## Results
### Hemodynamic Impact of Stenting.
Since the WSS topological skeleton features directly link to near-wall flow stagnation, separation, and recirculation areas [11,12], an overview of the intravascular hemodynamic features of the three cases under analysis is briefly presented in Fig. 4, comparing the stented with the nonstented models in terms of in-plane flow patterns on explanatory cross-sectional planes. As expected, stent implantation induced small flow recirculation regions close to the stent struts, whose extension was dictated by the peculiar design features of the stent. In these explanatory cases, the near-wall recirculation regions did not affect the large-scale secondary flow patterns, except for case C, where the stent struts broke the classical two-vortex Dean-like structure of the secondary flows (Fig. 4).
Fig. 4
The differences in the hemodynamics induced by the implantation of stents with different design were reflected by the WSS topological skeleton features. The cycle average WSS topological skeleton distribution at the luminal surface of the three cases under investigation, expressed in terms of divergence of the normalized WSS vector field, is displayed in Fig. 5. By visual inspection, in all three cases, the presence of the stent markedly impacted the WSS topological skeleton. In particular, the stented regions presented a repetitive pattern characterized by WSS contraction regions located immediately upstream from each stent ring, identified by negative DIVWSS values, and WSS expansion regions located immediately downstream from each stent ring, identified by positive DIVWSS values. This repetitive pattern was observed along the entire stented regions, except for the stent malapposed region (i.e., proximal stented segment of case A), where a reverse DIVWSS distribution was visible, and the stent overlapped region (i.e., midstented segment of case B), where a less repetitive pattern was present. The bifurcation regions of the stented cases showed a more complex WSS topological skeleton distribution than that of the nonstented cases, in which only a WSS expansion region was identified at the bifurcations' carina. In the stented cases, WSS fixed points were mainly located at the stent peaks and valleys, at the interface between the lumen and the stent struts. The number of fixed points was at least 2 orders of magnitude higher than that of the nonstented cases (Table 1), for which only a single (case A) or few saddle points (cases B and C) were present at the bifurcation regions. Consequently, a high percentage increase (>70%) in the wFPA was also observed in the stented cases with respect to the nonstented ones (Table 2).
Fig. 5
Table 1
Number of fixed points identified in the three investigated cases
	Case A	Case B	Case C
Stented models
No. of foci/nodes	533	550	211
Nonstented models
No. of foci/nodes	0	0	0
Table 2
Percentage increase in wFPA, TSVA, and LSA at the stented region for the stented cases with respect to the nonstented ones
	Case A	Case B	Case C
Percentage increase in wFPA	165.47%	440.16%	71.07%
Percentage increase in TSVA	680.87%	603.23%	593.85%
Percentage increase in LSA	544.94%	415.18%	486.24%
The distribution of TSVI along the luminal surface of the stented cases was different than that of the nonstented ones (Fig. 6). Specifically, in the stented cases, marked variations in the WSS contraction/expansion action along the cardiac cycle, as quantified by TSVI, were mainly located immediately downstream and upstream from the stent struts. The highest values of TSVI (>5000 m−1) were found at the stent peaks and valleys and at the links between the stent rings. Conversely, in the nonstented cases, high TSVI values were located only at the bifurcation regions and at the proximal segment of case A, where the coronary artery presented high curvature and tortuosity. The difference in the distributions of TSVI along the luminal surface between the stented and nonstented cases emerged also from the violin plots of Fig. 7. The stented cases were characterized by higher median values of TSVI as compared to the nonstented ones (862.79 m−1 versus 92.53 m−1, 802.78 m−1 versus 102.69 m−1, and 662.97 m−1 versus 119.18 m−1, for the stented versus the nonstented models of cases A, B, and C, respectively). Furthermore, the stent presence caused a marked percentage increase (>550%) in the TSVA (Table 2).
Fig. 6
Fig. 7
The distribution of TAWSS along the luminal surface of the stented cases was different from that of the nonstented ones, as visually emerged from the color maps of Fig. 8 and the corresponding violin plots of Fig. 9. In both the stented and nonstented cases, low TAWSS values were located at the bifurcation regions and at the proximal segment of case A. In the stented cases, low TAWSS was also present at the stent struts, as expected. The median values of TAWSS were lower in the stented models as compared to the nonstented ones (0.54 Pa versus 1.02 Pa, 0.57 Pa versus 0.93 Pa, and 0.45 Pa versus 0.76 Pa, for the stented versus the nonstented models of cases A, B, and C, respectively). The LSA was markedly higher in the stented models with respect to the nonstented ones (percentage increase > 400% in all cases, Table 2). Moreover, by comparing the color maps of TSVI and TAWSS of Figs. 6 and 8, a colocalization between high TSVI and low TAWSS values in the vicinity of the stent struts was observable.
Fig. 8
Fig. 9
The distributions of DIVWSS, TSVI, and TAWSS on the stent surface are depicted in Fig. 10 for the three investigated stents. A continuous DIVWSS distribution can be observed between the luminal surface close to the stent struts and their side faces (i.e., negative and positive DIVWSS values for the proximal and distal lateral surfaces of the stent struts at the intersection with the luminal surface, respectively). A transition from positive to negative DIVWSS values (i.e., from WSS expansion to contraction regions) in the main flow direction was identified at the top faces of the stent struts. The highest values of TSVI colocalized with the lowest values of TAWSS at the stent peaks and valleys. Low values of TSVI and high values of TAWSS were present at the top faces of the stent struts.
Fig. 10
### Impact of Stent Design.
The repetitive pattern of the cycle average WSS topological skeleton distribution at the luminal surface, characterized by WSS contraction regions located upstream from the stent rings, and WSS expansion regions located downstream from the stent rings, was present in all the investigated cases (Fig. 5), suggesting that this pattern was independent of the stent design features. A different number of fixed points was found in the three investigated cases (Table 1). The number of fixed points depended on the number of peaks/valleys per stent ring and the stent length. The higher the number of peaks/valleys per stent ring and the longer the stent length, the higher the number of fixed points. Consequently, the highest number of fixed points was identified in case B, where two Endeavor Resolute stents were implanted (total length of ∼28 mm, ten peaks per stent ring), while the lowest number in case C, where the Magmaris stent was deployed (length of 25 mm, six peaks per stent ring). Accordingly, the highest percentage increase (440.16%) in the wFPA was observed in case B, while the lowest (71.07%) in case C (Table 2). The number of saddle points and foci/nodes was balanced only in case C (Table 1). Conversely, the number of saddle points was lower than that of the foci/nodes in case A and higher than that of the foci/nodes in case B (Table 1).
The distributions of TSVI along the luminal surfaces of the stented regions were similar for the three investigated cases (Figs. 6 and 7). Although the general patterns of the TAWSS along the stented region were similar for the different cases (i.e., low TAWSS close to the stent struts, gradually increasing toward the stent cell center, Fig. 8), the TAWSS median value for case C was lower than that of the other two cases (0.45 Pa versus 0.54 Pa and 0.57 Pa for case C versus cases A and B, respectively, Fig. 9), suggesting that the open-cell design of the Magmaris stent, characterized by thicker struts than the Xience Prime and Endeavor Resolute stents, had a higher impact on the TAWSS distribution.
## Discussion
### Summary of Findings.
This study investigated the impact that the stenting of human coronary arteries has on the WSS topological skeleton, whose features highlight the complex nature of the interaction of fluid shear forces with the stent struts and the luminal surface. Transient CFD simulations were performed in three patient-specific coronary models featuring different implanted stents, namely, two contemporary drug-eluting stents and one bioresorbable scaffold. The WSS topological skeleton features were analyzed along the stented regions by applying a recently proposed Eulerian-based method [12,15]. The results of the work demonstrated that the presence of single or multiple stents within a coronary artery highly impacts the WSS vector field topological skeleton. More specifically, first a repetitive pattern of the WSS topological skeleton distribution at the luminal surface was observed in the stented regions of all investigated cases. As schematized in Fig. 11, this pattern was characterized by a WSS contraction action on the luminal surface (negative DIVWSS values) immediately proximal to the stent struts and by a WSS expansion action (positive DIVWSS values) immediately distal to the stent struts. Second, this pattern was independent of the stent design, stent strut cross section shape, and thickness. Third, the highest variations in the WSS contraction/expansion action exerted on luminal surface interacting with the streaming blood along the cardiac cycle, as captured by the TSVI, were located close to the stent struts. Last, high TSVI colocalized with low TAWSS values in the vicinity of the stent struts.
Fig. 11
### Wall Shear Stress Topological Skeleton and Post-Percutaneous Coronary Intervention Complications.
Despite advancements in stent technology, ST and ISR remain major adverse events affecting the post-PCI long-term outcome [8]. The stent-induced flow disturbances, and more specifically the altered WSS patterns at the stent strut level, play an important role in the pathophysiological mechanisms leading to ST and ISR [7,8]. In this context, this work extends the current knowledge on the impact of stent implantation on local hemodynamics, presenting for the first time the characterization of the WSS topological skeleton in human stented coronary arteries. Motivated by recent studies highlighting the influence that peculiar WSS topological skeleton features have on vascular adverse events (i.e., wall degradation in ascending thoracic aortic aneurysms [17], long-term restenosis in the carotid bifurcation after endarterectomy [16], and wall thickness change over time in coronary arteries prone to atherosclerotic lesion development [15]), the analysis proposed here could represent an important starting point for better elucidating the role of altered WSS profiles in promoting ST and ISR in stented arteries. Moreover, we have recently demonstrated the feasibility of a comprehensive diagnostic approach that includes WSS topological skeleton analysis in a conventional clinical framework to stratify the risk of myocardial infarction in patients with mild coronary lesions [18], holding the promise of a future extension to other clinical applications such as stented coronary arteries.
Percutaneous coronary intervention with stenting causes severe vascular injury producing endothelial denudation [30]. A process of re-endothelialization begins immediately after stent deployment, leading to the repopulation of the damaged endothelium in the weeks after intervention through the proliferation of the endothelial cells remaining within the stented portion, of those close to the treated lesion, and of circulating endothelial progenitor cells from the blood [30–32]. This process can be incomplete or, even if the endothelium is completely restored, it can be dysfunctional, thus resulting in impaired endothelial function [30]. An incomplete endothelium promotes the onset of ST by exposing potentially thrombogenic material, such as the stent material, to circulating prothrombotic factors [10,30]. An incomplete and dysfunctional endothelium represents a major factor promoting the development of ISR through increases in permeability, in the expression of chemotactic molecules, in the recruitment and accumulation of monocytes and macrophages, in smooth muscle cell proliferation and migration, and in the expression of procoagulatory molecules [33–35]. Re-endothelialization rates are affected by the local hemodynamic environment of the stented arterial segment, in particular by the WSS patterns at the stent strut level [9,10]. Low WSS is known to (i) attenuate the endothelial production of nitric oxide, prostacyclin I2, and tissue plasminogen activator, and (ii) inhibit endothelial cell proliferation and delay re-endothelialization [7]. This study confirms evidence from the literature [7,8,19,36]: luminal regions close to the stent struts exhibit low WSS, as emerged from the TAWSS distributions of the three investigated cases (Fig. 8). Additionally, it shows that (i) the luminal regions upstream of the stent struts are characterized by WSS contraction while those downstream of the stent struts are characterized by WSS expansion (Fig. 5), and (ii) both these luminal regions are characterized by high TSVI (i.e., high variations in the WSS contraction or expansion action along the cardiac cycle) (Fig. 6). The presence of WSS contraction/expansion regions upstream/downstream of the stent struts may help explain the preferential migration direction of endothelial cells, which were shown, at least in an in vitro experiment [37], to migrate in the direction of the flow upstream of the stent struts and to accumulate downstream of the stent struts, becoming entrapped in the recirculation zone behind them. Furthermore, the high variability of the WSS manifolds along the cardiac cycle may lead to dysfunction of the cells in contact with blood immediately post-PCI, namely the remaining endothelial cells and the smooth muscle cells (usually not in contact with blood), by exerting a push/pull action on those cells.
Stent thrombosis can be induced by high WSS values, which can lead to platelet activation and triggering of the coagulation cascade [7,8]. High stent strut thickness (e.g., ≥150 μm, as in current bioresorbable scaffolds such as the Magmaris scaffold of case C) is associated with an increased risk of ST [3,7,8,10]. This study confirms that high TAWSS is present at the stent strut top face (Fig. 10), in agreement with previous findings [7,8]. Moreover, it highlights that the stent strut top face exhibits high variability in the WSS contraction/expansion action (Fig. 10). Further investigation is recommended to elucidate the role of the WSS topological skeleton features in the mechanisms triggering ST.
Stent thrombosis and ISR seem to be favored by inadequate stent positioning, including the conditions of stent malapposition (e.g., the proximal stented segment of case A) and stent overlapping (e.g., the midstented segment of case B) [7,8,38]. Interestingly, a different DIVWSS luminal distribution was observed in these regions as compared to those featuring well-apposed stent struts (Fig. 5). Again, further investigation is required to better understand the role of the identified WSS topological skeleton features in the underlying mechanisms of ST and ISR.
### Impact of Stent Design.
Previous in silico, in vitro, and in vivo investigations have demonstrated that the stent design (e.g., closed-cell versus open-cell stent design, stent strut cross section shape and thickness, and strut connector length) has a strong impact on the local hemodynamics, which in turn may influence the vessel response to stenting, potentially triggering ST or ISR [7,8]. In this study, the impact on the WSS topological skeleton features of three stents with different designs (peak-to-valley for the Xience Prime and Magmaris stents of cases A and C, respectively, and peak-to-peak for the Endeavor Resolute stents of case B), strut cross section shapes (square, circular, and rectangular for the Xience Prime, Endeavor Resolute, and Magmaris stents, respectively), and thicknesses (81 μm, 91 μm, and 150 μm for the Xience Prime, Endeavor Resolute, and Magmaris stents, respectively) was analyzed. Despite the differences in the design of the three considered stent platforms, a similar repetitive pattern of the WSS topological skeleton distribution was observed at the luminal surface of the stented region in all cases (Fig. 5). However, the number of fixed points differed between cases and depended on (i) the number of peaks/valleys per stent ring and (ii) the stent length. Case B presented the highest number of fixed points, as the implanted Endeavor Resolute stents were characterized by the highest number of peaks per stent ring as compared to the other stent designs and by the longest total stent length. The TSVI distribution at the luminal surface was similar in all cases (Fig. 6), suggesting that the highest variations in the WSS manifolds along the cardiac cycle are located close to the stent struts independently of the stent design. From a quantitative viewpoint, case C, in which the Magmaris scaffold was deployed, presented lower median values of both TSVI and TAWSS than the other two cases.
The lower median value of TAWSS could be explained by the strut cross section of the Magmaris scaffold, which is thicker than that of the other two stent designs [7,39,40]. In order to draw definitive conclusions, a dedicated comparative analysis should be conducted on an idealized vessel geometry with different implanted stents to systematically quantify the effect of the stent design parameters (e.g., stent strut thickness) on the WSS topological skeleton features, without considering the influence of the patient-specific vessel geometry and boundary conditions on the hemodynamic results.
### Limitations.
This study has some limitations. Only three image-based stented coronary artery models were considered herein. A larger number of cases, including different stent designs, stent malapposition, and stent overlapping regions, should be analyzed to provide more general conclusions. Regarding the CFD models, in the absence of patient-specific flow measurements, boundary conditions were derived from a diameter-based scaling law applied at the outflow sections [27] and from a characteristic pulsatile flow waveform available from the literature [25], whose amplitude was scaled to the diameter of the inflow section [27]. This allowed prescribing boundary conditions that were personalized with respect to the specific anatomical features of each case. The flow rate values at the inflow section were then imposed as a Dirichlet boundary condition by prescribing a parabolic velocity profile. We have recently demonstrated that the influence of the velocity profile shape imposed at the inflow section is limited to very few diameter lengths in computational hemodynamic models of LADs [26]; thus, we expect a small effect of the inflow velocity profile on the WSS distribution in the stented regions. In general, the adopted boundary conditions might have some impact on the in-stent WSS topological skeleton features. However, at this stage of the investigation, the lack of measured patient-specific inflow rate waveforms does not undermine the generality of the results, at least in terms of the localization of the WSS contraction and expansion regions, since by construction the WSS topological skeleton analysis is based on the normalized WSS. Moreover, the vessel and stent walls were assumed to be rigid based on a previous fluid-structure interaction work highlighting that the rigid-wall assumption has a marginal effect on the WSS distribution of a stented coronary artery model [41]. Finally, the coronary artery motion during the cardiac cycle was neglected.
Previous computational findings suggested that the cardiac-induced motion in untreated coronary arteries has a moderate impact on the coronary flow and WSS distribution [42]. Nevertheless, further investigation is required to confirm these findings in the case of stented coronary arteries when analyzing the WSS topological skeleton features.
## Conclusions
In this study, a recently developed Eulerian-based method [12,15] was used to characterize the WSS topological skeleton of human stented coronary arteries. The findings of the study revealed that the presence of single or multiple stents within a coronary artery severely affects the WSS topological skeleton features. The high variability in the WSS contraction/expansion action exerted at the luminal surface close to the stent struts may have important implications in the pathophysiological mechanisms leading to ST and ISR. These findings contribute to a deeper understanding of the hemodynamics-driven processes underlying ST and ISR, stimulating further investigations in order to elucidate the link between the WSS topological skeleton features and post-PCI complications.
## Acknowledgment
The authors would like to thank Dr. Marco Bologna (Politecnico di Milano, Milan, Italy) for his contribution to the vessel reconstruction from OCT images.
## Funding Data
• This work has been supported by the Italian Ministry of Education, University and Research (FISR2019_03221, CECOMES).
## Conflict of Interest
The authors declare that they have no conflict of interest.
## Authors' Contributions
Conceptualization: C.C., V.M., D.G., U.M.; data curation: C.C., V.M., D.B., E.C.; 3D stented vessel reconstruction: C.C., M.L.R.; meshing: V.M., A.C., A.A.; simulations: V.M., A.A.; post-processing of the results: V.M.; interpretation of data: C.C., V.M., E.C., D.G., U.M.; writing—original draft preparation: C.C., V.M.; writing—review and editing: C.C., V.M., M.L.R., K.C., A.C., A.A., G.D.N., D.B., E.C., D.G., U.M. All authors discussed the results and reviewed the paper.
## Nomenclature
• CFD = computational fluid dynamics
• $DIVWSS$ = divergence of the normalized wall shear stress vector field
• ISR = in-stent restenosis
• LAD = left anterior descending coronary artery
• LSA = low shear area
• OCT = optical coherence tomography
• PCI = percutaneous coronary intervention
• $RT∇xfp(e)$ = WSS fixed point weighted residence time
• ST = stent thrombosis
• TAWSS = time-averaged wall shear stress
• TSVA = topological shear variation area
• TSVI = topological shear variation index
• wFPA = weighted fixed point area
• WSS = wall shear stress
## References
1. Otake, H., 2021, "Stent Edge Restenosis—An Inevitable Drawback of Stenting?," Circ. J.: Off. J. Jpn. Circ. Soc., 85(11), pp. 1969–1971. doi: 10.1253/circj.CJ-21-0581
2. Tomberli, B., Mattesini, A., Baldereschi, G. I., and Di Mario, C., 2018, "A Brief History of Coronary Artery Stents," Rev. Esp. Cardiol. (Engl. Ed.), 71(5), pp. 312–319. doi: 10.1016/j.recesp.2017.11.016
3. Stefanini, G., Byrne, R., Windecker, S., and Kastrati, A., 2017, "State of the Art: Coronary Artery Stents - Past, Present and Future," EuroIntervention, 13(6), pp. 706–716. doi: 10.4244/EIJ-D-17-00557
4. Reejhsinghani, R., and Lotfi, A. S., 2015, "Prevention of Stent Thrombosis: Challenges and Solutions," Vasc. Health Risk Manag., 11, pp. 93–106. doi: 10.2147/VHRM.S43357
5. Byrne, R. A., Joner, M., and Kastrati, A., 2015, "Stent Thrombosis and Restenosis: What Have We Learned and Where Are We Going? The Andreas Grüntzig Lecture ESC 2014," Eur. Heart J., 36(47), pp. 3320–3331. doi: 10.1093/eurheartj/ehv511
6. Shlofmitz, E., Iantorno, M., and Waksman, R., 2019, "Restenosis of Drug-Eluting Stents: A New Classification System Based on Disease Mechanism to Guide Treatment and State-of-the-Art Review," Circ.: Cardiovasc. Interventions, 12(8), p. e007023. doi: 10.1161/CIRCINTERVENTIONS.118.007023
7. Koskinas, K. C., Chatzizisis, Y. S., Antoniadis, A. P., and Giannoglou, G. D., 2012, "Role of Endothelial Shear Stress in Stent Restenosis and Thrombosis: Pathophysiologic Mechanisms and Implications for Clinical Translation," J. Am. Coll. Cardiol., 59(15), pp. 1337–1349. doi: 10.1016/j.jacc.2011.10.903
8. Ng, J., Bourantas, C. V., Torii, R., Ang, H. Y., Tenekecioglu, E., Serruys, P. W., and Foin, N., 2017, "Local Hemodynamic Forces After Stenting: Implications on Restenosis and Thrombosis," Arterioscler., Thromb., Vasc. Biol., 37(12), pp. 2231–2242. doi: 10.1161/ATVBAHA.117.309728
9. Kolandaivelu, K., Swaminathan, R., Gibson, W. J., Kolachalama, V. B., Nguyen-Ehrenreich, K. L., Giddings, V. L., Coleman, L., Wong, G. K., and Edelman, E. R., 2011, "Stent Thrombogenicity Early in High-Risk Interventional Settings Is Driven by Stent Design and Deployment and Protected by Polymer-Drug Coatings," Circulation, 123(13), pp. 1400–1409. doi: 10.1161/CIRCULATIONAHA.110.003210
10. Nguyen, D. T., Smith, A. F., and Jiménez, J. M., 2021, "Stent Strut Streamlining and Thickness Reduction Promote Endothelialization," J. R. Soc. Interface, 18(181), p. 20210023. doi: 10.1098/rsif.2021.0023
11. Mazzi, V., Morbiducci, U., Calò, K., De Nisco, G., Lodi Rizzini, M., Torta, E., Caridi, G. C. A., Chiastra, C., and Gallo, D., 2021, "Wall Shear Stress Topological Skeleton Analysis in Cardiovascular Flows: Methods and Applications," Mathematics, 9(7), p. 720. doi: 10.3390/math9070720
12. Mazzi, V., Gallo, D., Calò, K., Najafi, M., Khan, M. O., De Nisco, G., Steinman, D. A., and Morbiducci, U., 2019, "A Eulerian Method to Analyze Wall Shear Stress Fixed Points and Manifolds in Cardiovascular Flows," Biomech. Model. Mechanobiol., 9(5), pp. 1403–1423. doi: 10.1007/s10237-019-01278-3
13. Arzani, A., and Shadden, S. C., 2018, "Wall Shear Stress Fixed Points in Cardiovascular Fluid Mechanics," J. Biomech., 73, pp. 145–152. doi: 10.1016/j.jbiomech.2018.03.034
14. Arzani, A., Gambaruto, A. M., Chen, G., and Shadden, S. C., 2016, "Lagrangian Wall Shear Stress Structures and Near-Wall Transport in High-Schmidt-Number Aneurysmal Flows," J. Fluid Mech., 790, pp. 158–172. doi: 10.1017/jfm.2016.6
15. Mazzi, V., De Nisco, G., Hoogendoorn, A., Calò, K., Chiastra, C., Gallo, D., Steinman, D. A., Wentzel, J. J., and Morbiducci, U., 2021, "Early Atherosclerotic Changes in Coronary Arteries Are Associated With Endothelium Shear Stress Contraction/Expansion Variability," Ann. Biomed. Eng., 49(9), pp. 2606–2621. doi: 10.1007/s10439-021-02829-5
16. Morbiducci, U., Mazzi, V., Domanin, M., De Nisco, G., Vergara, C., Steinman, D. A., and Gallo, D., 2020, "Wall Shear Stress Topological Skeleton Independently Predicts Long-Term Restenosis After Carotid Bifurcation Endarterectomy," Ann. Biomed. Eng., 48(12), pp. 2936–2949. doi: 10.1007/s10439-020-02607-9
17. De Nisco, G., Tasso, P., Calò, K., Mazzi, V., Gallo, D., Condemi, F., Farzaneh, S., Avril, S., and Morbiducci, U., 2020, "Deciphering Ascending Thoracic Aortic Aneurysm Hemodynamics in Relation to Biomechanical Properties," Med. Eng. Phys., 82, pp. 119–129. doi: 10.1016/j.medengphy.2020.07.003
18. Candreva, A., Pagnoni, M., Rizzini, M. L., Mizukami, T., Gallinoro, E., Mazzi, V., and Gallo, D., 2021, "Risk of Myocardial Infarction Based on Endothelial Shear Stress Analysis Using Coronary Angiography," Atherosclerosis, 342, pp. 28–35. doi: 10.1016/j.atherosclerosis.2021.11.010
19. Chiastra, C., Morlacchi, S., Gallo, D., Morbiducci, U., Cárdenes, R., Larrabide, I., and Migliavacca, F., 2013, "Computational Fluid Dynamic Simulations of Image-Based Stented Coronary Bifurcation Models," J. R. Soc. Interface, 10(84), p. 20130193. doi: 10.1098/rsif.2013.0193
20. Morlacchi, S., Colleoni, S. G., Cárdenes, R., Chiastra, C., Diez, J. L., Larrabide, I., and Migliavacca, F., 2013, "Patient-Specific Simulations of Stenting Procedures in Coronary Bifurcations: Two Clinical Cases," Med. Eng. Phys., 35(9), pp. 1272–1281. doi: 10.1016/j.medengphy.2013.01.007
21. Cerrato, E., Barbero, U., Gil Romero, J. A., G., Mejia-Renteria, H., Tomassini, F., Ferrari, F., Varbella, F., Gonzalo, N., and Escaned, J., 2019, "MagmarisTM Resorbable Magnesium Scaffold: State-of-Art Review," Future Cardiol., 15(4), pp. 267–279. doi: 10.2217/fca-2018-0081
22. Chiastra, C., Montin, E., Bologna, M., Migliori, S., Aurigemma, C., Burzotta, F., Celi, S., Dubini, G., Migliavacca, F., and Mainardi, L., 2017, "Reconstruction of Stented Coronary Arteries From Optical Coherence Tomography Images: Feasibility, Validation, and Repeatability of a Segmentation Method," PLoS One, 12(6), p. e0177495. doi: 10.1371/journal.pone.0177495
23. Migliori, S., Chiastra, C., Bologna, M., Montin, E., Dubini, G., Aurigemma, C., Fedele, R., Burzotta, F., Mainardi, L., and Migliavacca, F., 2017, "A Framework for Computational Fluid Dynamic Analyses of Patient-Specific Stented Coronary Arteries From Optical Coherence Tomography Images," Med. Eng. Phys., 47, pp. 105–116. doi: 10.1016/j.medengphy.2017.06.027
24. Chiastra, C., Migliori, S., Burzotta, F., Dubini, G., and Migliavacca, F., 2018, "Patient-Specific Modeling of Stented Coronary Arteries Reconstructed From Optical Coherence Tomography: Towards a Widespread Clinical Use of Fluid Dynamics Analyses," J. Cardiovasc. Transl. Res., 11(2), pp. 156–172. doi: 10.1007/s12265-017-9777-6
25. Davies, J. E., Whinnett, Z. I., Francis, D. P., Manisty, C. H., J., Willson, K., and Foale, R. A., 2006, "Evidence of a Dominant Backward-Propagating 'Suction' Wave Responsible for Diastolic Coronary Filling in Humans, Attenuated in Left Ventricular Hypertrophy," Circulation, 113(14), pp. 1768–1778. doi: 10.1161/CIRCULATIONAHA.105.603050
26. Lodi Rizzini, M., Gallo, D., De Nisco, G., D'Ascenzo, F., Chiastra, C., Bocchino, P. P., Piroli, F., De Ferrari, G. M., and Morbiducci, U., 2020, "Does the Inflow Velocity Profile Influence Physiologically Relevant Flow Patterns in Computational Hemodynamic Models of Left Anterior Descending Coronary Artery?," Med. Eng. Phys., 82, pp. 58–69. doi: 10.1016/j.medengphy.2020.07.001
27. van der Giessen, A. G., Groen, H. C., Doriot, P.-A., de Feyter, P. J., van der Steen, A. F. W., van de Vosse, F. N., Wentzel, J. J., and Gijsen, F. J. H., 2011, "The Influence of Boundary Conditions on Wall Shear Stress Distribution in Patients Specific Coronary Trees," J. Biomech., 44(6), pp. 1089–1095. doi: 10.1016/j.jbiomech.2011.01.036
28. Garth, C., Tricoche, X., and Scheuermann, G., 2004, "Tracking of Vector Field Singularities in Unstructured 3D Time-Dependent Datasets," IEEE Visualization 2004, Austin, TX, Oct. 10–15, pp. 329–336. doi: 10.1109/VISUAL.2004.107
29. Gambaruto, A. M., and João, A. J., 2012, "Flow Structures in Cerebral Aneurysms," Comput. Fluids, 65, pp. 56–65. doi: 10.1016/j.compfluid.2012.02.020
30. Cornelissen, A., and Vogt, F. J., 2019, "The Effects of Stenting on Coronary Endothelium From a Molecular Biological View: Time for Improvement?," J. Cell. Mol. Med., 23(1), pp. 39–46. doi: 10.1111/jcmm.13936
31. Asahara, T., Masuda, H., Takahashi, T., Kalka, C., Pastore, C., Silver, M., Kearne, M., Magner, M., and Isner, J. M., 1999, "Bone Marrow Origin of Endothelial Progenitor Cells Responsible for Postnatal Vasculogenesis in Physiological and Pathological Neovascularization," Circ. Res., 85(3), pp. 221–228. doi: 10.1161/01.RES.85.3.221
32. Lindner, V., Majack, R. A., and Reidy, M. A., 1990, "Basic Fibroblast Growth Factor Stimulates Endothelial Regrowth and Proliferation in Denuded Arteries," J. Clin. Invest., 85(6), pp. 2004–2008. doi: 10.1172/JCI114665
33. Van der Heiden, K., Gijsen, F. J. H., Narracott, A., Hsiao, S., Halliday, I., Gunn, J., Wentzel, J. J., and Evans, P. C., 2013, "The Effects of Stenting on Shear Stress: Relevance to Endothelial Injury and Repair," Cardiovasc. Res., 99(2), pp. 269–275. doi: 10.1093/cvr/cvt090
34. Chiu, J.-J., and Chien, S., 2011, "Effects of Disturbed Flow on Vascular Endothelium: Pathophysiological Basis and Clinical Perspectives," Physiol. Rev., 91(1), pp. 327–387. doi: 10.1152/physrev.00047.2009
35. Munk, P. S., Butt, N., and Larsen, A. I., 2011, "Endothelial Dysfunction Predicts Clinical Restenosis After Percutaneous Coronary Intervention," Scand. Cardiovasc. J., 45(3), pp. 139–145. doi: 10.3109/14017431.2011.564646
36. Beier, S., Ormiston, J., Webster, M., Cater, J., Norris, S., Medrano-Gracia, P., Young, A., and Cowan, B., 2016, "Hemodynamics in Idealized Stented Coronary Arteries: Important Stent Design Considerations," Ann. Biomed. Eng., 44(2), pp. 315–329. doi: 10.1007/s10439-015-1387-3
37. Hsiao, S. T., Spencer, T., Boldock, L., Prosseda, S. D., Xanthis, I., Tovar-Lopez, F. J., and Van Beusekom, H. M. M., 2016, "Endothelial Repair in Stented Arteries Is Accelerated by Inhibition of Rho-Associated Protein Kinase," Cardiovasc. Res., 112(3), pp. 689–701. doi: 10.1093/cvr/cvw210
38. Lagache, M., Coppel, R., Finet, G., Derimay, F., Pettigrew, R. I., Ohayon, J., and Malvè, M., 2021, "Impact of Malapposed and Overlapping Stents on Hemodynamics: A 2D Parametric Computational Fluid Dynamics Study," Mathematics, 9(8), p. 795. doi: 10.3390/math9080795
39. Jiménez, J. M., and Davies, P. F., 2009, "Hemodynamically Driven Stent Strut Design," Ann. Biomed. Eng., 37(8), pp. 1483–1494. doi: 10.1007/s10439-009-9719-9
40. Tarrahi, I., Colombo, M., Hartman, E., Forero, M. T., Torii, R., Chiastra, C., Daemen, J., and Gijsen, F., 2020, "Impact of Bioresorbable Scaffold Design Characteristics on Local Haemodynamic Forces: An Ex Vivo Assessment With Computational Fluid Dynamics Simulations," EuroIntervention, 16(11), pp. E930–E937. doi: 10.4244/EIJ-D-19-00657
41. Chiastra, C., Migliavacca, F., Martinez, M. A., and Malve, M., 2014, "On the Necessity of Modelling Fluid-Structure Interaction for Stented Coronary Arteries," J. Mech. Behav. Biomed. Mater., 34, pp. 217–230. doi: 10.1016/j.jmbbm.2014.02.009
42. Zeng, D., Ding, Z., Friedman, M. H., and Ethier, C. R., 2003, "Effects of Cardiac Motion on Right Coronary Artery Hemodynamics," Ann. Biomed. Eng., 31(4), pp. 420–429. doi: 10.1114/1.1560631
# Database
## Files and code
### Symbolic orchestral database
In the case of projective orchestration, this dataset can be used in a pre-training step.
• Purely orchestral MIDI or MusicXML files: SOD.zip
• A MIDI parser can be found here :
• A MusicXML parser can be found here: Music_XML
## Description
This database is a MIDI collection of 196 pairs of piano scores and corresponding orchestrations. The figure on the right-hand side represents the hierarchy of the database, and general statistics are given in the table.
The dataset is split between train, validation and test sets of files that represent approximately 80%, 10% and 10% of the full set. The split we used is written in text files with transparent names. For instance, the files from the liszt_classical_archives used for the training step are listed in the liszt_classical_archives_train.txt file.
Warning: the quality of the orchestrations in the IMSLP folder is poorer than that of the orchestrations from the other sources. Hence, we don't recommend using it for training an orchestration system. We still release these files since some of them might be useful for other tasks.
| instrument_name | n_track_present | n_note_played |
| --- | --- | --- |
| tuba bass | 3 | 178 |
| piccolo | 31 | 6717 |
| celesta | 2 | 1108 |
| violin and viola and violoncello and double bass | 14 | 9731 |
| trombone and tuba bass | 1 | 46 |
| english horn | 13 | 6677 |
| trombone | 96 | 25025 |
| violin | 282 | 336580 |
| clarinet | 123 | 159430 |
| trumpet | 111 | 66584 |
| harp | 27 | 21781 |
| double bass and violoncello | 1 | 1275 |
| bassoon | 58 | 109289 |
| timpani | 77 | 31480 |
| tuba | 42 | 6769 |
| percussion | 57 | 8639 |
| violoncello | 135 | 133640 |
| bassoon bass | 73 | 60044 |
| viola | 122 | 111504 |
| piano and violin and violoncello | 1 | 755 |
| piano | 3 | 4485 |
| cornet | 19 | 3739 |
| trombone and tuba | 4 | 2366 |
| oboe | 119 | 140364 |
| flute | 122 | 117829 |
| english horn and oboe | 1 | 762 |
| horn | 190 | 181714 |
| flute and piccolo | 1 | 1575 |
| organ | 3 | 1646 |
| clarinet bass | 4 | 202 |
| double bass | 119 | 94205 |
| saxophone | 1 | 556 |
| voice | 35 | 33597 |
## Data representation in LOP
We used a simple piano-roll representation to process the orchestral and piano scores in LOP. A piano-roll $$pr$$ is a matrix whose rows represent pitches and whose columns represent time frames depending on the time quantization. A pitch $$p$$ played at time $$t$$ with an intensity $$i$$ is represented by $$pr(p,t) = i$$, $$0$$ being a note off. This definition is extended to an orchestra by simply concatenating the piano-rolls of every instrument along the pitch dimension.
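The representation above can be sketched as follows. This is a minimal illustration of the definition, not LOP's actual parsing code; the note format (pitch, onset, offset, intensity) and the matrix dimensions are assumptions for the example.

```python
import numpy as np

def piano_roll(notes, n_pitches=128, n_frames=16):
    """Build a piano-roll matrix pr where pr[p, t] = intensity i of
    pitch p sounding at frame t, and 0 means note off."""
    pr = np.zeros((n_pitches, n_frames), dtype=int)
    for pitch, onset, offset, intensity in notes:
        pr[pitch, onset:offset] = intensity
    return pr

# The orchestra is obtained by concatenating the instruments'
# piano-rolls along the pitch dimension:
flute = piano_roll([(72, 0, 4, 90)])
violin = piano_roll([(60, 2, 6, 64)])
orchestra = np.concatenate([flute, violin], axis=0)  # shape (256, 16)
```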
The rhythmic quantization is defined as the number of time frames in the piano-roll per quarter note. The chosen quantization clearly impacts the predictive task we use to train the models: as the quantization gets finer, an increasing number of successive frames become identical. To alleviate this problem and remove the dependency on the quantization, we discard repeated events from the piano-roll. More precisely, only the time events $$t_{e}$$ such that $$\text{Orch}(t_{e}) \neq \text{Orch}(t_{e} - 1)$$ are kept in the piano-roll.
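This event-based reduction can be sketched as a simple filter over the piano-roll columns. The shapes, pitches and velocity values below are illustrative assumptions, not LOP's actual implementation:

```python
import numpy as np

# Minimal sketch: a piano-roll is a (pitch x time) matrix of
# MIDI velocities, where 0 means note off.
def remove_repeated_frames(pr):
    """Keep only the frames whose content differs from the previous frame."""
    keep = [0]
    for t in range(1, pr.shape[1]):
        if not np.array_equal(pr[:, t], pr[:, t - 1]):
            keep.append(t)
    return pr[:, keep]

pr = np.zeros((128, 8), dtype=int)
pr[60, 0:4] = 90   # middle C held during frames 0-3
pr[64, 2:6] = 80   # E entering at frame 2, released at frame 6
reduced = remove_repeated_frames(pr)
print(reduced.shape)  # frames 0, 2, 4 and 6 survive -> (128, 4)
```

Successive identical columns collapse into a single event, so the result no longer depends on how finely the score was quantized.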
## Time alignment
Given the diverse origins of the MIDI files, it is very rare that a piano score and its proposed orchestration are aligned. Indeed, one file can be shorter than the other, because of temporal dilation factors or skipped parts.
Those misalignments are very problematic for the projective orchestration task, and more generally for any processing that intends to exploit the joint information provided by the piano and orchestra scores. Hence, we use the Needleman-Wunsch algorithm to automatically align the two scores. To that end, we defined a distance between two chords, which essentially consists in counting the number of jointly activated pitch-classes. This might look simplistic, but it proved to be sufficient.
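As a rough sketch of this procedure (the overlap score and gap penalty below are illustrative choices, not necessarily the exact ones used for LOP), Needleman-Wunsch alignment with a pitch-class chord similarity looks like:

```python
def chord_score(c1, c2):
    # chords as sets of MIDI pitches; score = number of shared pitch-classes
    return len({p % 12 for p in c1} & {p % 12 for p in c2})

def needleman_wunsch(seq_a, seq_b, gap=-1):
    """Global alignment of two chord sequences; returns matched index pairs."""
    n, m = len(seq_a), len(seq_b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + chord_score(seq_a[i - 1], seq_b[j - 1]),
                score[i - 1][j] + gap,   # skip a chord in seq_a
                score[i][j - 1] + gap,   # skip a chord in seq_b
            )
    # traceback: recover the matched pairs
    align, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i][j] == score[i - 1][j - 1] + chord_score(seq_a[i - 1], seq_b[j - 1]):
            align.append((i - 1, j - 1)); i -= 1; j -= 1
        elif score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return align[::-1]

# toy example: a piano part and an orchestration doubling it an octave below
piano = [{60, 64, 67}, {62, 65, 69}, {64, 67, 71}]
orch  = [{48, 60, 64, 67}, {50, 62, 65}, {52, 64, 67, 71}]
alignment = needleman_wunsch(piano, orch)
print(alignment)  # → [(0, 0), (1, 1), (2, 2)]
```

Because the score only counts shared pitch-classes, octave doublings and added bass notes in the orchestration do not prevent the correct chords from being matched.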
(Diagram)
# Thread: Transitive action, blocks, primitive action, maximal subgroups.
1. ## Transitive action, blocks, primitive action, maximal subgroups.
Let $G$ act transitively on a finite set $A$. A 'block' in $A$ is a non-empty subset $B$ of $A$ such that for all $\sigma \in G$ either $\sigma(B)=B$ or $\sigma(B) \cap B= \emptyset$ (where $\sigma(B)=\{ \sigma(b)|b \in B \}$).
This action is called 'primitive' if the only blocks in $A$ are trivial ones: the sets of size $1$ and $A$ itself.
Prove that:
The action(transitive) of $G$ on $A$ is primitive if and only if for each $a \in A,$ $G_a$ is a maximal subgroup of $G$. ( $G_a=\{g \in G| g \cdot a=a \}=$ stabilizer of $a$ in $G$)
Here is what i have(with a little help from my friend):
Define $G_B= \{ \sigma \in G| \sigma(B)=B \}$. It's easy to see that $G_B \leq G$. Moreover, if $a \in B$ then $G_a \leq G_B$.
I have learned that there exists a bijection between the blocks containing $a$ and the subgroups of $G$ containing $G_a$, but I couldn't prove this. Help needed.
2. ## Re: Transitive action, blocks, primitive action, maximal subgroups.
First we show that if for each $a\in A$, $G_a$ is a maximal subgroup, then the action is primitive. In particular, we prove the contrapositive: if $G_a$ is not a maximal subgroup for some $a\in A$, then the action is not primitive. Let $G_a < H < G$, with both containments proper. We claim that $H\cdot a=\{h\cdot a:h\in H\}$ is a block. It's easy to see that if $h\in H$ then $h\cdot(H\cdot a)=hH\cdot a=H\cdot a$. On the other hand, let $g\in G\setminus H$, and suppose towards a contradiction that $(g\cdot(H\cdot a))\cap (H\cdot a)\neq\emptyset$. Then there are $h_1,h_2\in H$ such that $gh_1\cdot a=h_2\cdot a$ and hence $(h_2)^{-1}gh_1\cdot a=a$. But then $(h_2)^{-1}gh_1\in G_a\leq H$, giving us $g\in H$, a contradiction. So $H\cdot a$ is indeed a block. Furthermore, $H\cdot a$ is nontrivial: if $h\in H\setminus G_a$ then $h\cdot a$ and $a$ are distinct elements of $H\cdot a$. And $g\cdot a\notin H\cdot a$, because if it were then we would have $g\cdot a=h\cdot a$ for some $h\in H$ and hence $h^{-1}g\in G_a\leq H$, contradicting $g\notin H$. Thus the first half of the proof is complete.
For the converse, let $B\subset A$ be a nontrivial block, and let $b,c\in B$ be distinct. Then $G_b\leq G_B\leq G$. We claim that both subgroup relations are proper: For let $a\in A\setminus B$. Then there is $g\in G$ with $a=g\cdot b\in g\cdot B$ by the transitivity of $A$, which means $g\notin G_B$. So $G_B$ is a proper subgroup of $G$. Also by the transitivity of $A$ there is $h\in G$ with $h\cdot b=c$, which means $h\notin G_b$. Since $B$ is a block with $(h\cdot B)\cap B\neq\emptyset$ then it must be that $h\cdot B=B$, and hence $h\in G_B$. So $G_b$ is a proper subgroup of $G_B$. We conclude that $G_b$ is not maximal, and this completes the second half of the proof.
3. ## Re: Transitive action, blocks, primitive action, maximal subgroups.
I was just thinking about how to show the stronger result that there is a bijection between blocks containing $a$ and the subgroups of $G$ containing $G_a$...
Let $\mathcal{B}$ be the set of blocks in $A$ containing $a$, and let $\mathcal{S}$ be the set of subgroups of $G$ containing $G_a$. Define $\varphi:\mathcal{B}\to\mathcal{S}$ by $\varphi(B)=G_B$, for each $B\in\mathcal{B}$. We claim that $\varphi$ is injective: let $B_1,B_2$ be distinct blocks containing $a$. Then (without loss of generality) there is $b_1\in B_1\setminus B_2$. By the transitivity of $\cdot$ there is $g\in G$ with $b_1=g\cdot a\in g\cdot B_2$. So $g\notin G_{B_2}$. However $a=g^{-1}\cdot b_1\in g^{-1}\cdot B_1$; since $B_1$ is a block this means $g^{-1}\cdot B_1=B_1$ and hence $g\in G_{B_1}$. Thus $G_{B_1}\neq G_{B_2}$, and it follows that $\varphi$ is injective.
Now we show that $\varphi$ is surjective: Let $G_a\leq H\leq G$ be a chain of subgroups of $G$. We showed in the previous post that $H\cdot a$ is a block, and obviously it contains $a$. We claim that $\varphi(H\cdot a)=H$. Clearly for each $h\in H$ we have $h\cdot(H\cdot a)=H\cdot a$; so $H\leq\varphi(H\cdot a)$. Now let $k\in\varphi(H\cdot a)$. Then $a\in H\cdot a=k\cdot(H\cdot a)$, which means there is $h\in H$ with $kh\cdot a=a$. So $kh\in G_a\leq H$, and thus $k\in H$. We conclude $H=\varphi(H\cdot a)$, and it follows that $\varphi$ is surjective, indeed, bijective.
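As a concrete sanity check of this bijection (my own illustration, not part of the thread), one can brute-force the dihedral group $D_4$ acting on the vertices of a square and compare the blocks containing a point with the subgroups containing its stabilizer:

```python
from itertools import combinations

def compose(g, h):
    # permutations as tuples: (g∘h)(i) = g[h[i]]
    return tuple(g[h[i]] for i in range(len(h)))

def generate(gens, n):
    # closure of the generators under composition yields the whole group
    e = tuple(range(n))
    group, frontier = {e}, [e]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = compose(g, x)
            if y not in group:
                group.add(y)
                frontier.append(y)
    return group

n = 4
r = (1, 2, 3, 0)          # rotation of the square's vertices
s = (0, 3, 2, 1)          # reflection fixing vertex 0
G = generate([r, s], n)   # dihedral group D4, |G| = 8

a = 0
G_a = frozenset(g for g in G if g[a] == a)   # point stabilizer of a

def is_subgroup(H):
    # a non-empty subset of a finite group closed under composition is a subgroup
    return all(compose(g, h) in H for g in H for h in H)

# brute-force all subgroups of G containing G_a
rest = sorted(G - G_a)
subgroups = [frozenset(G_a | set(extra))
             for k in range(len(rest) + 1)
             for extra in combinations(rest, k)
             if is_subgroup(G_a | set(extra))]

def is_block(B):
    return all(({g[b] for b in B} == B) or not ({g[b] for b in B} & B)
               for g in G)

# all blocks containing a
others = sorted(set(range(n)) - {a})
blocks = [frozenset({a} | set(extra))
          for k in range(n)
          for extra in combinations(others, k)
          if is_block({a} | set(extra))]

def setwise_stabilizer(B):
    # the map B -> G_B from the thread
    return frozenset(g for g in G if {g[b] for b in B} == B)

image = {setwise_stabilizer(B) for B in blocks}
print(len(blocks), len(subgroups), image == set(subgroups))  # → 3 3 True
```

The three blocks containing vertex 0 are $\{0\}$, $\{0,2\}$ and the whole vertex set, matching the three subgroups between the stabilizer and $D_4$, as the bijection predicts.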
4. ## Re: Transitive action, blocks, primitive action, maximal subgroups.
thank you! that was brilliant.
This section of the page is a scrambled aggregation of calculator snippets; the recoverable relations are collected below.

**Pulse energy, fluence and peak power**
- Pulse energy $$\mathcal{E}$$ equals the fluence $$F$$ integrated over the beam cross-section. For a Gaussian beam, the peak fluence is $$F_0 = \mathcal{E}\frac{2}{\pi w_0^2};$$ for a super-Gaussian beam of order $$n$$, $$F_0 = \mathcal{E}\frac{2^{1/n}\,n}{\pi w_0^2\,\Gamma(1/n)},$$ where $$\Gamma$$ is the gamma function and $$w_0$$ is the half-width of the beam at $$1/\mathrm{e}^2$$ intensity.
- Peak power of a temporally Gaussian pulse of energy $$\mathcal{E}$$ and FWHM duration $$\Delta t$$: $$P_0 = \frac{2\mathcal{E}}{\Delta t}\sqrt{\frac{\ln 2}{\pi}} \approx \frac{0.94\,\mathcal{E}}{\Delta t}.$$ For a sech² pulse: $$P_0 = \frac{\operatorname{arccosh}\sqrt{2}\,\mathcal{E}}{\Delta t} \approx \frac{0.88\,\mathcal{E}}{\Delta t},$$ and the peak intensity is $$I_0 = \frac{\operatorname{arccosh}\sqrt{2}\,F_0}{\Delta t} \approx \frac{0.88\,F_0}{\Delta t}.$$
- Carrier-envelope phase $$\varphi_\mathsf{CE}$$ is the phase offset between the maximum of the oscillating carrier field and the maximum of the pulse envelope.

**Unit conversions**
- Frequency and wavenumber: $$f = ck \Longrightarrow f[\mathrm{THz}] \approx \frac{k[\mathrm{cm^{-1}}]}{33.356}$$
- Wavelength and period: $$\lambda = cT \Longrightarrow \lambda[\mathrm{nm}] \approx T[\mathrm{fs}] \cdot 299.792$$
- Angular frequency: $$\omega = \frac{2\pi c}{\lambda} \Longrightarrow \omega[\mathrm{fs^{-1}}] \approx \frac{1883.652}{\lambda[\mathrm{nm}]}$$, $$\omega = \frac{2\pi}{T} \Longrightarrow \omega[\mathrm{fs^{-1}}] \approx \frac{6.283}{T[\mathrm{fs}]}$$, $$\omega = \frac{E}{\hbar} \Longrightarrow \omega[\mathrm{fs^{-1}}] \approx 1.519 \cdot E[\mathrm{eV}]$$
- Photon energy: $$E = \frac{2\pi\hbar}{T} \Longrightarrow E[\mathrm{eV}] \approx \frac{4.136}{T[\mathrm{fs}]}$$

**Bandwidth**
- If the bandwidth $$\Delta\lambda$$ is given in nanometres, the bandwidth in inverse centimetres is approximately $$\Delta k[\mathrm{cm^{-1}}] \approx 10^7 \cdot \frac{\Delta\lambda[\mathrm{nm}]}{(\lambda_0[\mathrm{nm}])^2}.$$
- Exact and approximate relations between the bandwidth in wavelength and angular-frequency units: $$\Delta\lambda = \frac{4\pi c}{\Delta\omega}\left(\sqrt{1+\frac{\lambda_0^2\,\Delta\omega^2}{4\pi^2 c^2}}-1\right) \approx \frac{\Delta\omega\,\lambda_0^2}{2\pi c} = \Delta k\,\lambda_0^2.$$
- The time-bandwidth product $$\Delta\nu\,\Delta\tau$$ (spectral width in Hz times pulse duration in s) reaches its minimum for ideal transform-limited pulses and cannot be much smaller than ≈ 0.3; the exact limit depends on the pulse shape and on the definitions of duration and bandwidth. Autocorrelation deconvolution factors are 0.707 for Gaussian and 0.647 for sech² pulses.

**Propagation and geometry** (throughout, $$\vartheta_0$$ is the angle of incidence)
- Geometrical path through a plate of thickness $$h$$ and refractive index $$n$$: $$l = \frac{nh}{\sqrt{n^2-\sin^2\vartheta_0}}.$$
- Optical path length and time of flight through a stack of $$N$$ layers: $$L = \sum_{i=1}^{N} h_i n_i, \qquad t = \sum_{i=1}^{N}\frac{h_i}{v_{\mathrm{g},i}}.$$
- Time of flight including the air path: $$t = \frac{2l}{v_\mathrm{g}} + \frac{L-2\sqrt{l^2-d^2}}{c}.$$
- Fresnel reflectivity for s-polarized light: $$R_\mathrm{s} = \frac{|E_\mathrm{r}^\mathrm{s}|^2}{|E_\mathrm{i}^\mathrm{s}|^2} = \frac{|\cos\vartheta_0-n\cos\vartheta_1|^2}{|\cos\vartheta_0+n\cos\vartheta_1|^2}.$$
- Prism with apex angle $$\alpha$$: exit angle $$\vartheta_1 = \arcsin\left[n\sin\left(\alpha - \arcsin\frac{\sin\vartheta_0}{n}\right)\right]$$ and deviation $$\delta = \vartheta_0 + \vartheta_1 - \alpha.$$
- Phase-matching condition: $$\frac{n_\mathrm{e}(\vartheta,\lambda_3)}{\lambda_3} = \left(\frac{n_\mathrm{o}(\lambda_1)}{\lambda_1} + \frac{n_\mathrm{e}(\vartheta,\lambda_2)}{\lambda_2}\right)\cos\vartheta_0.$$

**Signal bandwidth**
- For digital (square) signals, the required bandwidth follows from the rise time: $$\mathrm{BW} \approx \frac{0.35}{t_\mathrm{rise}};$$ e.g. measuring a square signal with a 100 ns rise time requires about 3.5 MHz of bandwidth.
- The essential bandwidth of a rectangular pulse $$a(t) = 1$$ for $$0 \le t \le \tau$$ (0 otherwise) is the width of the main lobe of its $$\mathrm{sinc}$$-shaped spectrum, i.e. it extends to the first spectral zero at $$1/\tau$$.
- GFSK modulation uses a Gaussian filter on the transmitter side, which smooths the shape of the frequency pulse; Carson's rule estimates the occupied bandwidth of an FM/FSK signal as $$\mathrm{OBW} \approx 2(\Delta f + f_m).$$
- Radar terminology: the pulse width (PW) is the time between the rising and falling edges of a single pulse; the pulse repetition interval (PRI) is the time between sequential pulses.
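As a quick worked example of the bandwidth relations on this page (a hedged sketch; 0.441 is the standard FWHM time-bandwidth product for a Gaussian pulse):

```python
C = 299.792458  # speed of light in nm/fs

def dlambda_to_dnu(dlambda_nm, lambda0_nm):
    """Spectral width in fs^-1 from a width in nm: dnu = c*dlambda/lambda0^2."""
    return C * dlambda_nm / lambda0_nm**2

def transform_limit_fwhm_fs(dlambda_nm, lambda0_nm, tbp=0.441):
    """Shortest (transform-limited) FWHM duration of a Gaussian pulse, in fs."""
    return tbp / dlambda_to_dnu(dlambda_nm, lambda0_nm)

# 30 nm of bandwidth centred at 800 nm supports a ~31 fs Gaussian pulse
print(round(transform_limit_fwhm_fs(30, 800), 1))  # → 31.4
```

Any measured pulse longer than this transform limit carries chirp, i.e. accumulated spectral phase.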
liu.se: Search for publications in DiVA
1 - 26 of 26
• 1.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, The Institute of Technology.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering. National University of Rwanda, Rwanda .
On countable families of sets without the Baire property (2013). In: Colloquium Mathematicum, ISSN 0010-1354, E-ISSN 1730-6302, Vol. 133, no. 2, p. 179-187. Article in journal (Refereed)
We suggest a method of constructing decompositions of a topological space X having an open subset homeomorphic to the space (R^n, τ), where n ≥ 1 is an integer and τ is any admissible extension of the Euclidean topology of R^n (in particular, X can be a finite-dimensional separable metrizable manifold), into a countable family F of sets (dense in X and zero-dimensional in the case of manifolds) such that the union of each non-empty proper subfamily of F does not have the Baire property in X.
• 2.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering. National University of Rwanda, Rwanda.
The algebra of semigroups of sets (2015). In: Mathematica Scandinavica, ISSN 0025-5521, E-ISSN 1903-1807, Vol. 116, no. 2, p. 161-170. Article in journal (Refereed)
We study the algebra of semigroups of sets (i.e. families of sets closed under finite unions) and its applications. For each n > 1 we produce two finite nested families of pairwise different semigroups of sets consisting of subsets of R^n without the Baire property.
• 3. Charalambous, Michael G.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
Some estimates of the inductive dimensions of the union of two sets (2005). In: Topology and its Applications, ISSN 0166-8641, E-ISSN 1879-3207, Vol. 146-147, p. 227-238. Article in journal (Refereed)
We obtain estimates of the small and large inductive dimensions ind and Ind of the union of two sets, outside the class of completely normal spaces. We show that, in the sense of the inductive dimensions ind0 and Ind0 introduced independently by Charalambous and Filippov, a compact completely normal space which is the union of two dense zero-dimensional subspaces can be infinite-dimensional. © 2004 Elsevier B.V. All rights reserved.
• 4.
Linköping University, Department of Mathematics, Applied Mathematics. Linköping University, The Institute of Technology.
Shimane University.
Addition and product theorems for ind (2008). In: Topology and its Applications, ISSN 0166-8641, E-ISSN 1879-3207, Vol. 155, no. 17-18, p. 2202-2210. Article in journal (Refereed)
In this paper we improve two theorems for the small inductive dimension ind in regular T_1-spaces: an addition theorem from [M.G. Charalambous, V.A. Chatyrko, Some estimates of the inductive dimensions of the union of two sets, Topology Appl. 146/147 (2005) 227-238] and a product theorem from [V.A. Chatyrko, K.L. Kozlov, On (transfinite) small inductive dimension of products, Comment. Math. Univ. Carolin. 41 (3) (2000) 597-603].
• 5.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, The Institute of Technology.
Nipissing University, North Bay, ON, Canada .
On metrizable remainders of locally compact separable metrizable spaces (2013). In: Houston Journal of Mathematics, ISSN 0362-1588, Vol. 39, no. 3, p. 1067-1081. Article in journal (Refereed)
In this paper we describe those locally compact noncompact separable metrizable spaces X for which the class R(X) of all metrizable remainders of X consists of all metrizable non-empty compacta. Then we show that for any pair X and X′ of locally compact noncompact connected separable metrizable spaces, either R(X) ⊆ R(X′) or R(X′) ⊆ R(X).
• 6.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering.
Chonbuk National University, South Korea . Shimane University, Japan .
Some remarks concerning semi-T_{1/2} spaces (2014). In: Filomat, ISSN 0354-5180, Vol. 28, no. 1, p. 21-25. Article in journal (Refereed)
In this paper we prove that each subspace of an Alexandroff T_0-space is semi-T_{1/2}. In particular, any subspace of the folder X^n, where n is a positive integer and X is either the Khalimsky line (Z, τ_K), the Marcus-Wyse plane (Z², τ_MW) or any partially ordered set with the upper topology, is semi-T_{1/2}. Then we study the basic properties of spaces possessing the axiom semi-T_{1/2}, such as finite productiveness and monotonicity.
• 7.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering.
Reversible spaces and products (2017). In: Topology Proceedings, ISSN 0146-4124, E-ISSN 2331-1290, Vol. 49, p. 317-320. Article in journal (Refereed)
A topological space X is reversible if every continuous bijection f: X → X is a homeomorphism. There are many examples of reversible spaces; in particular, Hausdorff compact spaces and locally Euclidean spaces are such. Chatyrko and Hattori observed, in a manuscript, that any product of topological spaces is non-reversible whenever at least one of the factors is non-reversible, and asked whether the topological product of two connected reversible spaces is reversible. The authors prove here that there are connected reversible spaces whose product is not reversible. In fact, they construct a reversible space X which is a connected 2-manifold in R³ without boundary such that X × [0,1] is not reversible.
• 8. Fedorchuk, V V
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
On the Brouwer dimension of one-dimensional compact Hausdorff spaces. 2005. In: Vestnik Moskovskogo universiteta. Seriâ 1, Matematika, mehanika, ISSN 0579-9368, Vol. 2, p. 22-27. Article in journal (Refereed)
• 9.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
On inductive compactness degree. 2004. In: Geometric Topology: Infinite-Dimensional Topology, Absolut Extensors, Applications, 2004. Conference paper (Other academic)
• 10.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
On the relationship between cmp and def for separable metrizable spaces. 2005. In: Topology and its Applications, ISSN 0166-8641, E-ISSN 1879-3207, Vol. 152, no 3 SPEC. ISS., p. 269-274. Article in journal (Refereed)
For each pair of positive integers k and m with k ≤ m there exists a separable metrizable space X (k, m) such that cmp X (k, m) = k and def X (k, m) = m. This solves Problem 6 from [J.M. Aarts, T. Nishiura, Dimension and Extensions, North-Holland, Amsterdam, 1993, p. 71]. © 2004 Elsevier B.V. All rights reserved.
• 11.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
On the relationship between cmp and def in the class of separable metrizable spaces. 2003. In: V Iberoamerican Conference on General Topology and its Applications, 2003. Conference paper (Other academic)
• 12.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
Subspaces of the Sorgenfrey line and their products. 2006. In: Tsukuba journal of mathematics, ISSN 0387-4982, Vol. 30, no 2, p. 401-414. Article in journal (Refereed)
• 13.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, The Institute of Technology.
Department of Mathematics, Shimane University, Matsue, Japan.
A poset of topologies on the set of real numbers. 2013. In: Commentationes Mathematicae Universitatis Carolinae, ISSN 0010-2628, E-ISSN 1213-7243, Vol. 54, no 2, p. 189-196. Article in journal (Refereed)
On the set $\mathbb R$ of real numbers we consider a poset $\mathcal P_\tau(\mathbb R)$ (by inclusion) of topologies $\tau(A)$, where $A\subseteq \mathbb R$, such that $A_1\supseteq A_2$ iff $\tau(A_1)\subseteq \tau(A_2)$. The poset has the minimal element $\tau (\mathbb R)$, the Euclidean topology, and the maximal element $\tau (\emptyset)$, the Sorgenfrey topology. We are interested when two topologies $\tau_1$ and $\tau_2$ (especially, for $\tau_2 = \tau(\emptyset)$) from the poset define homeomorphic spaces $(\mathbb R, \tau_1)$ and $(\mathbb R, \tau_2)$. In particular, we prove that for a closed subset $A$ of $\mathbb R$ the space $(\mathbb R, \tau(A))$ is homeomorphic to the Sorgenfrey line $(\mathbb R, \tau(\emptyset))$ iff $A$ is countable. We study also common properties of the spaces $(\mathbb R, \tau(A)), A\subseteq \mathbb R$.
• 14.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, The Institute of Technology.
Department of Mathematics, Shimane University, Matsue, Japan.
Small Scattered Topological Invariants. 2013. In: Matematychni Studii, ISSN 1027-4634, Vol. 39, no 2, p. 212-222. Article in journal (Refereed)
We present a unified approach to define dimension functions like trind, trindp, trt and p. We show how some similar facts on these functions can be proved similarly. Moreover, several new classes of infinite-dimensional spaces close to the classes of countable-dimensional and σ-hereditarily disconnected ones are introduced. We prove a compactification theorem for these classes.
• 15.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, The Institute of Technology.
Nipissing University, North Bay, ON, Canada.
The (dis)connectedness of products in the box topology. 2013. In: Questions & Answers in General Topology, ISSN 0918-4732, Vol. 31, no 1, p. 11-21. Article in journal (Refereed)
We suggest two independent sufficient conditions on topological connected spaces with axioms lower than $T_3$, which imply disconnectedness, and one sufficient condition, which implies connectedness, of products of spaces endowed with the box topology.
• 16.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
Notes on the inductive dimension Ind0. 2003. In: Topology Proceedings, ISSN 0146-4124, E-ISSN 2331-1290, Vol. 27, no 2, p. 395-410. Article in journal (Refereed)
• 17.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
The behaviour of (transfinite) dimension functions on unions of closed subsets. 2004. In: Journal of the Mathematical Society of Japan, ISSN 0025-5645, E-ISSN 1881-1167, Vol. 56, no 2, p. 489-501. Article in journal (Refereed)
• 18.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
• 19.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
Around the equality ind X = Ind X towards to a unifying theorem. 2003. In: Topology and its Applications, ISSN 0166-8641, E-ISSN 1879-3207, Vol. 131, p. 295-302. Article in journal (Refereed)
• 20.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
On representation of spaces by unions of locally compact subspaces. 2004. In: III Japan-Mexico Joint Meeting on Topology and its Applications, 2004. Conference paper (Other academic)
• 21.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
Partitions of spaces by locally compact subspaces. 2006. In: Houston Journal of Mathematics, ISSN 0362-1588, Vol. 32, no 4, p. 1077-1091. Article in journal (Refereed)
In this article, we shall discuss the possibility of different presentations of (locally compact) spaces as unions or partitions of locally compact subspaces. © 2006 University of Houston.
• 22.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
Infinite-dimensionality modulo absolute Borel classes. 2008. In: Bulletin of the Polish Academy of Sciences. Mathematics, ISSN 0239-7269, Vol. 56, no 2, p. 163-176. Article in journal (Refereed)
• 23.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering.
Shimane University, Japan.
On reversible and bijectively related topological spaces. 2016. In: Topology and its Applications, ISSN 0166-8641, E-ISSN 1879-3207, Vol. 201, p. 432-440. Article in journal (Refereed)
We consider the following classical problems: (1) For what spaces X and Y the existence of continuous bijections of X onto Y and Y onto X implies or does not imply that the spaces are homeomorphic? (2) For what spaces X is each continuous bijection of X onto itself a homeomorphism? Some answers to the questions are suggested. (C) 2015 Elsevier B.V. All rights reserved.
• 24.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
There is no upper bound of small transfinite compactness degree in metrizable spaces. 2007. In: Topology and its Applications, ISSN 0166-8641, E-ISSN 1879-3207, Vol. 154, no 7, p. 1314-1320. Article in journal (Refereed)
• 25.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, The Institute of Technology.
(Non)connectedness and (non)homogeneity. 2015. In: Topology and its Applications, ISSN 0166-8641, E-ISSN 1879-3207, Vol. 179, p. 122-130. Article in journal (Refereed)
We discuss an approach to a problem posed by A.V. Arhangelskii and E.K. van Douwen on a possibility to present a compact space as a continuous image of a homogeneous compact space. Then we suggest some ways of proving nonhomogeneity of tau-powers of a space X using points of local connectedness (or local contractibility) and components of path connectedness of X.
• 26.
Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
A remark on asymptotic dimension and digital dimension of finite metric spaces. 2007. In: Matematychni Studii, ISSN 1027-4634, Vol. 27, no 1, p. 100-104. Article in journal (Refereed)
draw using md2 format, then reuse the same data problem
Not sure if I can make this clear... I can draw my models using md2 format for running - it all goes fine. However, I wanted to save some frame rate and re-use the vertices for the run mode so that all models re-use the data, e.g. for the frame - calculate the vertices, and save them for the next enemy of the same type to re-use. The problem is that the first model drawn looks perfect, but the next models that re-use the vertex data look bad - you can see right through them - and the triangles are in the wrong place. I'll do a code dump and try and simplify it somewhat as it looks like a lot of work (although it's quite simple). Code to calculate the model vertices for this frame:
AnimateModel_run(int startFrame, int endFrame, float percent)
{
...
...
...set up code
...
glBegin(GL_TRIANGLES);
for (i = 0; i < geometry->numTriangles; i++)
{
//get first points in each frame's triangle
vList = geometry->getVertexList(currentFrame, i, 0);
x1 = vList->point[0];
y1 = vList->point[1];
z1 = vList->point[2];
nextVList = geometry->getVertexList(nextFrame, i, 0);
x2 = nextVList->point[0];
y2 = nextVList->point[1];
z2 = nextVList->point[2];
//IMPORTANT THIS IS THE FIRST VERTEX IN THE TRIANGLE
// store first interpolated vertex of triangle
vertex[0].point[0] = x1 + interpol * (x2 - x1);
vertex[0].point[1] = y1 + interpol * (y2 - y1);
vertex[0].point[2] = z1 + interpol * (z2 - z1);
// get second points of each frame
//IMPORTANT: it is as above.......
vList = geometry->getVertexList(currentFrame, i, 2);
...
...
// store second interpolated vertex of triangle
//IMPORTANT : 2ND VERTEX OF THE TRIANGLE
vertex[2].point[0] = x1 + interpol * (x2 - x1);
vertex[2].point[1] = y1 + interpol * (y2 - y1);
vertex[2].point[2] = z1 + interpol * (z2 - z1);
// get third points of each frame
//as above
...
// store third interpolated vertex of triangle
....
vertex[1].point[0] = x1 + interpol * (x2 - x1);
vertex[1].point[1] = y1 + interpol * (y2 - y1);
vertex[1].point[2] = z1 + interpol * (z2 - z1);
// calculate the normal of the triangle
CalculateNormal(vertex[0].point, vertex[2].point,
vertex[1].point);
// render properly textured triangle
geometry->setTexCoord(i, 0);
glVertex3fv(vertex[0].point);
geometry->setTexCoord(i, 2);
glVertex3fv(vertex[2].point);
geometry->setTexCoord(i, 1);
glVertex3fv(vertex[1].point);
} //end for
glEnd();
}
CalculateNormal simply takes the vectors and calcs the normal and calls
glNormal3f(result[0]/length, result[1]/length, result[2]/length);
also setTexCoord:
void CMD2Geometry::setTexCoord(GLuint tIndx, GLuint vertex)
{
glTexCoord2f(st[triIndex[tIndx].stIndex[vertex]].s,
st[triIndex[tIndx].stIndex[vertex]].t);
}
so that all works perfectly and the models look great! However, if I try and store the vertices/Tex/Normals calculated above and draw them, then parts of the model are see-through, and vertices don't join up properly. So here is the code that is placed within the above loop to save the data as static data
//each tri has 3 vertices, each vertex has 3 components
frameRunTriVertices = (float*)malloc (geometry->numTriangles *3*3*sizeof(float));
//each normal is one vector of 3 components
frameRunNormals = (float*)malloc (geometry->numTriangles *3*sizeof(float));
// frame's tex - each tri has 3 vertices and each vertex has 2 tex coords
frameRunTex = (float*)malloc (geometry->numTriangles*3*2*sizeof(float));
...
...
glBegin(GL_TRIANGLES);
for (i = 0; i < geometry->numTriangles; i++)
{
//same draw thing as above except store data
//store this data
//1st pt
frameRunTriVertices[i*3] = vertex[0].point[0];
frameRunTriVertices[(i*3)+1] = vertex[0].point[1];
frameRunTriVertices[(i*3)+2] = vertex[0].point[2];
//2nd pt
frameRunTriVertices[(i*3)+3] = vertex[2].point[0];
frameRunTriVertices[(i*3)+4] = vertex[2].point[1];
frameRunTriVertices[(i*3)+5] = vertex[2].point[2];
//3rd pt
frameRunTriVertices[(i*3)+6] = vertex[1].point[0];
frameRunTriVertices[(i*3)+7] = vertex[1].point[1];
frameRunTriVertices[(i*3)+8] = vertex[1].point[2];
//get normal
CalculateNormal(vertex[0].point, vertex[2].point, vertex[1].point, frameRunNormals+(i*3));
//get tex
//ith tri, 0 vertex, s tex
frameRunTex[(i*3)] = geometry->getTexCoord(i, 0, 0);
//ith tri, 0 vertex, t tex
frameRunTex[(i*3)+1] = geometry->getTexCoord(i, 0, 1);
frameRunTex[(i*3)+2] = geometry->getTexCoord(i, 2, 0);
frameRunTex[(i*3)+3] = geometry->getTexCoord(i, 2, 1);
frameRunTex[(i*3)+4] = geometry->getTexCoord(i, 1, 0);
frameRunTex[(i*3)+5] = geometry->getTexCoord(i, 1, 1);
}
glEnd();
where the new overloaded CalculateNormal is as for the previous case except it sets:
void CalculateNormal( float *p1, float *p2, float *p3, float *out )
{
...
out[0] = result[0]/length;
out[1] = result[1]/length;
out[2] = result[2]/length;
}
and getTexCoord is v. similar to setTexCoord
float CMD2Geometry::getTexCoord(GLuint tIndx, GLuint vertex, GLuint uv)
{
if (uv==0)
return st[triIndex[tIndx].stIndex[vertex]].s;
else
return st[triIndex[tIndx].stIndex[vertex]].t;
}
Finally, the model is drawn with this:
float *vert, *tex, *norm = NULL;
vert = frameRunTriVertices;
tex = frameRunTex;
norm = frameRunNormals;
glBindTexture(GL_TEXTURE_2D, geometry->modelTex->texID);
glBegin(GL_TRIANGLES);
for (i = 0; i < geometry->numTriangles; i++)
{
glNormal3fv(norm);
glTexCoord2fv(tex);
glVertex3fv(vert);
tex+=2;
vert+=3;
glTexCoord2fv(tex);
glVertex3fv(vert);
tex+=2;
vert+=3;
glTexCoord2fv(tex);
glVertex3fv(vert);
tex+=2;
vert+=3;
norm += 3;
}
glEnd();
glDisable(GL_TEXTURE_2D);
It must be simple for an outsider but I can't see it ! cheers Adrian
for (i = 0; i < geometry->numTriangles; i++)
{
    //same draw thing as above except store data
    //store this data
    //1st pt
    frameRunTriVertices[i*3] = vertex[0].point[0];
    frameRunTriVertices[(i*3)+1] = vertex[0].point[1];
    frameRunTriVertices[(i*3)+2] = vertex[0].point[2];
    //2nd pt
    frameRunTriVertices[(i*3)+3] = vertex[2].point[0];
    frameRunTriVertices[(i*3)+4] = vertex[2].point[1];
    frameRunTriVertices[(i*3)+5] = vertex[2].point[2];
    //3rd pt
    frameRunTriVertices[(i*3)+6] = vertex[1].point[0];
    frameRunTriVertices[(i*3)+7] = vertex[1].point[1];
    frameRunTriVertices[(i*3)+8] = vertex[1].point[2];
This seems wrong. You're incrementing 'i' by 1 and multiplying by 3. Perhaps it should change to either incrementing by 3 or multiplying by 9, since by the 3rd point you're adding an offset of 8 (the next offset should be 9, not 3). Your code doesn't make total sense as-is, so it's kinda hard to tell. (For example, you're currently copying the same data, vertex[0, 2, 1], into every triangle...)
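To see the indexing problem in isolation: each triangle stores 9 floats (3 vertices × 3 components), so triangle i must start at offset i*9; with a stride of i*3, consecutive triangles overwrite each other. A quick sketch of the layout (buffer and helper invented for illustration, not the poster's actual code):

```python
TRIS = 3
buf = [0.0] * (TRIS * 9)  # 9 floats per triangle: 3 vertices x 3 components

def store_triangle(i, verts):
    """Store triangle i; verts is a list of three (x, y, z) tuples."""
    base = i * 9              # i * 3 would overlap the next triangle's data
    for v, (x, y, z) in enumerate(verts):
        buf[base + v * 3:base + v * 3 + 3] = [x, y, z]

store_triangle(0, [(0, 0, 0), (1, 0, 0), (0, 1, 0)])
store_triangle(1, [(5, 5, 5), (6, 5, 5), (5, 6, 5)])
assert buf[0:3] == [0, 0, 0] and buf[9:12] == [5, 5, 5]  # no overlap
```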
thanks, its absolutely perfect now !
As regards the vector 0,1,2 thing - each of these vectors is recalculated each time through the for loop - maybe I cut out too much from the code..
In any case I'll mark you up some !!
cheers
# Im so lost in this Problem
1. Jun 15, 2006
### Karma
Can someone direct me with the steps that are needed to solve this question thank you very much for your help!
What is the pH of a solution resulting from the mixing of 100 mL of 0.250 mol•L-1 KOH solution with 200 mL of 1.20 mol•L-1 HNO3 solution?
2. Jun 15, 2006
### mrjeffy321
First write out the balanced equation you predict will occur.
Since KOH is a strong base and HNO3 is a strong acid, you have an acid-base neutralization reaction on your hands.
There are 3 possibilities of what will happen....
-The KOH and HNO3 will completely neutralize each other, neither in excess, resulting in a neutral solution,
-There is a higher number of moles of KOH than necessary to neutralize the HNO3, thus an alkaline solution will result,
-There is a higher number of moles of HNO3 than necessary to neutralize the KOH, thus an acidic solution will result
You need to calculate the number of moles of each, KOH and HNO3, in the reaction.
Since Molarity concentration is in terms of moles per liter and you know the concentration and the volume, you should easily be able to solve for the number of moles.
Now that you know the # of moles of each reactant, which is the limiting reactant based on the Stoichiometry of the neutralization reaction?
How many moles of KOH are required to neutralize 1 mole of HNO3?
Once you know the excess reactant, you know what and how much will be left over in the end.
pH = -log ([H+])
pOH = -log ([OH-])
pH + pOH = pKw = 14
So if you have excess HNO3, the solution will be acidic and you can calculate the pH directly from the number of excess moles of HNO3.
If the excess reagent is KOH, you can use this to calculate the pOH, and from this, the final pH of the solution.
Remember, when mixing these two solutions (KOH and HNO3), the volumes are added together; take this into account when calculating the final molarity of the excess reagent.
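Carrying those steps through numerically (a sketch; it assumes the strong acid and strong base react completely, as described above):

```python
import math

n_koh = 0.100 * 0.250      # mol of KOH in 100 mL of 0.250 mol/L
n_hno3 = 0.200 * 1.20      # mol of HNO3 in 200 mL of 1.20 mol/L

excess_h = n_hno3 - n_koh              # HNO3 is in excess -> acidic solution
conc_h = excess_h / (0.100 + 0.200)    # total volume is 300 mL
pH = -math.log10(conc_h)               # about 0.14
```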
3. Jun 15, 2006
### Karma
Wow, thanks a lot, I understand a lot now!!
4. Jun 15, 2006
### Karma
Can you also help me out on this other question THANK YOU VERY MUCH!!
What volume of 0.100 mol•L-1 NaOH (aq) is required to neutralize
b) 0.250 g of benzoic acid, C6H5COOH? (a weak acid)
5. Jun 16, 2006
### Hootenanny
Staff Emeritus
Okay, you know the concentration of the sodium hydroxide and thus you know how many hydroxide ions it will put into solution. To neutralise the base, you must put an equal number of protons into solution. Therefore, you will need to calculate the number of protons that benzoic acid will put into solution; you can do this using the acid dissociation constant (Ka) for benzoic acid. To do this you must make two assumptions;
Assumption One
$[H^{+}_{(aq)}] = [A^{-}_{(aq)}]$, (where A- is the ionic salt) this means that all the protons in solution originate from the weak acid. In other words, we ignore the ionisation of water (as this is negligible).
Assumption Two
When we calculate the concentration of the acid (HA) at equilibrium we assume that no dissociation has occurred; this is valid since weak acids dissociate very little.
Now, can you go on from here? Use the two assumptions and the expression for Ka (you will need to look up Ka for benzoic acid). These look like homework questions to me; if so, could you please post such questions in the homework sections in future. Thank you.
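As a starting point, the amount of benzoic acid follows from its mass; a sketch (the molar mass of C6H5COOH is taken as about 122.12 g/mol, and a 1:1 reaction with NaOH is assumed):

```python
m_acid = 0.250                 # g of benzoic acid
M_acid = 122.12                # g/mol (assumed molar mass of C6H5COOH)
n_acid = m_acid / M_acid       # about 2.05e-3 mol
v_naoh = n_acid / 0.100        # L of 0.100 mol/L NaOH at 1:1 stoichiometry
# v_naoh is about 0.0205 L, i.e. roughly 20.5 mL
```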
# Conversion of pgf code into TikZ code?
I am not clear about the differences between pgf and TikZ. The latter is the frontend of the former. That's fine. In any case, their codes are clearly different. I have two questions.
1. Is it possible to convert pgf code into TikZ code? If yes, can you suggest any app (Mac please)?
2. Is it possible to use pgf code in TikZ environment?
Welcome to TeX.SX! You may have a look on our starter guide. (1) There is no app. (2) Yes. – Marco Daniel May 12 '13 at 15:45
All TikZ code is converted to PGF code internally (TikZ is just a wrapper for PGF), but you can't automatically convert from PGF to TikZ. You can use pgf code directly in a TikZ environment. – Jake May 12 '13 at 15:46
TikZ does a lot under the hood before anything is processed by PGF’s macros. In fact, TikZ offers a few things you couldn’t do that nicely or comfortably with PGF. Is there any real use-case behind the idea of converting PGF up to TikZ? – Qrrbrbirlbel Jun 10 '13 at 21:56
I was looking for a program that would allow me to draw diagrams in a WYSIWYG way and at the same time generate code for later manipulation. Then, I came across Jpgfdraw, but the codes were difficult for me to learn. That is why I asked if it is possible to convert PGF to TikZ code. I can now draw simple diagrams for my papers in TikZ. But it would be beneficial for beginners like me if there were a program that converts PGF to TikZ code, given that there is no WYSIWYG program that generates TikZ code (I have not found one so far). – T_T Jun 11 '13 at 11:40
@T_T matlab2tikz which is often mentioned on TeX.sx produces TikZ output. Then there is inkscape2tikz. Check texample.net/tikz/examples/tag/code-generation for other examples generated by other software. dot2tex can generate PSTricks as well as PGF and TikZ code. To answer your question: Yes, it is possible to convert PGF to TikZ code (by hand at least) but with an overhead, as TikZ does and is more than just PGF. Your request is very exceptional because for most people it is easier to write TikZ code than it is to write PGF code. – Qrrbrbirlbel Jun 25 '13 at 21:14
I thought this should be a duplicate of Is there an advantage in using the pgf Basic Layer over tikz? but since there are a few tiny differences here is a very rough one just to archive this.
If you think of the code conversion as a mathematical function that receives the PGF code and spits out TikZ code (I don't know why you would do that), it's a multivalued and nonsurjective function.
In other words, the same PGF snippet might lead to different TikZ code. Example,
\begin{tikzpicture}
\pgfpathmoveto{\pgfpoint{2cm}{0cm}}
\pgfpathlineto{\pgfpoint{0cm}{2cm}}
{
\pgfinterruptpath
\color{red}
%\pgfsetstrokecolor{red}
\pgftransformshift{\pgfpoint{1.5cm}{1cm}}
\pgfnode{circle}{center}{A}{a}{\pgfqstroke}
\endpgfinterruptpath
}
\pgfpathlineto{\pgfpoint{-2cm}{0cm}}
\pgfclosepath
\pgfqstroke
\draw (0,0) -- (2,0) -- node[pos=0.4,above= 2mm,right=0.5mm] {A} (0,2) -- (-2,0) -- cycle;
\draw (0,0) -- (2,0) -- node[pos=0.38,anchor=south west] {A} (0,2) -- (-2,0) -- cycle;
\draw (0,0) -- (2,0) node at (1.5cm,1cm) {A} -- (0,2) -- (-2,0) -- cycle;
\end{tikzpicture}
(I'm approximating the equivalence of the commands) Let's say for some reason I've ended up with the PGF code above and I want to convert it to TikZ. However I have no idea why the node is inserted in between. If it was coming from a TikZ like code where you have node[midway] I should have some interpolation etc. But this is bluntly inserted.
I can try to convert via the options given above (and many more) but why should I prefer one to another? Well in this example it is obvious that the last one is the most sane one but others are trying to get [pos=x] syntax.
Even worse, TikZ code keeps settings such as the current line width, color etc. local to the object, but in PGF code they are set one after another as macros. It is almost always a problem that a setting leaks out to whatever follows until the setting is reset again. For example, remove all the TikZ based drawing commands and check the output. Comment out \color{red} and remove the comment from the next line. WTF? It's always fun.
That's why any interpreter would then resort back to primitive building blocks (you can have a look at InkScape2TikZ output, I think it's doing a great job by the way)
\draw (a) -- (b);
\draw (b) -- (c);
\path node at (d) {A};
Even this will not assure that we actually close the path. So nodes should come later if we have a closed path and the adventure begins :)
Second issue is that not every PGF command can be reproducible via TikZ syntax. But that's kind of known and is a subject of many questions including the linked one anyway.
Well, I agree with the answer in total; but the example is not correct. The position of a node is calculated differently (\pgftransformlineattime) than of a node that is set explicitly at a coordinate. Even above and right translates to various \pgftransform… macros. More so, \pgfpoint{2cm}{0cm} would be (2cm,0cm) in TikZ while (2,0) would be \pgfpointxy{2}{0} (just add x=2cm to the TikZ options or use \pgfsetxvec{\pgfpoint{2cm}{0pt}}). As I said in my comment, there is a lot going on with TikZ which is lost in the conversion to PGF. – Qrrbrbirlbel Jul 31 '13 at 14:22
@Qrrbrbirlbel We are going from PGF to TikZ. So tranformlineattime is not the issue. We are trying to make some sense of arbitrary node placement within the path given as a PGF node. How to convert it to a TikZ \node is the question. You cannot expect a full TikZ equivalent within the PGF code. It will always be different from what TikZ -> PGF conversion result. – percusse Jul 31 '13 at 14:50
## Sunday, 16 September 2012
### Calculating the lower and upper bound of the bitwise OR of two variables that are bounded and may have bits known to be zero
This new problem is clearly related to two of my previous posts. But this time, there is slightly more information. It may look like a contrived, purely theoretical problem, but it actually has applications in abstract interpretation. Static knowledge about the values that variables could have at runtime often takes the form of a range and a number that the variable is known to be a multiple of, which is most commonly a power of two.
The lower bound will be $$\min _{x \in [a, b] \wedge m\backslash x, y \in [c, d] \wedge n\backslash y} x | y$$ And the upper bound will be $$\max _{x \in [a, b] \wedge m\backslash x, y \in [c, d] \wedge n\backslash y} x | y$$ Where m\x means "x is divisible by m".
So how can we calculate them faster than direct evaluation? I don't know, and to my knowledge, no one else does either. But if sound (ie only overapproximating) but non-tight bounds are OK, then there is a way. Part of the trick is constraining m and n to be powers of two. It's safe to use m = m & -m. That should look familiar - it's extracting the rightmost bit of m. Another explanation of "the rightmost bit of m" is "the highest power of two that divides m". That doesn't rule out any values of x that were valid before, so it's a sound approximation.
Strangely, for minOR, if the bounds are pre-rounded to their corresponding powers of two, there is absolutely no difference in the code whatsoever. It is possible to set a bit that is known to be zero in that bound, but that can only happen if that bit is one in the other bound anyway, so it doesn't affect the result. The other case, setting a bit that is not known to be zero, is the same as it would be with only the range information.
maxOR is a problem though. In maxOR, bits at the right are set which may be known to be zero. Some of those bits may have to be reset. But how many? To avoid resetting too many bits, we have to round the result down to a multiple of min(m, n). That's clearly sound - if a bit can't be one in both x and n, obviously it can't be one in the result. But it turns out not to be tight - for example for [8, 9] 1\x and [0, 8] 4\y, it computes 0b1111, even though the last two bits can only be 0b00 or 0b01 (y does not contribute to these bits, and the range of x is so small that the bits only have those values) so the tight upper bound is 0b1101. If that's acceptable, the code would be
static uint maxOR(uint a, uint b, uint c, uint d, uint m, uint n)
{
uint resettableb = (a ^ b) == 0 ? 0 : 0xFFFFFFFF >> nlz(a ^ b);
uint resettabled = (c ^ d) == 0 ? 0 : 0xFFFFFFFF >> nlz(c ^ d);
uint resettable = b & d & (resettableb | resettabled);
uint target = resettable == 0 ? 0 : 1u << bsr(resettable);
uint targetb = target & resettableb;
uint targetd = target & resettabled & ~resettableb;
uint newb = b | (targetb == 0 ? 0 : targetb - 1);
uint newd = d | (targetd == 0 ? 0 : targetd - 1);
uint mask = (m | n) & (0 - (m | n));
return (newb | newd) & (0 - mask);
}
Which also uses a sneaky way of getting min(m, n) - by ORing them and then taking the rightmost bit. Because why not.
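Since tightness is the question, the approximation can always be checked against direct evaluation over the (small) ranges; a brute-force sketch (function name invented):

```python
def max_or_exact(a, b, c, d, m, n):
    """Tight maximum of x | y with x in [a, b], m\\x and y in [c, d], n\\y."""
    xs = [x for x in range(a, b + 1) if x % m == 0]
    ys = [y for y in range(c, d + 1) if y % n == 0]
    return max(x | y for x in xs for y in ys)

# The example from the text: [8, 9] 1\x and [0, 8] 4\y.
# Direct evaluation gives the tight bound 0b1101, not 0b1111.
assert max_or_exact(8, 9, 0, 8, 1, 4) == 0b1101
```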
I haven't (yet?) found a nice way to calculate the tight upper bound. Even if I do, that still leaves things non-tight when the old m or n were not powers of two.
Next post, xor, which has some unique difficulties.
What is the best method of taking multiple measurements using ADC+DMA interrupts and averaging them? Currently I have an STM32F303 with ADC2 initialized with channels 3 and 18 (Vrefint). My aim is to take 16 measurements and then average the result. I need to take these measurements with a relatively low frequency and for this test I've set up main to trigger the ADC DMA conversion every 250ms. The problems I have are:
• What is the best method of sharing data between the interrupt handler and the main loop
• How to ensure interrupt is not triggered after 16 times
Part of my code is below. The ADC/DMA callback function is supposed to trigger the next conversion 16 times and then signal to the main loop via a flag that the conversion is complete. I can see the callback being called and the ADC conversion performed; measurements are correct.
// ADC Data structure, also accessed by ADC callback
}
int main(void) {
// Initialize peripherals
HAL_Init();
SystemClock_Config();
MX_DMA_Init();
MX_OPAMP2_Init();
MX_GPIO_Init();
MX_USART2_UART_Init();
HAL_OPAMP_Start(&hopamp2);
while (1) {
HAL_Delay(250);
}
}
}
static uint32_t conv_count = 0;
static uint32_t temp_vrefint_value = 0;
if (++conv_count == 16) {
conv_count = 0;
temp_vrefint_value = 0;
} else {
}
}
return (3.3 * data->adc_value_vrefint_register * data->adc_value_channel_3) / (data->adc_value_vrefint_channel * 4095);
}
***********************************EDIT*******************************
I've modified the ADC initialization to allow continuous conversion (for 3 channels) and trigger an interrupt at the end of sequence conversion:

static void MX_ADC2_Init(void)
{
    ADC_ChannelConfTypeDef sConfig = {0};
hadc2.Instance = ADC2;
{
Error_Handler();
}
sConfig.Offset = 0;
{
Error_Handler();
}
sConfig.Offset = 0;
{
Error_Handler();
}
sConfig.Offset = 0;
{
Error_Handler();
}
// Calibration
}
The conversion is started with HAL_ADC_Start_DMA(&hadc2, (uint32_t*)adc_buffer, 48);. The interrupt callback is called when the buffer is filled with 48 samples (16 samples from each of the 3 channels), and it sets a flag, which is then polled and reset from another function:
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef* hadc)
{
}
Then in the polling function I loop through the filled buffer and average its contents for each of the three channels:
adc_data.adc_value_channel_3 = 0;
for (int x = 0; x < 16; x++) {
}
// Average by shifting the accumulated sum right by 4 places (i.e. divide by 16)...
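The shift in that comment works because 16 is a power of two: dividing the accumulated sum by 16 is the same as shifting it right by 4. A minimal sketch with invented sample values:

```python
samples = [2048 + i for i in range(16)]   # 16 hypothetical 12-bit ADC readings
total = sum(samples)
avg = total >> 4                          # right shift by 4 == floor division by 16
assert avg == total // 16
```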
• Well don't do your averaging math in the ISR. OTTOMH the STM32 family pushes/pops 4 registers automatically, but you have code that probably requires more than 4, so for 15 of 16 interrupts extra registers are pushed/popped. Use your flag to trigger calculating the average. – StainlessSteelRat Nov 4 at 13:11
• As the answer suggests, use DMA to transfer data from ADC to memory buffer. DMA finishes, generate interrupt. Interrupt, do math, reprogram DMA. – StainlessSteelRat Nov 4 at 16:54
• The best way is to use RTOS. It has all needed sync and inter process communications features built in – P__J__ Nov 5 at 11:55
Since you already have DMA initialized, why don't you use that? Pass in a memory pointer, poll for conversion done or signal from your interrupt:
in main function:
while (data_count-- > 0) {
lumberjack-1.0.0.0: Trek through your code forest and make logs
Lumberjack
Description
This module defines a general logging facility that can be used to output log messages to various targets.
The LogAction is the fundamental operation that decides how to log a provided message.
Code wishing to output a logged message simply uses the LogAction object:
writeLog action msg
For convenience, the LogAction can be stored in the local operating monad context, from which it can be retrieved (and modified). A monad which can supply a LogAction is a member of the HasLog class, and the writeLogM function will automatically retrieve the LogAction from the monad and write to it:
writeLogM msg
LogActions can be combined via Semigroup operations (<>) and the resulting LogAction will perform both actions with each message. The Monoidal mempty LogAction simply does nothing. For example, logging to both a file and stdout can be done by logToFile <> logToStdout.
LogActions are also Contravariant (and Divisible and Decidable) to allow easy conversion of a LogAction for the base message type into a LogAction for a different message type (or types) that can be converted to (and combined into) the base message type.
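As a rough standalone sketch, the combining and contramapping behavior described above looks like this. The definitions below are illustrative re-creations, not the package's actual source:

```haskell
-- Illustrative re-creation of the LogAction shape described above;
-- the real definitions live in the Lumberjack package.
import Data.Functor.Contravariant (Contravariant (..))

newtype LogAction m msg = LogAction { writeLog :: msg -> m () }

-- (<>) performs both logging actions for each message.
instance Applicative m => Semigroup (LogAction m msg) where
  LogAction f <> LogAction g = LogAction (\m -> f m *> g m)

-- mempty simply does nothing.
instance Applicative m => Monoid (LogAction m msg) where
  mempty = LogAction (\_ -> pure ())

-- contramap adapts a LogAction for the base message type into a
-- LogAction for any type convertible to it.
instance Contravariant (LogAction m) where
  contramap f (LogAction g) = LogAction (g . f)

logToStdout :: LogAction IO String
logToStdout = LogAction putStrLn

main :: IO ()
main = do
  writeLog (logToStdout <> logToStdout) "hello"   -- prints "hello" twice
  writeLog (contramap show logToStdout) (42 :: Int)
```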
Synopsis
# Interface for Logging
newtype LogAction m msg Source #
The LogAction holds the ability to log a message of type msg (the second parameter) via a monad m (the first parameter).
LogActions are semigroup and monoid combineable, which results in both LogActions being taken (or no action in the case of mempty), and contravariant to allow the msg to be modified via function prior to being logged (as well as Divisible and Decidable).
Constructors
LogAction

Fields: writeLog :: msg -> m ()
#### Instances
- Contravariant (LogAction m): contramap :: (a -> b) -> LogAction m b -> LogAction m a; (>$) :: b -> LogAction m b -> LogAction m a
- Divisible (LogAction m): divide :: (a -> (b, c)) -> LogAction m b -> LogAction m c -> LogAction m a; conquer :: LogAction m a
- Decidable (LogAction m): lose :: (a -> Void) -> LogAction m a; choose :: (a -> Either b c) -> LogAction m b -> LogAction m c -> LogAction m a
- Applicative m => Semigroup (LogAction m a): (<>), sconcat, stimes
- Applicative m => Monoid (LogAction m a): mempty, mappend, mconcat

class Monad m => HasLog msg m where Source #

Any monad which will support retrieving a LogAction from the Monad's environment should support the HasLog class.

Methods

getLogAction :: m (LogAction m msg) Source #

class (Monad m, HasLog msg m) => LoggingMonad msg m where Source #

An instance of the LoggingMonad class can be defined for the base monadic logging action to allow adjusting that logging action. This class can only be instantiated (and only needs to be instantiated) for the base message type; all other message types will use contramapping to convert their message type to the LoggingMonad base message type.

Methods

adjustLogAction :: (forall k. LogAction k msg -> LogAction k msg) -> m a -> m a Source #

writeLogM :: HasLog msg m => msg -> m () Source #

This obtains the LogAction from the current monad's environment to use for outputting the log message. Most code will use this function.
# Logging Utilities

safeLogAction :: MonadCatch m => LogAction m msg -> LogAction m msg Source #

Ensures that the LogAction does not fail if the logging operation itself throws an exception (the exception is ignored).

logFilter :: Applicative m => (msg -> Bool) -> LogAction m msg -> LogAction m msg Source #

The logFilter can be used on a LogAction to determine which messages the LogAction should be invoked for (only those for which the filter function returns True).

# LogMessage rich logging type

This is an enhanced msg type for the LogAction, containing various auxiliary information associated with the log message. While Lumberjack can be used with other message types, this message type should provide support for most of the common logging auxiliary data and can therefore be used "out of the box".

data Severity Source #

The Severity indicates the relative importance of the logging message. This can be useful for filtering log messages.

Constructors: Debug | Info | Warning | Error

Instances: Eq, Ord, Show, Pretty

data LogType Source #

The LogType indicates what type of message this is. These are printed on the log line and can be used for filtering different types of log messages.
Constructors: Progress | FuncEntry | FuncExit | MiscLog | UserOp

Instances: Eq, Show, Pretty

Each logged output is described by a LogMessage object.

Constructors

LogMessage

Fields: logType :: LogType; logLevel :: Severity; logTime :: UTCTime; logTags :: [(Text, Text)]; logText :: Text

Instances: Semigroup, Monoid, Pretty

Helper routine to return an empty LogMessage, whose fields can then be updated.

type WithLog msg m = HasLog msg m Source #

This type is a Constraint that should be applied to any client function that will perform logging in a monad context. The msg is the type of message that will be logged, and the m is the monad under which the logging is performed.

withLogTag :: LoggingMonad LogMessage m => Text -> Text -> m a -> m a Source #

Log messages can have any number of key/value tags applied to them. This function establishes a new key/value tag pair that will be in effect for the monadic operation passed as the third argument.

withLogTag tname tval op = local (adjustLogAction $ addLogTag tname tval) op
Add the current timestamp to the LogMessage being logged
## Output formatting for LogMessage
When the LogMessage logging type is used, Lumberjack provides a standard set of output formatting functions. The output uses the prettyprinter package to generate Doc output with annotations specifying the type of markup to be applied to various portions of the output.
There are multiple rendering functions that can be supplied as contramap converters to the base LogAction. One rendering function outputs a log message in plain text, while the other uses the prettyprinter-ansi-terminal package to generate various ANSI highlighting and color codes for writing enhanced output to a TTY.
Standard LogMessage rendering function for converting a LogMessage into plain Text (no colors or other highlighting). This can be used as the default converter for a logger (via contramap).
Standard LogMessage rendering function to convert a LogMessage into Text with ANSI terminal colors and bolding and other styling. This can be used as the default converter for a logger (via contramap).
# Helpers and convenience functions
These functions are not part of the core Logging implementation, but can be useful to clients to perform common or default operations.
(|#) :: (LogMessage -> a) -> Text -> a infixr 0 Source #
This operator is a convenient infix operator for logging a Text message. This is especially useful when used in conjunction with the OverloadedStrings language pragma:
>>> warning|# "This is your last warning"
>>> error|# "Failure has occurred"
logFunctionCall :: MonadIO m => LogAction m LogMessage -> Text -> m a -> m a Source #
A wrapper for a function call that will call the provided LogAction with a Debug log on entry to the function and an Info log on exit from the function. The total amount of time taken during execution of the function will be included in the exit log message. No strictness is applied to the invoked monadic operation, so the time taken may be misleading. Like logFunctionCallM but needs an explicit LogAction whereas logFunctionCallM will retrieve the LogAction from the current monadic context.
logFunctionCallM :: (MonadIO m, WithLog LogMessage m) => Text -> m a -> m a Source #
A wrapper for a monadic function call that will Debug log on entry to and Info log on exit from the function. The exit log will also note the total amount of time taken during execution of the function. Be advised that no strictness is applied to the internal monadic operation, so the time taken may be misleading.
logProgress :: MonadIO m => LogAction m LogMessage -> Text -> m () Source #
Called to output a log message to indicate that some progress in the current activity has been made.
logProgressM :: (MonadIO m, WithLog LogMessage m) => Text -> m () Source #
Called to output a log message within a HasLog monad to indicate that some progress in the current activity has been made.
tshow :: Show a => a -> Text Source #
This is a helper function. The LogMessage normally wants a Text, but show delivers a String, so tshow can be used to get the needed format.
When using a simple IO monad, there is no ability to store a LogAction in the base monad. The client can specify a specific HasLog instance for IO that is appropriate to that client, and that HasLog can optionally use the defaultGetIOLogAction as the getLogAction implementation to log pretty messages with ANSI styling to stdout.
instance HasLog Env Text IO where
getLogAction = return defaultGetIOLogAction
# Orphan instances
Pretty UTCTime Source #

Methods: pretty :: UTCTime -> Doc ann; prettyList :: [UTCTime] -> Doc ann
# AG Information und Komplexität
R. Ahlswede, A. Winter
# 2nd Bielefeld Workshop on Quantum Information and Complexity
October 12 - 14, 2000
Program - Abstracts of contributed lectures
### Gilles van Assche (Bruxelles): Quantum Distribution of Gaussian Keys with Squeezed States
A continuous key distribution scheme is proposed that relies on a pair of canonically conjugate quantum variables. A Gaussian secret key can be shared between two parties by encoding it into one of the two quadrature components of a single-mode electromagnetic field. In the case of an individual attack based on the optimal continuous cloning machine, it is shown that the information gained by the eavesdropper simply equals the information lost by the receiver.
### Howard Barnum (Bristol): Quantum message authentication codes
I describe protocols intended to enable the recipient of a quantum state to assure himself that the state has come from a sender with whom he has previously shared secret key. As with the classical protocols of Simmons, of Gilbert, MacWilliams, and Sloane, and of Wegman and Carter, security is information-theoretic rather than based on computational assumptions. The protocol is conjectured to be efficient in that the probability of undetected tampering drops exponentially with key size with only weak, perhaps logarithmic dependence on message size. For various classes of attacks, this conjecture is verified.
### Marcos Curty (Vigo): Protocols for Quantum Steganography
We investigate the concept of quantum steganography. Fundamental concepts from quantum information processing such as quantum superposition, particle entanglement and dense-coding are used to show the feasibility of subliminal quantum communication channels. Like in quantum cryptography, the use of these quantum-mechanical techniques leads to more robust hidden communication strategies.
### Shao-Ming Fei (Bonn): Measure of Quantum Entanglements and Invariants
We study the measure of quantum entanglements according to the invariance under local unitary transformations. A generalized explicit formula of concurrence for $M$ $N$-dimensional quantum systems is presented.
### Matheus Grasselli (London): On the Uniqueness of Chentsov Metric in Quantum Information Geometry
We study the metrics on a finite quantum information manifold for which the exponential and mixture connections are dual (in the sense of Amari). Combining this result with the characterization of monotone metrics given by Petz, we reduce the set of possible such metrics to multiples of the BKM (Bogoliubov-Kubo-Mori) inner product.
This is joint work with R. F. Streater, e-print math-ph/0006030.
### Masahito Hayashi (Tokyo): Large deviation type bounds in quantum estimation
We discuss two kinds of Bahadur type bounds (large deviation bounds) that appear in quantum parameter estimation for a one-dimensional parameter. In the classical case, the Bahadur type bound can be derived from Stein's lemma of hypothesis testing. It was proved that the bound can be attained by the maximum likelihood estimator under a regularity condition on the probability family.
Recently, the quantum version of Stein's lemma has been proved from the combination of Hiai-Petz's results and Ogawa-Nagaoka's. As in the classical case, this seems to imply that the quantum version of the Bahadur type bound is given by half of the Bogoljubov inner product, which is the limit of the quantum relative entropy. We should note, however, that in the one-parameter case the bound on the mean square error (MSE) under the unbiasedness condition is given by the SLD inner product, which was introduced by Helstrom. In general, these two inner products do not coincide. In the quantum case, the Bahadur type bound under weak consistency is different from the Bahadur type bound under uniform convergence of the exponential rate. The former is given by the Bogoljubov inner product, and the latter by the SLD inner product. These two bounds can be attained in the respective senses.
### Lev B. Levitin (Boston): Generalized Shannon's Information Between Quantum Systems
The concepts of conditional entropy of a physical system given the state of another system and of information in a physical system about another one are generalized for quantum systems. The fundamental difference between the classical case and the quantum one is that those quantities in quantum systems depend on the choice of measurements performed over the systems. It is shown that some equalities of the classical information theory turn into inequalities for the generalized quantities. Examples such as EPR pairs and superdense coding are described and explained in terms of the generalized conditional entropy and information.
### Margarita A. Man'ko (Moscow): Noncommutative tomography of analytical signal and entanglement in the probability representation of quantum mechanics
Review of tomographic representation of quantum states [1], in which the standard probability is used instead of wave function, is presented. The corresponding procedure of noncommutative tomography of analytic signal introduced in [2] is used for the description of an analytic signal depending both on time and spatial variables [3]. Quantumlike information coded by states of charged-particle beam is considered within the framework of tomographic probability [4]. Entropy and entanglement theory of the analytic signal in the noncommutative-tomography scheme is discussed in connection with information processing.
[1] S. Mancini, V.I. Man'ko, and P. Tombesi, Phys. Lett. A,
Vol. 213, p. 1 (1996); Found. Phys., Vol. 27, p. 801 (1997).
[2] V.I. Man'ko and R.V. Mendes, Phys. Lett. A, Vol. 263, p. 53 (1999).
[3] M.A. Man'ko, J. Russ. Laser Res. (Kluwer/Plenum), Vol. 20,
p. 225 (1999); Vol. 21, p. 411 (2000).
[4] R. Fedele, M.A. Man'ko, and V.I. Man'ko, J. Russ. Laser Res.
(Kluwer/Plenum), Vol. 21, p. 1 (2000); J. Opt. Soc. Am. (2000, in press).
### Keiji Matsumoto (Tokyo): The asymptotic quantum Cramer-Rao type bound of the positive full model
Calculation of the asymptotic lower bound of error of the estimate is made, when
1. the quantum correlation between samples are used,
2. the Hilbert space is finite dimensional,
3. the model is the positive full model,
which is the set of all strictly positive density matrices.
The conjecture about the theory in the general case is presented with a naive proof.
### Ferdinand Schmidt-Kaler (Innsbruck): Ground state cooling, quantum state engineering, and study of decoherence of ions in Paul traps
Single ions in Paul traps are investigated for quantum information processing. Single 40Ca+ ions are held either in a spherical Paul trap or, alternatively, in a linear Paul trap.
We report on the following steps towards an ion-trap quantum processor:
1. addressing individual ions in the trap [1]
2. cooling of single ions and of ion-crystals into the vibrational ground state [2,3]
3. coherent manipulation of the ion’s qubit state [2]
4. theoretical and experimental investigations of the speed limits of gate operations [4]
5. measurements of the vibrational and the internal decoherence of the qubit states [2,3]
6. a novel method for simultaneously cooling all vibrational modes of an ion-crystal
As a conclusion, we will give the perspective of small-scale ion-trap quantum-processors.
[1] H.C. Nägerl, D. Leibfried, H. Rohde, G. Thalhammer, J. Eschner, F. Schmidt-Kaler, and R. Blatt, Phys. Rev. A 60, 145 (1999).
[2] Ch. Roos, Th. Zeiger, H. Rohde, H. C. Nägerl, J. Eschner, D. Leibfried, F. Schmidt-Kaler, and R. Blatt, Phys. Rev. Lett., 83, 4713 (1999).
[3] F. Schmidt-Kaler, Ch. Roos, H. C. Nägerl, H. Rohde, S. Gulde, A. Mundt, M. Lederbauer, G. Thalhammer, Th. Zeiger, P. Barton, L. Hornekaer, G. Reymond, D.Leibfried, J. Eschner, R. Blatt, quant-ph/0003096
[4] A. Steane, C. F. Roos, D. Stevens, A. Mundt, D. Leibfried, F. Schmidt-Kaler, R. Blatt, quant-ph/0003087, Phys. Rev. A. 62,0423XX
### Gavriel Segre (Pavia): The definition of a random sequence of qubits: from noncommutative algorithmic probability to quantum algorithmic information theory and back
The issue of defining a random sequence of qubits is studied in the framework of Algorithmic Free Probability Theory. Its connection with Quantum Algorithmic Information Theory is shown.
### Alexander S. Shumovsky (Ankara): The SU(2) Quantum Phase of Photons and Polarization Entanglement
In recent years, entanglement has been recognized as one of the most fundamental features of quantum systems as well as an important tool for quantum communications and quantum information processing. One of the most important ways of practical realization of entangled states is related to the so-called two-photon polarization entanglement, when the measurement of polarization of one photon gives information about the polarization of the second photon (e.g., see Section 12.14 in [1]). We now note that quantum electrodynamics interprets the polarization as a given spin state of photons [2]. Since the photon spin is 1, the polarization can be described by the Stokes operators, forming a representation of the SU(3) sub-algebra in the Weyl-Heisenberg algebra of photon operators [3]. The multipole photons emitted by the atomic transitions correspond to the states with given angular momentum, consisting of the spin and orbital parts, and therefore have no well-defined polarization.
It is shown that the quantum noise of polarization measurements with multipole photons strongly exceeds that of the plane waves of photons [4]. This result is important for estimation of precision of measurements in the two-photon polarization entanglement as well as in the engineered atomic entanglement due to the photon exchange between the trapped atoms [5].
It is also shown that an adequate picture of the interaction between the atomic transitions and multipole photons is provided by a new dual representation of the Weyl-Heisenberg algebra of the photon operators, taking into account the SU(2) symmetry of the multipole photon states [6]. In particular, this representation permits us to define the intrinsic quantum phase of photons referred to the SU(2) phase of the angular momentum. The sine and cosine of the phase operators coincide with the Cartan algebra of the SU(3) algebra of Stokes operators. The representations of quantum phase are constructed in the case of multipole radiation in empty space as well as in the spherical and one-dimensional (Fabry-Pérot) resonant cavities. The SU(2) quantum phase of photons has discrete spectrum in the interval (0,2$\pi$). In the classical limit of infinitely many photons in coherent state, the eigenstates of phase cover this interval uniformly. The problem of phase-intensity entanglement is discussed.
[1] L. Mandel, E. Wolf, Optical Coherence and Quantum Optics
(Cambridge University Press, New York, 1995).
[2] V.B. Berestetskii, E.M. Lifshitz, and L.P. Pitaevskii,
Quantum Electrodynamics (Pergamon Press, Oxford, 1982).
[3] A.S. Shumovsky and Ö.E. Müstecaplioglu,
Phys. Rev. Lett. 80, 1202 (1998);
Optics Commun. 146, 124 (1998).
[4] A.S. Shumovsky, Los-Alamos e-print quant-ph/0007109 (2000).
[5] S. Haroche, Cavity Quantum Electrodynamics: a Review of
Rydberg Atom-microwave Experiments,
AIP Conf. Proc. Vol. 464, Issue 1, p. 45 (1999).
[6] A.S. Shumovsky, J. Phys. A 32, 6589 (1999)
### Karl Gerd Vollbrecht (Braunschweig): Entanglement measures under symmetry
One of the reasons the general theory of entanglement has proved to be so difficult is the rapid growth of dimension of the state spaces. By restricting to symmetric states, the state space can be reduced and entanglement measures can be calculated more easily. These examples of state spaces may be helpful for gaining intuition about entanglement measures and for testing hypotheses. One result is a counterexample to the additivity of the relative entropy of entanglement.
### Michael Wolf (Braunschweig): Bound entangled Gaussians
States relevant in quantum optics are often of a special kind, having Gaussian Wigner distributions. For this class of "continuous variable systems" typical questions of quantum information theory are luckily of the same complexity as for the usual finite dimensional systems since basic entanglement properties of a Gaussian state can easily be translated into properties of its covariance matrix. Investigating the relationship between separability and positive partial transpose it turns out that for systems of 1xN oscillators these two properties are indeed equivalent. However this equivalence fails for all higher dimensions, i.e. there exist bound entangled Gaussian states for 2x2 oscillators.
To main page.
# XS Scripting: A Programmer's Reference¶
Written by: Alian713
This is the shortest and most precise guide to XS Scripting that you will find. It does not give any introductions to programming topics and cuts right to the chase; if you are a programmer, this is perfect for you. If you are not a programmer, fear not! Refer to the For Beginners section of this guide instead.
## 1. Using an XS Script¶
To use an XS script:
1. Navigate to the folder
C:\Program Files (x86)\Steam\steamapps\common\AoE2DE\resources\_common\xs
2. There should be 2 files in this folder already, called Constants.xs and xs.txt. In here, create a new file with any name ending with .xs. For example, the file can be called filename.xs
default0.xs
There may be an additional file called default0.xs. Never write code in this file as this is a temporary file and can be overwritten by the game.
Constants.xs
The file Constants.xs contains a list of constants that can be used in any XS Script directly, without needing to use an include statement.
VSC Plugin for XS
A VSC Extension for syntax highlighting and code auto completion for AoE XS Scripting can be found here
3. To begin with using XS, write this basic code in the file:
```
// this is a comment

/* this is a
multiline comment */

void main() {
    int a = 10;
    int b = 20;
    // the variables cannot be declared by separating them with commas
    // unlike java or python.

    // chats to the screen
    xsChatData("a+b = "+(a+b));
}
```
### 1.1. In a Custom Scenario¶
1. Open the scenario in the editor
2. Under the Map tab, type the name of the XS Script that you created in the Script Filename field without the .xs at the end. For example, if your file is called filename.xs then you will write filename in this field.
3. Now, under the Triggers tab in the editor, add a new trigger, then add a new effect. (If you do not know what a trigger/effect is, please go through the Custom Scenarios: Triggers: Trigger Basics section of this guide)
4. From the Effects List select Script Call.
5. You can now use the functions in the XS Script in the message box using a normal function call. Keep in mind, only those functions that do not take any parameters work here!
6. The main() function that we made above is automatically run when the scenario is played.
7. If there are no errors in the code, clicking the E#0: Script Call effect will turn it green. If there is an error in the script, an error message will be shown.
8. Testing the scenario now will run the Script Call effect in the trigger defined above, which in turn will run the main() function in the XS Script and 30 will be shown in the chat.
### 1.2. In an RMS¶
1. Open the RM Script in a text editor
2. At the very top, type #includeXS filename.xs. Here, filename.xs is the name of the file that you created above.
3. The main() function is automatically called when a map is generated using the RMS.
4. To test, load the RMS in a single player (or multi player) lobby and start the game.
5. It is recommended that you use a custom scenario to test XS Scripts, as it is easier to debug them in the editor.
Now that you have set up an XS file with a main() function inside, you can type code inside this function to do different things! We'll be walking through all of the different things that are known to be possible one by one:
## 2. Variables Data Types¶
There are a total of 5 data types supported by XS, they are:
Data Type Syntax
int int a = 10;
float float a = 3.1;
string string a = "string";
bool bool a = true;
vector vector v = vector(1.2, 2.3, 3);
Refer to the Vector Manipulation section of this guide for all the different functions that can be used on vectors.
No Vars in Vector Initialisation
Variables cannot be used in vector initialisation. For example: vector v = vector(x, y, z); does not work. Here x, y, z are floating point values. Use vector v = xsVectorSet(x, y, z); instead.
Constants and Scope
1. Constant Variables
Syntax const int a = 10; or const float PI = 3.1415; will declare an immutable variable.
2. Scope of a Variable
The concept of local and global variables applies to XS.
## 3. Operations¶
### 3.1. Arithmetic Operations¶
Operation Syntax
Addition a+b
Subtraction a-b
Multiplication a*b
Division a/b
Modulo a%b
Refer to the Mathematical Operations section of this guide for useful mathematical functions.
Unary Negative
There is no unary negative operator in XS
```
void main() {
    int a = 10;

    // this does not work:
    int b = -a+20;

    // instead use:
    int b = 0-a+20;
}
```
### 3.2. Prefix and Postfix Operations¶
Operation Syntax
Postfix increment a++
Postfix decrement a--
Prefix operations are not supported by XS.
### 3.3 Shorthand Assignment Operations¶
Shorthand Assignment operations are not supported by XS.
### 3.4 Bitwise Operations¶
Bitwise operations are not supported by XS.
### 3.5. Relational Operations¶
Operation Syntax
Less Than a < b
Greater Than a > b
Less Than or Equal To a <= b
Greater Than or Equal To a >= b
Equal To a == b
Not Equal To a != b
Relational Operators on Strings
These relational operators also work on strings; for example, a < b tells you if a lexicographically precedes b.
### 3.6. Boolean Operations¶
Operation Syntax
AND a && b
OR a || b
Negation is not supported by XS.
DataType of Result of Operation
Due to a bug at the moment, the data type of the result of any operation is determined by the first operand. This means that 9*5.5 evaluates to 49 instead of 49.5. However, 5.5*9 will correctly evaluate to 49.5.
## 4. Flow Control Statements¶
The following flow control statements are supported by XS:
1. if else if construct:
Example Syntax:
```
void main() {
    int a = 10;
    float b = 20;
    int c = 30;
    float max = 0;
    if(a > b && a > c)
        max = a;
    else if(b > c && b > a)
        max = b;
    else
        max = c;
}
```
2. switch-case construct:
Example Syntax:
```
void main() {
    int a = 10;
    switch(a) {
        case 1 : {
            // do stuff
        }
        case 2 : {
            // do stuff
        }
        case 3 : {
            // do stuff
        }
        default : {
            // do stuff
        }
    }
}
```
3. while loop:
Example Syntax:
```
void main() {
    int a = 0;
    while(a < 10) {
        xsChatData("a = "+a);
        a++;
    }
}
```
4. for loop:
Syntax:
```
void main() {
    // this loops a from 0 to 10
    for(a = 0; < 10)
        xsChatData("a = "+a);

    // this loops a from 10 to 0
    for(a = 10; > 0)
        xsChatData("a = "+a);

    // unlike java, you do not need to specify an increment or decrement
    // the for loop takes care of that
    // step sizes unfortunately cannot be changed.
}
```
## 5. Functions¶
Syntax:
```
returnType functionName(dataType parameter1 = defaultValue1, dataType parameter2 = defaultValue2) {
    return (value); // value must be enclosed by parentheses
}
```
Example Syntax:
```
int max(int a = 0, int b = 2) {
    if(a > b)
        return (a);
    return (b);
    // the return value must always be inside parentheses.
}

void main() {
    xsChatData("max "+max(10, 20));
}
```
An XS Script can import other XS Scripts using the following syntax:
```
include "absolute/or/relative/path/to/file.xs";
```
## 6. Arrays¶
Refer to the Array Manipulation section of this guide on how to use arrays.
Standard syntax like int a[] = new int[10]; or a[2]; is not supported by XS.
## 7. Type Casting¶
int, float and bool data types can be implicitly casted into each other. All of them can be implicitly casted into strings by doing string a = "this would work "+5.6;. However, string a = 5.5; will not work, instead use: string a = ""+5.5;.
It is unknown if XS supports proper explicit type casting
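A short sketch of the casts described in this section (the variable names and values are my own illustration):

```
void main() {
    float f = 5;                       // int implicitly cast to float
    int i = 3.7;                       // float implicitly cast to int
    string a = "this would work "+5.6; // numbers cast to string via concatenation
    string b = ""+5.5;                 // workaround, since string b = 5.5; will not work
    xsChatData(a);
}
```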
## 8. Rules¶
A rule is a block of code that can be set to repeatedly execute at set intervals throughout the duration of the game. A rule is always initialised outside of a method. Its usage looks like:
Syntax:
```
rule ruleName           // This is the name of the rule. Follows same naming laws as variables.
    active/inactive     // this is the initial state of the rule; active means that it runs by default
                        // and inactive means that it won't run by default.
                        // this is similar to how triggers work when you enable/disable them.
    group groupName     // the group that this rule belongs to. Follows same naming laws as variables.
    minInterval         // the minimum time interval that must pass before the block is executed again
    maxInterval         // the maximum time interval that may pass before the block has to be executed again
    highFrequency       // Loop the rule 60 times every physical second (this is independent of ingame speed)
                        // Only one of "highFrequency" or "minInterval" and "maxInterval" is used. Both cannot be used together.
    runImmediately      // It is currently unknown what this option does
    priority            // rules are executed in order of their descending priority
{
    // code to execute
}
```
Example:
```
int a = 0;

// This rule prints the value of a every 2 seconds.
rule chatTheValueOfA
    active
    minInterval 2
    maxInterval 2
    group chatGroup
{
    xsChatData("a = "+a);
    a++;
}
```
There are a lot of built-in XS functions that can interact with rules. Check the Rules Section of this guide.
The variable `cActivationTime`, when used inside the block of a rule, gives the time of activation of the rule in seconds.
With that, you now know everything that is currently known to work with XS Scripts. Good luck and have fun creating awesome maps!
# Conditional Probability
## P(B|A) = P(A and B)/ P(A)
Calculating Conditional Probabilities
#### Objective
Here you will learn to calculate the probability of an event that depends on the occurrence of another, different event.
#### Concept
Suppose you wanted to calculate the probability of pulling the King of Hearts, then the Jack of Diamonds, and then any of the four Aces, from a standard deck of 52 cards, in that order, and without replacing any cards between pulls. Would the probability be significantly different than if you put the cards back after drawing each time?
In this lesson we will discuss conditional probabilities that are different for each trial. We’ll return to this question after the lesson.
#### Watch This
http://youtu.be/H02B3aMNKzE statisticsfun – How to Calculate Conditional Probability
#### Guidance
In a previous lesson, we discussed compound probabilities and reviewed some situations involving the probability of multiple occurrences of the same event in a row. A standard example would be the probability of throwing a fair coin three times and getting three heads. In this lesson, we will be introducing a slightly more complex situation, where the coin may or may not be fair.
A new concept we will be introducing in this lesson is the “given that” concept. The idea is that we sometimes need to calculate a probability with a specific condition, for example:
The probability of rolling a “2” on a standard die is $\frac{1}{6}$. What is the probability of rolling a “2”, given that I know already that I have rolled an even number? As described in the video above, this is a conditional probability, and we notate it this way: $P(2|even)$, which is read as “The probability of rolling a ‘2’ given that we roll an even number”.
The difference in calculations is:
$P(2)&=\frac{1}{6} \ (\text{one number on the 6-sided die is a 2}) \\P(2|even)&=\frac{1}{3} \ (\text{one number out of the three even numbers is a 2}) \\$
To calculate a “given that” type of problem, we use the conditional probability formula :
$P(A|B)=\frac{P(A \cap B)}{P (B)}$
This is read as: “The probability that $A$ will occur, given that $B$ will occur (or has occurred), is equal to the probability of the intersection of $A$ and $B$ divided by the probability of $B$ alone.”
We have practiced the use of the addition rule and the multiplication rule for calculating probabilities; here we will use them again, but this time we will need to combine them for some of the problems.
For review:
Multiplication Rule for independent events: $P(A \ then \ B)= P(A)\times P(B)$
Addition Rule for mutually exclusive events: $P(A \ or \ B)=P(A)+P(B)$
Addition Rule for mutually inclusive events: $P(A \ or \ B)=P(A)+P(B)-P(A \ and \ B)$
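These three rules can be verified by brute-force enumeration of a small sample space. Here is a minimal Python sketch (not part of the original lesson; the two-dice sample space is just an illustration):

```python
from fractions import Fraction

# Sample space: all 36 equally likely outcomes of rolling two fair dice.
space = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def prob(event):
    """Exact probability of an event (a predicate over outcomes)."""
    hits = sum(1 for o in space if event(o))
    return Fraction(hits, len(space))

# Multiplication rule for independent events: P(first is 6 and second is 6)
p_both = prob(lambda o: o[0] == 6 and o[1] == 6)
assert p_both == Fraction(1, 6) * Fraction(1, 6)

# Addition rule for mutually exclusive events: P(sum is 2 or sum is 12)
p_either = prob(lambda o: sum(o) in (2, 12))
assert p_either == Fraction(1, 36) + Fraction(1, 36)

# Addition rule for mutually inclusive events: P(first is 6 or second is 6)
p_inclusive = prob(lambda o: o[0] == 6 or o[1] == 6)
assert p_inclusive == Fraction(1, 6) + Fraction(1, 6) - Fraction(1, 36)

print(p_both, p_either, p_inclusive)  # 1/36 1/18 11/36
```

Counting outcomes directly and applying the rules give the same answers, which is exactly what the rules promise for independent, mutually exclusive, and mutually inclusive events.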
Example A
What is the probability that you have pulled the Jack of Hearts from a standard deck, given that you know you have pulled a face card?
Solution: Let’s solve this using the conditional probability formula first (A), then check by looking at the question another way (B):
A. The problem asks us to calculate the probability of a card being the Jack of Hearts, given that the card is a face card: $P(Jack \ of \ Hearts|face \ card)$ . Apply the conditional probability formula: $P(A|B)=\frac{P(A \cap B)}{P (B)}$ . Putting in the information from the problem gives us:
$P(Jack \ of \ Hearts|face \ card)&=\frac{P(Jack \ of \ Hearts \cap face \ card)}{P(face \ card)} \\P(Jack \ of \ Hearts|face \ card)&= \frac{\left(\frac{1}{52}\right)}{\frac{12}{52}} \\P(Jack \ of \ Hearts|face \ card)&=\frac{1}{52} \times \frac{52}{12}=\frac{1}{12} \\P(Jack \ of \ Hearts|face \ card)&=\frac{1}{12} \ or \ 8.33 \%$
B. The other way to view this is that we are looking for the probability of pulling the Jack of Hearts from the sample space including only face cards, which means we are looking for one specific card from a set including only 12 cards:
$P(Jack \ of \ Hearts)&=\frac{1 \ Jack \ of \ Hearts}{12 \ face \ cards}=\frac{1}{12} \\P(Jack \ of \ Hearts)&=\frac{1}{12} \ or \ 8.33 \%$
We calculate 8.33% both ways; looks like we got it!
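Both approaches can be double-checked by enumerating a deck in code. A small Python sketch (illustrative only; the card representation is my own, not from the lesson):

```python
from fractions import Fraction

# Build a standard 52-card deck as (rank, suit) pairs.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(r, s) for r in ranks for s in suits]

face_cards = [c for c in deck if c[0] in ('J', 'Q', 'K')]

# P(Jack of Hearts | face card) = P(Jack of Hearts and face card) / P(face card)
p_jh_and_face = Fraction(sum(1 for c in face_cards if c == ('J', 'hearts')), len(deck))
p_face = Fraction(len(face_cards), len(deck))
p_cond = p_jh_and_face / p_face
print(p_cond)  # 1/12
```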
Example B
What is the probability that you could roll a standard die and get a 6, then grab a deck of cards and pull the King of Clubs, keep it, and then pull the Jack of Hearts?
Solution: This one looks rather complex, but it can be seen as just three individual probabilities:
1. $P(roll \ 6)=\frac{1 \ side \ with \ a \ 6}{6 \ sides}=\frac{1}{6}$
2. $P(King)=\frac{1 \ King \ of \ Clubs}{52 \ cards}=\frac{1}{52}$
3. $P(Jack)=\frac{1 \ Jack \ of \ Hearts}{51 \ cards \ left \ after \ first \ pull}=\frac{1}{51}$
The overall probability can, and should, be calculated with the multiplication rule, since the 2nd and 3rd events are dependent:
$P(roll \ 6 \ then \ pull \ King \ then \ pull \ Jack)&=P(roll \ 6)\times P(King)\times P(Jack) \\P(roll \ 6 \ then \ pull \ King \ then \ pull \ Jack)&=\frac{1}{6}\times \frac{1}{52}\times \frac{1}{51}=\frac{1}{15912} \ OR \ .167\times .019\times .020\approx .000063$
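A quick exact check of Example B with Python's `fractions` module (illustrative, not part of the lesson):

```python
from fractions import Fraction

p_roll_6 = Fraction(1, 6)   # one favorable face out of six
p_king = Fraction(1, 52)    # King of Clubs from a full deck
p_jack = Fraction(1, 51)    # Jack of Hearts from the 51 remaining cards

p_all = p_roll_6 * p_king * p_jack
print(p_all)         # 1/15912
print(float(p_all))  # about 6.3e-05
```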
Example C
You reach into a bag containing 6 coins, 4 are ‘fair’ coins (they have an equal chance of heads or tails), and 2 are ‘unfair’ coins (they have only a 35% chance of tails). If you randomly grab a coin from the bag and flip it 3 times, what is the probability of getting 3 heads?
Solution: We actually have two different situations here:
1. We flip a ‘fair’ coin 3 times and get 3 heads
2. We flip an ‘unfair’ coin and get 3 heads
Since there are 4 fair coins, and 2 unfair coins, we can say the probability of: $P(choose \ fair)=\frac{4}{6} \ or \ \frac{2}{3}$ and $P(choose \ unfair)=\frac{2}{6}=\frac{1}{3}$ .
Note that $P(choose \ unfair)$ is the same thing as $P(choose \ fair)^\prime$ (note the prime symbol), which is the complement of $P(choose \ fair)$. In other words: the probability of choosing an unfair coin is 100% minus the probability of choosing a fair coin.
Let’s calculate the probabilities of flipping each 3 times using the multiplication rule:
• The fair coin has a .5 chance of heads each flip: $P(fair \ 3 \ heads)=.5\times .5\times .5=.125$
• The unfair coin has a .65 chance: $P(unfair \ 3 \ heads)=.65\times .65\times .65= .275$
So now we can put them together to find the overall probability (the union) by applying the addition rule:
$P(3 \ heads \ either \ coin)&=.6 \overline{6}\times P(fair \ 3 \ heads)+.3 \overline{3}\times P(unfair \ 3 \ heads) \\P(3 \ heads \ either \ coin)&=.6 \overline{6}\times .125+.3 \overline{3} \times .275 \\P(3 \ heads \ either \ coin)&=.083+.092=.175$
The probability that we can randomly grab a coin from the bag and flip three heads in a row with it is 17.5%
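The same result can be computed exactly with the law of total probability: weight each coin's 3-heads chance by the chance of drawing that coin. A short Python sketch using the lesson's numbers (illustrative):

```python
from fractions import Fraction

p_fair, p_unfair = Fraction(4, 6), Fraction(2, 6)                 # chance of grabbing each coin type
p_heads_fair, p_heads_unfair = Fraction(1, 2), Fraction(65, 100)  # per-flip heads chances

# Law of total probability: weight each coin's 3-heads chance by its draw chance.
p_3_heads = p_fair * p_heads_fair**3 + p_unfair * p_heads_unfair**3
print(float(p_3_heads))  # 0.174875, i.e. about 17.5%
```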
Concept Problem Revisited
Suppose you wanted to calculate the probability of pulling the King of Hearts, then the Jack of Diamonds, and then any of the four Aces, from a standard deck of 52 cards, in that order and without putting any back. Would the probability be significantly different than if you put the cards back after drawing each time?
The probability would be different, but perhaps less different than you might think, at least as a percentage. Let’s look at the two cases, with $P(A)$ representing the probability with each choice coming out of a full deck of 52 cards, and $P(B)$ representing the probability when the deck gets smaller each pull:
$P(A)&=\frac{1 \ King \ of \ Hearts}{52 \ cards}\times \frac{1 \ Jack \ of \ Diamonds}{52 \ cards}\times \frac{4 \ aces}{52 \ cards}=\frac{1}{52}\times \frac{1}{52}\times \frac{1}{13}=\frac{1}{35152} \\P(B)&=\frac{1 \ King \ of \ Hearts}{52 \ cards}\times\frac{1 \ Jack \ of \ Diamonds}{51 \ Cards}\times\frac{4 \ aces}{50 \ cards}=\frac{1}{52}\times\frac{1}{51}\times\frac{4}{50}=\frac{4}{132600}=\frac{1}{33150}$
The difference in probability is only about $1.7\times 10^{-6}$ in absolute terms, a relative difference of roughly 6%. Pretty small difference!
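Both cases can be computed exactly in a few lines of Python (illustrative, not part of the lesson):

```python
from fractions import Fraction

# With replacement: the deck is full (52 cards) on every pull.
p_with = Fraction(1, 52) * Fraction(1, 52) * Fraction(4, 52)

# Without replacement: the deck shrinks by one card each pull.
p_without = Fraction(1, 52) * Fraction(1, 51) * Fraction(4, 50)

print(p_with)                     # 1/35152
print(p_without)                  # 1/33150
print(float(p_without - p_with))  # tiny: about 1.7e-06
```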
#### Vocabulary
A conditional probability is a probability that depends on the outcome of another event.
The conditional probability formula is $P(A|B)=\frac{P (A \cap B)}{P(B)}$
#### Guided Practice
For questions 1 – 3: Suppose you have two coins, one is a normal, fair coin, and the other is an unfair coin that has a 75% chance of landing on heads. For each question, assume you reach into the bag, grab one of the two coins at random, and perform the experiment using that coin.
1. What would be the probability of the coin landing heads on your first flip?
2. What would be the probability of flipping tails four times in a row?
3. What would be the probability of flipping heads five times in a row?
4. Assume you are using a limited portion of a deck of cards that only includes face cards (no number cards). Assume also that each time you pull a card, you keep it until the end of the experiment. What would be the probability of pulling three kings in a row?
5. What would be the probability of rolling a 5, given that you know you rolled an odd number?
Solutions:
1. To calculate the probability of flipping heads, we need to calculate the union of 50% of the probability of flipping heads on each coin. (Why 50% of each probability? There are two coins, so the chance that you will pull either one is 50%)
$P(heads|either \ coin)&=50 \%\times P(heads|fair \ coin)+50 \%\times P(heads|unfair \ coin) \\P(heads|either \ coin)&=50 \%(50 \%)+50 \%(75 \%) \\P(heads|either \ coin)&=25 \%+37.5 \% \\P(heads|either \ coin)&=62.5 \%$
2. To calculate the probability of flipping four tails in a row, we calculate the union of 50% of the probability of flipping four tails in a row with each coin, much like in question 1.
$P(4 \ tails|either \ coin)&=50 \%\times P(4 \ tails|unfair \ coin)+50 \%\times P(4 \ tails|fair \ coin) \\P(4 \ tails|either \ coin)&=50 \%(25 \%\times 25 \%\times 25 \%\times 25 \%)+50 \%(50 \%\times 50 \% \times 50 \%\times 50 \%) \\P(4 \ tails|either \ coin)&=0.1953 \%+3.125 \% \\P(4 \ tails|either \ coin)&=3.32\%$
3. Just like question 2, only this time the probability will end up greater, since the unfair coin has a large chance of landing heads. The question asks for five heads in a row:
$P(5 \ heads|either \ coin)&=50 \% \times P(5 \ heads|unfair \ coin)+50 \% \times P(5 \ heads|fair \ coin) \\P(5 \ heads|either \ coin)&=.50\times (.75)^5+.50\times (.50)^5 \\P(5 \ heads|either \ coin)&=.1187+.0156 \\P(5 \ heads|either \ coin)&=.1343 \ or \ 13.43\%$
4. The key here is to note that you do not replace the card between pulls. That means that the probability changes with each trial. Let’s look at the situation for each trial individually:
• $(\text{T}1)$ : Since we are only dealing with face cards, our first trial will have 12 possible outcomes: four of each of the three face-card ranks. Four of the outcomes are favorable, since there are four kings.
• $(\text{T}2)$ Our second trial will have only 11 outcomes, since we are keeping the first card. There are only three favorable outcomes this time, since we “used up” a king if $(\text{T}1)$ was favorable.
• The third pull $(\text{T}3)$ only has 10 outcomes, since we will already have the other two cards. Two of the outcomes are favorable, since there would be only two kings left.
$P(three \ kings|face \ card)&=P(T1)\times P(T2)\times P(T3) \\P(three \ kings|face \ card)&=\frac{4 \ kings}{12 \ face \ cards}\times \frac{3 \ kings}{11 \ face \ cards}\times \frac{2 \ kings}{10 \ face \ cards} \\P(three \ kings|face \ card)&=.3 \overline{3}\times .27 \overline{27}\times .2 \\P(three \ kings|face \ card)&=0.0182 \ or \ 1.82\%$
5. This is a ‘given that’ problem, so we can use the conditional probability formula:
$P(roll \ 5 | roll \ odd)&=\frac{P(roll \ 5 \ \cap \ roll \ odd)}{P(roll \ odd)} \\P(roll \ 5 | roll \ odd)&=\frac{\frac{1}{6}}{\frac{1}{2}} \\P(roll \ 5 | roll \ odd)&=\frac{2}{6} \ or \ \frac{1}{3} \ or \ 33.33\%$
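The conditional probability formula can also be checked by enumerating the die's sample space. A small Python sketch (illustrative only):

```python
from fractions import Fraction

faces = range(1, 7)                        # a standard six-sided die
odd = {n for n in faces if n % 2 == 1}     # {1, 3, 5}

p_5_and_odd = Fraction(len({5} & odd), 6)  # P(roll 5 and roll odd) = 1/6
p_odd = Fraction(len(odd), 6)              # P(roll odd) = 1/2
p_5_given_odd = p_5_and_odd / p_odd
print(p_5_given_odd)  # 1/3
```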
#### Practice
1. What is the probability that you roll two standard dice, and get 4’s on both, given that you know that you have already rolled a 4 on one of them?
2. Assuming you are using a standard deck, what is the probability of drawing two cards in a row, without replacement, that are the same suit?
3. What is the probability that a single roll of two standard dice will result in a sum greater than 8, given that one of the dice is a 6?
4. Assuming a standard deck, what is the probability of drawing 3 queens in a row, given that the first card is a queen?
5. There are 130 students in your class, 50 have laptops, and 80 have tablets. 20 of those students have both a laptop and a tablet. What is the probability that a randomly chosen student has a tablet, given that she has a laptop?
6. Thirty percent of your friends like both Twilight and The Hobbit, and half of your friends like The Hobbit. What percentage of your friends who like The Hobbit also like Twilight?
7. Assume you pull and keep two candies from a jar containing sweet candies and sour candies. If the probability of selecting one sour candy and one sweet candy is 39%, and the probability of selecting a sweet candy first is 52%, what is the probability that you will pull a sour candy on your second pull, given that you pulled a sweet candy on your first pull?
8. The probability that a student has called in sick and that it is Monday is 12%. The probability that it is Monday and not another day of the school week is 20% (there are only five days in the school week). What is the probability that a student has called in sick, given that it is Monday?
9. A neighborhood wanted to improve its parks so it surveyed kids to find out whether or not they rode bikes or skateboards. Out of 2300 children in the neighborhood that ride something, 1800 rode bikes, and 500 rode skateboards, while 200 of those ride both a bike and skateboard. What is the probability that a student rides a skateboard, given that he or she rides a bike?
10. A movie theatre is curious about how many of its patrons buy food, how many buy a drink, and how many buy both. They track 300 people through the concessions stand one evening, out of the 300, 78 buy food only, 113 buy a drink only and the remainder buy both. What is the probability that a patron buys a drink if they have already bought food?
11. A sporting goods store wants to know if it would be wise to place sports socks right next to the athletic shoes. First, they kept the socks and shoes in separate areas of the store and tracked purchases for one day, Saturday, their busiest day. There were a total of 147 people who bought socks, shoes, or both in one given day. Of those, 45 bought only socks, 72 bought only shoes, and the remainder bought both. What is the probability that a person bought shoes, given that they purchased socks?
12. The following week they put socks right next to the shoes to see how it would affect Saturday sales. The results were as follows; a total of 163 people bought socks, shoes, or both. Of those 52 bought only socks, 76 bought only shoes and the remainder bought both. What is the probability that a person bought socks, if they purchased shoes?
13. A florist wanted to know how many roses and daisies to order for the upcoming Valentine's rush. She used last year's statistics to determine how many to buy. Last year she sold 52 arrangements with roses only, 15 arrangements with daisies only, and 36 arrangements with a mixture of roses and daisies. What is the probability that an arrangement has at least one daisy, given that it has at least one rose?
### Vocabulary Language: English
conditional probability
The probability of a particular dependent event, given the outcome of the event on which it depends.
conditional probability formula
The conditional probability formula is $P(A|B)=\frac{P(A \cap B)}{P(B)}$.
Dependent Events
In probability situations, dependent events are events where one outcome impacts the probability of the other.
Favorable Outcome
A favorable outcome is the outcome that you are looking for in an experiment.
Independent Events
Two events are independent if the occurrence of one event does not impact the probability of the other event.
Multiplication Rule
States that for two independent events (A and B), the probability of A and B is given by: $P(A \ and \ B) = P(A)\times P(B)$.
Mutually Exclusive Events
Mutually exclusive events have no common outcomes.
Sample Space
In a probability experiment, the sample space is the set of all the possible outcomes of the experiment.
I understand ADHD is a standard term, but I'm still a bit suspicious that it's not a useful one; Psychiatry doesn't seem like the most reliable field.
Are there good reasons for picking out the behaviors associated with ADHD and giving them a name?
• Behaviors associated with ADHD strongly cluster
• Analyzing questionnaires of attention and focus behaviors with factor analysis naturally produces an 'ADHD dimension' that explains a lot of variance (similar methodology to identifying Big 5 personality traits).
• ADHD diagnosis is a strong independent predictor of something interesting: income or grades or some contrived but interesting lab test (controlling for obvious things like IQ)
• Something else along these lines.
• It's certainly fine to be dubious of such things, and I don't necessarily disagree with you, but it would make your question stronger if you had some sources to back up your initial claims. – Chuck Sherrington Nov 16 '12 at 4:50
• Except for my claim that ADHD is a standard term, everything else is based on vague impressions, so I think lack of sources gives my question the right amount of credibility. – John Salvatier Nov 16 '12 at 16:14
• As a parent of a child with ADHD, I would say that in principle, informing the school of the diagnosis should cause certain ways of supporting the student to kick in. For us, that has not been the case, but my impression is that our school district is especially obtuse. As my son's therapists says, "Accommodating for ADHD is not rocket science." – aparente001 Nov 7 '16 at 3:29
• ADD is the term commonly used to describe symptoms of inattention, distractibility, and poor working memory. ADHD is the term used to describe additional symptoms of hyperactivity and impulsivity. Both are included in the medical diagnosis of ADHD. 1980-1989, Attention Deficit Disorder (ADD) 1997, Attention Deficit and Hyperactivity Disorder ADHD – Dr. Elisha Rose Bayer Neal Jun 4 '18 at 9:11
Like all psychiatric disorders, ADD and ADHD are diagnosed using a set of criteria listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM.
The latest version is the DSM-IV-TR. The DSM-V is due out in 2013 and may change these criteria.
Diagnosis is expected to be done by a licensed professional who is able to assess these criteria.
INATTENTION
(need 6 of 9)
- often fails to give close attention to details or makes careless mistakes in schoolwork, work or other activities
- often has difficulty sustaining attention in tasks or play activities
- often does not seem to listen when spoken to directly
- often does not follow through on instructions and fails to finish schoolwork, chores, or duties in the workplace (not due to oppositional behavior or failure to understand instructions)
- often has difficulty organizing tasks and activities
- often avoids, dislikes, or is reluctant to engage in tasks or activities that require sustained mental effort (such as schoolwork or homework)
- often loses things necessary for tasks or activities (e.g., toys, school assignments, pencils, books, or tools)
- often easily distracted by extraneous stimuli
- often forgetful in daily activities
HYPERACTIVITY-IMPULSIVITY
(need 6 of 9)
- often fidgets with hands or feet or squirms in seat
- often leaves seat in classroom or in other situations in which remaining seated is expected
- often runs about or climbs excessively in situations in which it is inappropriate (in adolescents or adults, may be limited to subjective feelings of restlessness)
- often has difficulty playing or engaging in leisure activities quietly
- is often "on the go" or often acts as if "driven by a motor"
- often talks excessively
- often blurts out answers before questions have been completed
- often has difficulty awaiting turn
- often interrupts or intrudes on others (e.g., butts into conversations or games)
REQUIREMENTS
- Present at least 6 months, maladaptive and inconsistent with development level
- Some symptoms that caused impairment were present before age 7
- Some impairment from the symptoms is present in two or more settings (e.g., at school {or work} and at home)
- There must be clear evidence of clinically significant impairment in social, academic or occupational functioning
• None of this answers the original question: Do these traits tend to cluster strongly? Is there reason to believe that they share an underlying etiology? etc. – octern Nov 16 '12 at 21:15
• The original question is "How well defined is ADD/ADHD?" The bit about traits clustering together is listed as a potential answer. There's also nothing in there about etiology. – Jeff Nov 16 '12 at 21:19
• You're right, my mistake. I was thinking about the other question in the post, "Are there good reasons for picking out the behaviors associated with ADHD and giving them a name?" – octern Nov 17 '12 at 1:49
• you're right though, that this isn't the best answer. it doesn't address how criteria for the DSM are chosen. my understanding is that this processes is subjective and controversial, but maybe someone can provide a more detailed explanation. – Jeff Nov 17 '12 at 2:06
Why the symptoms were picked out and given a name
Professionals used to believe ADHD was something children grew out of, but not anymore. ADHD has always been strongly related to school performance. If a child is not focused on school, and seems unwilling or unable to concentrate, everyone tends to think of ADHD as the cause.
This makes me think that ADHD was "discovered" as an explanation to why some kids are not so good at school. If you look at the symptoms for ADHD, you'll notice that each of them is completely normal behavior, but every one of them reduces performance at school.
Do the symptoms for ADHD cluster?
Yes and no. They cluster into more than one cluster. Studying personality psychology, I have noticed a growing number of completely normal personality traits that remind me a lot of ADHD.
A stereotypical boy with ADHD (restless, fidgety, talkative, forgetful, unorganized) would easily remind you of the MBTI personality type ESxP. (ESFP: http://www.16personalities.com/esfp-personality ESTP: http://www.16personalities.com/estp-personality )
A stereotypical girl with ADHD (silent, daydreaming, inattentive) would remind you of the MBTI personality trait INxP. (INTP: http://www.16personalities.com/intp-personality INFP: http://www.16personalities.com/infp-personality ) Girl's ADHD: http://www.addvance.com/help/women/daydreamer.html
Also note how the "boy's ADHD" and "girl's ADHD" are pretty different. The only common traits are poor attention and organization at school.
The typical ADHD behavior also matches well with several parts of the Big 5:
Substantial effects emerged that were replicated across samples. First, the ADHD symptom cluster of inattention-disorganization was substantially related to low Conscientiousness and, to a lesser extent, Neuroticism. Second, ADHD symptom clusters of hyperactivity-impulsivity and oppositional childhood and adult behaviors were associated with low Agreeableness.
Although previous research on personality and ADHD has focused primarily on extraversion and neuroticism, the present study found that agreeableness and conscientiousness were stronger predictors. This pattern of results is consistent with the clinical literature on adults with ADHD.
Conscientiousness is what measures your ability to stay focused (and more), and neuroticism measures your patience (and more). And those are exactly what primarily defines the J/P factor in MBTI - and ADHD. You're a P? Congratulations, you may have ADHD. You're a J? Nope, no ADHD there.
Additionally, several of the required symptoms for ADHD do not have any fixed requirement for frequency - they are relative. When each of these symptoms is completely normal on its own, the whole diagnosis becomes relative: are you too unfocused to function at school, or do you manage to get your homework done despite poor attention? Nobody will diagnose you with ADHD as long as you are able to cope and function with school and everyday life. Part of the problem with the ADHD diagnosis is that there are no exact measurements that can tell whether or not an individual is "suffering".
Having a hard time in everyday life is also one of the requirements to have a diagnosis, but this is more relative than all the other symptoms. Considering how modern societies are increasingly demanding for the individual, and Conscientiousness is -the- strongest predictor for academic success, there is bound to be a growing gap between the most and least successful individuals. The least successful individuals are those who struggle to stay focused, and some of these seek psychological help for it. "You struggle at school because you have trouble staying focused? Here, take this test. Yes, I can see that you have poor attention. You have ADHD"
• Not all children with ADHD perform poorly in school. – aparente001 Nov 7 '16 at 3:26
• True, but a lot of people (including professionals) still believe so. – Berit Larsen Nov 7 '16 at 10:28
I understand ADHD is a standard term, but I'm still a bit suspicious that it's not a useful one; Psychiatry doesn't seem like the most reliable field
Okay, I know this is going to sound weird, but I think ADHD is another name for hypofrontality-mediated unfulfilled potential
Although it is characterised by a dysregulation of executive function, if you look at the diagnostic criteria for ADHD, it's all about a functional impairment. Diagnosis is based on the presence of symptoms, but regardless of the combination of symptoms with which someone presents, it's the following criterion in the DSM-V that's necessary:
There is clear evidence that the symptoms interfere with, or reduce the quality of, social, academic, or occupational functioning.
In other words, one can be performing extremely well academically, have a high IQ, and still have ADHD. This is not to say that individuals with ADHD aren't typically poor-performing, but there's nothing precluding someone from being a high performer and still fulfilling the diagnostic criteria for ADHD:
Is attention deficit hyperactivity disorder a valid diagnosis in the presence of high IQ? Results from the MGH Longitudinal Family Studies of ADHD (Antshel et al., 2007).
Background: The aim of this study was to assess the validity of diagnosing attention deficit/hyperactivity disorder (ADHD) in high IQ children and to further characterize the clinical features associated with their ADHD.
Methods: We operationalized giftedness/high IQ as having a full scale IQ ≥120. We identified 92 children with a high IQ who did not have ADHD and 49 children with a high IQ that met diagnostic criteria for ADHD who had participated in the Massachusetts General Hospital Longitudinal Family Studies of ADHD.
Results: Of our participants with ADHD and a high IQ, the majority (n = 35) met criteria for the Combined subtype. Relative to control participants, children with ADHD and high IQ had a higher prevalence rate of familial ADHD in first‐degree relatives, repeated grades more often, had a poorer performance on the WISC‐III Block Design, had more comorbid psychopathology, and had more functional impairments across a number of domains.
Conclusions: Children with a high IQ and ADHD showed a pattern of familiality as well as cognitive, psychiatric and behavioral features consistent with the diagnosis of ADHD in children with average IQ. These data suggest that the diagnosis of ADHD is valid among high IQ children.
I note that you won't find any literature on unfulfilled potential. Instead, if you parse the above diagnostic criterion carefully, you'll see it: what precisely does the DSM-5 criterion of "reduced quality of academic functioning" mean but some inability to fulfill one's potential with regards to schooling?
Regarding hypofrontality, this has to do with the view of biological psychiatry that ADHD is mediated by dopaminergic hypofrontality. There is an attempt to map the symptoms to a particular neural substrate.
Fisher & Beckley (1998) is a little old, but characterises the general view:
That is not the end of the whole story. If the brain messengers dopamine and norepinephrine have an impact on the frontal area of the brain, and the hypofrontality creates the higher-level-thinking disorder of ADD or ADHD, then wouldn't it also affect the other thinking areas of the brain that use these same neurotransmitters? Hypofrontality defines the lack of dopamine and norepinephrine primarily in the frontal area of the brain. However, the effect can also be seen in the parietal area of the brain. This defines the basis for the anatomical differences between the two subtypes of ADD: ADHD and ADD without hyperactivity (ADD). It has been found through both clinical research and studies of medication that ADD without hyperactivity is more closely related to the parietal area of the brain and the neurotransmitter norepinephrine, while ADHD is found to be more related to the frontal area or processes and the neurotransmitter dopamine.
A more recent paper, which refers to the role of D1 dopamine receptors in hypofrontality. However, it focuses more on the role on noradrenaline via a2-adrenoceptors (both dopamine and noradrenaline are catecholamines):
Neurobiology of Executive Functions: Catecholamine Influences on Prefrontal Cortical Functions (Arnsten & Li, 2005)
The prefrontal cortex guides behaviors, thoughts, and feelings using representational knowledge, i.e., working memory. These fundamental cognitive abilities subserve the so-called executive functions: the ability to inhibit inappropriate behaviors and thoughts, regulate our attention, monitor our actions, and plan and organize for the future. Neuropsychological and imaging studies indicate that these prefrontal cortex functions are weaker in patients with attention-deficit/hyperactivity disorder and contribute substantially to attention-deficit/hyperactivity disorder symptomology. Research in animals indicates that the prefrontal cortex is very sensitive to its neurochemical environment and that small changes in catecholamine modulation of prefrontal cortex cells can have profound effects on the ability of the prefrontal cortex to guide behavior. Optimal levels of norepinephrine acting at postsynaptic α-2A-adrenoceptors and dopamine acting at D1 receptors are essential to prefrontal cortex function. Blockade of norepinephrine α-2-adrenoceptors in prefrontal cortex markedly impairs prefrontal cortex function and mimics most of the symptoms of attention-deficit/hyperactivity disorder, including impulsivity and locomotor hyperactivity. Conversely, stimulation of α-2-adrenoceptors in prefrontal cortex strengthens prefrontal cortex regulation of behavior and reduces distractibility. Most effective treatments for attention-deficit/hyperactivity disorder facilitate catecholamine transmission and likely have their therapeutic actions by optimizing catecholamine actions in prefrontal cortex.
My simplistic understanding is that when there is hypofrontality, psychostimulants will improve the signal:noise ratio in the prefrontal cortex, thereby addressing the dysregulation in executive function seen in ADHD.
ADHD is the label given to someone who is impulsive, unmotivated and bad at school and whose symptoms improve when they're given psychostimulants. This isn't a statement about whether or not ADHD does or doesn't exist, it's a statement about how it's currently diagnosed and treated, hence my use of the term hypofrontality-mediated unfulfilled potential.
References
Antshel, K. M., Faraone, S. V., Stallone, K., Nave, A., Kaufmann, F. A., Doyle, A., ... & Biederman, J. (2007). Is attention deficit hyperactivity disorder a valid diagnosis in the presence of high IQ? Results from the MGH Longitudinal Family Studies of ADHD. Journal of Child Psychology and Psychiatry, 48(7), 687-694.
DOI: 10.1111/j.1469-7610.2007.01735.x
Arnsten, A. F., & Li, B. M. (2005). Neurobiology of executive functions: catecholamine influences on prefrontal cortical functions. Biological psychiatry, 57(11), 1377-1384.
DOI: 10.1016/j.biopsych.2004.08.019
Fisher, B. C., & Beckley, R. A. (1998). Attention deficit disorder: Practical coping methods. Boca Raton, FL: CRC Press.
• Do you have any references to back your claims? – Chris Rogers Jun 12 '18 at 8:12
• yes, give me a moment. is there anything in particular you want references for? – faustus Jun 12 '18 at 8:15
• For a start, how about a scientific paper or two on dopaminergic hypofrontality, and the definition and criteria for hypofrontality-mediated unfulfilled potential – Chris Rogers Jun 12 '18 at 8:34
• I have found doi.org/10.1176/ajp.156.6.891 which talks about Hypofrontality in ADHD but not being in the pure neuroscience field, I wonder what would make the hypofrontality dopaminergic? Are there any papers on that too? – Chris Rogers Jun 12 '18 at 8:48
• @ChrisRogers this is the thing: it's dopaminergic because the drugs used to treat ADHD e.g. amphetamine, methylphenidate act on dopamine. there's a circularity in this, and that's the whole problem with ADHD. but yes, i'll find you some references to support my claim. – faustus Jun 12 '18 at 8:50
|
{}
|
Unable to see how convergence in $L^p$ norm is being used to derive an expression?
I'm reading notes on the closability of differential operators where the author says: let $\Omega\subset \mathbb{R}^n$, and then defines the operator $A:C_0^\infty(\Omega)\to L^p(\Omega)$ by $$Au := \sum_{|\alpha|\le m}a_\alpha \partial^\alpha u,$$ and its formal adjoint $B:C_0^\infty(\Omega)\to L^q(\Omega)$ by $$Bv := \sum_{|\alpha|\le m}(-1)^{|\alpha|}\partial^\alpha (a_\alpha v).$$ He then says that integration by parts shows that $$\int_\Omega v(Au) = \int_\Omega (Bv) u, \quad \quad \text{for all} \ u,v \in C_0^\infty(\Omega).$$ Now let $u_k\in C_0^\infty(\Omega)$ be a sequence of smooth functions with compact support and let $v\in L^p(\Omega)$ be such that $$(*) \quad \lim_{k\to\infty} ||u_k||_{L^p} = 0, \quad \lim_{k\to\infty} ||v - Au_k||_{L^p} = 0.$$ Then, for every test function $\phi \in C_0^\infty(\Omega)$, we have $$(**) \quad \int_\Omega \phi v = \lim_{k\to \infty} \int_\Omega \phi (Au_k) = \lim_{k\to \infty} \int_\Omega (B\phi) u_k = 0.$$ Since $C_0^\infty(\Omega)$ is dense in $L^q(\Omega)$, this implies that $\int_\Omega \phi v = 0$ for all $\phi \in L^q(\Omega)$.
1. I don't see how the fact that $u_k$ converges to zero, and $v$ converges to $Au_k$ in the $L^p$ norm allows us to derive $(**)$ from $(*)$? The integrals in $(**)$ do not even use the $L^p$ norm, so how is statement $(**)$ valid?
2. The author says that since $C_0^\infty(\Omega)$ is dense in $L^q(\Omega)$, this implies that $\int_\Omega \phi v = 0$ for all $\phi \in L^q(\Omega)$. How can I show this fact?
Strong convergence in a Banach space implies weak convergence. It means that if ${f}_{k} \rightarrow f$ in ${L}^{p} \left({\Omega}\right)$ and if $g \in {L}^{q} \left({\Omega}\right)$, then $\left\langle g , {f}_{k}\right\rangle \rightarrow \left\langle g , f\right\rangle$ where $\left\langle \ \right\rangle$ is the duality bracket. The reason is that
$$\left|\left\langle g , f\right\rangle -\left\langle g , {f}_{k}\right\rangle \right| = \left|\left\langle g , f-{f}_{k}\right\rangle \right| \leqslant {\left\|g\right\|}_{L ^q(\Omega)} {\left\|f-{f}_{k}\right\|}_{L ^p(\Omega)}$$
It turns out that the duality bracket between ${L}^{p} \left({\Omega}\right)$ and ${L}^{q} \left({\Omega}\right)$ is given by the integral
$$\left\langle g , f\right\rangle = \int_{{\Omega}}^{}g f d x$$
The (**) formula can be rewritten
$$\left\langle {\phi} , {\nu}\right\rangle = {\lim }_{k \rightarrow \infty } \left\langle {\phi} , A {u}_{k}\right\rangle = {\lim }_{k \rightarrow \infty } \left\langle B {\phi} , {u}_{k}\right\rangle = 0$$
Now the duality bracket is continuous, that is to say ${\phi} \rightarrow \left\langle {\phi} , {\nu}\right\rangle$ is continuous in ${L}^{q} \left({\Omega}\right)$. If it is zero on the dense subspace ${\mathscr{C}}_{0}^{\infty } {(\Omega)} \subset {L}^{q} \left({\Omega}\right)$, then it must be zero on all of ${L}^{q} \left({\Omega}\right)$.
• Ok, that clears some things up thanks! Just to be sure I have the last part correct, is this a correct proof that if $\langle \phi, \nu \rangle = 0$ for $\phi$ in the dense subspace $C_0^\infty(\Omega)$, then it must be zero on all of $L^q(\Omega)$: Since $C_0^\infty(\Omega)$ is dense in $L^q(\Omega)$ we can take a sequence $\phi_k \in C_0^\infty(\Omega)$ s.t. $\phi_k \to \phi \in L^q(\Omega)$. Then we have $$\langle \phi, \nu \rangle = \langle \lim_k \phi_k, \nu \rangle = \lim_k \langle \phi_k, \nu \rangle = 0.$$ Is that right or have I missed some details? – ManUtdBloke Aug 30 '17 at 5:10
• You didn't miss anything, it is correct. Note that the result is very weak: in the sense of distributions, it is true that if $u_n \to 0$ in ${\scr D}^\prime(\Omega)$, then $A u_n\to 0$ in ${\scr D}^\prime(\Omega)$. Now $u_n \to 0$ in $L^p(\Omega)$ implies $u_n \to 0$ in ${\scr D}^\prime(\Omega)$, and similarly $A u_n \to \nu$ in $L^p(\Omega)$ implies $A u_n \to \nu$ in ${\scr D}^\prime(\Omega)$, so that $\nu = 0$. – Gribouillis Aug 30 '17 at 6:42
1. By Hölder's inequality: $$\int_\Omega |\phi( v-Au_k)|=\|\phi( v-Au_k)\|_{L^1} \leq \| \phi \|_{L^{q}}\|v-Au_k \|_{L^{p}} \underset{k \to \infty}\to 0.$$
2. Let $\psi$ be any function of $L^q(\Omega)$ and $(\psi_k)$ a sequence in $C_0^\infty(\Omega)$ s.t. $\|\psi-\psi_k\|_{L^q}\underset{k \to \infty}\to 0$. $$\int_{\Omega}\psi v\overset{\delta}=\lim_{k\to \infty} \int_{\Omega}\psi_kv=0.$$ The equality $\delta$ comes from the same reasoning as before: $$\int_{\Omega}|(\psi-\psi_k) v|=\|(\psi-\psi_k) v\|_{L^1} \leq\|\psi-\psi_k\|_{L^q} \|v\|_{L^p} \underset{k \to \infty}\to 0.$$
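As a quick numerical sanity check of the Hölder estimate used in both answers, one can verify the inequality on a grid approximation with random data (purely illustrative; only numpy is assumed):

```python
import numpy as np

# Grid sanity check of |<g, f>| <= ||g||_q * ||f||_p for conjugate exponents
# 1/p + 1/q = 1, approximating the integrals on [0, 1] by Riemann sums.
rng = np.random.default_rng(1)
p, q = 3.0, 1.5                      # conjugate pair: 1/3 + 2/3 = 1
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
f = rng.standard_normal(x.size)
g = rng.standard_normal(x.size)

bracket = np.sum(g * f) * dx                          # <g, f> = integral of g f
norm_p = (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)   # ||f||_p
norm_q = (np.sum(np.abs(g) ** q) * dx) ** (1.0 / q)   # ||g||_q
assert abs(bracket) <= norm_q * norm_p
print(abs(bracket), norm_q * norm_p)
```

Discrete Hölder holds exactly for the weighted sums, so the assertion passes regardless of the random seed.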
|
{}
|
# Contact resistance
The term contact resistance refers to the contribution to the total resistance of a system that can be attributed to the contacting interfaces of electrical leads and connections, as opposed to the intrinsic resistance, which is an inherent property independent of the measurement method. This effect is often described by the term electrical contact resistance (ECR). The idea of a potential drop on the injection electrode was introduced by William Shockley [1] to explain the difference between experimental results and the model of the gradual channel approximation. In addition to ECR, the terms "interface resistance", "transitional resistance", or simply "correction term" are also used. The term "parasitic resistance" is used as a more general term, where it is usually still assumed that the contact resistance makes the major contribution.
Sketch of the contact resistance estimation by the transmission line method.
## Experimental characterization
Here we need to distinguish the contact resistance evaluation in two-electrode systems (e.g. diodes) and three-electrode systems (e.g. transistors).
For two electrode systems the specific contact resistivity is experimentally defined as the slope of the I-V curve at V=0:
${\displaystyle r_{c}=\left\{{\frac {\partial V}{\partial J}}\right\}_{V=0}}$
where J is the current density = current/area. The units of specific contact resistivity are typically therefore in ${\displaystyle \Omega \cdot {\text{cm}}^{2}}$ where ${\displaystyle \Omega }$ stands for ohms. When the current is a linear function of the voltage, the device is said to have ohmic contacts.
The resistance of contacts can be crudely estimated by comparing the results of a four-terminal measurement to a simple two-lead measurement made with an ohmmeter. In a two-lead experiment, the measurement current causes a potential drop across both the test leads and the contacts, so that the resistance of these elements is inseparable from the resistance of the actual device, with which they are in series. In a four-point probe measurement, one pair of leads is used to inject the measurement current while a second pair of leads, in parallel with the first, is used to measure the potential drop across the device. In the four-probe case, there is no potential drop across the voltage measurement leads, so the contact resistance drop is not included. The difference between resistance derived from the two-lead and four-lead methods is a reasonably accurate measurement of contact resistance, assuming that the lead resistance is much smaller. Specific contact resistance can be obtained by multiplying by the contact area. Note that the contact resistance may vary with temperature.
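As a toy illustration of this subtraction, with all numbers invented for the example:

```python
# Crude contact-resistance estimate from a two-lead vs. four-lead measurement.
R_two_lead = 10.45    # ohms: device + leads + contacts
R_four_lead = 10.02   # ohms: device only (ideally)
R_contact_plus_leads = R_two_lead - R_four_lead

# If the lead resistance is known to be much smaller, this difference
# approximates the total contact resistance; multiplying by the contact
# area gives the specific contact resistance in ohm * cm^2.
contact_area_cm2 = 1e-4
r_c = R_contact_plus_leads * contact_area_cm2
print(R_contact_plus_leads, r_c)
```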
Inductive and capacitive methods could be used in principle to measure an intrinsic impedance without the complication of contact resistance. In practice, direct current methods are more typically used to determine resistance.
The three electrode systems such as transistors require more complicated methods for the contact resistance approximation. The most common approach is the transmission line model (TLM). Here, the total device resistance ${\displaystyle R_{tot}}$ is plotted as a function of the channel length:
${\displaystyle R_{tot}=R_{c}+R_{ch}=R_{c}+{\frac {L}{WC\mu (V_{gs}-V_{ds})}}}$
where ${\displaystyle R_{c}}$ and ${\displaystyle R_{ch}}$ are the contact and channel resistances, respectively, ${\displaystyle L/W}$ is the channel length/width, ${\displaystyle C}$ is the gate insulator capacitance (per unit area), ${\displaystyle \mu }$ is the carrier mobility, and ${\displaystyle V_{gs}}$ and ${\displaystyle V_{ds}}$ are the gate-source and drain-source voltages. The linear extrapolation of the total resistance to zero channel length therefore provides the contact resistance. The slope of the linear function is related to the channel transconductance and can be used to estimate the ”contact resistance-free” carrier mobility. The approximations used here (linear potential drop across the channel region, constant contact resistance, ...) sometimes lead to a channel-length-dependent contact resistance.[2]
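The extrapolation step can be sketched numerically; the data below are synthetic (contact resistance 50 Ω, slope 2 Ω/µm), invented purely to show the fit:

```python
import numpy as np

# Transmission line method (TLM) sketch: total resistance measured for several
# channel lengths; extrapolating the linear fit to L = 0 yields the contact
# resistance.
L_um = np.array([5.0, 10.0, 20.0, 40.0])     # channel lengths, micrometres
R_tot = 50.0 + 2.0 * L_um                    # ideal, noise-free synthetic data

slope, intercept = np.polyfit(L_um, R_tot, 1)  # linear fit R_tot(L)
R_contact = intercept                          # value at L = 0
print(R_contact)
```

With real (noisy) data the intercept carries the fit uncertainty, which is why several channel lengths are measured.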
Besides the TLM, the gated four-probe measurement [3] and a modified time-of-flight (TOF) method [4] have been proposed. Direct methods, able to measure the potential drop on the injection electrode directly, are Kelvin probe force microscopy (KFM) [5] and electric-field-induced second harmonic generation.[6]
## Mechanisms
For given material properties, parameters that govern the magnitude of electrical contact resistance (ECR) and its variation at an interface relate primarily to surface structure and applied load (Contact mechanics).[7] Surfaces of metallic contacts generally exhibit an external layer of oxide material and adsorbed water molecules, which lead to capacitor-type junctions at weakly contacting asperities and resistor type contacts at strongly contacting asperities. Thus the coupling of surface chemistry, contact mechanics and charge transport mechanisms needs to be accounted for in the mechanistic evaluation of ECR phenomena.[8]
## Quantum limit
When a conductor has spatial dimensions close to ${\displaystyle 2\pi /k_{F}}$, where ${\displaystyle k_{F}}$ is the Fermi wavevector of the conducting material, Ohm's law no longer holds. These small devices are called quantum point contacts. Their conductance must be an integer multiple of the value ${\displaystyle 2e^{2}/h}$, where ${\displaystyle e}$ is the electronic charge and ${\displaystyle h}$ is Planck's constant. Quantum point contacts behave more like waveguides than the classical wires of everyday life and may be described by the Landauer scattering formalism.[9] Point-contact tunneling is an important technique for characterizing superconductors.
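For reference, the conductance quantum can be evaluated directly from the defining constants (a minimal sketch; the CODATA values are hard-coded):

```python
# Conductance quantum 2 e^2 / h, the unit in which a quantum point contact's
# conductance is quantized.
e = 1.602176634e-19    # elementary charge, C
h = 6.62607015e-34     # Planck constant, J s

G0 = 2 * e**2 / h      # siemens
print(G0)              # about 7.75e-5 S, i.e. a resistance of roughly 12.9 kOhm
```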
## Other forms of contact resistance
Measurements of thermal conductivity are also subject to contact resistance, with particular significance in heat transport through granular media. Similarly, a drop in hydrostatic pressure (analogous to electrical voltage) occurs when fluid flow transitions from one channel to another.
## Significance
Bad contacts are the cause of failure or poor performance in a wide variety of electrical devices. For example, corroded jumper cable clamps can frustrate attempts to start a vehicle that has a low battery. Dirty or corroded contacts on a fuse or its holder can give the false impression that the fuse is blown. A sufficiently high contact resistance can cause substantial heating in a high current device. Unpredictable or noisy contacts are a major cause of the failure of electrical equipment.
## References
1. ^ W. Shockley, “Research and investigation of inverse epitaxial UHF power transistors,” Report No. A1-TOR-64-207, September 1964.
2. ^ M. Weis; J. Lin; D. Taguchi; T. Manaka; M. Iwamoto (2010). "Insight into the contact resistance problem by direct probing of the potential drop in organic field-effect transistors". Appl. Phys. Lett. 97: 263304. Bibcode:2010ApPhL..97z3304W. doi:10.1063/1.3533020.
3. ^ P.V. Pesavento; R.J. Chesterfield; C.R. Newman; C.D. Frisbie (2004). "Gated four-probe measurements on pentacene thin-film transistors: Contact resistance as a function of gate voltage and temperature". J. Appl. Phys. 96: 7312. Bibcode:2004JAP....96.7312P. doi:10.1063/1.1806533.
4. ^ M. Weis; J. Lin; D. Taguchi; T. Manaka; M. Iwamoto (2009). "Analysis of Transient Currents in Organic Field Effect Transistor: The Time-of-Flight Method". J. Phys. Chem. C. 113: 18459. doi:10.1021/jp908381b.
5. ^ L. Bürgi; H. Sirringhaus; R. H. Friend (2002). "Noncontact potentiometry of polymer field-effect transistors". Appl. Phys. Lett. 80: 2913. Bibcode:2002ApPhL..80.2913B. doi:10.1063/1.1470702.
6. ^ M. Nakao; T. Manaka; M. Weis; E. Lim; M. Iwamoto (2009). "Probing carrier injection into pentacene field effect transistor by time-resolved microscopic optical second harmonic generation measurement". J. Appl. Phys. 106: 014511. Bibcode:2009JAP...106a4511N. doi:10.1063/1.3168434.
7. ^ Zhai, C.; et al. (2016). "Interfacial electro-mechanical behaviour at rough surfaces". Extreme Mechanics Letters. 9: 422–429. doi:10.1016/j.eml.2016.03.021.
8. ^ Zhai, C.; Hanaor, D.; Proust, G.; Gan, Y. (2015). "Stress-Dependent Electrical Contact Resistance at Fractal Rough Surfaces" (PDF). Journal of Engineering Mechanics: B4015001. doi:10.1061/(ASCE)EM.1943-7889.0000967.
9. ^ Landauer, Rolf (August 1976). "Spatial carrier density modulation effects in metallic conductivity". Physical Review B. 14 (4): 1474–1479. Bibcode:1976PhRvB..14.1474L. doi:10.1103/PhysRevB.14.1474.
|
{}
|
Poisson algebra
In mathematics, a Poisson algebra is an associative algebra together with a Lie bracket that also satisfies Leibniz's law; that is, the bracket is also a derivation of the associative product. Poisson algebras appear naturally in Hamiltonian mechanics and are also central in the study of quantum groups. Manifolds with a Poisson algebra structure are known as Poisson manifolds, of which the symplectic manifolds and the Poisson–Lie groups are special cases. The algebra is named in honour of Siméon Denis Poisson.
Definition
A Poisson algebra is a vector space over a field K equipped with two bilinear products, ⋅ and {, }, having the following properties:
• The product ⋅ forms an associative K-algebra.
• The product {, }, called the Poisson bracket, forms a Lie algebra, so it is anti-symmetric and obeys the Jacobi identity.
• The Poisson bracket acts as a derivation of the associative product ⋅, so that for any three elements x, y and z in the algebra, one has {x, yz} = {x, y} ⋅ z + y ⋅ {x, z}.
The last property often allows a variety of different formulations of the algebra to be given, as noted in the examples below.
Examples
Poisson algebras occur in various settings.
Symplectic manifolds
The space of real-valued smooth functions over a symplectic manifold forms a Poisson algebra. On a symplectic manifold, every real-valued function H on the manifold induces a vector field XH, the Hamiltonian vector field. Then, given any two smooth functions F and G over the symplectic manifold, the Poisson bracket may be defined as:
$\{F,G\}=dG(X_F) = X_F(G)\,$.
This definition is consistent in part because the Poisson bracket acts as a derivation. Equivalently, one may define the bracket {,} as
$X_{\{F,G\}}=[X_F,X_G]\,$
where [,] is the Lie bracket of vector fields. When the symplectic manifold is R2n with the standard symplectic structure, then the Poisson bracket takes on the well-known form
$\{F,G\}=\sum_{i=1}^n \frac{\partial F}{\partial q_i}\frac{\partial G}{\partial p_i}-\frac{\partial F}{\partial p_i}\frac{\partial G}{\partial q_i}.$
Similar considerations apply for Poisson manifolds, which generalize symplectic manifolds by allowing the symplectic bivector to vanish on some (or, trivially, all) of the manifold.
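On R2 with coordinates (q, p), the bracket above can be checked symbolically. This is a minimal sketch assuming only that sympy is available; the test functions are arbitrary choices:

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson_bracket(F, G):
    """Canonical Poisson bracket on R^2 with coordinates (q, p)."""
    return sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)

# Canonical relation {q, p} = 1
assert sp.simplify(poisson_bracket(q, p)) == 1

# Leibniz rule: {F, G*H} = {F, G}*H + G*{F, H}
F, G, H = q**2 * p, sp.sin(q), p**3
lhs = poisson_bracket(F, G * H)
rhs = poisson_bracket(F, G) * H + G * poisson_bracket(F, H)
assert sp.simplify(lhs - rhs) == 0
```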
Associative algebras
If A is an associative algebra, then the commutator [x,y] ≡ xy − yx turns it into a Poisson algebra.
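The derivation (Leibniz) property of the commutator bracket can be spot-checked numerically on random matrices, a sketch assuming only numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

bracket = lambda a, b: a @ b - b @ a   # commutator [a, b] = ab - ba

# Derivation (Leibniz) property: [x, yz] = [x, y] z + y [x, z]
lhs = bracket(x, y @ z)
rhs = bracket(x, y) @ z + y @ bracket(x, z)
assert np.allclose(lhs, rhs)
```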
Vertex operator algebras
For a vertex operator algebra (V, Y, ω, 1), the space V/C2(V) is a Poisson algebra with {a, b} = a₀b and a ⋅ b = a₋₁b. For certain vertex operator algebras, these Poisson algebras are finite-dimensional.
|
{}
|
# Contents
## Idea
For $C$ an ordinary category and $c \in C$ an object of $C$, the ordinary over category $C\downarrow c$ satisfies the universal property that for any other category $C'$ there is a natural equivalence of categories
$Hom(C',C\downarrow c) \simeq Hom_{c}(C' \star [0], C) \,,$
where
• $C' \star [0]$ denotes the category $C'$ with a freely adjoined terminal object $0$;
• $Hom_{c}(C' \star [0], C)$ denotes the category of pairs $(F,\gamma)$, where $F: C' \star [0]\to C$ is a functor and $\gamma:F(0)\to c$ is an isomorphism in $C$.
The object $c$ can be seen as a functor $c: [0]\to C$. From this point of view, $\gamma$ is a natural transformation from $F\circ\iota$ to $c$, where $\iota: [0]\to C' \star [0]$ is the inclusion functor.
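For a very small concrete illustration of the classical slice construction (a toy example of ours, not from the references: a divisibility poset regarded as a category, where a morphism $x\to y$ exists iff $x \mid y$):

```python
# Objects of the slice C ↓ c for a poset-as-category are exactly the
# morphisms x -> c, i.e. the elements x with x <= c in the order.
divides = lambda a, b: b % a == 0
objects = [1, 2, 3, 4, 6, 12]          # divisibility poset on divisors of 12
c = 6

# morphisms x -> c exist iff x | c; these are the objects of the slice C ↓ c
slice_objects = [x for x in objects if divides(x, c)]
print(slice_objects)   # -> [1, 2, 3, 6]
```

A morphism of the slice from $x\to c$ to $y\to c$ is then a morphism $x\to y$ commuting over $c$, which here just means $x \mid y$.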
Remaining in the classical setting, even more is true: $C'\star [0]$ is a particular case of the join operation between categories, which admits a terse description in terms of the cograph of a profunctor.
More precisely, let $C,D$ be two categories; their join $C\star D$ is defined to be the category whose set of objects is the disjoint union of the sets of objects of $C$ and $D$, and where we add exactly one morphism from any $c\in C$ to any $d\in D$. This is precisely the cograph $C\uplus_\omega D$ of the unique profunctor $\omega\colon C^\text{op}\times D\to {\rm Set}$ sending any pair $(c,d)$ to the singleton.
It is extremely easy, with this definition, to show that given a functor $p\colon K\to C$, the category $C_{/p}$ of cones over $p$ (and the category $C_{p/}$ of co-cones) are uniquely characterized by the following universal properties:
$\text{Fun}(D, C_{/p})\cong \text{Fun}^{p}(D\star K, C)$
$\text{Fun}(D, C_{p/})\cong \text{Fun}^p(K\star D,C)$
where $\text{Fun}^p(D\star K, C)$ denotes the category whose objects are functors which coincide with $p$ when restricted to the full subcategory $K\subset K\star D$ (see HTT Prop. 1.2.9.2). This characterizes $C_{/p}$ as a representative for the functor $D\mapsto \text{Fun}^p(K\star D,C)$.
The idea of the definition of over category in the context of quasi-categories is to mimic this universal property. This relies crucially on generalizing the construction $C' \star [0]$ to the context of quasi-categories, in terms of the join of quasi-categories.
Fosco Loregian Warning: this paragraph is highly conjectural and for the moment I’m not able to offer any proof for these statements. In particular I would like to show
• The equivalence between the quasicategorial and the simplicially enriched definition;
• Does the “classical” definition of join between simplicial sets $K\star S$ coincide with the coherent nerve of the simplicial category $C[K]\star C[S]$? This would automatically entail that the coherent nerve is $\star$-monoidal, as claimed in HTT 1.2.8.2.
A word about the quasicategorical join operation: Joyal’s Prop. 3.1 suggests interpreting the join operation of simplicial sets as the convolution of presheaves. Nevertheless, it seems that the “pro-functorial” definition has something to say even in the $(\infty,1)$-categorical case: instead of the quasicategorical model, we want to consider the simplicially enriched model for $(\infty,1)$-categories. In this setting, the join of two $(\infty,1)$-categories $C,D\in\text{sSet-Cat}$ can easily be interpreted as the cograph $C\uplus_\omega D$ of the terminal $\text{sSet}$-profunctor sending any two objects $(C,D)$ to the terminal simplicial set.
Therefore, let $F : K \to C$ be a morphism of quasi-categories; the over-quasi-category $C_{/F}$ is the quasi-category characterized by the property that for any other quasi-category $S$ there is a natural equivalence of quasi-categories
$Hom(S, C_{/F}) \simeq Hom_{F}( S \star K, C ) \,,$
where
• $S \star K$ is the join of quasi-categories of $S$ with $K$;
• $Hom_{F}( S \star K, C )$ is the quasi-category of pairs $(f,\gamma)$, where $f : S \star K \to C$ is a morphism of quasi-categories and $\gamma\colon f\circ \iota\to F$ is an isomorphism and $\iota:K\to S \star K$ is the natural inclusion.
Here one sees the advantage of having worked with the full quasi-categories of morphisms rather than with Hom-Sets. Indeed, if $[0]$ denotes the terminal quasi-category, then any quasi-category $C$ is naturally equivalent to $Hom([0],C)$.
Therefore the above description of $C_{/F}$ is reduced to the following
## Definition
Let $F : K \to C$ be a morphism of quasi-categories. The over-quasi-category $C_{/F}$ is the quasi-category $Hom_{F}( [0] \star K, C )$.
## References
See proposition 1.2.9.2, p. 44 and the text leading to and including proposition 1.2.9.3 of
• J. Lurie, Higher topos theory (pdf)
See also chapter 3 (_Join and Slices_) in
!redirects over-category in quasi-categories?
!redirects over quasicategory?
!redirects over-quasi-category?
!redirects over-quasicategory?
!redirects overquasicategory?
!redirects slice quasi-category?
!redirects slice quasicategory?
Revised on October 27, 2013 at 12:31:24 by Fosco Loregian
|
{}
|
# Path homotopy in two-holed torus
I'm reading Lee's Introduction to topological manifolds and on page 291 he writes about the two-holed torus $X$:
"In terms of our standard generators for $\pi_1(X)$ this loop is path-homotopic to either $\alpha_1 \beta_1 \alpha_1^{-1} \beta_1^{-1}$ or $\beta_2 \alpha_2 \beta_2^{-1} \alpha_2^{-1}$ so it is not null homotopic..."
where he is talking about the path around the middle part, for example the red path in the third panel of this picture: http://inperc.com/wiki/images/b/ba/Double_torus_construction.jpg
I'm confused about this because according to my understanding this path is null homotopic because I can move it to one side of the two-holed torus and then shrink it to a point as there are no holes inside it.
Can someone explain to me why this is false? Many thanks for your help!
-
I understand that it's not easy to explain this in writing, but what do you mean by "move it to one side of the two-holed torus"? Because if you want simply move the loop to the left, the left hole will block you at some point. – Martin Sleziak Jul 25 '11 at 12:28
Imagine miniature people on the two-hole torus standing side-by-side, all holding hands, forming the given red circle. They can all march however they want, but they must always hold hands and gravity keeps their feet glued to the surface. Do you expect them to just supernaturally hover across over the donut holes? Magical antigravity powers would sure make homotopy trivial. :) – anon Jul 25 '11 at 12:39
@anon: Ha :-O! Best explanation ever! Now it seems blatantly obvious, thank you! – Matt N. Jul 25 '11 at 12:46
|
{}
|
# User defined functions with vector constraints
I’ve been thrown in the deep end with a problem that looks to be possible with current Julia tooling, but being new to the language and this particular subset of the mathematics I’m having a hard time.
In a MWE format: I have a set of equations
\begin{align} \frac{\rm{d}T}{\rm{d}t} &= f_T(A(t), T(t), D(t), O(t)) \\ \frac{\rm{d}M}{\rm{d}t} &= f_M(A(t), M(t), D(t)) \\ \frac{\rm{d}S}{\rm{d}t} &= f_S(E(t), M(t), D(t)) \\ \frac{\rm{d}D}{\rm{d}t} &= f_D(A(t), D(t)) \\ S &= A + T +M\\ \end{align}
Where E(t) and O(t) are known. Solving this set for T, M, S, D, A can be achieved with the DAEProblem method of DifferentialEquations.jl, since this is clearly a differential-algebraic equation (DAE) form, for a given time span (this is in years, so tspan = (1750, 2100) is a good enough example).
This part is implemented, working & fine.
Additionally, I have a nonlinear optimisation problem which I have running through JuMP.jl using the Ipopt.jl solver. There are 21 variables (each of which is a vector representing time) and 18 constraints (excluding simple start / end conditions).
I won't dump the whole lot of this here, but two examples of the variables and constraints I have are:
@variable(model, Λ[1:N]);
@NLconstraint(model, [i=1:N], Λ[i] == YGROSS[i] * θ₁[i] * μ[i]^θ₂);
@variable(model, K[1:N] >= 100.0);
@constraint(model, [i=1:N-1], K[i+1] <= (1-δk) * K[i] + I[i]);
N = 100 and the rest of the values are defined elsewhere.
This model is also functioning well, and has been confirmed to be giving me valid output.
Now, I wish to merge the two systems and am hitting a roadblock in my understanding of some of the JuMP documentation.
The DAE system uses pre-calculated values of E(t) and O(t), which I’d ultimately like to generate from my NLOpt system. The ways I’ve considered to do the merge so far:
• Using callbacks from JuMP, sending values to DAESolve each step, then calling back to JuMP. Doesn’t make too much sense because the callback system isn’t implemented for Ipopt. Perhaps on a lower level? But that seems a little awkward in general.
• Discretize the DAE system and incorporate it into the NLOpt structure
• Refactor and make the NLOpt problem continuous
The third option may be on the cards in the future, but for the moment I’d be happy for a solution. So point 2 here makes most sense.
I can easily add the algebraic constraint to the JuMP model to capture A.
@constraint(model, [i=1:N], A[i] == S[i] - T[i] - M[i]);
For the rest I tried an explicit Euler approach which is frankly too noisy. I know I should be using Sundials.jl or NLsolve.jl (in the future) if I want to use an implicit Euler method, but then I have trouble following how I could arrange my DAE so it fits into JuMP syntax whilst invoking Sundials for the root finding.
Which brings me to my question: is the ForwardDiff.jl AD something I can leverage here?
For example:
ΔS(E, M, D) = ...
JuMP.register(model, :ΔS, 3, ΔS, autodiff=true);
From what I read in the docs concerning nonlinear user-defined functions, I can register scalar functions, not vector ones. That should be fine, but what I don’t get is what value the result should be assigned to if I indeed have vectors.
@NLconstraint(model, [i=1:N], S[i] == ΔS(E[i], M[i], D[i]));
or
@NLconstraint(model, [i=1:N-1], S[i+1] == ΔS(E[i], M[i], D[i]));
or
@NLconstraint(model, [i=1:N-1], S[i+1] == S[i] + step*(ΔS(E[i], M[i], D[i])));
or?
Realistically, I could be asking the wrong question here too, so any nudges in the right direction would be greatly appreciated.
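For concreteness, here is a toy sketch of the implicit-Euler-as-algebraic-constraint idea behind the last form above, written in Python with SciPy rather than Julia/JuMP, and with a made-up scalar right-hand side:

```python
import numpy as np
from scipy.optimize import fsolve

# Implicit Euler turns each step into an algebraic constraint on the unknown
# next state: S[i+1] - S[i] - h * dS(S[i+1]) == 0. In a discretized NLP the
# analogous constraint couples S[i] and S[i+1], matching the shape
# S[i+1] == S[i] + step * ΔS(...). The dynamics here are a toy stand-in.
dS = lambda s: -2.0 * s            # toy dynamics dS/dt = -2 S
h, n = 0.1, 20
S = np.empty(n + 1)
S[0] = 1.0
for i in range(n):
    # solve the per-step constraint for S[i+1]
    S[i + 1] = fsolve(lambda s: s - S[i] - h * dS(s), S[i])[0]

print(S[-1])   # decays like exp(-2 t)
```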
|
{}
|
# Tag Info
3
I believe that you are not converting your filter properly from the spatial domain to the Fourier domain. The process has three steps: Pad the spatial filter to the size of the padded image Multiply this new matrix by $(-1)^{x+y}$ Compute the DFT In code it would be written like this: spatialToFourier[spatial_, image_] := Module[{padded, centered}, ...
0
Thanks @rewi. I just write down the Wolfram Alpha version, (based on his answer). So I can remember. FourierTrigSeries[Piecewise[{{-Pi, -2 Pi < x <= 0}, {Pi, 0 < x <= 2 Pi}}], x, 3, FourierParameters -> {1, 1/2}] where FourierParameters' second parameter is $\omega = \frac{2\pi}{T}$
1
f[x_] = Piecewise[{{-Pi, -2 Pi < x <= 0}, {Pi, 0 < x <= 2 Pi}}]; T = 4 \[Pi]; fr = FourierTrigSeries[f[x], x, 3, FourierParameters -> {1, 2 \[Pi]/T}] (* 4 Sin[x/2] + 4/3 Sin[(3 x)/2] *)
2
One approach is to locate the black components and then measure some properties of them. Here we locate them using MorphologicalComponents, find the centroids using ComponentMeasurements and then calculate the distance between the centroids using Nearest. img = Import["http://i.stack.imgur.com/hALsH.jpg"]; imgBW = Binarize@ColorConvert[img, "Grayscale"]; ...
2
This is not an answer more of an extended comment... My mission is to extract information on the typical distance between the black patches in the image I have attached here. Do we have to use Fourier transform for this? For example we can get the required estimate with these commands: img = Import["http://i.stack.imgur.com/hALsH.jpg"]; Row[{"Image ...
2
It looks like random blobs, and that's what the FFT suggests... img = Import["http://i.stack.imgur.com/hALsH.jpg"]; imgBW = ImageData@ColorConvert[img, "Grayscale"]; imgZ = imgBW - Mean@Mean[imgBW]; xf = Abs[Fourier[imgZ, FourierParameters -> {1, -1}]]; {d1, d2} = Ceiling[Dimensions[xf]/2]; xCentered = RotateLeft[xf, {d1, d2}]; ArrayPlot[xCentered] ...
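For comparison, the same blob-distance idea can be sketched outside Mathematica; this is a Python/SciPy analogue on a small synthetic image (not the image linked above):

```python
import numpy as np
from scipy import ndimage

# Label dark blobs, take centroids, and measure the typical
# nearest-neighbour distance between them.
img = np.zeros((100, 100), dtype=bool)
img[10:14, 10:14] = True    # three square "blobs"
img[10:14, 50:54] = True
img[60:64, 30:34] = True

labels, n = ndimage.label(img)
centroids = np.array(ndimage.center_of_mass(img, labels, range(1, n + 1)))

# Pairwise centroid distances; for each blob, distance to its nearest neighbour
d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nearest = d.min(axis=1)
print(n, nearest.mean())
```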
Top 50 recent answers are included
|
{}
|
# Reed Solomon Codes Burst Error Correction
## Contents
Finding the Symbol Error Locations: this involves solving simultaneous equations with t unknowns. For example, a decoder could associate with each symbol an additional value corresponding to the channel demodulator's confidence in the correctness of the symbol. For contradiction's sake, assume that $x^{i}a(x)$ and $x^{j}b(x)$ are in the same coset. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure.
Notice that such a description is not unique, because $D'=(11001,6)$ describes the same burst error. Let C be a Hamming code with minimum distance d. Thus, the separation between consecutive inputs is $nd$ symbols. Let the length of the codeword be $\leqslant n$; thus, each symbol in the input codeword will … This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound, every code satisfies $\delta + R \leq 1$.
## Reed Solomon Code Example
Symbol Errors: one symbol error occurs when 1 bit in a symbol is wrong or when all the bits in a symbol are wrong. Suppose that we want to design an $(n,k)$ code that can detect all burst errors of length $\leqslant \ell$. $Y_{k} X_{k}^{j+\nu} \Lambda(X_{k}^{-1}) = 0$. The PGZ decoder does not determine ν directly but rather searches for it by trying successive values.
Philips of the Netherlands and Sony Corporation of Japan (agreement signed in 1979). All valid codewords are exactly divisible by the generator polynomial. The generator polynomial $g(a)$ is the minimal polynomial with roots $\alpha, \alpha^{2}, \ldots, \alpha^{n-k}$. The sample interpolation rate is one every 10 hours at a bit error rate (BER) of $10^{-4}$, and 1000 samples per minute at BER $= 10^{-3}$.
By the upper bound on burst error detection ($\ell \leqslant n-k=r$), we know that a cyclic code cannot detect all bursts of length $>r$. However, cyclic codes can indeed detect most bursts of length $>r$. **Example: a 5-burst error-correcting Fire code.** With the theory presented in the section above, let us consider the construction of a 5-burst error-correcting Fire code.
Through error correction, the words would be decoded as [010][001][011]. To get a code that is overall systematic, we construct the message polynomial $p(x)$ by interpreting the message as the sequence of its coefficients. On a CD, an 8-bit subcode is added to each frame; the subcode bits are designated P, Q, R, S, T, U, V, W.
The base case $k=p$ follows. This makes RS codes particularly suitable for correcting burst errors.[5] By far the most common application of RS codes is in compact discs. The encoder takes a block of 168 data bytes, (conceptually) adds 55 zero bytes, creates a (255,223) codeword, and transmits only the 168 data bytes and 32 parity bytes. During each iteration, the decoder calculates a discrepancy based on the current instance of $\Lambda(x)$ with an assumed number of errors e: $\Delta = S_{i} + \Lambda_{1}S_{i-1} + \cdots + \Lambda_{e}S_{i-e}$.
The Delsarte–Goethals–Seidel[8] theorem illustrates an example of an application of shortened Reed–Solomon codes. By the theorem above, for an error-correction capacity of up to $t$, the maximum burst length allowed is $Mt$. The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar symbology.
The trick is that if a burst of length $h$ occurs in the transmitted word, then each row will contain approximately $h/\lambda$ consecutive errors. Errors in up to 16 bytes anywhere in the codeword can be automatically corrected. This means that if the channel symbols have been inverted somewhere along the line, the decoders will still operate. It is up to individual designers of CD systems to decide on decoding methods and to optimize their product's performance.
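The row/column trick described above can be sketched as follows; the function names and the interleaving degree parameter are illustrative, not from any particular library:

```python
# Block interleaving sketch: write codewords row-wise, transmit column-wise.
# A channel burst of length h then touches each codeword (row) in at most
# ceil(h / lam) positions, which a per-row t-error-correcting code tolerates
# whenever h <= lam * t.
def interleave(codewords):
    """Read the lam x n array of codewords column by column."""
    lam, n = len(codewords), len(codewords[0])
    return [codewords[i % lam][i // lam] for i in range(lam * n)]

def deinterleave(stream, lam):
    """Invert interleave(): rebuild the lam codewords from the stream."""
    n = len(stream) // lam
    return [[stream[j * lam + i] for j in range(n)] for i in range(lam)]
```

For example, with three codewords and a burst of four consecutive stream symbols, no single codeword receives more than two errors.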
In this paper we have deliberately avoided discussing the theory and implementation of Reed–Solomon codes in detail. Viterbi decoders tend to produce errors in short bursts. Reed–Solomon error correction is also used in parchive files, which are commonly posted alongside multimedia files on USENET.
This makes a Reed–Solomon code very good at correcting large clusters of errors.
For binary linear codes, they belong to the same coset. Thus, $g(x) = (x^{9}+1)(1+x^{2}+x^{5}) = 1+x^{2}+x^{5}+x^{9}+x^{11}+x^{14}$. Furthermore, there are two polynomials that agree in $k-1$ points but are not equal, and thus the distance of the Reed–Solomon code is exactly $d=n-k+1$. Advances in technology over the past 20 years have led to even more applications for CD technology, including DVDs.
Upon receiving a codeword $c_{1}$ hit by a burst $b_{1}$, we could interpret it as a different codeword hit by a different burst. The roots of the error locator polynomial can be found by exhaustive search. **Finite (Galois) field arithmetic.** Reed–Solomon codes are based on a specialist area of mathematics known as Galois fields, or finite fields.
The outer code easily corrects this, since it can handle up to 4 such erasures per block. The error can then be corrected through its syndrome. There are $n-k$ parity symbols of $s$ bits each.
The major difficulty in implementing Reed–Solomon codes in software is that general-purpose processors do not support Galois field arithmetic operations. From those, $e(x)$ can be calculated and subtracted from $r(x)$ to get the original message $s(x)$.
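To illustrate the point about missing hardware support, here is a minimal sketch of multiplication in GF(2^8) with the reduction polynomial $x^8+x^4+x^3+x^2+1$ (0x11d), a field commonly used by Reed–Solomon codecs; production implementations typically replace this bit-by-bit loop with log/antilog tables:

```python
# Carry-less ("Russian peasant") multiplication in GF(2^8): addition is XOR,
# and products are reduced modulo the field polynomial 0x11d. A general-purpose
# ALU has no native instruction for this, so software must emulate it.
def gf256_mul(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a        # add the current shifted copy of a (XOR)
        b >>= 1
        a <<= 1
        if a & 0x100:          # degree reached 8: reduce by the field polynomial
            a ^= 0x11d
    return result
```

For instance, with the generator $\alpha = 2$, `gf256_mul(128, 2)` gives `0x1d`, reflecting $\alpha^{8} = x^4+x^3+x^2+1$ in this field.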
# Solving a particular differential equation
I have a differential equation:
$y''-\frac{3}{2(1+x)(2-x)}y'+\frac{3y}{4(1+x)(2-x)}-\frac{Kf(x)(1+x)^2y}{2x(1+x)(2-x)}=0$.
Here $K>0$ is a fixed constant and $f(x)$ is some (as yet) unknown function of $x$, which we are free to choose.
I want to choose $f(x)$ suitably so that the above equation has a solution $y(x)$ with $y(0)=1$, where $y(x)$ is a rational function of $x$, i.e. a quotient of two polynomials. The solution should not be free of $K$.
I tried setting $f(x)$ so that the coefficient of $K$ becomes $1$ but the differential equation turned out to be so complicated that I could not solve it.
Can anyone offer any help or suggestions please?
Let $R(x)$ be any rational function such that $R(0)=1$ and define $$f(x)=\Bigl(R''-\frac{3}{2(1+x)(2-x)}R'+\frac{3R}{4(1+x)(2-x)}\Bigr)\frac{2x(1+x)(2-x)}{(1+x)^2R}.$$ Then $R$ is a solution of your equation with $K=1$.
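A quick numerical sanity check of this construction (a sketch for the special case $K=1$, with the concrete choice $R(x)=1/(1-x)$, which satisfies $R(0)=1$):

```python
# Verify numerically that choosing f(x) from R as above makes y = R solve the
# ODE  y'' - (3/(2(1+x)(2-x))) y' + 3y/(4(1+x)(2-x))
#      - K f(x)(1+x)^2 y / (2x(1+x)(2-x)) = 0   for K = 1.
def R(x):   return 1.0 / (1.0 - x)        # hand-picked rational R with R(0)=1
def Rp(x):  return 1.0 / (1.0 - x) ** 2   # R'
def Rpp(x): return 2.0 / (1.0 - x) ** 3   # R''

def f(x):
    lhs = (Rpp(x) - 3.0 / (2 * (1 + x) * (2 - x)) * Rp(x)
           + 3.0 * R(x) / (4 * (1 + x) * (2 - x)))
    return lhs * 2 * x * (1 + x) * (2 - x) / ((1 + x) ** 2 * R(x))

def residual(x, K=1.0):
    return (Rpp(x) - 3.0 / (2 * (1 + x) * (2 - x)) * Rp(x)
            + 3.0 * R(x) / (4 * (1 + x) * (2 - x))
            - K * f(x) * (1 + x) ** 2 * R(x) / (2 * x * (1 + x) * (2 - x)))
```

By construction the residual vanishes identically (up to roundoff) at any $x$ away from the singular points $x \in \{0, \pm 1, 2\}$, but only for $K=1$, which is why this does not yet answer the question of keeping $K$ in the solution.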
Thanks for your reply. However I am not looking for a solution which is free of $K$, or is true for a fixed value of $K$. – Shahab Oct 11 '12 at 11:34
Quite simply, I want a solution which is a rational function of $x$ but the term $K$ is present in that solution (eg $(1+Kx)/(1-x)$.) – Shahab Oct 11 '12 at 12:47
## Introduction
One important indicator of functional lateralization in the brain is handedness. Handedness describes the preferential use of one of the hands, with high accuracy and motor speed, when performing skilled actions (i.e., culturally influenced activities such as handwriting) and unskilled, spontaneous actions (e.g., picking up items)1. Both right- and left-handers represent the normal range of human diversity, but they differ in the processing of various types of information and in the functional lateralization of the brain2.
The role of handedness as a behavioural reflection of functional lateralization has already been investigated in neurocognitive domains such as language processing3 and spatial processing4. Language processing is left-lateralized in about 95% of right-handers, while left-handers either also show a left-lateralized (75%) or a right-lateralized or bilateral representation of language5,6,7. On the other hand, spatial processing is generally right-lateralized in right-handers and bilaterally or left-lateralized in left-handers4,8. Thus, functional lateralization is well established in domains such as language and spatial processing, but has rarely been studied in the domain of arithmetic processing.
### The Neural Representation of Arithmetic Processing
Brain activation associated with arithmetic processing relies on a fronto-parietal brain network9. Thereby, the core of number magnitude processing is considered to be the bilateral intraparietal sulcus (IPS), located in the parietal cortex between the superior parietal lobule (SPL) and the inferior parietal lobule (IPL) (for dominant models and their extension, see10,11,12; for important data, see also13,14). However, while the bilateral IPS is associated with arithmetic processing15, some studies reveal a more pronounced left-hemispheric activation16,17,18,19,20,21. Brain stimulation studies confirmed the functional role of the bilateral IPS in arithmetic processing and more basic number processing22,23,24, with a larger involvement of the left IPS22,25,26 compared to the right IPS27.
This more pronounced left- than right-hemispheric IPS activation especially holds for symbolic arithmetic in Arabic notation compared to non-symbolic arithmetic28, which is supported by the basic differentiation of symbolic and non-symbolic number processing across modalities29. Differential involvement of the left and right IPS was further found for approximate compared to exact calculation, with approximate calculation associated with stronger bilateral IPS activation than exact calculation28,30,31.
These findings raise the question of whether there might be a functional lateralization of arithmetic processing within the IPS. This means that the bilateral or predominantly left-lateralized IPS involvement in arithmetic processing might be true for right-handed individuals, who form about 90% of the human population32, but not for the human population as such. In particular, this might not hold for left-handed individuals, who are usually discarded from neuroscientific studies as a nuisance factor and might show an opposite lateralization. Therefore, the current study addresses the issue of whether left-handers differ from right-handers in the lateralization of IPS activation during symbolic approximate calculation.
### The Role of Handedness in Arithmetic Processing
Behaviourally, there is no clear evidence for general performance differences between marked right-handers and left-handers in math33, although both right- and left-handers usually perform better than, for example, mixed-handers34,35,36. However, somewhat contrary to the above findings, professional mathematicians show on average a lower degree of handedness37,38. Besides, left- and right-handers were shown to differ in some basic numerical effects, for example in the markedness effect39,40 and some spatial-numerical associations41 (but see39,40), but not in the distance effect or compatibility effect42. In sum, there is no consistent empirical support for differences between right-handers and left-handers in math performance, because some findings also seem to depend on age, sex and task34.
Neurally, effects of handedness have not been systematically examined in the field of numerical cognition so far. Some neuroimaging studies, however, provide hints towards a difference in neural activation between left- and right-handers, although only very small samples of left-handers were analyzed (n = 8, n = 7, and n = 3, respectively)16,43,44. According to this weak evidence, right-handers show a stronger left-lateralization in the prefrontal cortex, while left-handers might show a bilateral or right-lateralized activation pattern16,43,44. In the parietal cortex, lateralization aspects are even less clear, because a left-lateralization for arithmetic processing might hold for both right- and left-handers43 or it might just be stronger in right-handers44. Moreover, lateralization seems to decrease with increasing arithmetic complexity43 (see also45). Because of the essential role of the IPS in arithmetic processing, the relation of handedness to the functional lateralization of arithmetic processing needs to be resolved in this brain region.
### Objective
Functional lateralization of the brain was previously shown for other cognitive processes; however, it has not been systematically investigated for arithmetic processing. Although arithmetic processing is considered to be bilaterally represented in the IPS, there are some indications of different functional roles of the two hemispheres. Therefore, the aim of the current study is to test functional lateralization of arithmetic processing in the IPS. As a well-established indicator of functional lateralization, we focus here on the relation between handedness and arithmetic processing, since handedness was found to be associated with some numerical effects39,40. Hence, we will investigate whether left-handers differ from right-handers in activation within the IPS during a symbolic approximate calculation task. As there are generally no clear performance differences between left- and right-handers at the behavioural level, the current study addresses the issue of whether functional lateralization can be detected at the neural level. In right-handers, symbolic arithmetic was found to be associated with activation of the left IPS and approximate calculation with bilateral IPS activation. The question of the current exploratory study is whether neuroimaging findings from right-handers can be generalized to human arithmetic processing in general or, as we expect, whether left-handers show less pronounced activation of the left IPS but instead more activation of the right IPS compared to right-handers because of differences in functional lateralization.
## Methods
### Participants
Seventy adults, all native German speakers with no history of neurological or mental disorders, participated in the study. Participants were excluded from analysis if they had an overall error rate exceeding 25% (n = 3) or because of missing neural data (n = 2) or noisy data (n = 10). Handedness was evaluated by the Edinburgh Handedness Inventory, where the laterality index of handedness (LIhandedness) was calculated according to the formula $$(R-L)\div(R+L)\times 100$$46. Individuals are considered marked left-handers in the range of −100 to −40, marked right-handers in the range of +40 to +100, and mixed-handers in the range of −40 to +40. The resulting sample included 23 marked left-handers, 23 marked right-handers, and 9 mixed-handers (cf. Table 1). The handedness groups did not differ significantly in age, gender, or final math grade at school (cf. Table 1). Each participant gave written informed consent and received monetary compensation or student credits. The study was approved by the Ethics Committee of the Medical Faculty of the University of Tuebingen and conducted according to the ethical guidelines and principles of the international Declaration of Helsinki.
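The laterality index and group cut-offs described above can be sketched as follows (the function names are ours, and the assignment of scores falling exactly on ±40 is our assumption, as the stated ranges overlap at the boundaries):

```python
# Edinburgh laterality index: R and L are the summed right- and left-hand
# preference ratings from the inventory items.
def laterality_index(right, left):
    return (right - left) / (right + left) * 100

# Group cut-offs as described in the text: -100..-40 marked left,
# -40..+40 mixed, +40..+100 marked right.
def handedness_group(li):
    if li <= -40:
        return "marked left-hander"
    if li >= 40:
        return "marked right-hander"
    return "mixed-hander"
```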
### Material
The approximate calculation task consisted of two-digit addition and subtraction problems. In a choice reaction paradigm each arithmetic problem was presented simultaneously with two solution probes47, whereby none of them represented the correct result of the arithmetic problem (cf. Fig. 1). This kind of task triggers approximate rather than exact calculation and relies on number magnitude processing30,47. The task was to choose the solution probe that was closer to the correct answer (target) by pressing the left or right Ctrl key on a standard computer keyboard with the left or right index finger, respectively. This procedure assured that motor activity should not differ between left- and right-handers. The target had a distance of ±1–3 to the correct result within the same decade and the distractor distance was either small (±4–8) or large (±14–18), whereby the direction of the distance from target and distractor to the correct result was the same.
For each combination of addition/subtraction with small/large distractor distance a stimulus set consisting of 25 arithmetic problems was created. Full decades (e.g., 30), ties within and between all numbers of an arithmetic problem (e.g., 55 + 16 or 25 + 45) were not included in the stimulus set. Addition problems with small and large distractor distance were matched for various stimulus properties47: the numerical size and parity of the operands, target, distractor and correct result; overall problem size; relative and absolute distances between target, distractor and correct result; need of a carry operation (or borrow operation in case of subtraction); decade crossing between target and distractor; the positions of the smaller operand and of the target. Subtraction problems were constructed as the inverse problems of addition (e.g., 46 + 38→84 − 38).
### Procedure
This study was part of a larger project; we focus here only on the data of the approximate calculation task. During the functional near-infrared spectroscopy (fNIRS) measurement, each participant was sitting in front of a computer in a dimly lit room. After fNIRS preparation and receiving instructions, the participant performed computerized tasks including the approximate calculation task during the fNIRS measurement.
In the approximate calculation task, all problems and solutions were presented in white on a black screen using Presentation software (Neurobehavioral Systems, Inc., Berkeley, CA, USA). The problems were embedded in a block design with a block length of 35 s and an inter-block interval of 20 s when the screen remained black (cf. Fig. 1). Blocks for each combination of addition/subtraction with small/large distractor distance were presented in randomized order in each of 5 runs (20 blocks in total). Each block started with 5 critical trials chosen from the respective stimulus set and was filled up with additional filler trials chosen from an additional matched stimulus set. Trial order within each stimulus set was randomized for each participant. Each trial was terminated by button press or when the time limit of 6.5 s was reached. Termination was followed by an inter-trial interval of 0.5 s. Participants were encouraged to solve the math problems as quickly and accurately as possible. Participants did not receive feedback as to the correctness of their response. Prior to the experimental trials, the participants solved 6 practice trials in order to become familiar with the task. The duration of the task was 20 min.
### fNIRS data acquisition
fNIRS recordings during the approximate calculation task were performed using the mobile near-infrared spectroscopy device NIRSport Model 88 (NIRx Medical Technologies, LLC, NY, the USA). Eight sources and eight detectors were mounted in an fNIRS cap according to the 10/20 system covering the parietal lobes of both hemispheres (9 channels per hemisphere) with centres in CP3 – P3 and CP4 – P4, respectively (cf. Figure S1 in the Supplementary Material). Two near-infrared laser beams with wavelengths of 760 and 850 nm were emitted. The sampling rate was 7.8125 Hz.
### Data analysis
All statistical analyses were conducted using SPSS (IBM SPSS Statistics, version 25; IBM Corp., Chicago, IL, USA) and effect sizes were calculated according to Lakens48. For behavioural data analysis, only critical trials (and not filler trials) were entered into the analyses. Response time (RT) was regarded as the time interval from problem presentation on the screen to participants’ pressing one of two possible response keys. Only correct trials were included in the RT analysis (exclusion of 11.56%), RTs beyond 3 SD of the participant’s mean were iteratively excluded (exclusion of 0.79%), and finally mean RT was calculated for each participant. The error rate (ER) was regarded as the proportion of incorrect or time-out responses to the total number of trials included in the analysis. RT and ER were compared between left-handers and right-handers by t-tests for independent samples. Note that the behavioural data of one participant could not be analyzed because of wrong button use (the neural data of this participant were nevertheless included).
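The iterative 3-SD trimming step can be sketched as follows (the exact stopping rule used in the study is our assumption; here the criterion is reapplied until no further trials fall outside it):

```python
# Iteratively remove RTs more than z SD from the participant's mean,
# recomputing mean and SD after each pass until the sample is stable.
import statistics

def trim_rts(rts, z=3.0):
    rts = list(rts)
    while len(rts) > 1:
        m, sd = statistics.mean(rts), statistics.pstdev(rts)
        kept = [rt for rt in rts if sd == 0 or abs(rt - m) <= z * sd]
        if len(kept) == len(rts):   # nothing removed this pass: done
            return kept
        rts = kept
    return rts
```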
The relative concentration changes of oxygenated (O2Hb) and deoxygenated haemoglobin (HHb) were extracted from the fNIRS signal for each channel. fNIRS data pre-processing was performed by using custom MATLAB (The MathWorks, Inc., USA, version R2013a) scripts. Data pre-processing included interpolating noisy fNIRS channels by neighbouring channels (9.29%), excluding blocks with uncorrectable artefacts (3.09%), and bandpass filtering of 0.01-0.2 Hz. The signal was further corrected by correlation-based signal improvement (CBSI) according to the assumption that cortical activation is reflected by simultaneous increases in O2Hb and decreases in HHb49. Afterwards, the amplitudes of all blocks of 35 s were corrected to the baseline of 5 s before each block and averaged across all blocks resulting in the mean amplitude for each channel and participant.
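The CBSI correction can be sketched as follows, following Cui et al.'s formulation as we understand it (the scaling factor and the perfect-anticorrelation construction are part of that method, not stated explicitly in the text above):

```python
# Correlation-based signal improvement (CBSI): cortical activation is assumed
# to raise O2Hb while lowering HHb, so the corrected signals are constructed
# to be perfectly negatively correlated, removing noise common to both.
import statistics

def cbsi(o2hb, hhb):
    # scaling between the two chromophore signals
    alpha = statistics.pstdev(o2hb) / statistics.pstdev(hhb)
    o2hb_corr = [(x - alpha * y) / 2.0 for x, y in zip(o2hb, hhb)]
    hhb_corr = [-x / alpha for x in o2hb_corr]
    return o2hb_corr, hhb_corr
```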
As the region of interest (ROI), we focused only on the bilateral IPS being located between the SPL and IPL, i.e., channel L8 corresponding to the left IPS and channel R17 corresponding to the right IPS (for the location of the channels see Figure S1 and for the results of all channels see Figure S2 in the Supplementary Material). The positions of the channels are labelled by the corresponding brain region according to the automated anatomic labelling (AAL) atlas50 based on virtual-head-surface landmark measurements51. In the first analysis, a 2 × 2 ANOVA with the between-subject factor handedness (left vs. right) and the within-subject factor hemisphere (left vs. right) was conducted on the mean amplitudes.
In the second analysis, the laterality index of functional brain activation (LIbrain), which should not be confused with the laterality index of the handedness questionnaire (LIhandedness), was calculated for each participant according to the formula $$(R-L)\div(abs(R)+abs(L))\times 100$$, where L and R denote the mean amplitude within one channel (IPS) on the left (L8) and right (R17) hemisphere, respectively52,53. Thereby, negative values for LIbrain indicate a lateralization towards the left hemisphere and positive values indicate a functional lateralization towards the right hemisphere. LIbrain was compared between left-handers and right-handers by a t-test for independent samples. In the third analysis, in order to evaluate the relation between brain lateralization and the degree of handedness, LIbrain was correlated with LIhandedness in the whole sample, i.e., including left-, right- and mixed-handers. Note that while in the first two analyses only left- and right-handers were compared categorically, the third analysis, which used handedness as a continuous variable, could be conducted on all participants including mixed-handers.
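The LIbrain formula above can be written directly as a small helper (a sketch; the function name is ours):

```python
# L and R are the mean fNIRS amplitudes of the left (L8) and right (R17) IPS
# channels; negative values -> left-lateralized, positive -> right-lateralized.
def li_brain(right_amp, left_amp):
    return (right_amp - left_amp) / (abs(right_amp) + abs(left_amp)) * 100
```

Unlike LIhandedness, the denominator uses absolute values because baseline-corrected amplitudes can be negative.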
## Results
### Behavioural results
Left-handers (M = 3110 ms, SD = 614) and right-handers (M = 2896 ms, SD = 629) did not differ significantly in RT [t(43) = 1.16, p = 0.253, d = 0.35]. Regarding ER, compared to left-handers (M = 0.10, SD = 0.04), right-handers (M = 0.13, SD = 0.06) made significantly more errors [t(36.72) = 2.41, p = 0.021, d = 0.74]. Additionally, a correlation analysis was conducted to check whether this difference in ER was associated with lateralization in the brain. The results yielded no significant correlation between ER and LIbrain of left- and right-handers [r(43) = −0.15, p = 0.326], suggesting that the difference for LIbrain reported below does not seem to be influenced by the difference in performance between the groups.
### fNIRS results
The ANOVA for mean amplitudes revealed a significant interaction between handedness and hemisphere [F(1,44) = 4.06, p = 0.050, $${\eta }_{p}^{2}$$ = 0.08], indicating that in left-handers activation was higher in the right IPS and in right-handers activation was higher in the left IPS (cf. Fig. 2A). The main effects of handedness [F(1,44) = 0.05, p = 0.830, $${\eta }_{p}^{2}$$ < 0.01] and hemisphere [F(1,44) = 0.72, p = 0.401, $${\eta }_{p}^{2}$$ = 0.02] were not significant.
The LIbrain for IPS activation differed significantly between left-handers and right-handers [t(44) = 2.21, p = 0.033, d = 0.67] (cf. Fig. 2B). Post-hoc one-sample t-tests revealed a significant right-lateralization in the IPS for left-handers [t(22) = 2.27, p = 0.033, d = 0.47], whereas right-handers showed no significant lateralization [t(22) = −0.71, p = 0.485, d = 0.15].
To further investigate the relation between handedness and lateralization in the IPS, a correlation analysis was conducted between LIbrain and the degree of handedness in terms of LIhandedness in the whole sample (including mixed-handers, because handedness was used as a continuous variable). A significant negative correlation was observed between LIbrain and LIhandedness [r(53) = −0.27, p = 0.044], indicating that a higher degree of left-handedness corresponds to a higher right-lateralization of IPS activation (cf. Fig. 2C), thus corroborating the results of the categorical analysis.
## Discussion
This study set out to test functional lateralization of arithmetic processing in the parietal cortex. In the domain of numerical cognition, weak evidence was found for functional lateralization of basic number processing54 and our finding adds to this by showing functional lateralization of arithmetic processing. In this way, the functional lateralization findings of these two studies seem to converge, even though different aspects of numerical processing (basic processing vs. arithmetic) were examined. This finding in the field of numerical cognition is not entirely surprising since functional lateralization was detected for other domains, such as motor function in terms of handedness, language processing3 and spatial processing4.
Most neurocognitive models of arithmetic processing assume a pivotal functional role of the bilateral IPS. However, virtually all these models and their supporting data have been based on studies in which predominantly right-handers were tested. Hence, the current study aimed at testing the functional lateralization of arithmetic processing by comparing the lateralization of the IPS during symbolic approximate calculation between left- and right-handers. In line with our hypothesis, we found a stronger right-lateralization in the IPS in left-handers than in right-handers. This supports the view that left-handers differ from right-handers in the lateralization of arithmetic processing in the IPS, so that there is an association between handedness and the neural representation of arithmetic. This result suggests that previous findings derived from right-handers cannot be readily generalized to the human population as such, and, in particular, not to left-handers. Consequently, existing models of arithmetic processing need to be extended to account for functional lateralization associated with handedness. Our data indicate that the bilateral IPS activation for arithmetic processing is shaped by functional lateralization: the importance and the roles of the left and the right IPS for arithmetic seem to differ between right-handers and left-handers.
Thereby, we need to acknowledge that the statistical power for the main finding of our study was limited (power of 0.60). Nevertheless, when integrating the weak evidence of functional lateralization of basic number processing in general54 together with the current finding of functional lateralization of arithmetic processing, there is cumulative evidence for the relation of handedness and lateralization in the IPS in the domain of numerical cognition, which would be a very important finding for neurocognitive models of numerical processing and beyond. However, we also wish to make clear that future research on lateralization in numerical cognition needs to substantiate these two findings by using better-powered study designs, because the observed effects were not large.
The crucial question, however, is why arithmetic processing might show such a functional lateralization within the IPS. We offer different accounts of this issue, which might guide future research on the understudied topic of functional lateralization in numerical and arithmetic neurocognition.
### The embodiment account of functional lateralization in number processing
In recent years, a growing body of literature has suggested that even basic numerical cognition is embodied55,56,57 (for embodied trainings, see58). In particular, the use of hands and fingers has been postulated to influence numerical cognition, such as spatial-numerical associations41,59,60,61, magnitude comparison55, or also mental arithmetic62. Importantly, embodied cognition has even been suggested to influence the neural representation of numbers and operations on numbers63,64. For example, counting, as one of the most basic numerical skills, is associated with the excitability of motor circuits for hands65.
Against this background, it is conceivable that functional lateralization of arithmetic processing might be explained by the influence of the dominant hand used during the acquisition of symbolic arithmetic, which might lead to a hemispheric dominance contralateral to the preferentially used hand (see also41). The underlying mechanism might be a co-lateralization of the motor activities, such as handwriting or finger counting, preferably conducted with the dominant hand, and the developing cognitive skills in terms of symbolic arithmetic66. If the basic numerical representations on a neural level are indeed influenced by the preferentially used hand (as suggested for finger counting64), then a differential functional lateralization for left-handers and right-handers is not surprising. In particular, this account would explain the relatively stronger activation of the right IPS in left-handers (compared to right-handers), because the right hemisphere is contralateral to the dominant hand in left-handers.
Supporting the embodiment account of functional lateralization, the degree of handedness was correlated with functional lateralization of the IPS for arithmetic processing. Going beyond the group differentiation of left- and right-handers, the degree of handedness reflects an important factor that can influence cognitive abilities3,8. Here, with increasing degree of left-handedness, a stronger right-lateralization of the IPS was observed, which is in line with the embodiment account of functional lateralization.
### Co-lateralization of different neurocognitive functions: Associations of number with language and space
On the one hand, numbers are closely related to different dimensions of space like their spatial direction or spatial extension67,68,69 (for a review and special issue on the topic, see70). For instance, larger numbers are associated with the right side of space and smaller numbers with the left side of space in Western societies71. Since human infants72 and even newborns73 as well as other species74,75 were found to express space-number associations, the close relation between numbers and space might even be innate.
On the other hand, numbers are also related to different types of linguistic attributes (for a review and special issue on this topic, see76). For instance, the grammatical language structure in number words seems to determine how easy or difficult it is to acquire numbers early in development77. Even in adulthood, language attributes like reading direction42,78 or the lexical composition of number words influence numerical cognition76,79. In sum, number processing has multiple relations to spatial and language processing at a behavioural level, which can be traced back early in development.
In general, a co-lateralization of spatial and language processing has been observed. For instance, language processing and spatial attention were shown to be controlled by opposite hemispheres80. While language is considered to be left-lateralized in about 95% of right-handers and 75% of left-handers, the remaining 25% of left-handers demonstrate either a right-lateralized or bilateral representation of language5,6,7. Although spatial processing is generally considered to be lateralized to the right hemisphere in right-handers4, it was shown that left-handers with right-hemispheric dominance for language had left-hemispheric dominance for spatial attention80. If numerical cognition builds on spatial processing81 or linguistic processing77, the functional lateralization of these basic cognitive functions may lead to a similar co-lateralization of number processing.
The co-lateralization of different cognitive functions requires further investigation in the future in order to determine the position of arithmetic processing in relation to the processing of language and space. The current study is limited in this regard, since language dominance was not assessed and the observed difference in handedness might reflect language lateralization to a certain extent82. However, if the co-lateralization account also holds for arithmetic processing, a lateralization of arithmetic seems reasonable because of the interrelation of arithmetic with language and spatial processing.
### Endpoint of different developmental trajectories
The different lateralization of arithmetic processing in right-handers and left-handers might be further supported by developmental findings for the IPS. Namely, there might be different developmental trajectories of the two hemispheres in left- and right-handers. In right-handers, activation in the left IPS for arithmetic processing increases during development83,84,85,86 (but see44,87). On the other hand, there is much less evidence for a similar activation increase in the right IPS83,86, and it might even be that there is no such activation change in the right IPS during development88,89. Since no literature on the development of the left and right IPS exists for left-handers, we can only speculate and hypothesize the opposite pattern for left-handers: an activation increase in the right IPS during arithmetic development (cf. Fig. 3), explaining our finding of a right-lateralization in left-handers compared to a left-lateralization in right-handers for arithmetic processing. In sum, the functional lateralization of arithmetic processing might be a result of increased activation in the left IPS in right-handers and increased activation in the right IPS in left-handers during arithmetic development (cf. Fig. 3).
It is important to note that the three accounts are not mutually exclusive. For instance, the proposed developmental account of functional lateralization for arithmetic processing is supported by both the embodiment account and the co-lateralization account. With regard to embodied cognition, due to the strong association of motor activities (like handwriting for symbolic arithmetic, or even number-related motor activities like finger counting) with the right hand and the resulting contralateral left-hemispheric activation, arithmetic processing might become specialized to the left parietal cortex in right-handers during development, while in left-handers it might become specialized to the right parietal cortex. With regard to co-lateralization, number processing was found to be initially represented in the right IPS90,91,92, and magnitude relates more to its visuo-spatial representation, which was shown to be right-lateralized as well. During development, arithmetic processing becomes more related to verbal representations, and thus left-lateralization for right-handers and right-lateralization for left-handers becomes more prominent because of the hemispheric dominance for language processing. The proposed developmental model (cf. Fig. 3) is a proposal for the development of the functional lateralization of arithmetic processing, which needs to be empirically evaluated in future research.
For right-handers, we did not find evidence for a strong left-lateralization within the parietal cortex. This might be because approximate calculation, in comparison to exact calculation, was shown to be rather bilaterally represented in the parietal cortex in right-handers30,31. Furthermore, the complexity of the arithmetic problems used in the current study was relatively high, which could lead to a less pronounced left-lateralization43,45. However, these explanations would also hold for left-handers and thus contradict the detected lateralization effect in left-handers. Therefore, the explanation might be derived from the hypothetical model (cf. Fig. 3), where the degree of lateralization in the IPS is expected to be larger in left-handers than in right-handers (given a similar slope of IPS development in right-handers and left-handers).
### Limitations
In the current study, we did not control for saccades during calculation so that a possible different pattern of saccades in left- and right-handers might have had an impact on the lateralization of neural activation. On the one hand, it represents a general limitation for neuroimaging research that saccades might differ between groups or conditions (although the same stimuli were shown to both groups). On the other hand, this might represent a specific problem for research on arithmetic because of operational momentum effects93,94,95 (although the stimulus material consisted of both addition and subtraction problems in equal parts, so that this very effect is not a problem for our study). However, we cannot exclude that other sources of eye-movement behavior differ for left- and right-handers, although we do not find strong indication in the literature that this should be the case in our paradigm. Nevertheless, a difference in saccades might be an alternative explanation for the lateralization effects observed in our study, which needs future investigations.
### Conclusions
Functional lateralization, as indicated by handedness and as previously shown for other domains like language and space, was demonstrated here for arithmetic processing. Our results refine the view that the bilateral IPS involvement in arithmetic processing, usually observed in empirical studies and postulated by the dominant models of numerical cognition, is similar for all humans. Namely, we observed that left-handers have a stronger right-lateralization in the IPS for symbolic approximate calculation compared to right-handers. We proposed three different accounts for this functional lateralization:
1. The embodiment account: Because of embodied influences even on basic numerical representations, the preferred use of the dominant hand in number-unrelated, but also number-related, activities (finger counting) might determine the functional lateralization of arithmetic processing.
2. The co-lateralization account: Because numbers are tightly linked to spatial and language processing, the functional lateralization of these cognitive functions may be associated with the functional lateralization of arithmetic processing.
3. The developmental account: Because of individual differences in developmental trajectories, which determine lateralization in other behaviours such as handedness, language, and spatial processes, the functional lateralization of arithmetic processing at the endpoint of this development may also differ.
Note that these three accounts are not mutually exclusive. We proposed how they could operate and also how they could be tested in the future. We believe that this endeavour is of utmost importance for several reasons. First, the phenomenon of left-handedness cannot be neglected in research and models of arithmetic processing. Second, functional lateralization is considered to be beneficial for cognitive skills8, and thus understanding its mechanisms might benefit arithmetic education. Finally, such research addresses general mechanisms of neurocognitive functioning (embodiment, co-activation, development), for which functional lateralization research provides a critical test, especially if it holds even for abstract representations such as number magnitude in the IPS. Since the power and design of our study were limited, we recommend that more powerful follow-up studies investigate several aspects of arithmetic processing (e.g., symbolic vs. non-symbolic, exact vs. approximate calculation) and functional lateralization (e.g., handedness, language dominance). In any case, we believe that this study on the functional lateralization of arithmetic processing provides a good starting point for a future line of research.
# Find the maximum number of valid cartesian coordinates
Given a list X of m x-coordinates and a list Y of m y-coordinates, a pair (x, y) is valid if and only if the absolute difference between x and y is less than or equal to d. I need to find the maximum number of valid pairs. Here is my algorithm.
```
sort the list X in non-decreasing order
sort the list Y in non-decreasing order
for x in X:
    for y in Y:
        if abs(x - y) <= d:
            let x match with y
            remove x from X
            remove y from Y
```
Can this algorithm give the maximum number of valid pairs? If yes, is there a more efficient algorithm? The nested loop means the worst-case time is $$O(m^2)$$. Is there a log-linear, $$O(m \log m)$$, algorithm for this question?
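A greedy two-pointer pass over the two sorted lists does achieve $$O(m \log m)$$ overall, since the sorts dominate: at each step the smaller unmatched endpoint can never pair with anything later, so discarding it is safe (a standard exchange argument). A minimal sketch, where the function name and sample values are invented for illustration:

```python
def max_valid_pairs(xs, ys, d):
    """Greedily match sorted coordinates whose absolute difference is <= d."""
    xs, ys = sorted(xs), sorted(ys)
    i = j = count = 0
    while i < len(xs) and j < len(ys):
        if abs(xs[i] - ys[j]) <= d:
            # Closest remaining candidates are compatible: match them.
            count += 1
            i += 1
            j += 1
        elif xs[i] < ys[j]:
            # xs[i] is too far below every remaining y: discard it.
            i += 1
        else:
            # ys[j] is too far below every remaining x: discard it.
            j += 1
    return count
```

Each element is visited once after sorting, so the loop itself is linear in m.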
Rotation in molecules
I am a bit confused about rotational motion in molecules. Assuming the bond length is constant, the motion can be described as a rigid rotor. In the center-of-mass frame the energies are given by BJ(J+1) and the wavefunctions are spherical harmonics. However, when we measure the energies or the angular momenta, we do it in the lab frame. So I am a bit confused. Is the formula for the energy the same in both the lab and CM frames? And if not, what is the formula in the lab frame? Also, is the wavefunction the same in both frames, or, in other words, is the angular momentum of the molecule the same in both frames? Actually I am a bit confused about how the angular momentum is defined in the CM frame. Isn't the molecule stationary in that frame? Yet the wavefunctions in the CM frame (spherical harmonics) do show a clear angular momentum dependence. Can someone help me clarify these things? Thank you!
The molecule is not stationary in the center-of-mass frame. Any rigid-body dynamics can be separated into motion of the center of mass and motion about the center of mass. The second includes rotational motion, which is what you are considering. Only translational motion is absent in the CM frame. The question of whether angular momentum or energy is the same in the lab frame depends on what other kinds of motion the molecule has in the lab frame. Does it have translational degrees of freedom, for example? If not, the energy is the same in both frames. The angular momentum here is intrinsic angular momentum, due to rotations within the molecule and intrinsic spins of the constituents. This too should be the same in both frames, unless you are considering some odd situation like the molecule revolving around some axis in the lab frame.
• Thank you for your reply! There is no translation or other weird rotations other than around a fixed point (in the lab frame). But I am still confused. Classically, if I hold a ball and I rotate with it, the ball has 0 energy according to me, but according to a stationary observer, the energy is Iw^2/2. Why in the case I mentioned is the energy the same, when classically it would not be (I assume I have a misunderstanding of the way the frames are defined)? Similarly, according to me the ball is not rotating, so the angular momentum is zero, but according to a stationary observer it is not. – BillKet Mar 22 '20 at 23:32
• the observer does not rotate, even in the center of mass frame. reference frames should be kept inertial, and a rotating frame of reference is just awkward to work with. the usual expressions for energies like $I\omega^2/2$ are all defined for inertial frames, and need to be modified for a rotating frame – NewUser Mar 22 '20 at 23:46
• I am not sure how the CM frame is defined here actually. Here: kth.se/social/upload/5176d9b0f276543c2c2bd4db/CH5.pdf they analyze the problem for nuclei (a bit more complicated but same idea) and they define an " intrinsic axis system". I initially thought it was the CM frame but I am not sure anymore. From what I understand, the frame they define is rotating together with the object and they solve the Schrodinger equation in that frame, yet the energies they get are the same as the one in the lab frame. What am I missing? – BillKet Mar 23 '20 at 0:44
• Also, given that the object is just rotating around a fixed point and for molecules and nuclei that point is usually the center of mass, isn't the CM (defined in an inertial way as you mentioned) and the lab frame the same? Why would they make a clear distinction between them in the analysis? That's why I assumed that the body frame is rotating with the body (hence non-inertial). But again if that's the case, I am confused about the energies. Thank you! – BillKet Mar 23 '20 at 0:47
• Okay, so the frame here is indeed rotating. The motion of the molecule is divided into two parts, the rotation about its intrinsic axis (K), and the rotation of its intrinsic axis (M). Only the first is seen in the intrinsic frame. The expressions for energy are given in the lab frame, since it is an inertial frame. The molecule is definitely at rest in the intrinsic frame, but the energy would be different from the lab frame, since it is a non-inertial frame. – NewUser Mar 23 '20 at 1:40
Like NewUser mentioned, the motion can always be broken down into motion of centre of mass and motion about centre of mass. In our case, we interact with the system using light. And this causes transitions in the energy levels.
The system factors into one continuous energy spectrum corresponding to the kinetic energy of the free centre of mass and one discrete energy spectrum corresponding to the rigid rotor. The former causes light to scatter and the latter causes absorption. So if you look at absorption spectra, you'll find the signature of the discrete rotational energy levels.
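The factorization described above can be written out explicitly; a sketch in standard notation, with $M$ the total mass, $I$ the moment of inertia, and $B = \hbar^2/2I$ the rotational constant:

$$\hat{H} = \frac{\hat{\mathbf{P}}^2}{2M} + \frac{\hat{\mathbf{L}}^2}{2I}, \qquad E = \frac{\hbar^2 k^2}{2M} + B\,J(J+1),$$

where the first term gives the continuous kinetic-energy spectrum of the free centre of mass (wavevector $k$) and the second the discrete rigid-rotor spectrum, whose eigenfunctions are the spherical harmonics $Y_{JM_J}(\theta,\phi)$.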
• Thank you for your reply! What I am confused about is this CM frame. If the CM frame was fixed and inertial, I would understand. But the body frame they use is not the CM, but a frame rotating with the body (see the link I posted in one of the comments). And if it is rotating with the body, it shouldn't have any rotational energy associated to it i.e. if 2 objects rotates at the same time, in the same way, they will appear stationary to each other, hence their energy according to each other is 0. This is how I understand this body frame, too, as something rotating with the body. – BillKet Mar 24 '20 at 7:14
• You shouldn’t look at the bodies. Rather, they are rotating about the centre of mass. In the case of two equal bound masses, CM is at the mid point. The rotation is about this point. – Superfast Jellyfish Mar 24 '20 at 7:21
• I know that, but this is not what they use in the derivation I am reading (it is for a nucleus not a molecule, but the idea is the same): kth.se/social/upload/5176d9b0f276543c2c2bd4db/CH5.pdf As you can see, they define a frame that rotates with the body, (which obviously can't be the CM frame) and they solve the equations in that frame. – BillKet Mar 24 '20 at 7:24
Thursday, October 11, 2018
2018: The Proton Radius Puzzle - Why We All Should Care
[1809.09635] The Proton Radius Puzzle- Why We All Should Care
Abstract: The status of the proton radius puzzle (as of the date of the Conference) is reviewed. The most likely potential theoretical and experimental explanations are discussed. Either the electronic hydrogen experiments were not sufficiently accurate to measure the proton radius, the two-photon exchange effect was not properly accounted for, or there is some kind of new physics.
[1809.06373] Proton charge radius extraction from electron scattering data using dispersively improved chiral effective field theory
Abstract: We extract the proton charge radius from the elastic form factor data using a theoretical framework combining chiral effective field theory and dispersion analysis. Complex analyticity in the momentum transfer correlates the behavior of the spacelike form factor in different $Q^2$ regions and permits the use of data up to $Q^2 \sim$ 0.5 GeV$^2$ in constraining the radius.
The work done by the 4th Order Polynomial paper solves all of this, and it is a refinement of Haramein's work.
# Difference between generalized linear models & generalized linear mixed models
I am wondering what the differences are between mixed and unmixed GLMs. For instance, in SPSS the drop down menu allows users to fit either:
• analyze-> generalized linear models-> generalized linear models &
• analyze-> mixed models-> generalized linear
Do they deal with missing values differently?
My dependent variable is binary and I have several categorical and continuous independent variables.
The advent of generalized linear models has allowed us to build regression-type models of data when the distribution of the response variable is non-normal--for example, when your DV is binary. (If you would like to know a little more about GLiMs, I wrote a fairly extensive answer here, which may be useful although the context differs.) However, a GLiM, e.g. a logistic regression model, assumes that your data are independent. For instance, imagine a study that looks at whether a child has developed asthma. Each child contributes one data point to the study--they either have asthma or they don't. Sometimes data are not independent, though. Consider another study that looks at whether a child has a cold at various points during the school year. In this case, each child contributes many data points. At one time a child might have a cold, later they might not, and still later they might have another cold. These data are not independent because they came from the same child. In order to appropriately analyze these data, we need to somehow take this non-independence into account. There are two ways: One way is to use the generalized estimating equations (which you don't mention, so we'll skip). The other way is to use a generalized linear mixed model. GLiMMs can account for the non-independence by adding random effects (as @MichaelChernick notes). Thus, the answer is that your second option is for non-normal repeated measures (or otherwise non-independent) data. (I should mention, in keeping with @Macro's comment, that general-ized linear mixed models include linear models as a special case and thus can be used with normally distributed data. However, in typical usage the term connotes non-normal data.)
Update: (The OP has asked about GEE as well, so I will write a little about how all three relate to each other.)
Here's a basic overview:
• a typical GLiM (I'll use logistic regression as the prototypical case) lets you model an independent binary response as a function of covariates
• a GLMM lets you model a non-independent (or clustered) binary response conditional on the attributes of each individual cluster as a function of covariates
• the GEE lets you model the population mean response of non-independent binary data as a function of covariates
Since you have multiple trials per participant, your data are not independent; as you correctly note, "[t]rials within one participant are likely to be more similar than as compared to the whole group". Therefore, you should use either a GLMM or the GEE.
The issue, then, is how to choose whether GLMM or GEE would be more appropriate for your situation. The answer to this question depends on the subject of your research--specifically, the target of the inferences you hope to make. As I stated above, with a GLMM, the betas are telling you about the effect of a one unit change in your covariates on a particular participant, given their individual characteristics. On the other hand with the GEE, the betas are telling you about the effect of a one unit change in your covariates on the average of the responses of the entire population in question. This is a difficult distinction to grasp, especially because there is no such distinction with linear models (in which case the two are the same thing).
One way to try to wrap your head around this is to imagine averaging over your population on both sides of the equals sign in your model. For example, this might be a model: $$\text{logit}(p_i)=\beta_{0}+\beta_{1}X_1+b_i$$ where: $$\text{logit}(p)=\ln\left(\frac{p}{1-p}\right),~~~~~\&~~~~~~b\sim\mathcal N(0,\sigma^2_b)$$ There is a parameter that governs the response distribution ($p$, the probability, with binary data) on the left side for each participant. On the right hand side, there are coefficients for the effect of the covariate[s] and the baseline level when the covariate[s] equals 0. The first thing to notice is that the actual intercept for any specific individual is not $\beta_0$, but rather $(\beta_0+b_i)$. But so what? If we are assuming that the $b_i$'s (the random effect) are normally distributed with a mean of 0 (as we've done), certainly we can average over these without difficulty (it would just be $\beta_0$). Moreover, in this case we don't have a corresponding random effect for the slopes and thus their average is just $\beta_1$. So the average of the intercepts plus the average of the slopes must be equal to the logit transformation of the average of the $p_i$'s on the left, mustn't it? Unfortunately, no. The problem is that in between those two is the $\text{logit}$, which is a non-linear transformation. (If the transformation were linear, they would be equivalent, which is why this problem doesn't occur for linear models.) The following plot makes this clear:
Imagine that this plot represents the underlying data generating process for the probability that a small class of students will be able to pass a test on some subject with a given number of hours of instruction on that topic. Each of the grey curves represents the probability of passing the test with varying amounts of instruction for one of the students. The bold curve is the average over the whole class. In this case, the effect of an additional hour of teaching conditional on the student's attributes is $\beta_1$--the same for each student (that is, there is not a random slope). Note, though, that the students baseline ability differs amongst them--probably due to differences in things like IQ (that is, there is a random intercept). The average probability for the class as a whole, however, follows a different profile than the students. The strikingly counter-intuitive result is this: an additional hour of instruction can have a sizable effect on the probability of each student passing the test, but have relatively little effect on the probable total proportion of students who pass. This is because some students might already have had a large chance of passing while others might still have little chance.
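This averaging mismatch can be checked numerically. A small sketch, where the intercept offsets and slope are made-up illustration values (not estimates from any data): averaging subject-specific probabilities is not the same as evaluating the curve at the average linear predictor.

```python
import math

def inv_logit(z):
    # Inverse of logit(p) = ln(p / (1 - p)).
    return 1.0 / (1.0 + math.exp(-z))

# Five hypothetical "students" share slope beta1 but differ in baseline
# ability via random-intercept offsets b_i (mean zero, as in the model).
b = [-3.0, -1.0, 0.0, 1.0, 3.0]
beta0, beta1, x = 0.0, 1.0, 2.0

# Subject-specific probabilities after x hours of instruction ...
probs = [inv_logit(beta0 + beta1 * x + bi) for bi in b]
pop_mean = sum(probs) / len(probs)

# ... versus the curve evaluated at the average intercept (mean b_i = 0).
naive = inv_logit(beta0 + beta1 * x)

# Because the logit link is non-linear, the two do not coincide.
print(pop_mean, naive)
```

With an identity link the two quantities would agree exactly, which is why the GLMM/GEE distinction vanishes for linear models.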
The question of whether you should use a GLMM or the GEE is the question of which of these functions you want to estimate. If you wanted to know about the probability of a given student passing (if, say, you were the student, or the student's parent), you want to use a GLMM. On the other hand, if you want to know about the effect on the population (if, for example, you were the teacher, or the principal), you would want to use the GEE.
For another, more mathematically detailed, discussion of this material, see this answer by @Macro.
• This is a good answer but I think it, especially the last sentence, almost seems to indicate that you only use GLMs or GLMMs for non-normal data which probably wasn't intended, since the ordinary Gaussian linear (mixed) models also fall under the GL(M)M category. – Macro Jul 17 '12 at 1:49
• @Macro, you're right, I always forget that. I edited the answer to clarify this. Let me know if you think it needs more. – gung - Reinstate Monica Jul 17 '12 at 2:21
• I also checked out generalized estimating equations. Is it correct that like with GLiM, GEE assumes that my data is independent? I have multiple trials per participant. Trials within one participant are likely to be more similar than as compared to the whole group. – user9203 Jul 17 '12 at 23:10
• @gung, Although GEE can produce "population-averaged" coefficients, if I wanted to estimate the Average Treatment Effect (ATE) on the probability scale across the actual population, for a binary regressor of interest, wouldn't I need to take a subject-specific approach? The way to calculate the ATE, to my knowledge, is to estimate the predicted probability for each person with and without treatment and then average those differences. Doesn't this require a regression method that can generate predicted probabilities for each person (despite the fact that they are then averaged over)? – Yakkanomica Jan 30 '16 at 22:39
• @Yakkanomica, if that's what you want, sure. – gung - Reinstate Monica Jan 30 '16 at 22:45
The key is the introduction of random effects. Gung's link mentions it. But I think it should have been mentioned directly. That is the main difference.
• +1, you're right. I should have been clearer about that. I edited my answer to include this point. – gung - Reinstate Monica Jul 17 '12 at 2:21
• Whenever I add a random effect, such as a random intercept to the model, I get an error message. I think I don't have enough data-points to add random effects. Could that be the case? error message: glmm: The final Hessian matrix is not positive definite although all convergence criteria are satisfied. The procedure continues despite this warning. Subsequent results produced are based on the last iteration. Validity of the model fit is uncertain. – user9203 Jul 17 '12 at 23:19
I suggest you also examine answers of a question I asked some time ago:
General Linear Model vs. Generalized Linear Model (with an identity link function?)
• I do not think that really answers the question, which is about SPSS capabilities to run GLM and mixed-effect models, and how it handles missing values. Was this intended to be a comment instead? Otherwise, please clarify. – chl Jul 17 '12 at 10:30
• Sorry, the opening post seemed to have two "questions". 1. I am wondering what.... and 2. Do they deal with missing values differently? I was trying to help with the first question. – Behacad Jul 17 '12 at 18:58
• Fair enough. Without further explanation, I still think this would better fit as a comment to the OP. – chl Jul 17 '12 at 20:42
# Checking for Martingales on Stochastic processes
I am confused about how to check whether a process is a martingale. I know I have to check that there is no drift, but I am a bit confused about how to approach these problems. I need to apply Itô's formula first, I think. For instance:
$$Y(t)= \exp(\sigma X(t)−0.5\sigma^2t)$$ where $X(t)$ is standard Brownian motion.
How to approach this problem?
Many thanks
Welcome to Math.SE. You'll get much better answers to your question if you edit it to make it clearer. Explain what you mean by "clear drift", and if possible use LaTeX to format your equation. Also, I added some more tags, which will make it easier for people to find your question. – Nate Eldredge Mar 5 '13 at 23:39
If the process has no drift, it is a martingale... and by applying Itô's formula I think we can see this. – user65229 Mar 6 '13 at 0:09
It's not necessary to apply Itô's formula in this case. Instead you can use the independence of increments of the Brownian motion $(X_t)_t$ and the knowledge about exponential moments of normally distributed random variables to check whether it's a martingale.
$$\mathbb{E}(Y_t \mid \mathcal{F}_s) = e^{-\frac{1}{2}\sigma^2 \cdot t} \cdot \mathbb{E} \big(e^{\sigma \cdot (X_t-X_s) + \sigma \cdot X_s} \mid \mathcal{F}_s \big) = e^{-\frac{1}{2}\sigma^2 \cdot t + \sigma \cdot X_s} \cdot \mathbb{E} \big(e^{\sigma \cdot (X_t-X_s)} \mid \mathcal{F}_s \big) = \ldots$$
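For completeness, a sketch of the remaining step: $X_t - X_s$ is independent of $\mathcal{F}_s$ and distributed $\mathcal{N}(0, t-s)$, and $\mathbb{E}\, e^{\sigma Z} = e^{\sigma^2 \operatorname{Var}(Z)/2}$ for centred normal $Z$, so

$$\mathbb{E} \big(e^{\sigma \cdot (X_t-X_s)} \mid \mathcal{F}_s \big) = e^{\frac{1}{2}\sigma^2 \cdot (t-s)}$$

and therefore

$$\mathbb{E}(Y_t \mid \mathcal{F}_s) = e^{-\frac{1}{2}\sigma^2 \cdot t + \sigma \cdot X_s} \cdot e^{\frac{1}{2}\sigma^2 \cdot (t-s)} = e^{\sigma \cdot X_s - \frac{1}{2}\sigma^2 \cdot s} = Y_s,$$

which is exactly the martingale property (integrability follows from the same exponential-moment formula).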
Alternatively, you can try to find a suitable function $g$ such that $$Y_t = Y_0 + \int_0^t g(s) \, dX_s$$ because stochastic integrals with respect to a Brownian motion are martingales right from the definition. To find such a function $g$, apply Itô's formula.
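The drift check mentioned in the question can also be done directly (a sketch). With $f(t,x) = e^{\sigma x - \frac{1}{2}\sigma^2 t}$, so that $Y_t = f(t, X_t)$, we have $\partial_t f = -\frac{1}{2}\sigma^2 f$, $\partial_x f = \sigma f$ and $\partial_{xx} f = \sigma^2 f$, hence Itô's formula gives

$$dY_t = \Big( \partial_t f + \tfrac{1}{2} \partial_{xx} f \Big)(t, X_t) \, dt + \partial_x f(t, X_t) \, dX_t = \Big( {-\tfrac{1}{2}\sigma^2} + \tfrac{1}{2}\sigma^2 \Big) Y_t \, dt + \sigma Y_t \, dX_t = \sigma Y_t \, dX_t.$$

The $dt$ (drift) term cancels; note that the integrand in $Y_t = Y_0 + \int_0^t \sigma Y_s \, dX_s$ is the process $\sigma Y_s$ rather than a deterministic function.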
# Programmable logic device
A programmable logic device, or PLD, is an electronic component used to build reconfigurable digital circuits. Unlike a logic gate, which has a fixed function, a PLD has an undefined function at the time of manufacture. Before the PLD can be used in a circuit it must be programmed (i.e. reconfigured).
Using a ROM as a PLD
Before PLDs were invented, read-only memory (ROM) chips were used to create arbitrary combinational logic functions of a number of inputs. Consider a ROM with "m" inputs (the address lines) and "n" outputs (the data lines). When used as a memory, the ROM contains $2^m$ words of "n" bits each. Now imagine that the inputs are driven not by an "m"-bit address, but by "m" independent logic signals. Theoretically, there are $2^m$ possible Boolean functions of these "m" signals, but the structure of the ROM allows just "n" of these functions to be produced at the output pins. The ROM therefore becomes equivalent to "n" separate logic circuits, each of which generates a chosen function of the "m" inputs.
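The lookup-table idea can be sketched in a few lines. A hypothetical illustration (function and constant names are invented for this sketch, not from any real toolchain): programming a 3-input, 2-output "ROM" with a full adder's truth table turns the memory into two combinational logic functions evaluated in one read.

```python
# Model an m-input, n-output ROM as a list of 2**m words,
# each word packing n output bits.
M_INPUTS, N_OUTPUTS = 3, 2

def program_rom(truth_fn, m):
    # Burn the truth table: one n-bit word per address.
    return [truth_fn(addr) for addr in range(2 ** m)]

def full_adder(addr):
    # The address lines act as the logic inputs: a, b, carry-in.
    a, b, cin = (addr >> 2) & 1, (addr >> 1) & 1, addr & 1
    total = a + b + cin
    # bit 1 = sum, bit 0 = carry-out
    return ((total & 1) << 1) | (total >> 1)

rom = program_rom(full_adder, M_INPUTS)

# "Reading" one address evaluates both Boolean functions at once.
word = rom[0b110]                      # a=1, b=1, cin=0
s, cout = (word >> 1) & 1, word & 1
```

Any of the $2^m$ possible input combinations selects one stored word, which is exactly why the ROM realizes n arbitrary functions of the m inputs.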
The advantage of using a ROM in this way is that any conceivable function of the "m" inputs can be made to appear at any of the "n" outputs, making this the most general-purpose combinatorial logic device available. Also, PROMs (programmable ROMs), EPROMs (ultraviolet-erasable PROMs) and EEPROMs (electrically erasable PROMs) are available that can be programmed using a standard PROM programmer without requiring specialised hardware or software. However, there are several disadvantages:
* they are usually much slower than dedicated logic circuits,
* they cannot necessarily provide safe "covers" for asynchronous logic transitions so the PROM's outputs may glitch as the inputs switch,
* they consume more power, and
* because only a small fraction of their capacity is used in any one application, they often make an inefficient use of space.
Since most ROMs do not have input or output registers, they cannot be used stand-alone for sequential logic. An external TTL register was often used for sequential designs such as state machines. Common EPROMs, for example the 2716, are still sometimes used in this way by hobby circuit designers, who often have some lying around. This use is sometimes called a 'poor man's PAL'.
Early programmable logic
In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-only associative memory, or ROAM. This device, the TMS2000, was programmed by altering the metal layer during the production of the IC. The TMS2000 had up to 17 inputs and 18 outputs, with 8 JK flip-flops for memory. TI coined the term Programmable Logic Array for this device.
In 1973 National Semiconductor introduced a mask-programmable PLA device (DM7575) with 14 inputs and 8 outputs and no memory registers. This was more popular than the TI part, but the cost of making the metal mask limited its use. The device is significant because it was the basis for the field-programmable logic array produced by Signetics in 1975, the 82S100. (Intersil actually beat Signetics to market, but poor yield doomed their part.) [References: "Semiconductors and IC's: FPLA", EDN 20(13), p. 66, Cahners Publishing, Boston, MA, July 20, 1975 — press release on the Intersil IM5200 field-programmable logic array: fourteen input pins, 48 product terms, avalanche-induced-migration programming, unit price \$37.50; "FPLA's give quick custom logic", EDN 20(13), p. 61, Cahners Publishing, Boston, MA, July 20, 1975 — press release on the Signetics 82S100 and 82S101 field-programmable logic arrays: fourteen input pins, 8 output pins, 48 product terms, NiCr fuse-link programming.]
In 1971, General Electric Company (GE) was developing a programmable logic device based on the new PROM technology. This experimental device improved on IBM's ROAM by allowing multilevel logic. Intel had just introduced the floating-gate UV erasable PROM so the researcher at GE incorporated that technology. The GE device was the first erasable PLD ever developed, predating the Altera EPLD by over a decade. GE obtained several early patents on programmable logic devices.
In 1974 GE entered into an agreement with Monolithic Memories to develop a mask- programmable logic device incorporating the GE innovations. The device was named the 'Programmable Associative Logic Array' or PALA. The MMI 5760 was completed in 1976 and could implement multilevel or sequential circuits of over 100 gates. The device was supported by a GE design environment where Boolean equations would be converted to mask patterns for configuring the device. The part was never brought to market.
PAL
MMI introduced a breakthrough device in 1978, the Programmable Array Logic or PAL. The architecture was simpler than that of Signetics FPLA because it omitted the programmable OR array. This made the parts faster, smaller and cheaper. They were available in 20 pin 300 mil DIP packages while the FPLAs came in 28 pin 600 mil packages. The PAL Handbook demystified the design process. The PALASM design software (PAL Assembler) converted the engineers' Boolean equations into the fuse pattern required to program the part. The PAL devices were soon second-sourced by National Semiconductor, Texas Instruments and AMD.
After MMI succeeded with the 20-pin PAL parts, AMD introduced the 24-pin 22V10 PAL with additional features. After buying out MMI (1987), AMD spun off a consolidated operation as Vantis, and that business was acquired by Lattice Semiconductor in 1999.
There are also PLAs (programmable logic arrays), which differ from PALs in having a programmable OR array as well as a programmable AND array.
GALs
An improvement on the PAL was the generic array logic device, or GAL, invented by Lattice Semiconductor in 1985. This device has the same logical properties as the PAL but can be erased and reprogrammed. The GAL is very useful in the prototyping stage of a design, when any bugs in the logic can be corrected by reprogramming. GALs are programmed and reprogrammed using a PAL programmer, or by using the in-circuit programming technique on supporting chips.
Lattice GALs combine CMOS and electrically erasable (E^2) floating gate technology for a high-speed, low-power logic device.
A similar device called a PEEL (programmable electrically erasable logic) was introduced by the International CMOS Technology (ICT) corporation.
CPLDs
PALs and GALs are available only in small sizes, equivalent to a few hundred logic gates. For bigger logic circuits, complex PLDs or CPLDs can be used. These contain the equivalent of several PALs linked by programmable interconnections, all in one integrated circuit. CPLDs can replace thousands, or even hundreds of thousands, of logic gates.
Some CPLDs are programmed using a PAL programmer, but this method becomes inconvenient for devices with hundreds of pins. A second method of programming is to solder the device to its printed circuit board, then feed it with a serial data stream from a personal computer. The CPLD contains a circuit that decodes the data stream and configures the CPLD to perform its specified logic function.
Each manufacturer has a proprietary name for this programming system. For example, Lattice Semiconductor calls it "in-system programming". However, these proprietary systems are beginning to give way to a standard from the Joint Test Action Group (JTAG).
FPGAs
While PALs were busy developing into GALs and CPLDs (all discussed above), a separate stream of development was happening. This type of device is based on gate-array technology and is called the field-programmable gate array (FPGA). Early examples of FPGAs were the 82S100 array and the 82S105 sequencer by Signetics, introduced in the late 1970s. The 82S100 was an array of AND terms; the 82S105 also had flip-flop functions.
FPGAs use a grid of logic gates, similar to that of an ordinary gate array, but the programming is done by the customer, not by the manufacturer. The term "field-programmable" means that the programming takes place outside the factory, or "in the field."
FPGAs are usually programmed after being soldered down to the circuit board, in a manner similar to that of larger CPLDs. In most larger FPGAs the configuration is volatile, and must be re-loaded into the device whenever power is applied or different functionality is required. Configuration is typically stored in a configuration PROM or EEPROM. EEPROM versions may be in-system programmable (typically via JTAG).
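Most SRAM-based FPGAs implement each logic cell as a small lookup table (LUT) whose contents come directly from the configuration data, which is why the function changes whenever a new configuration is loaded. A toy sketch of a 2-input cell in Python (a hypothetical illustration, not any vendor's real bitstream format):

```python
# A 2-input LUT cell: the 4 configuration bits *are* the truth table.
# Loading a different bitstream reprograms the cell; the same mechanism,
# scaled up, is how a volatile FPGA is configured at every power-up.
# Hypothetical illustration, not a real vendor format.

class Lut2:
    def __init__(self, config_bits):
        assert len(config_bits) == 4
        self.table = list(config_bits)

    def eval(self, a, b):
        # The inputs simply form an address into the configuration memory.
        return self.table[(b << 1) | a]

nand = Lut2([1, 1, 1, 0])   # configuration bits for NAND
xor  = Lut2([0, 1, 1, 0])   # same cell type, different bits: XOR

print([nand.eval(a, b) for a in (0, 1) for b in (0, 1)])  # → [1, 1, 1, 0]
```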
FPGAs and CPLDs are often equally good choices for a particular task. Sometimes the decision is more an economic one than a technical one, or may depend on the engineer's personal preference or experience.
Other variants
At present, much interest exists in reconfigurable systems. These are microprocessor circuits that contain some fixed functions and other functions that can be altered by code running on the processor. Designing self-altering systems requires that engineers learn new methods and that new software tools be developed.
PLDs are being sold now that contain a microprocessor with a fixed function (the so-called "core") surrounded by programmable logic. These devices let designers concentrate on adding new features to designs without having to worry about making the microprocessor work.
How PLDs retain their configuration
A PLD is a combination of a logic device and a memory device. The memory is used to store the pattern that was given to the chip during programming. Most of the methods for storing data in an integrated circuit have been adapted for use in PLDs. These include:
*Silicon antifuses
*SRAM
*EPROM or EEPROM cells
*Flash memory
Fuses and antifuses were the storage elements in the earliest PLDs, including the PAL. A fuse begins life as a connection and is blown open by passing an electric current through it during programming. An antifuse works the opposite way: applying a voltage across a modified area of silicon inside the chip creates a connection where none existed.
SRAM, or static RAM, is a volatile type of memory, meaning that its contents are lost each time the power is switched off. SRAM-based PLDs therefore have to be programmed every time the circuit is switched on. This is usually done automatically by another part of the circuit.
An EPROM cell is a MOS (metal-oxide-semiconductor) transistor that can be switched on by trapping an electric charge permanently on its gate electrode. This is done by a PAL programmer. The charge remains for many years and can only be removed by exposing the chip to strong ultraviolet light in a device called an EPROM eraser.
Flash memory is non-volatile, retaining its contents even when the power is switched off. It can be erased and reprogrammed as required. This makes it useful for PLD memory.
As of 2005, most CPLDs are electrically programmable and erasable, and non-volatile. This is because they are small enough that the inconvenience of reloading internal SRAM cells at every power-up is not justified, and UV-erasable EPROM cells are more expensive because of the ceramic package with a quartz window they require.
PLD programming languages
Many PAL programming devices accept input in a standard file format, commonly referred to as a 'JEDEC file'. These files are produced by logic compilers, which are analogous to software compilers. The languages used as source code for logic compilers are called hardware description languages, or HDLs.
PALASM and ABEL are frequently used for low-complexity devices, while Verilog and VHDL are popular higher-level description languages for more complex devices. The more limited ABEL is often used for historical reasons, but for new designs VHDL is more popular, even for low-complexity designs.
For modern PLD programming languages, design flows and tools see FPGA and Reconfigurable Computing.
PLD programming devices
A device programmer is used to transfer the Boolean logic pattern into the programmable device. In the early days of programmable logic, every PLD manufacturer also produced a specialized device programmer for its family of logic devices. Later, universal device programmers came onto the market that supported several logic device families from different manufacturers. Today's device programmers can usually program common PLDs (mostly PAL/GAL equivalents) from all existing manufacturers. Common file formats used to store the Boolean logic pattern (fuses) are JEDEC, Altera POF (Programmable Object File), and Xilinx Bitstream (see "PLD File Formats", http://www.pldtool.com/pld-file-formats).
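A JEDEC fuse file (JESD3) is plain ASCII built from '*'-terminated fields: a QF field gives the total fuse count, and each L field lists fuse states starting from a decimal fuse index. A reader for just that simplified subset (ignoring the STX/ETX framing, checksums and all other field types) might look like:

```python
# Minimal reader for a simplified subset of the JEDEC (JESD3) fuse-file
# format: only QF (total fuse count) and L (fuse list) fields are handled;
# STX/ETX framing, checksums and other fields are ignored. Sketch only.

def parse_jedec(text):
    fuses = {}
    count = None
    for field in text.split("*"):
        field = field.strip()
        if field.startswith("QF"):
            count = int(field[2:])          # total number of fuses
        elif field.startswith("L"):
            head, _, bits = field.partition(" ")
            start = int(head[1:])           # starting fuse index
            for i, bit in enumerate(b for b in bits if b in "01"):
                fuses[start + i] = int(bit)
    return count, fuses

count, fuses = parse_jedec("QF8* L0000 1010* L0004 0011*")
print(count, fuses)
```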
See also
* CPLD
* Macrocell array
* Programmable array logic (PAL)
* Field-programmable gate array (FPGA)
* Application-specific integrated circuit (ASIC)
* Programmable logic controller (PLC)
* Memristor
# Talk: Trigonometric Dirichlet series and Eichler integrals (Dalhousie)
Date: 2014/10/20
Occasion: Number Theory and Experimental Mathematics Day
Place: Dalhousie University
## Abstract
This talk is motivated by the secant Dirichlet series $$\psi_s(\tau) = \sum_{n = 1}^{\infty} \frac{\sec(\pi n \tau)}{n^s}$$, recently introduced and studied by Lalín, Rodrigue and Rogers as a variation of results of Ramanujan. We review some of its properties, which include a modular functional equation when $$s$$ is even, and demonstrate that the values $$\psi_{2 m}(\sqrt{r})$$, with $$r > 0$$ rational, are rational multiples of $$\pi^{2 m}$$. These properties are then put into the context of Eichler integrals of Eisenstein series of higher level. In particular, we determine the period polynomials of such Eichler integrals and indicate that they appear to give rise to unimodular polynomials, an observation which complements recent results by Conrey, Farmer and Imamoglu as well as El-Guindy and Raji on zeros of period polynomials of Hecke eigenforms in the case of level $$1$$. This talk is based on joint work with Bruce C. Berndt.
This function scans a Standard Person Query output for weeks in which collaboration hours are far below the mean for any individual person in the dataset. It returns a list of weeks that appear to be inactive weeks and, optionally, an edited data frame with the outliers removed.
As a best practice, run this function prior to any analysis to remove atypical collaboration weeks from your dataset.
identify_inactiveweeks(data, sd = 2, return = "text")
## Arguments
data
A Standard Person Query dataset in the form of a data frame.
sd
The number of standard deviations below the mean for collaboration hours that should define an outlier week. Enter a positive number. Default is 2 standard deviations.
return
String specifying what to return. This must be one of the following strings:
• "text"
• "data_cleaned"
• "data_dirty"
See Value for more information.
## Value
Returns a message by default, when 'text' is passed. When 'data_cleaned' is passed, a dataset with the outlier weeks removed is returned as a data frame. When 'data_dirty' is passed, a dataset containing only the outlier weeks is returned as a data frame.
Other Data Validation: check_query(), extract_hr(), flag_ch_ratio(), flag_em_ratio(), flag_extreme(), flag_outlooktime(), hr_trend(), hrvar_count_all(), hrvar_count(), hrvar_trend(), identify_churn(), identify_holidayweeks(), identify_nkw(), identify_outlier(), identify_privacythreshold(), identify_query(), identify_shifts_wp(), identify_shifts(), identify_tenure(), remove_outliers(), standardise_pq(), subject_validate_report(), subject_validate(), track_HR_change(), validation_report()
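The flagging rule itself (mark weeks whose collaboration hours fall more than sd standard deviations below that person's own mean) is simple enough to sketch outside R. A toy re-implementation in Python, where the column names (PersonId, MetricDate, Collaboration_hours) are assumptions modeled on a Standard Person Query, not the package's internals:

```python
# Toy re-implementation of the flagging rule: for each person, mark weeks
# whose collaboration hours are more than `sd` standard deviations below
# that person's mean. Column names mirror a Standard Person Query but are
# assumptions here, not the package's internals.
from collections import defaultdict
from statistics import mean, stdev

def identify_inactive_weeks(rows, sd=2):
    by_person = defaultdict(list)
    for r in rows:
        by_person[r["PersonId"]].append(r)
    flagged = []
    for person, weeks in by_person.items():
        hours = [w["Collaboration_hours"] for w in weeks]
        if len(hours) < 2:
            continue  # stdev needs at least two observations
        cutoff = mean(hours) - sd * stdev(hours)
        flagged += [w["MetricDate"] for w in weeks
                    if w["Collaboration_hours"] < cutoff]
    return flagged

rows = [{"PersonId": "a", "MetricDate": f"2022-W{i:02d}",
         "Collaboration_hours": h}
        for i, h in enumerate([20, 21, 19, 22, 20, 21, 2], start=1)]
print(identify_inactive_weeks(rows, sd=2))  # → ['2022-W07']
```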
# What is the relation between fractional ideals and divisors on curves?
I've done courses in Algebraic Number Theory and Algebraic Geometry, where I learned the theory of Dedekind domains (in ANT) and Divisors (in AG). Now, the wikipedia article on divisors says the following:
"The name "divisor" goes back to the work of Dedekind and Weber, who showed the relevance of Dedekind domains to the study of algebraic curves. The group of divisors on a curve (the free abelian group on its set of points) is closely related to the group of fractional ideals for a Dedekind domain."
So my question is:
What is this relation between fractional ideals and the group of divisors on a curve?
I would also appreciate some references that explain this relation in detail. Thank you in advance!
• The group of fractional ideals in a Dedekind domain is precisely the group of divisors on a smooth affine curve. – Mohan Nov 24 '16 at 21:13
• Chapter 14 of Ravi Vakil's Foundations Of Algebraic Geometry discusses the connection between fractional ideals, divisors, and invertible sheaves. – André 3000 Nov 25 '16 at 4:56
• @Mohan, could you elaborate a bit more, or perhaps give a reference for that connection? I don't quite see how to construct an affine curve from the group of fractional ideals. – u1571372 Nov 25 '16 at 16:31
• You cannot construct a curve from the fractional ideals, since the ring comes first, to even talk of fractional ideals. So, given a smooth affine curve, the ring of regular functions is a Dedekind domain. The closed points are defined by maximal ideals and the fractional ideals are of the form $P_1^{r_1}\cdots P_n^{r_n}$, $P_i$ maximal and $r_i\in\mathbb{Z}$. But these correspond to $\sum r_ix_i$, a divisor where the $x_i$ are the points corresponding to the $P_i$'s, and vice versa. – Mohan Nov 25 '16 at 17:34
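In symbols, for a Dedekind domain $A$ with $X = \operatorname{Spec} A$ a smooth affine curve, unique factorization of fractional ideals gives a group isomorphism (a summary of the correspondence described in the comments above):

$$\operatorname{Frac}(A) \;\xrightarrow{\ \sim\ }\; \operatorname{Div}(X), \qquad P_1^{r_1} \cdots P_n^{r_n} \;\longmapsto\; \sum_{i=1}^n r_i\,[x_i],$$

where $x_i$ is the closed point of $X$ with maximal ideal $P_i$. Principal fractional ideals $(f)$ map to principal divisors $\operatorname{div}(f) = \sum_P v_P(f)\,[x_P]$, so the map induces an isomorphism $\operatorname{Cl}(A) \cong \operatorname{Pic}(X)$ between the ideal class group and the divisor class group.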
# Goose Packet Expert Information for "Index & Tag"
Dear Wireshark Community,
This problem is an extension of an issue on GitLab.
We are trying to show detailed expert information for GOOSE packets. We expect to see:

1) where the error field is
2) what the error field is
3) why it is treated as an error
The following picture shows an example
This is a GOOSE packet and it is malformed because the length of the field "numDataSetEntries" is 0 (the highlighted part).
The reason is correctly shown (achieving (2)).
We also want to show the absolute index of the highlighted field to automate our analysis process. I have read that tvb_raw_offset might do the job; any hint for using it?
I am also wondering if it is possible to show the tag part in our case, since I think the only error part is the length field.
Best Regards,
Ke Wang
edit retag close merge delete
In the example above, you would like to print 0x8a (the byte before the zero length)?
What version of Wireshark is the screen shot?
Is that a custom build including the patch in the Gitlab issue?
( 2022-03-24 18:22:24 +0000 )edit
the build info is:
3.7.0 (v3.7.0rc0-1455-gf43ce70fd9cc)
I only added the expert info with this code:

--- a/epan/dissectors/packet-ber.c
+++ b/epan/dissectors/packet-ber.c
@@ -1864,6 +1864,15 @@
 proto_tree_add_debug_text(tree, "INTEGERnew dissect_ber_integer(%s) entered impl
     len = remaining>0 ? remaining : 0;
 }
+    if (len == 0) {
+        actx->created_item = NULL;
+        proto_tree_add_expert_format(
+            tree, actx->pinfo, &ei_ber_error_length, tvb, offset - len, 1,
+            "BER Error: Can't handle integer length: %u, index %i",
+            len, offset);
+        return offset;
+    }
+
you can also see it in the link I provided above (https://gitlab.com/wireshark/wireshar...)
There is no other modification.
( 2022-03-24 22:36:51 +0000 )edit
Are you looking to add C code or would a Lua plugin work?
GOOSE error fields
[Tag Number: 10]
BER type: 0x8a
BER length: 0
[Offset: 123]
( 2022-03-24 23:40:22 +0000 )edit
We are trying to add C code, but we are open to other options as long as they do the job. (By the way, I am still studying the code and have not looked at Lua plugins yet.) I really like your sample output; is it possible to display the BER type in more detail, e.g. numDataSetEntries?
( 2022-03-24 23:44:09 +0000 )edit
Kick the tires on this. If it looks promising, then it's not a big deal to add the type details.
-- 220324 - ask question - display GOOSE BER errors
--------------------------------------------------------
local goose_error_info =
{
version = "1.0.0",
author = "Chuck Craft",
description = "Display BER encoding errors",
}
set_plugin_info(goose_error_info)
-- we create a "protocol" for our tree
local goose_error_p = Proto.new("goosePdu_Error","GOOSE error fields")
local pf = {
tagnum = ProtoField.uint8("goose_error.tagnum", "Tag Number", base.DEC),
type = ProtoField.uint8("goose_error.type", "BER type", base.HEX),
length = ProtoField.uint8("goose_error.length", "BER length"),
value = ProtoField.string("goose_error.value", "BER value"),
offset = ProtoField.string("goose_error.offset", "Offset"),
}
-- we add our fields to the protocol
goose_error_p.fields = pf
-- fields to grab goosePdu data from each frame
goosePdu_fi = Field.new("goose.goosePdu_element")
-- let's do it!
function goose_error_p.dissector(tvb,pinfo,root)
if goosePdu_fi() then
local offset = 0
local tagnum
while offset < goosePdu_fi().len ...
(more)
( 2022-03-25 02:16:51 +0000 )edit
30 Dec
### ru valence electrons
A valence electron is an electron that "lives" in the last electron shell (or valence shell) of an atom, and it is the electron most likely to be involved in a chemical reaction. That is why elements whose atoms have the same number of valence electrons are grouped together in the Periodic Table. Valence (or valency) is an atom's, or group of atoms', ability to chemically unite with other atoms or groups. Since the outermost shell of an atom holds at most 8 electrons, an atom cannot have more than 8 valence electrons.

The electrons in an atom fill up its atomic orbitals according to the Aufbau principle ("Aufbau" is German for "building up"), which incorporates the Pauli exclusion principle and Hund's rule: 1s is filled before 2s, and 2s before 2p. As you move down a group, more electron shells are filled, and these filled inner shells cancel out part of the positive charge of the nucleus.

Basic data for ruthenium:

Symbol: Ru
Atomic number: 44
Atomic mass: 101.07 amu
Melting point: 2250.0 °C (2523.15 K, 4082.0 °F)
Boiling point: 3900.0 °C (4173.15 K, 7052.0 °F)
Protons/electrons: 44
Neutrons: 57
Classification: transition metal
Crystal structure: hexagonal
Density @ 293 K: 12.2 g/cm³
Color: silvery

Ruthenium's ground-state electron configuration is [Kr] 4d⁷ 5s¹, and Ru³⁺ is [Kr] 4d⁵. For a transition metal such as Ru, both the 4d and 5s electrons can act as valence electrons, which is why the simple "count the outermost shell" rule used for main-group elements does not apply directly.
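For main-group elements, counting valence electrons really is just summing the population of the highest shell, and that rule is mechanical enough to script. A small Python sketch (illustrative only; it does not apply to transition metals such as Ru, whose d electrons also participate):

```python
# Count main-group valence electrons: sum the electrons whose principal
# quantum number n equals the highest n in the configuration string.
# Expects explicit subshells like "3p5" (not compressed "3s2p6" notation).
# For transition metals like Ru the (n-1)d electrons also matter, so this
# simple rule does not apply to them.
import re

def valence_electrons(config):
    shells = {}
    for n, subshell, count in re.findall(r"(\d)([spdf])(\d+)", config):
        shells[int(n)] = shells.get(int(n), 0) + int(count)
    return shells[max(shells)]

print(valence_electrons("1s2 2s2 2p6 3s2 3p5"))      # chlorine: 7
print(valence_electrons("1s2 2s2 2p6 3s2 3p6 4s1"))  # potassium: 1
```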
Magnetic Fields elements are known as valence electrons directly influence how elements behave in a chemical element with atomic 44! Atoms ’ ability to chemically unite with other atoms or groups which pair of violence have... 3S2P6D10 4s2p6 5s1 3s2p6d10 4s2p6 5s1 are valance electrons 3s2p6d10 4s2p6 5s1 rubidium has one valence is. [ Kr ] 4d 5 5s 0 instead of having a full orbital. Or valency ) is an atom properties of the positive charge of the fundamental charged... Valance electrons, so a given atom can have between 0 and 7 valance electrons, illustrated in outer! Ones involved in forming bonds to adjacent atoms of electrons full outer shell electrons... 32 – 40 hours per week We are looking for 3+ electron Concept! Either as a GitHub Gist or to a local folder of a neutral atom... You may already know that an electron that 'lives ' in the s-orbital of the charge! Can have between 0 and 7 valance electrons configuration of np3 or couples and States... Electrons the electrons in the s and p orbitals are valance electrons electron... Then, save your Fiddle either as a GitHub Gist or to a local.... With other atoms or groups, the period that the transition metal is in also.. Are known as valence electrons are the electrons that are involved in bonding or valence shell ) of an.. Well known to us that the outermost region of atoms that enters into the formation of bonds! High Magnetic Fields symbol Rb on the periodic table this work provides guiding! Label orbital why is Ru 3+ electron configuration and Oxidation States of Ruthenium is a negatively particle... Or couples, any of the atom 's fifth energy level from its core-level upon absorption of electromagnetic radiation a. We are looking for is ru valence electrons electron configuration and Oxidation States of is! Are valance electrons by watching electron configuration Concept Videos has a total of electrons. Atom 's fifth energy level is that it misses the point rubidium has a total of 37 electrons so... 
In chemistry, valence electrons (VE) are the electrons located in the outermost region of an atom, i.e. in the last electron shell (the valence shell); they are the electrons that enter into the formation of chemical bonds. The electrons in the last orbit largely determine the electrical properties of an element and how it behaves in a chemical reaction, and the trouble with a purely formulaic definition is that it misses this point: what matters is why elements with a given number of valence electrons bond the way they do. Paired electrons occur as pairs or couples in an orbital, but unpaired electrons do not. The outermost shell of an atom holds at most 8 electrons, so a given atom cannot have more than 8 valence electrons; for main-group elements the valence electrons are the outer electrons in the s and p orbitals. A core electron, in contrast, can be removed from its core level upon absorption of sufficiently energetic electromagnetic radiation. Moving down the periodic table, more electron shells are filled, and these filled inner shells cancel out part of the positive charge of the nucleus.

Ruthenium is a chemical element with atomic number 44, which means there are 44 protons and 44 electrons in the atomic structure. Its ground-state configuration is [Kr] 4d7 5s1: instead of having a full 5s orbital, one electron stays in 5s while seven occupy 4d, because which orbital an electron "lives" in also matters. The partially filled d subshell means that the 4d and 5s electrons together act as ruthenium's valence electrons, which is why its oxidation state varies widely from compound to compound.

Rubidium, chemical symbol Rb on the periodic table, has a total of 37 electrons and the configuration 1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p6 5s1; its one valence electron is located in the atom's fifth energy level. A neutral K atom, similarly, has the configuration 1s2 2s2 2p6 3s2 3p6 4s1, and removing that single 4s electron leaves a full outer shell. Valence is a measure of an atom's, or a group of atoms', ability to chemically unite with other atoms or groups, and an electron is a negatively charged particle of an atom. All values of electron binding energies are given in eV.
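The outermost-shell bookkeeping described above can be sketched in code. This is an illustrative sketch, not from the source; the helper name and configuration strings are our own, and the simple highest-shell count works for main-group elements but undercounts transition metals such as ruthenium, where d electrons also act as valence electrons.

```python
import re

def valence_electrons(config):
    """Count electrons in the outermost (highest-n) shell of a
    ground-state configuration written like '1s2 2s2 2p6 3s1'.
    Note: ignores d/f electrons of inner shells, so it is only a
    main-group approximation."""
    shells = {}
    for n, _subshell, count in re.findall(r"(\d)([spdf])(\d+)", config):
        shells[int(n)] = shells.get(int(n), 0) + int(count)
    return shells[max(shells)]

# Rubidium: 37 electrons, one of them in the fifth energy level
print(valence_electrons("1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p6 5s1"))  # -> 1
# Sulfur: six outer-shell electrons
print(valence_electrons("1s2 2s2 2p6 3s2 3p4"))  # -> 6
```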
|
{}
|
The positive oxidation state is the total number of electrons removed from the elemental state. Name the compound SO2 using the Stock system: sulfur(IV) oxide. The oxidation number for sulfur in SO2 is +4. S8 is an elemental form of sulfur, and thus has an oxidation number of 0. The vanadium is now in an oxidation state of +4. Cerium is reduced to the +3 oxidation state (Ce3+) in the process. Any oxidation state decrease in one substance must be accompanied by an equal oxidation state increase in another. The left-hand side of the equation is therefore written as: MnO4- + 5Fe2+ + ? The right-hand side is written as: Mn2+ + 5Fe3+ + ? In this case, it is probable that the oxygen will end up in water, which must be balanced with hydrogen. The remaining atoms and the charges must be balanced using some intuitive guessing. To form an electrically neutral compound, the copper must be present as a Cu2+ ion. This is not a redox reaction. The oxidation state is +3. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0.
Chlorine in compounds with fluorine or oxygen: because chlorine adopts such a wide variety of oxidation states in these compounds, it is safer to simply remember that its oxidation state is not -1, and to work the correct state out using fluorine or oxygen as a reference. Determine the oxidation number of sulfur in each of the following substances: A) barium sulfate, BaSO4; B) SCl2; C) S8. Let n equal the oxidation state of chromium: what is the oxidation state of chromium in Cr(H2O)6^3+? Due to the presence of a vacant d-orbital, and hence its tendency to form different compounds, its oxidation number varies. If the process is reversed, or electrons are added, the oxidation state decreases. The oxidation state of the molybdenum increases by 4. The NaCl chlorine atom is reduced to a -1 oxidation state; the NaClO chlorine atom is oxidized to a state of +1. Removal of another electron forms the ion VO2+. This can also be extended to negative ions. Working with redox reactions is fundamentally a bookkeeping issue. Therefore, there must be five iron(II) ions reacting for every one manganate(VII) ion. The oxidation number of hydrogen in a compound is +1, except in metal hydrides such as NaH, when it is -1. Ions containing cerium in the +4 oxidation state are oxidizing agents, capable of oxidizing molybdenum from the +2 to the +6 oxidation state (from Mo2+ to MoO4^2-). Therefore, the oxidation state of the cerium must decrease by 4 to compensate. The sulfur thus has an oxidation number of +2. The oxidation state of the sulfur is +4. The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot.
The problem in this case is that the compound contains two elements (the copper and the sulfur) with variable oxidation states. The oxidation state of the sulfur is +6 (work it out!). E.g. FeSO4 is properly named iron(II) sulfate(VI), and FeSO3 is iron(II) sulfate(IV). Therefore, there must be 4 cerium ions involved for each molybdenum ion; this fulfills the stoichiometric requirements of the reaction. This is a lot more sensible.

Question: what is the oxidation number of sulfur in the S2O6^2- ion? A) -2 B) +3 C) +5 D) +4 E) +2. What is the oxidation number of S in S2O3^2-?

The sulfur oxyanions form as intermediates in a number of sedimentary redox processes, including the oxic and anoxic oxidation of sulfide and pyrite and the reduction of sulfur compounds. Another species in the reaction must have lost those electrons. The reaction between chlorine and cold dilute sodium hydroxide solution is given below:

$2NaOH + Cl_2 \rightarrow NaCl + NaClO + H_2O$

The sulfur oxyanions with sulfur oxidation numbers between -1 and +6 are unstable in low-temperature aqueous systems with respect to stable sulfide, sulfate and sulfur (Fig. 14). Notice that the oxidation state isn't always the same as the charge on the ion (that was true for the first two cases but not for the third). Each time an oxidation state changes by one unit, one electron has been transferred.
This two-step process occurs because sulfide is a better electron donor than inorganic sulfur or thiosulfate; this allows a greater number of protons to be translocated across the membrane. What is the oxidation number of each sulfur atom in the following? The reaction AgNO3(aq) + NH4Br(aq) -> AgBr(s) + NH4NO3(aq) involves no changes in oxidation number and is therefore not classified as a redox reaction. Oxidation number: in chemistry, the total number of electrons gained or lost by an atom in making a chemical bond with another atom is known as its oxidation number. What is the oxidation number of sulfur in H2SO4? Oxidation: loss of electrons, increase in oxidation number. Reduction: gain of electrons, decrease in oxidation number. Oxidation and reduction must occur simultaneously.
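The bookkeeping rule used throughout this passage, that oxidation numbers weighted by atom counts sum to the overall charge of the species, can be automated. A minimal sketch; the function name and argument layout here are our own, not from the source:

```python
def unknown_oxidation(known, unknown_count, charge=0):
    """Solve for one unknown oxidation number, given (oxidation_number,
    atom_count) pairs for the other elements and the species' charge."""
    total_known = sum(ox * n for ox, n in known)
    return (charge - total_known) / unknown_count

print(unknown_oxidation([(-2, 2)], 1))            # S in SO2      -> 4.0
print(unknown_oxidation([(+1, 2), (-2, 4)], 1))   # S in H2SO4    -> 6.0
print(unknown_oxidation([(-2, 3)], 1, charge=-2)) # S in SO3^2-   -> 4.0
print(unknown_oxidation([(+1, 2), (-2, 4)], 2))   # S in Rb2S2O4  -> 3.0
```

The result is an average over the atoms of the unknown element, which is why fractional values (such as +2.5) can appear for ions with sulfur atoms in mixed oxidation states.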
The oxidation number of each atom can be calculated by subtracting the sum of lone pairs and electrons it gains from bonds from the number of valence electrons. In H2, both H atoms have an oxidation number of 0. You will have come across names like iron(II) sulfate and iron(III) chloride. The oxidation number of metals always has a positive sign. Sulfur forms different oxidation numbers, from -2 to +6 (in H2S, for example, it is -2). Recall that the oxidation number of oxygen is typically -2. Because of the potential for confusion in these names, the older names of sulfate and sulfite are more commonly used in introductory chemistry courses. Because Group 1 metals always have an oxidation state of +1 in their compounds, it follows that the hydrogen must have an oxidation state of -1 (+1 -1 = 0). The oxidation state of a simple ion like hydride is equal to the charge on the ion; in this case, -1. The average of these is +2.5. Atoms in monatomic (i.e., one-atom) ions are assigned an oxidation number equal to their charge. In the process of transitioning to manganese(II) ions, the oxidation state of manganese decreases by 5. There are a few exceptions to this rule: when oxygen is in its elemental state (O2), its oxidation number is 0, as is the case for all elemental atoms. This is the most common function of oxidation states. Using the rule and adding the oxidation numbers in the compound, the equation becomes x + (-4) = 0. What is the oxidation number of the sulfur atom in Cs2SO3? Counting the number of electrons transferred is an inefficient and time-consuming way of determining oxidation states. These rules provide a simpler method. Hydrogen in the metal hydrides: metal hydrides include compounds like sodium hydride, NaH. What is the oxidation state of chromium in Cr2+? As a whole, the sulfur dioxide molecule has an oxidation number of 0. The oxidation state of the sulfur is +4. Every reactive iron(II) ion increases its oxidation state by 1. Hydrogen also has different oxidation numbers: -1, 0, +1.
This is impossible for vanadium, but is common for nonmetals such as sulfur: here the sulfur has an oxidation state of -2. E) Based on these compounds, what is the range of oxidation numbers seen for sulfur? When oxygen is part of a peroxide, its oxidation number is -1. What is the oxidation number of the nitrogen atom in the ammonium ion? What are the reacting proportions? The hydrogen will have an oxidation number of +1. The oxidation state of an atom is equal to the total number of electrons which have been removed from an element (producing a positive oxidation state) or added to an element (producing a negative oxidation state) to reach its present state. The -ate ending indicates that the sulfur is in a negative ion.

$V^{3+} + H_2O \rightarrow VO^{2+} + 2H^+ + e^-$

The drawing shows a sulfur dioxide molecule. The oxidation state of hydrogen has decreased; hydrogen has been reduced. If the oxidation state of one substance in a reaction decreases by 2, it has gained 2 electrons. Remember: in each of the following examples, we have to decide whether the reaction is a redox reaction, and if so, which species have been oxidized and which have been reduced. What is the oxidation state of copper in CuSO4? Oxidation numbers are usually written with the sign first, then the magnitude, to differentiate them from charges. What is the oxidation number of each sulfur atom in the compound Rb2S2O4? What is the oxidation number of oxygen in CO2? It is -2. Assign an oxidation number of -2 to oxygen (with exceptions). Solving for x, it is evident that the oxidation number for sulfur is +4. The oxidation number of an atom in elemental form is 0. Na2SO3. Recognizing this simple pattern is the key to understanding the concept of oxidation states.
Oxidation states can be useful in working out the stoichiometry for titration reactions when there is insufficient information to work out the complete ionic equation. The fully balanced equation is displayed below:

$MnO_4^- + 8H^+ + 5Fe^{2+} \rightarrow Mn^{2+} + 4H_2O + 5Fe^{3+}$

Carbon must have an oxidation number of +4 to balance the two oxygens. The oxidation number of oxygen in a compound is -2, except in peroxides when it is -1. To find this oxidation number, it is important to know that the sum of the oxidation numbers of atoms in compounds that are neutral must equal zero. If electrons are added to an elemental species, its oxidation number becomes negative. This is the reaction between magnesium and hydrogen chloride: assign each element its oxidation state to determine if any change states over the course of the reaction. The oxidation state of magnesium has increased from 0 to +2; the element has been oxidized. Because each hydrogen has an oxidation state of +1, each oxygen must have an oxidation state of -1 to balance it.

The change in oxidation state of an element during a reaction determines whether it has been oxidized or reduced, and displaying oxidation numbers simplifies the process of determining what is being oxidized and what is being reduced in redox reactions. In the reaction between chlorine and cold sodium hydroxide solution, chlorine is the only element to have changed oxidation state; a reaction in which a single substance is both oxidized and reduced is called a disproportionation reaction. Atoms of elements in their elemental state are assigned an oxidation state of 0, and bonds between atoms of the same element (homonuclear bonds) are always divided equally. In a neutral compound the total of the oxidation numbers must be zero, while in an ion it must equal the charge on the ion.

Iron(II) and iron(III) compounds contain Fe2+ and Fe3+ ions, with oxidation states of +2 and +3 respectively. Chromium has an oxidation state of +2 as the Cr2+ ion, +3 in CrCl3 and as the unattached chromium ion Cr3+, and +6 in the dichromate ion Cr2O7^2-, in which 2 chromium atoms are present. Each time the vanadium is oxidized it loses one electron; it is possible to remove a fifth electron to form another ion, with the vanadium now +5, and the vanadium can also be reduced back to elemental vanadium, with an oxidation state of 0. There are 4 cerium-containing ions to 1 molybdenum ion.

In the compound sulfur dioxide (SO2), the sulfur is bonded to two oxygen atoms and has an oxidation state of +4; in sulfuric acid (H2SO4) it is bonded to oxygen atoms through four covalent bonds and has an oxidation state of +6. The sulfite ion is SO3^2-, in which sulfur is +4; since the sulfur in sulfate has a higher oxidation number than in sulfur dioxide, it is said to be more highly oxidized. An average value such as +2.5 means the ion must have sulfur atoms with mixed oxidation states. For Rb2S2O4 itself: rubidium is +1 and oxygen is -2, so 2(+1) + 2x + 4(-2) = 0, giving x = +3 for each sulfur atom. The oxidation of sulfide occurs in stages, with inorganic sulfur being stored either inside or outside of the cell. In cases like these, some chemical intuition is useful.
|
{}
|
# Center is local powering-invariant
This article gives the statement, and possibly proof, of the fact that for any group, the subgroup obtained by applying a given subgroup-defining function (i.e., center) always satisfies a particular subgroup property (i.e., local powering-invariant subgroup).
## Statement
The center of a group is a local powering-invariant subgroup. Explicitly, suppose $G$ is a group and $Z$ is the center. Suppose $z \in Z$ and $n$ is a natural number such that there is a unique $x \in G$ satisfying $x^n = z$. Then, $x \in Z$.
## Facts used
1. Group acts as automorphisms by conjugation
## Proof
Given: Group $G$ with center $Z$. Element $z \in Z$ and natural number $n$ such that there exists a unique $x \in G$ satisfying $x^n = z$.
To prove: $x \in Z$. In other words, $yxy^{-1} = x$ for all $y \in G$.
Proof: We have by Fact (1) that:
$\! (yxy^{-1})^n = yx^ny^{-1}$
Simplifying further, we get that:
$\! (yxy^{-1})^n = yx^ny^{-1} = yzy^{-1} = z$
where we use that $x^n = z \in Z$. Since $x$ is the unique element of $G$ whose $n^{th}$ power is $z$, the above forces that $yxy^{-1} = x$.
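A brute-force sanity check of the statement in a small nonabelian group can be illustrative. The sketch below is ours, not part of the article: it models the quaternion group $Q_8$ as the eight unit quaternions and verifies that whenever a central element has a unique $n$-th root, that root is central.

```python
from itertools import product

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

e = (1, 0, 0, 0)
Q8 = [tuple(s*v for v in vec) for vec in
      [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
      for s in (1, -1)]

def power(g, n):
    r = e
    for _ in range(n):
        r = qmul(r, g)
    return r

center = [z for z in Q8 if all(qmul(z, g) == qmul(g, z) for g in Q8)]

# Whenever a central z has a *unique* n-th root x in Q8, x is central too.
for z, n in product(center, range(1, 9)):
    roots = [x for x in Q8 if power(x, n) == z]
    if len(roots) == 1:
        assert roots[0] in center

print(sorted(center))  # -> [(-1, 0, 0, 0), (1, 0, 0, 0)]
```

Note that uniqueness is essential: $-1$ has six square roots in $Q_8$ (namely $\pm i, \pm j, \pm k$), none of which are central.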
|
{}
|
Note: This version of the model includes vaccinations, based on the following data and assumptions:
• As of 3/17/2021, of the overall LA County population, 1,057,794 = 10% of the population have received 1st and 2nd doses
• As of 3/8/2021, of the population of individuals 65 and over, 815,271 = 60% have received at least their 1st dose.
We make the following assumptions to fit our model to this data:
Assumptions: General population
• We model only full protection from the vaccine, i.e. 1st and 2nd doses of Moderna and Pfizer or 1 dose of J&J.
• We assume 10,000 vax / day during January and February, and 20,000 vax / day from March 1 - Sys.Date(). This equals approximately 1,057,794 1st and 2nd doses by 3/17/21.
Assumptions: 65+ population
• We estimate that by 3/8/2021, 50% of individuals 65+ that have received at least a 1st dose have received their 2nd dose, i.e. 407,636 of the 815,271 have received both their 1st and 2nd doses by 3/8/2021.
• We assume 3,500 vax / day during January and February, and 7,000 vax / day from March 1 - Sys.Date(). This equals approximately 407,636 1st and 2nd doses by 3/8/21. We estimate that the same rate continues through to Sys.Date().
Please see the last two sub-figures in the Model Fits figure for a visual of these timelines.
## Numbers infected
• New = new daily incidence
• Current = current census in compartment
• Cumulative = running total over time
• Black dots depict COVID-19 data
## Numbers of Hospitalizations, ICU admissions, Deaths
• New = new daily incidence
• Current = current census in compartment
• Cumulative = running total over time
• Black dots depict COVID-19 data
• Dotted black line marks healthcare capacity limits
## Model fits
This figure summarizes the epidemic model fit to COVID-19 data for LAC from March 1 through 2021-03-18 for all disease states, across multiple views: new cases, representing new daily incidence; the current number in a compartment at a specific date, relevant for understanding current prevalence and for comparison with healthcare capacity limits; and cumulative counts up to a specific date. Observed data are plotted as black dots. The figure demonstrates that good model fits are achieved in all compartments across time.
• New = new daily incidence
• Current = current census in compartment
• Cumulative = running total over time
• Black dots depict COVID-19 data
• The dashed line represents healthcare resource capacity limits
### Model projections through 2021-03-30
Projections assume that the infectious rate estimated as of 2021-03-20 remains in effect.
## Estimated epidemic parameters
### Reproductive Number, $$R(t)$$
This plot shows the time-varying reproductive number R(t), NOT its effective value: it does not account for herd immunity. It should be read as an indication of how much transmission is happening, not of realized epidemic growth.
### Probabilities of severe illness
• Probability of hospitalization given infection, $$\alpha(t)$$
• Probability of ICU admission given hospitalization, $$\kappa(t)$$
• Probability of death given ICU admission, $$\delta(t)$$
## Tables: Parameter estimates
### $$R0$$, $$r(t)$$, $$\mu(t)$$
| Parameter | mean (95% CI) |
|---|---|
| R0 | 3.573 (3.375, 3.712) |
| R(t) 2020-03-27 | 0.869 (0.82, 0.933) |
| R(t) 2020-05-15 | 1.251 (1.101, 1.372) |
| R(t) 2020-11-26 | 1.98 (1.832, 2.144) |
| r(t) 2020-04-15 | 0.245 (0.153, 0.338) |
| r(t) 2020-08-15 | 0.385 (0.158, 0.778) |
### Probabilities of severe illness
• Probability of hospitalization given infection, $$\alpha(t)$$
• Probability of ICU admission given hospitalization, $$\kappa(t)$$
• Probability of death given ICU admission, $$\delta(t)$$
| Date | Alpha_t mean (95% CI) | Kappa_t mean (95% CI) | Delta_t mean (95% CI) |
|---|---|---|---|
| 2020-05-01 | 0.14 (0.131, 0.146) | 0.607 (0.596, 0.619) | 0.563 (0.551, 0.58) |
| 2020-08-01 | 0.048 (0.033, 0.067) | 0.548 (0.541, 0.56) | 0.508 (0.5, 0.52) |
### CFR and IFR
| Date | CFR mean (95% CI) | IFR mean (95% CI) |
|---|---|---|
| 2020-03-01 | 0.0081 (0.0012, 0.0182) | 0.002 (0.0003, 0.0048) |
| 2020-03-15 | 0.008 (0.005, 0.0115) | 0.002 (0.0009, 0.0033) |
| 2020-04-01 | 0.0146 (0.0122, 0.0176) | 0.0036 (0.002, 0.0052) |
| 2020-04-15 | 0.0251 (0.022, 0.0286) | 0.0062 (0.0034, 0.009) |
| 2020-05-01 | 0.0326 (0.0288, 0.0368) | 0.008 (0.0045, 0.0116) |
| 2020-05-15 | 0.0353 (0.0313, 0.0398) | 0.0087 (0.0049, 0.0126) |
| 2020-06-01 | 0.0342 (0.0301, 0.0396) | |
|
{}
|
# Implicit heat conduction
For constant diffusivity, $\alpha$, in space,
$$\frac{\partial T}{\partial t} = \alpha \left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2}\right) + H$$
For spatially varying diffusivity,
$$\frac{\partial T}{\partial t} = \frac{\partial \left(\alpha \frac{\partial T}{\partial x} \right)}{\partial x} + \frac{\partial \left(\alpha \frac{\partial T}{\partial y} \right)}{\partial y} + H$$
We use finite differences to approximate the second-order spatial derivatives. Since we use a centred differencing scheme, the diffusivities are evaluated at half space steps, centred on the gradients they multiply:
$$\frac{\partial \left(\alpha \frac{\partial T}{\partial x} \right)}{\partial x} = \frac{1}{\Delta x} \left( \frac{ \alpha_{i+1/2,j} (T_{i+1,j}-T_{i,j}) }{\Delta x} - \frac{ \alpha_{i-1/2,j} (T_{i,j}-T_{i-1,j}) }{\Delta x} \right)$$
and similarly in $y$,
$$\frac{\partial \left(\alpha \frac{\partial T}{\partial y} \right)}{\partial y} = \frac{1}{\Delta y} \left( \frac{ \alpha_{i,j+1/2} (T_{i,j+1}-T_{i,j}) }{\Delta y} - \frac{ \alpha_{i,j-1/2} (T_{i,j}-T_{i,j-1}) }{\Delta y} \right)$$
where $\alpha_{i+1/2,j}$ can be averaged by,
$$\alpha_{i+1/2,j} = \frac{\alpha_{i+1,j} + \alpha_{i,j}}{2}$$
Expanding out the difference and collecting like terms ($$T_{i+1,j}, T_{i,j}, T_{i-1,j}$$) in one dimension gives,
\begin{align} 2\Delta x^2 \left( \frac{\partial T}{\partial t} - H \right) =& \; [\alpha_{i+1,j}T_{i+1,j} + \alpha_{i,j}T_{i+1,j}] \\ & - [\alpha_{i+1,j}T_{i,j} + 2\alpha_{i,j}T_{i,j} + \alpha_{i-1,j}T_{i,j}] \\ & + [\alpha_{i-1,j}T_{i-1,j} + \alpha_{i,j}T_{i-1,j}] \end{align}
## Boundary conditions
• The surface boundary condition is a constant (Dirichlet) temperature, $$T(0,t)=T_0$$.
• The left and right boundaries are assigned insulated boundary conditions, $$\frac{\partial T}{\partial x}=0$$:
$$\frac{D_{N}+D_{N-1}}{2 \Delta x^2} - \frac{H_{N}+H_{N-1}}{2}$$
• The bottom boundary condition is a Neumann (flux) boundary condition, $$\frac{\partial T}{\partial x}=Q$$:
$$\frac{D_{N}+D_{N-1}}{2 \Delta x^2} - \frac{H_{N}+H_{N-1}}{2} + \frac{Q_{N}}{\Delta x}$$
## Matrix notation
This can be expressed in matrix form to solve algebraically, $$Ax = b$$ where $$A$$ is the coefficient matrix. In 1D,
$$\begin{pmatrix} \alpha_{2,1}+\alpha_{1,1} & -\alpha_{2,1}-2\alpha_{1,1}-\alpha_{0,1} & \alpha_{0,1}+\alpha_{1,1} & 0 & 0 \\ 0 & \alpha_{3,1}+\alpha_{2,1} & -\alpha_{3,1}-2\alpha_{2,1}-\alpha_{1,1} & \alpha_{1,1}+\alpha_{2,1} & 0 \\ 0 & 0 & \alpha_{4,1}+\alpha_{3,1} & -\alpha_{4,1}-2\alpha_{3,1}-\alpha_{2,1} & \alpha_{2,1}+\alpha_{3,1} \\ \end{pmatrix} \begin{bmatrix} T_{1,1} \\ T_{2,1} \\ T_{3,1} \end{bmatrix} = \begin{bmatrix} H_{1,1} \\ H_{2,1} \\ H_{3,1} \end{bmatrix}$$
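As an illustration, the implicit (backward Euler) form of this scheme can be assembled and solved with NumPy. This is a minimal sketch, not Dr. Mather's implementation: the names `implicit_step`, `T_top` and `Q_bottom` are chosen here for illustration. It uses the arithmetic-mean interface diffusivities from the text, a Dirichlet temperature at the surface and a ghost-node Neumann flux at the bottom.

```python
import numpy as np

def implicit_step(T, alpha, H, dx, dt, T_top, Q_bottom):
    """One backward-Euler step of 1-D heat conduction with spatially
    varying diffusivity alpha and heat production H.

    Interface diffusivities are arithmetic means, as in the text.
    Node 0 has a fixed temperature (Dirichlet); the last node has a
    prescribed flux Q_bottom (Neumann), imposed with a ghost node.
    """
    n = T.size
    r = dt / dx**2
    A = np.zeros((n, n))
    b = T + dt * H

    # Dirichlet condition at the surface (node 0).
    A[0, 0] = 1.0
    b[0] = T_top

    # Interior nodes: (I - dt*L) T_new = T_old + dt*H.
    for i in range(1, n - 1):
        a_plus = 0.5 * (alpha[i + 1] + alpha[i])    # alpha_{i+1/2}
        a_minus = 0.5 * (alpha[i - 1] + alpha[i])   # alpha_{i-1/2}
        A[i, i - 1] = -r * a_minus
        A[i, i] = 1.0 + r * (a_plus + a_minus)
        A[i, i + 1] = -r * a_plus

    # Bottom Neumann condition via a ghost node:
    # (T_ghost - T[n-2]) / (2 dx) = Q  =>  T_ghost = T[n-2] + 2 dx Q,
    # approximating both interface diffusivities by the last interior one.
    i = n - 1
    a_minus = 0.5 * (alpha[i - 1] + alpha[i])
    A[i, i - 1] = -2.0 * r * a_minus
    A[i, i] = 1.0 + 2.0 * r * a_minus
    b[i] += 2.0 * r * a_minus * dx * Q_bottom

    return np.linalg.solve(A, b)
```

A quick sanity check: with `Q_bottom = 0` and `H = 0`, a uniform temperature equal to `T_top` is a fixed point of the step.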
##### Dr. Ben Mather
###### Computational Geophysicist
My research interests include Bayesian inversion, Python programming and Geodynamics.
|
{}
|
Authors: M. Severo, T. Venturini.
Title: Intangible Cultural Heritage Webs. Comparing national networks with digital methods.
New Media & Society (forthcoming).
The 2003 UNESCO Convention for the Safeguarding of Intangible Cultural Heritage (ICH) is addressed to the States and assigns them several tasks. However, no State can accomplish all these tasks without mobilizing a wide network of institutions, associations, and individuals. National ICH policies intersect, overlap, and often transform the existing relationships among these actors. This study compares several national networks (France, Italy, and Switzerland) involved in the implementation of the 2003 UNESCO Convention in order to highlight national trends and specificities. The analysis employs an innovative methodology based on digital methods to explore the landscapes of websites dedicated to intangible heritage. By analyzing the hyperlinking strategies of ICH actors, we identify the specific web topology of each nation, showing which actors are central and peripheral, whether clusters or cliques are formed, and who plays the roles of authority and hub.
Figure 1. Heatmap of the Italian network of the ICH websites (colored by dimension)
Figure 2. Heatmap of the Italian network of the ICH websites (colored by type)
Figure 3. Heatmap of the French network of the ICH websites (colored by dimension)
Figure 4. Heatmap of the French network of the ICH websites (colored by type)
Figure 5. Heatmap of the Swiss network of the ICH websites (colored by dimension)
Figure 6. Heatmap of the Swiss network of the ICH websites (colored by type)
|
{}
|
# What is this linear operator/matrix?
I have a linear operator with its matrix in certain coordinates to be
$$\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & \frac{1}{2} & 0 & \cdots & 0 \\ 0 & 0 & \frac{1}{3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \frac{1}{n} \end{pmatrix}$$
What is this linear operator? How could I construct it without referring to coordinates?
-
linear operator it is kernel yes? – dato datuashvili May 9 '12 at 19:02
i think it is null space ,or all set of vectors x,for which $A*x=0$ – dato datuashvili May 9 '12 at 19:04
@dato: this matrix is full rank – Alex R. May 9 '12 at 19:07
yes i see determinant is not zero,but how it is related to linear operator? – dato datuashvili May 9 '12 at 19:09
Of course it could be any number of things, but one operator with this matrix is the one that assigns to every polynomial $p(x)$ of degree less than $n$ the polynomial $\frac1x\int_0^xp(t)\,\mathrm dt$.
-
i did not know it @joriki,what is in this case polynomial?matrix or? – dato datuashvili May 9 '12 at 19:07
@dato: It's nothing special in this case, just a plain vanilla polynomial in one variable, the kind you're likely to meet in the street on your way to the bus stop. – joriki May 9 '12 at 19:08
That's exactly how I came to this operator in connection with math.stackexchange.com/questions/142941/… I just hoped that there is some other interpretation I could rely on. – Yrogirg May 9 '12 at 19:09
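joriki's description can be verified with a quick SymPy computation (a sketch, not part of the original thread): on the basis $\{1, x, \dots, x^{n-1}\}$ of polynomials of degree less than $n$, the map $p \mapsto \frac1x\int_0^x p(t)\,\mathrm dt$ sends $x^k$ to $x^k/(k+1)$, which is exactly the diagonal matrix above.

```python
import sympy as sp

x, t = sp.symbols('x t')
n = 5  # example dimension

# Apply p(x) -> (1/x) * integral_0^x p(t) dt to each basis monomial x^k.
for k in range(n):
    p = t**k
    q = sp.integrate(p, (t, 0, x)) / x
    # The image is x^k / (k+1): the operator is diagonal
    # with entries 1, 1/2, ..., 1/n on this basis.
    assert sp.simplify(q - x**k / (k + 1)) == 0
```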
|
{}
|
# Are minimal Groebner bases minimized bases?
With "minimal Groebner basis" I mean, fixed an ordering, a Groebner basis $$G$$ such that any proper subset of $$G$$ is no more a Groebner basis for the ideal $$I(G)$$ generated by $$G$$.
With "minimized basis" I mean a basis $$B$$ such that any proper subset of $$B$$ is no more a basis for $$I(B)$$.
So, can there exist a minimal Groebner basis which is not minimized?
I can't find a contradiction, but I can't find a counterexample either.
A minimal lex Groebner basis which is not "minimized" is given by $$\{ x^2 + y, xy - y, y^2 + y \}$$ in $$\mathbb{Q}[x,y]$$.
This is clearly a minimal lex Groebner basis, but $$y^2 + y=y\cdot(x^2+y) - (x+1)\cdot(xy-y)$$ so the third element is superfluous as a generator of the ideal.
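The identity in the answer can be checked mechanically with SymPy (a sketch, not part of the original post); the second check confirms that $$y^2+y$$ indeed lies in the ideal generated by the other two elements, by reducing it against a Groebner basis of that smaller ideal.

```python
from sympy import symbols, expand, groebner

x, y = symbols('x y')

# Verify the stated identity: y^2 + y = y*(x^2 + y) - (x + 1)*(x*y - y).
assert expand(y*(x**2 + y) - (x + 1)*(x*y - y)) == expand(y**2 + y)

# Hence y^2 + y lies in the ideal generated by the first two elements:
# its remainder on division by a Groebner basis of that ideal is zero.
G2 = groebner([x**2 + y, x*y - y], x, y, order='lex')
quotients, remainder = G2.reduce(y**2 + y)
assert remainder == 0
```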
|
{}
|
Tag Info
Accepted
Help understanding link between module and representation.
Equation (4) is wrong: the $ij$ entry of the product matrix $\rho(g)\rho(h)$ is not $\rho_{ij}(g)\rho_{ij}(h)$ (you don't multiply matrices by multiplying them entry-by-entry). Instead, the $ij$ ...
Accepted
Two examples for projective resolutions for finite dimensional algebras
There is no example satisfying (a). Suppose $(P_\bullet,d_\bullet)$ is a minimal projective resolution of $M$, so that $\Omega^i(M)=\operatorname{im}d_i=\ker d_{i-1}=N\oplus N'$. Then the composition ...
1 vote

This is false. For instance, if $A=M_2(\mathbb{C})$, then you could have $e=\begin{pmatrix} 1&0 \\ 0&0 \end{pmatrix}$ and $P=\begin{pmatrix} 0&1 \\ 0&1\end{pmatrix}$, and then $eP=\dots$

1 vote
Accepted

Help showing associativity when multiplying group element by vector.

Replace $$x \boldsymbol{v}_i=\sum_j \rho_{i j}(x) \boldsymbol{v}_j$$ by $$x \boldsymbol{v}_j=\sum_i\rho_{i j}(x) \boldsymbol{v}_i.$$

1 vote
Accepted

Computationally representing a Fuchsian group

[…] But Mathematica didn't express the answer in terms of $\phi$. There seems to be some theory that I'm missing. Personally I would take the distance for the translation not from the matrix equation ...

1 vote

Can GAP determine whether a local algebra is Frobenius?

The MeatAxe works for finite dimensional modules of associative algebras, given by matrices for algebra generators, describing the action on the module. Build matrices for the regular module (by ...

1 vote
Accepted

Irreducible Characters and the Dual Space

Here's the situation for complex representations of finite groups: $$\begin{array}{l|l|l} \textrm{space} & \textrm{basis} & \textrm{dual basis} \\ \hline \textrm{all functions} & \dots \end{array}$$
|
{}
|
We will start with very simple problems and then gradually build up to harder problems.

Idle problems may originate in a worn out component, a failed part, or a blown gasket. Engine rough idle problems arise because car engines are demanding: components in the fuel, ignition, emission and other systems should work correctly, and it only takes a minor problem like a fouled spark plug to cause your engine to vibrate. Kevin and Jesse both take on other odd jobs to supplement their incomes, either by helping friends or by buying up cars that need work, fixing them, and reselling them at a profit. Given how tough it can be to survive on what they make in shops, mechanics often turn to outside work to make more money; bills would be much harder to pay without this informal work stream. In the office, mechanics help answer phones when necessary and talk to customers about the nature of the vehicle problem. Often, this necessitates taking the client into the garage to show exactly what's wrong with the car; in turn, this helps gain customer trust, which is something many in the industry strive to achieve. Since beginning work for the Defense Logistics Agency (DLA), Cablecraft Motion Controls has saved the U.S. government more than $10 million, including $2 million on the maintenance of the A-10 Thunderbolt (Warthog). These savings are a result of a government program known as Source … "Our opening line to potential recruits," Sanchez says, "is 'This stuff matters.'" Chopper comes a cropper: a Chinook helicopter got bogged down in a muddy field after mechanical failure forced it to make an emergency landing 20 …

Founded in 2005, Math Help Forum is dedicated to free math help and math discussions, and our math community welcomes students, teachers, educators, professors, mathematicians, engineers, and scientists. Hi Students, welcome to our Free IIT-JEE Coaching Platform run by IIT Graduates; we thank you for giving such a huge response to our platform. In this post on Free IIT-JEE Physics Notes, I am sharing an excellent Advanced Level Problem (ALP) Question Bank of 100 questions on Rotational Motion; in an accompanying lecture, a tough numerical problem of Rotational Mechanics is solved in detail and the related concepts are discussed. More emphasis is placed on the topics of physics included in the SAT physics subject, with hundreds of problems with detailed solutions. Real life applications are also included, as they show how these concepts in physics are used in engineering systems. The list is endless, but we make sure most of the important problems are covered. The exams section contains 12 practice exams, solutions, and formula sheets for the course. You should be mentally prepared at first: it suffices to know just two things to get this right. Here at Popular Mechanics, strap on that thinking cap: how many of these tough logic puzzles can you solve? These 10 brutally difficult math problems once seemed impossible until mathematicians eventually solved them, even if it took them years, decades, or centuries. Hard problems can also be a good thing for a company looking to hire machine learning engineers. If you are in India and going to start your career as a civil site engineer, kindly read this: we have discussed with civil site engineers, gone through discussions and answers on Quora, and come up with the various problems listed below.

Fluid Mechanics is divided into two parts: Fluid Statics and Fluid Dynamics. Fluid Statics is the study of fluids at rest. What is the force of buoyancy? Quantum mechanics is an important area of physics, and students often find it 'tough' from the understanding point of view. The ability of a metal to deform plastically and to absorb energy in the process before fracture is termed toughness; the emphasis of this definition should be placed on the ability to absorb energy before fracture. Recall that ductility is a measure of how much something deforms plastically before fracture, but just …

Selected problems:

Problem 1. On a part-time job, you are asked to bring a cylindrical iron rod of length 85.8 cm and diameter 2.85 cm from a storage room to a machinist. Will you need a cart? (To answer, calculate the weight of the rod.)

Problem. A fire helicopter carries a 584 kg bucket at the end of a 9.3 m long cable. When the helicopter is returning from a fire at a constant speed of 58.1 m/s, the cable makes an angle of 27.3 degrees with respect to the vertical.

Problem. The sliders (A and B) have equal masses and are attached by a rod (not a spring) of constant length 0.9 m. What is the maximum velocity of slider B if the system is released from rest from x = y? Assume friction is negligible.

Problem 11. A circus acrobat is launched by a catapult at a speed of 15 m/s at an angle of 40° above the horizontal, as shown. At a distance of 20 m away, her partner is standing on a platform at a height of h meters. At the instant that the acrobat is launched, her partner throws a basketball towards her horizontally at a speed of 5 m/s. Motion is in the vertical plane; ignore air resistance in solving this problem.

Problem. A common demonstration consists of a ball resting at one end of a uniform board of length L, hinged at the other end and elevated at an angle theta. A light cup is attached to the board at a distance d from the hinge so that it will catch the ball when the support stick is suddenly removed, which means that d = L cos(theta). Show that the ball will lag behind the falling board.

Problem. A particle in a one-dimensional quantum well is governed by a variant of the time-independent Schrodinger equation, expressed in terms of wave function Ψ(x).

Problem (Colliding Spring System). I have two springs (with negligible mass), Spring 1 and Spring 2, that obey Hooke's law. The springs are anchored at fixed points (x, y) = (0,0) on the left …

Problem. Two particles with masses m1 and m2 are connected by three springs as shown in the diagram. Particle 1 has mass m1 = 1 and is fixed in place at position (x1, y1) = (0,0). Particle 2 has mass m2 = 1 and is fixed in place …

Problem. At the entrance of the house heating system, water of temperature 75°C is supplied at a speed of 15 liters per minute; at the outlet, water has a temperature of …

From the booklet PROBLEMS ON MECHANICS by Jaan Kalda (translated: S. Ainsaar, T. Pungas, S. Zavjalov; version: 2nd August 2014): This booklet is a sequel to a similar collection of problems on kinematics. Similarly to that collection, the aim here is to present the most important ideas, using which one can solve most (> 95%) of olympiad problems on mechanics. In what follows is a recommended list of problems, selected on the basis of my personal preferences and sorted by topics. Mechanics (incl. celestial mechanics), from IPhO-s: IPhO-1982-Pr2, an excellent problem on the physical pendulum; IPhO-1984-Pr2, an open-ended problem on oscillations dealing with an …

Forum threads. My professor for classical mechanics has asked that we find some difficult problems in classical mechanics and solve them. My first thought was to look through my book for hard problems. However, we are using a free PDF that is rather lackluster when it comes to homework problems.

Hey guys, unfortunately I am stuck on this block problem: on the attached diagram of a block on a slope, if the right hand ridge is perfectly smooth (i.e. … It is not intuitive, but if that is so, it would seem that when θ = arccos(1/√3) then vu = 0, and by conservation of energy the angular energy is constant and remains the only motion.

Some of the problems in the Mechanics section of I. E. Irodov are very tough to do. Should I do them?

GATE students definitely take this Test: Classical Mechanics - 1 exercise for a better result in the exam. The solved questions and answers in this Test: Classical Mechanics - 1 quiz give you a good mix of easy and tough questions.
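As a quick worked example for the helicopter-bucket problem above (a sketch; the original thread does not state which quantity is asked for, so we compute the cable tension and the horizontal air-resistance force on the bucket):

```python
import math

# Bucket suspended from a helicopter moving at constant velocity:
# the net force on the bucket is zero, so the cable tension T satisfies
#   T*cos(theta) = m*g       (vertical balance)
#   T*sin(theta) = F_drag    (horizontal balance, air resistance)
m = 584.0                    # bucket mass, kg
g = 9.81                     # gravitational acceleration, m/s^2
theta = math.radians(27.3)   # cable angle from the vertical

T = m * g / math.cos(theta)   # cable tension, N
F_drag = T * math.sin(theta)  # horizontal air-resistance force, N

print(f"T = {T:.0f} N, F_drag = {F_drag:.0f} N")
```

Note that the cable length (9.3 m) and the speed (58.1 m/s) do not enter this force balance; the angle alone fixes the tension.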
|
{}
|
# 3.2: Mixed Number and Fraction Estimation
Difficulty Level: At Grade Created by: CK-12
Remember the seventh grade bake sale? Well, look at this situation.
Sam baked \begin{align*}8 \frac{9}{12}\end{align*} batches of cookies for the bake sale. When he brought them to school, Tracy asked how many batches he had made.
When Sam told her, she wrote 9 batches of cookies on the check sheet.
How did Tracy know that Sam's quantity was close to 9 batches? Do you know?
This Concept will teach you how to approximate mixed numbers and fractions using benchmarks. By the end of it, you will know how Tracy came to this conclusion.
### Guidance
Because a whole can be divided into an infinite number of parts, it is sometimes difficult to get a good sense of the value of a fraction or mixed number when the denominator of the fraction is large. In order to get an approximate sense of the value of a fraction, we compare the complicated fraction with several simpler fractions, or benchmarks. The three basic fraction benchmarks are: 0, \begin{align*}\frac{1}{2}\end{align*} and 1.
When approximating the value of a fraction or mixed number, ask yourself which of these benchmarks is the number closest to?
Let’s look at how to apply benchmarks.
What is the approximate size of \begin{align*}\frac{17}{18}\end{align*}?
To begin with, we need to determine whether the fraction is closest to 0, one-half or 1 whole. The denominator is 18 and the numerator is 17. The numerator is close in value to the denominator. The value of \begin{align*}\frac{17}{18}\end{align*} is closest to 1 because \begin{align*}\frac{18}{18}\end{align*} would be equal to one.
That’s right. When you are looking for a benchmark, you want to choose the one that makes the most sense.
What is the benchmark for \begin{align*}\frac{24}{49}\end{align*}?
First, we can look at the relationship between the numerator and the denominator. The numerator in this case is almost half the denominator. Therefore the correct benchmark is one-half.
We can identify benchmarks for mixed numbers too. The difference is that rather than zero, we look to the whole number of the mixed number, the half and the whole number next in consecutive order.
What is the benchmark for \begin{align*}7 \frac{1}{8}\end{align*}?
Here we have 7 and one-eighth. Is this closer to 7, \begin{align*}7 \frac{1}{2}\end{align*} or 8? If you think about it logically, one-eighth is a very small fraction. There is only one part out of eight. Therefore, it makes sense for our benchmark to be 7.
Choose the correct benchmark for each example.
#### Example A
\begin{align*}\frac{1}{12}\end{align*}
Solution: 0
#### Example B
\begin{align*}\frac{5}{6}\end{align*}
Solution: 1
#### Example C
\begin{align*}9 \frac{3}{9}\end{align*}
Solution: \begin{align*}9 \frac{1}{2}\end{align*}, because \begin{align*}\frac{3}{9} = \frac{1}{3}\end{align*} is closer to one-half than to 0
Here is the original problem once again.
Sam baked \begin{align*}8 \frac{9}{12}\end{align*} batches of cookies for the bake sale. When he brought them to school, Tracy asked how many batches he had made.
When Sam told her, she wrote 9 batches of cookies on the check sheet.
How did Tracy know that Sam's quantity was close to 9 batches? Do you know?
To understand Tracy's decision, let's look at the fraction part of the mixed number of batches.
\begin{align*}8 \frac{9}{12}\end{align*}
Since 9 is more than half of 12, Tracy rounded up to 9 batches. If the numerator had been less than half of 12, Tracy would have rounded down to 8 batches.
### Vocabulary
Here are the vocabulary words in this Concept.
Whole Number
a number that is a counting number like 5, 7, 10, or 22.
Fraction
a part of a whole.
Numerator
the top number in a fraction.
Denominator
the bottom number in a fraction. It tells you how many parts the whole is divided into.
Equivalent Fractions
equal fractions
Equivalent
equal
Simplifying
rewriting a fraction in lowest terms; the value stays the same, but the numerator and denominator get smaller.
Greatest Common Factor
the largest number that will divide into both a numerator and denominator.
Mixed Number
a whole number with a fraction
Improper Fraction
when the numerator is greater than the denominator in a fraction
### Guided Practice
Here is one for you to try on your own.
Name the common benchmark for this fraction.
\begin{align*}\frac{4}{7}\end{align*}
To begin, we have to look at the relationship between 4 and 7. 4 is a little more than half of seven. Because of this, we can say that this fraction is closest to one-half.
\begin{align*}\frac{1}{2}\end{align*} is the appropriate benchmark.
### Practice
Directions: Approximate the value of the following fractions using the benchmarks 0, \begin{align*}\frac{1}{2}\end{align*} and 1.
1. \begin{align*}\frac{9}{10}\end{align*}
2. \begin{align*}\frac{11}{20}\end{align*}
3. \begin{align*}\frac{2}{32}\end{align*}
4. \begin{align*}\frac{21}{22}\end{align*}
5. \begin{align*}\frac{1}{23}\end{align*}
6. \begin{align*}\frac{11}{100}\end{align*}
7. \begin{align*}\frac{2}{3}\end{align*}
8. \begin{align*}\frac{14}{28}\end{align*}
9. \begin{align*}\frac{16}{30}\end{align*}
10. \begin{align*}\frac{18}{21}\end{align*}
Directions: Approximate the value of the following mixed numbers.
11. \begin{align*}2 \frac{79}{80}\end{align*}
12. \begin{align*}6 \frac{1}{10}\end{align*}
13. \begin{align*}43 \frac{7}{15}\end{align*}
14. \begin{align*}8 \frac{7}{99}\end{align*}
15. \begin{align*}6 \frac{21}{22}\end{align*}
# Uncertainty on zero counts for binned result
I do a counting experiment where I count observations as a function of two float parameters $x_1$ and $x_2$. This leads to a two-dimensional histogram where each bin corresponds to the number of observations with $x_1$ and $x_2$ in some range.
I now see a lot of bins with zero counts in them (even though more counts are in principle possible) and wonder how to best describe the uncertainty in these bins.
Knowing that I had $N$ observations which can fall into $k$ bins, I could assign an "expected rate" $n=\frac{N}{k}$ per bin and calculate the uncertainty with Poisson statistics. However, I also know that my underlying process does not distribute events uniformly in $x_1$ and $x_2$ (rather, with a distribution I would like to extract from the measurement). How can I consistently assign an uncertainty to zero-count bins?
I do not want to rebin my data or drop the zero bins, so as not to lose information.
• What do you mean by assigning an uncertainty? Do you want something like a standard error or confidence interval for each bin? Nov 20 '10 at 19:43
• @Aniko: Yes, I am looking for something like a confidence interval for my counts per bin which propagate nicely when I project out parts of my 2D histogram into 1D. Nov 24 '10 at 21:42
• That my null hypothesis is flat in $x_1$ and $x_2$ doesn't mean that it is flat in some transformed quantities which lead to my measurements being distributed flat in the available parameter space, right? Why do I not need to worry about this? And I am dealing with hundreds of millions of observations, so histograms are just easier for me to use. Nov 20 '10 at 3:07
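For the zero-count bins specifically, the Poisson treatment sketched in the question has a closed-form answer for $n=0$: since $P(0 \text{ counts} \mid \mu) = e^{-\mu}$, the exact one-sided upper limit on the mean at confidence level $c$ is $-\ln(1-c)$. A minimal Python sketch (an illustration added here, not part of the original thread):

```python
import math

def poisson_upper_limit(n_observed, cl=0.95):
    """One-sided upper confidence limit on a Poisson mean.
    For n = 0 the exact limit is -ln(1 - cl), since
    P(0 counts | mean mu) = exp(-mu); at 95% CL this is about 3.0
    (the 'rule of three').  For n > 0 the exact (Garwood) limit
    needs a chi-square quantile, e.g.
    scipy.stats.chi2.ppf(cl, 2 * (n + 1)) / 2."""
    if n_observed == 0:
        return -math.log(1.0 - cl)
    raise NotImplementedError("n > 0: use the Garwood/chi-square formula")

print(round(poisson_upper_limit(0), 3))  # 2.996
```

Such per-bin limits add straightforwardly when projecting the 2D histogram to 1D, since the Poisson family is closed under summing independent bins.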
|
{}
|
# Product Fock spaces
1. Jun 8, 2015
### JorisL
Hi,
I'm having some issues with a piece of my notes. (relevant pages attached)
First we introduce an isomorphism $U = \oplus_n U_n$ from $\Gamma^{(a)s}\left(\mathcal{H}_1\oplus\mathcal{H}_2\right)$ to $\Gamma^{(a)s}\left(\mathcal{H}_1\right)\otimes\Gamma^{(a)s}\left(\mathcal{H}_2\right)$
With the $U_n$ mapping the n-particle space (layer if you like) of the Fock space to the product space.
So far I'm not seeing any real problems (this is what happens on the first page of the pdf).
The next page is where things get vague for me (let's ignore the paragraph about the Gibbs paradox until I understand what's being defined).
We look at $n$ indistinguishable particles, each with the same single particle Hilbert space $\mathcal{H}$.
Now comes the first troubling part for me
Okay in a low density regime we can indeed look at such a system without too much loss of generality.
Now the big problem, equation (45) in the PDF says the following (for bosons)
$Ua^\dagger(\phi_1\oplus\phi_2)U^\dagger = a^\dagger(\phi_1)\otimes {1\!\!1} +{1\!\!1}\otimes a^\dagger(\phi_2)$
When the $U_n$ which build $U$ are defined in eq. (40-41) I see
$U_1(\phi_1\oplus\phi_2) = \phi_1\oplus\phi_2$ is this shorthand for $\phi_1 \otimes {1\!\!1}\oplus {1\!\!1}\otimes\phi_2$?
If so how do they look at $a^\dagger(\phi_1\oplus\phi_2)U^\dagger$?
If I can get my head around this part I can finish the rest but it's just not coming to me.
Any recommended books on the topic?
Thanks,
Joris
2. Jun 8, 2015
### micromass
Staff Emeritus
Remember that $\mathbb{C}\otimes \mathcal{H} \cong \mathcal{H}$. Thus what he does in the text is identifying
$$\mathcal{H}\oplus \mathcal{K}\cong (\mathbb{C}\otimes \mathcal{K})\oplus (\mathcal{H}\otimes \mathbb{C})$$
through the isomorphism $\varphi\oplus \psi\rightarrow (1\otimes \psi)\oplus (\varphi\otimes 1)$.
3. Jun 8, 2015
### micromass
Staff Emeritus
If you want me to answer the second part, you will have to tell me what $a$ and $a^\dagger$ are. I don't know much physics, but this is basically math.
4. Jun 8, 2015
### JorisL
Okay, thanks for the first clarification.
The operator a isn't defined thus far actually, in the next section we assume that $a = (a^\dagger)^\dagger$.
But let me give the definition we used.
$\Phi\in\Gamma^{(a)s}$ we have from the construction that it's of the form $\Phi = \phi^{(0)}\oplus \phi^{(1)}\oplus\phi^{(2)}\oplus \ldots$.
The creation operator $a^\dagger(\psi)$ then maps $\Phi$ to
$a^\dagger(\psi)\Phi = 0\oplus\phi^{(0)}\psi\oplus\psi\wedge\phi^{(1)}\oplus\psi\wedge\phi^{(2)}\oplus\ldots$ for fermions
Here the wedge denotes an anti-symmetrized tensorproduct. ($\phi_1\wedge\phi_2 = -\phi_2\wedge\phi_1$) This is suitable for fermions.
For bosons we replace it by a symmetrized product which we denote by $\odot$ in the text (not standard by any means)
That's the definition we used.
5. Jun 8, 2015
### micromass
Staff Emeritus
It suffices to show that $U a^\dagger (x\oplus y) = (a^\dagger(x)\otimes \mathbb{1} + \mathbb{1}\otimes a^\dagger(y)) U$.
It suffices to do this on the various levels. So let me do this on the first level. So if $\Phi\in \Gamma^{(a)s}(\mathcal{H}_1\oplus \mathcal{H}_2)$ has the form
$$\Phi = (0,(\varphi\oplus \psi),0,0,...)$$
Then $$U\Phi = (0,\varphi\otimes 1 + 1\otimes \psi,0,0,...) = (0,\varphi,0,0,...)\otimes (1,0,0,0,...) + (1,0,0,0,...)\otimes (0,\psi,0,0,...)$$
Applying $(a^\dagger(x)\otimes \mathbb{1} + \mathbb{1}\otimes a^\dagger(y))$ gives us
$$(0,0,x\otimes \varphi,0,...)\otimes (1,0,0,...) + (0,\varphi,0,0,...)\otimes (0,y,0,0,...) + (1,0,0,0,...)\otimes (0,0,y\otimes \psi,0,...) + (0,x,0,0,...)\otimes (0,\psi,0,0,...)$$
Notice that this corresponds to
$$(0,0,x\otimes \varphi,0,...) + (0,0,\varphi\otimes y,0,...) + (0,0,y\otimes \psi,0,...) + (0,0,x\otimes \psi,0,...)~~~(*)$$
On the other hand
$$a^\dagger(x\oplus y)\Phi = (0,0,(x\oplus y)\otimes (\varphi\oplus \psi),0,0,...)$$
Applying $U$ to this gives
$$(0,0,(x\otimes \varphi)\oplus ((x\otimes \psi) + (\varphi\otimes y)) \oplus (y\otimes \psi),0,...)$$
This corresponds to $(*)$.
6. Jun 8, 2015
### JorisL
Great stuff!
This was exactly what I needed.
You know of any good (mathematical) references for this kind of results?
The physics texts I've found kind of gloss over it and rush to apply it. (Not good enough for my inner mathematician, but it takes time I don't have at the moment.)
Thanks,
Joris
7. Jun 8, 2015
### micromass
Staff Emeritus
I'm afraid I don't have anything precise for you. What I did comes mainly from my knowledge of abstract algebra and a bit of functional analysis. So I can't give you one reference for the entire thing. But if you are asking for details on only one specific step, then I probably have references for that. Nevertheless, the math behind this can be found in Roman's advanced linear algebra, but this doesn't mention Fock spaces at all, so it's only for the deeper math.
8. Jun 8, 2015
### JorisL
Thanks, I'll see if I can get any more out of it using that resource.
But I believe my understanding is deep enough for what is expected. It really was a crash course in 4 two hour lectures so he'll not be too strict about these things.
Thanks,
Joris
# How to decode P bits that represent a random weight generator?
So I've been tasked by my neural network professor at university to replicate the following research: Intelligent Breast Cancer Diagnosis Using Hybrid GA-ANN.
Each chromosome represents a possible net, more specifically, a possible MLP network. They've used a binary convention, and have used $$P = 15$$ bits for the random initial weight generator, $$Q=2$$ bits for the number of nodes and $$R = 9$$ bits for feature selection.
P bits random initial weight generator allows 2P different combinations of the initial weight. The numbers of hidden nodes (i.e. represented by Q bits) permit the GA to explore up to a maximum of 2Q hidden nodes’ size. For the representation of the feature subset, the value of R is set to the number of full feature size. Value ‘1’ or ‘0’ indicates if the feature at that particular location is selected or otherwise.
Decode each chromosome in the population to obtain the selected feature subset, hidden node size and random generator.
I don't understand why they say that with $$P$$ bits there are $$2*P$$ combinations; wouldn't it be $$2^P$$? Also, I can't grasp how they decoded the $$P = 15$$ bits into the net for the weight generation. I've searched everywhere but I can't find anything specific. What I thought to do was to transform the $$15$$ bits into a decimal number and use it as a specific seed for the rand and randn functions in Matlab, through which I make random initial weights.
• The paper's pdf doesn't seem to be freely available. So, it might be a good idea to quote the relevant section.
– nbro
May 18 at 8:42
• I did, hoping now it's more clear what my question is. May 18 at 11:51
• You can always try to email the authors. They are usually happy to have someone interested in their work. May 23 at 13:43
I don't understand why they say with $$P$$ bits, there's $$2*P$$ combinations, wouldn't it be $$2^P$$?
I think you are correct here, and would suspect a typesetting issue or typo. I also think that 2Q should be $$2^Q$$ in the same paragraph.
Also, I can't grasp how they decoded the 15𝑃 bits into the net for the weight generation.
Without anything in the paper or associated materials, I don't think it is possible to be sure.
What I thought to do was to transform the 15 bits into a decimal number and use it as a specific seed for rand and randn function in Matlab, through which I make random initial weights.
This could work, although it will make the GA search over those 15 bits dependent on all the bits at once, due to the nature of PRNG seeds. Any crossover or mutation would destroy any coherence between generations. Perhaps that is not important, although it then does seem strange to dedicate the majority of the genome to it.
The title and description does imply that the GA is mainly being used to set start conditions for training the network, so values of P might not be very important, in which case it would mainly be used to avoid risk of poor initialisation missing the best architecture.
However, P could also be an encoded multiplier for initial randomly sampled weights, in which case the search should find stable values for the most significant digits of P. That would be similar to how Q is used (as an encoded number).
• Hello, thank you for your answer. I'd like to elaborate more on the solution you proposed: when you say encoded multiplier, what exactly are you suggesting? Because I already transformed the 15 bits into the actual decimal number. What should I multiply that for? Shouldn't all initial weights for both input-hidden and hidden-output layer be different? May 19 at 9:48
• @JOSEPHCAROÈ I mean use it to multiply all randomly generated weights. You might generate weights by sampling $\mathcal{N}(0,1)$ for each one, then multiply by P/MAX_P May 19 at 13:08
• Okay so supposedly, with P = 15, MAX_P = (2^15 - 1) and P = the decimal number of the 15 P bits. May 19 at 14:30
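Since the paper leaves the decoding unspecified, here is a hypothetical Python sketch of the seed interpretation discussed in this answer and its comments. The field layout, the +1 offset for the hidden-node count, and the use of the $P$ bits as a PRNG seed are all assumptions, not details from the paper:

```python
import random

P, Q, R = 15, 2, 9  # bit widths from the paper

def decode(chromosome):
    """Split a (P+Q+R)-bit string into its three fields."""
    assert len(chromosome) == P + Q + R
    seed = int(chromosome[:P], 2)                   # 0 .. 2**P - 1
    hidden_nodes = int(chromosome[P:P + Q], 2) + 1  # 1 .. 2**Q (assumed offset)
    features = [i for i, b in enumerate(chromosome[P + Q:]) if b == "1"]
    return seed, hidden_nodes, features

def initial_weights(seed, n_weights):
    """Seed interpretation: the same chromosome always reproduces
    the same Gaussian starting weights."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_weights)]

seed, nodes, feats = decode("110010101010101" + "10" + "101000111")
print(nodes, feats)  # 3 [0, 2, 6, 7, 8]
```

The alternative multiplier interpretation from the answer would instead scale $\mathcal{N}(0,1)$ samples by `seed / (2**P - 1)`, making nearby genomes produce nearby initialisations.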
## anonymous 3 years ago needing help to solve this: (5y^5-y^2) divided by -y
• This Question is Closed
1. hba
I am sorry i was busy.
2. anonymous
it's cool lol
3. hba
Lets start solving this then.
4. anonymous
ok lol its really confusing to me and i need to pass my test or i will fail the whole thing and have to redo it all over again :((
5. hba
$\frac{ 5y^5-5y^2 }{ -y }$ This is your question right ?
6. anonymous
yupp :)) lol
7. hba
$\frac{ 5y^5 }{ -y }+\frac{- 5y^2 }{ -y }$ Do you understand this ?
8. anonymous
Not really :/
9. hba
I just seprated them.
10. anonymous
ok lol sorry was at lunch
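The thread breaks off before the final answer; carrying hba's two separated terms through gives $-5y^4 + 5y$. A quick numeric check (an illustration added here, not part of the original discussion):

```python
def original(y):
    # the expression from the thread, (5y^5 - 5y^2) / (-y)
    return (5 * y**5 - 5 * y**2) / (-y)

def simplified(y):
    # 5y^5 / (-y) = -5y^4   and   -5y^2 / (-y) = +5y
    return -5 * y**4 + 5 * y

for y in [1.0, -2.0, 0.5, 3.0]:  # any y != 0
    assert abs(original(y) - simplified(y)) < 1e-9
print("(5y^5 - 5y^2) / (-y) = -5y^4 + 5y")
```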
# Complementary error function bounds

Here is the error function: $$\operatorname{erf}(x)=\frac{2}{\sqrt\pi}\int^x_0 e^{-t^2}\,dt$$ Here is the question: show that the odd function erf is bounded, by using the fact that $e^{-t^2} \le e^{-t}$ for $t \ge 1$.

Comments:
- You have to bound $e^{-t^2}$ by an integrable function.
- I prefer much more $e^{-t^2} < e^{-t}$ for $t > 1$.
- Once again, the hint is easy to prove, easy to use, and it yields the result. –Did

Related bounds quoted on the page:

Durrett, Probability: Theory and Examples, 3rd edition, p. 6 gives $$(x^{-1} - x^{-3})\, e^{-x^2/2} \le \int_x^\infty e^{-t^2/2}\,dt \le x^{-1}\, e^{-x^2/2}.$$

In statistics, the Q-function is the tail probability of the standard normal distribution: $Q(x)$ is the probability that a standard normal random variable exceeds $x$. The Q-function is not an elementary function, but it satisfies the bounds $$\frac{x}{1+x^2}\,\phi(x) < Q(x) < \frac{\phi(x)}{x}, \qquad x > 0,$$ where $\phi$ is the standard normal density.

For a lower bound on erfc itself, let $f(x)$ be the left side minus the right side, i.e. $f(x) = \operatorname{erfc}(x) - \frac{x \exp(-x^2)}{\pi(1 + 2x^2)}$; then $f(x) > 0$ and $f(x) \to 0$ as $x \to \infty$.

A probabilistic route: let $X$ be a random variable with Gaussian density $$f(x)=\frac{1}{\sqrt{2\pi}}\exp(-x^2/2).$$ For any $k\in\mathbb{R}^+$, let $$A_k = \sqrt{2\pi}\;\exp(k^2/2)\;\mathbb{P}[X>k] = \sqrt{\frac{\pi}{2}}\;\exp(k^2/2)\;\operatorname{erfc}\left(\frac{k}{\sqrt{2}}\right).$$ Since $\mathbb{E}\left[\left(X-\mathbb{E}[X]\right)^2\right]\geq 0$, we have $\mathbb{E}[X^2]\geq\mathbb{E}[X]^2$; following this line, the same inequality yields a tight upper bound for the function $e^{k^2}\,\operatorname{erfc}(k)$.

A generalization is obtained from the erfc differential equation (Abramowitz and Stegun 1972, p. 299; Zwillinger 1997, p. 122).

References:
- Abramowitz, M. and Stegun, I. A. (eds.), Handbook of Mathematical Functions, Washington, DC: National Bureau of Standards, 1972.
- Durrett, R., Probability: Theory and Examples, 3rd ed.
- Press, W. H., Flannery, B. P., Teukolsky, S. A., and Vetterling, W. T., "Incomplete Gamma Function, Error Function, Chi-Square Probability Function, Cumulative Poisson Function," §6.2 in Numerical Recipes in FORTRAN: The Art of Scientific Computing, Cambridge, England: Cambridge University Press, 1992, pp. 209–214.
- Spanier, J. and Oldham, K. B., An Atlas of Functions, Washington, DC: Hemisphere, 1987, pp. 385–393 and 395–403.
- Zwillinger, D., Handbook of Differential Equations, 3rd ed., 1997.
- "New Exponential Bounds and Approximations for the Computation of Error Probability in Fading Channels," IEEE Transactions on Wireless Communications, 4(2), 840–845, doi:10.1109/TWC.2003.814350.
- Karagiannidis, G. K. and Lioumpas, A. S., IEEE Transactions on Wireless Communications.
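The Q-function bounds discussed here are easy to verify numerically with the standard library alone; a small check (an illustration, assuming the bounds as stated):

```python
import math

def Q(x):
    """Standard normal tail probability, Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Check  x/(1+x^2) * phi(x) < Q(x) < phi(x)/x  on a few sample points.
for x in [0.5, 1.0, 2.0, 4.0, 8.0]:
    lower = x / (1 + x * x) * phi(x)
    upper = phi(x) / x
    assert lower < Q(x) < upper, x
print("bounds hold on the sample points")
```

Both bounds share the factor $\phi(x)$, so the ratio of the two tends to 1 as $x\to\infty$, which is why the tail estimate is tight for large $x$.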
We arrive now at one of the most important mechanisms in metabolism: the aldol addition.
Along with Claisen condensation reactions, which we will study in the next chapter, aldol additions are responsible for most of the carbon-carbon bond forming events that occur in a living cell. Because biomolecules are built upon a framework of carbon-carbon bonds, it is difficult to overstate the importance of aldol addition and Claisen condensation reactions in the chemistry of living things!
## Overview of the aldol addition reaction
Consider the potential pathways available to a reactive enolate intermediate once the a-proton has been abstracted. We'll use acetaldehyde as a simple example. The oxygen, which bears most of the negative charge, could act as a base (step 2 below), and the result would be an enol.
Alternatively, the enolate carbon, which bears a degree of negative charge, could act as a base, which is simply the reverse of the initial deprotonation step that formed the enolate in the first place. This of course just takes us right back to the starting aldehyde.
In both of these cases, the electron-poor species attacked by the enolate is an acidic proton. What if the electron-poor species - the electrophile - is not a proton but a carbonyl carbon? In other words, what if the enolate acts not as a base but rather as a nucleophile in a carbonyl addition reaction? For example, the enolate of acetaldehyde could attack the carbonyl group of a second acetaldehyde molecule. The result is the formation of a new carbon-carbon bond:
This type of reaction is called an aldol addition. It can be very helpful to think of an aldol addition reaction as simply a nucleophilic carbonyl addition (Chapter 10) reaction with an enolate a-carbon (rather than an alcohol oxygen or amine nitrogen) as the nucleophile.
Mechanism:
Historically, the first examples of this mechanism type to be studied involved reactions very similar to what is shown above: an aldehyde reacting with itself. Because the resulting product contained both an aldehyde and an alcohol functional group, the reaction was referred to as an 'aldol' addition, a terminology that has become standard for reactions of this type, whether or not an aldehyde is involved. More generally, an aldol addition is characterized as a nucleophilic addition to an aldehyde, ketone, or imine electrophile where the nucleophile is the a-carbon in an aldehyde, ketone, imine, ester, or thioester. The enzymes that catalyze aldol reactions are called, not surprisingly, aldolases.
Note that the aldol reaction results in a product in which a hydroxide group is two carbons away from the carbonyl, in the $$\beta$$ position. You can think of the $$\beta$$-hydroxy group as a kind of 'signature' for an aldol addition product.
Depending on the starting reactants, nonenzymatic aldol reactions can take more than one route to form different products. For example, a reaction between acetaldehyde and 2-butanone could potentially result in three different aldol addition products, depending on which of the three a-carbons (carbons 2, 3, and 5 below) becomes the attacking nucleophile.
##### Exercise 12.4.1
1. Fill in the appropriate carbon numbers for each of the three possible aldol addition products shown above.
2. Draw arrows for the carbon-carbon bond forming step that leads to each of the three products.
Hint
For each reaction, first identify the nucleophilic and electrophilic carbon atoms on the starting compounds!
Fructose 1,6-bisphosphate aldolase (EC 4.1.2.13) is an enzyme that participates in both the glycolytic (sugar catabolism) and gluconeogenesis (sugar synthesis) biochemical pathways. The reaction catalyzed by fructose 1,6-bisphosphate aldolase links two 3-carbon sugars, glyceraldehyde-3-phosphate (GAP, the electrophile in the reaction) and dihydroxyacetone phosphate (DHAP, the nucleophile), forming a 6-carbon product. In the figures below, the nucleophilic and electrophilic carbons are identified with dots.
The fructose 1,6-bisphosphate aldolase reaction
Mechanism:
In step 1 of the reaction, an a-carbon on DHAP is deprotonated, leading to an enolate intermediate. In this and many other aldolase reactions, a zinc cation ($$Zn^{+2}$$) is positioned in the enzyme's active site so as to interact closely with - and stabilize - the negatively charged oxygen of the enolate intermediate. This is one important way in which the enzyme lowers the energy barrier to the reaction.
Next, (step 2), the deprotonated a-carbon attacks the carbonyl carbon of GAP in a nucleophilic addition reaction, leading to the fructose 1,6-bisphosphate product.
Notice that two new chiral centers are created in this reaction. Because it is enzyme-catalyzed, the reaction is highly stereoselective, due to the precise positioning of the two substrates in the active site: only one of the four possible stereoisomeric products is observed. The enzyme also exhibits tight control of regiochemistry: GAP and DHAP could potentially form two other aldol products, which are constitutional isomers of fructose 1,6-bisphosphate.
##### Exercise 12.4.2
1. Fill in the blanks with the correct term: (pro-R, pro-S, re, si). You may want to review the terminology in section 3.11.
In the fructose 1,6-bisphosphate aldolase reaction, the ______ proton on the a-carbon of DHAP is abstracted, then the ______ face of the resulting enolate a-carbon attacks the ______ face of the aldehyde carbon of GAP.
1. Draw structures of the two other constitutional isomers that could hypothetically form in aldol addition reactions between GAP and DHAP. How many stereoisomers exist for these two alternative products?
Along with aldehydes and ketones, esters and thioesters can also act as the nucleophilic partners in aldol reactions. In the first step of the citric acid (Krebs) cycle, acetyl $$CoA$$ (a thioester nucleophile) adds to oxaloacetate (a ketone electrophile) (EC 2.3.3.8).
Notice that the nucleophilic intermediate is an enol, rather than a zinc-stabilized enolate as was the case with the fructose 1,6-bisphosphate aldolase reaction. An enol intermediate is often observed when the nucleophilic substrate is a thioester rather than a ketone or aldehyde.
## Going backwards: the retro-aldol cleavage reaction
Although aldol reactions play a very important role in the formation of new carbon-carbon bonds in metabolic pathways, it is important to emphasize that they are also reversible: in most cases, the energy levels of starting compounds and products are very close. This means that, depending on metabolic conditions, aldolases can also catalyze retro-aldol reactions: the reverse of aldol reactions, in which carbon-carbon bonds are broken.
A retro-aldol cleavage reaction:
Mechanism:
In the retro-aldol cleavage reaction the $$\beta$$-hydroxy group is deprotonated (step 1 above), to form a carbonyl, at the same time pushing off the enolate carbon, which is now a leaving group rather than a nucleophile.
Is an enolate a good enough leaving group for this step to be chemically reasonable? Sure it is: the same stabilizing factors that explain why it can form as an intermediate in the forward direction (resonance delocalization of the negative charge to the oxygen, interaction with a zinc cation) also explain why it is a relatively weak base, and therefore a relatively good leaving group (remember, weak base = good leaving group!). All we need to do to finish the reaction off is reprotonate the enolate (step 2) to yield the starting aldehyde, and we are back where we started.
The key thing to keep in mind when looking at a retro-aldol mechanism is that, when the carbon-carbon bond breaks, the electrons must have 'some place to go' where they will be stabilized by resonance. Most often, the substrate for a retro-aldol reaction is a $$\beta$$-hydroxy aldehyde, ketone, ester, or thioester.
If the leaving electrons cannot be stabilized, a retro-aldol cleavage step is highly unlikely.
The fructose 1,6-bisphosphate aldolase reaction we saw in the previous section is an excellent example of an enzyme whose metabolic role is to catalyze both the forward and reverse (retro) directions of an aldol reaction. The same enzyme participates both as an aldolase in the sugar-building gluconeogenesis pathway and as a retro-aldolase in the sugar-breaking glycolysis pathway. We have already seen it in action as an aldolase in the gluconeogenesis pathway. Here it is in the glycolytic direction, catalyzing the retro-aldol cleavage of fructose bisphosphate into DHAP and GAP:
The fructose 1,6-bisphosphate aldolase reaction (retro-aldol direction)
Mechanism:
##### Exercise 12.4.3
Predict the products of a retro-aldol reaction with the given substrate.
Earlier we looked at the mechanism for the fructose 1,6-bisphosphate aldolase reaction in bacteria. Interestingly, it appears that the enzyme catalyzing the exact same reaction in plants and animals evolved differently: instead of going through a zinc-stabilized enolate intermediate, in plants and animals the key intermediate is an enamine. The nucleophilic substrate is first linked to the enzyme through the formation of an iminium with a lysine residue in the enzyme's active site (refer to section 10.5 for the mechanism of iminium formation). This effectively forms an 'electron sink', in which the positively-charged iminium nitrogen plays the same role as the $$Zn^{+2}$$ ion in the bacterial enzyme.
The $$\alpha$$-proton, made more acidic by the electron-withdrawing effect of the iminium nitrogen, is then abstracted by an active site base to form an enamine (step 1). In step 2, the $$\alpha$$-carbon attacks the carbonyl carbon of an aldehyde, and the new carbon-carbon bond is formed. In order to release the product from the enzyme active site and free the enzyme to catalyze another reaction, the iminium is hydrolyzed back to a ketone group (see section 10.5 to review the imine/iminium hydrolysis mechanism).
There are many more examples of aldol/retroaldol reactions in which the key intermediate is a lysine-linked imine. Many bacteria are able to incorporate formaldehyde, a toxic compound, into carbohydrate metabolism by linking it to ribulose monophosphate. The reaction (EC 4.1.2.43) proceeds through imine and enamine intermediates.
##### Exercise 12.4.4
Draw the carbon-carbon bond-forming step for the hexulose-6-phosphate aldolase reaction shown above.
Here is an example of an enamine intermediate retro-aldol reaction from bacterial carbohydrate metabolism (EC 4.1.2.14). Notice that the structures are drawn here in the Fischer projection notation - it is important to practice working with this drawing convention, as biologists and biochemists use it extensively to illustrate carbohydrate chemistry. Proc. Natl. Acad. Sci. 2001, 98, 3679
##### Exercise 12.4.5
Draw the carbon-carbon bond breaking step in the reaction above. Use the Fischer projection notation.
This page titled 12.4: Aldol Addition is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Tim Soderberg via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
# All Questions
64 views
### IMPA (Brazil) vs Iowa State University (USA) [on hold]
I was recently offered admission to Iowa State for a math PhD. I thought they were going to deny me admission since they had not answered me until now (I expected an answer in March). Since I had not had a ...
13 views
### associated prime of a module under a ring homomorphism
Let $f: A\rightarrow B$ be a homomorphism of Noetherian rings, and $M$ a $B$-module (not necessarily finitely generated). Question: Is $^af(Ass_B(M))=Ass_A(M)$? If $q$ is an associated prime of the ...
### Prove the isomorphism of categories $Fun(\mathcal{A}\times\mathcal{B},\mathcal{C})\cong Fun(\mathcal{A},Fun(\mathcal{B},\mathcal{C})),$ [migrated]
I'm a computational engineer starting with a course of Introduction to Category Theory, and perhaps is extremely basic what I'm asking but I'm trying to learn how to make proofs in category theory ...
### Acyclic complexes for extraordinary cohomology theories
Let $X$ be a CW complex such that for all extraordinary homology theories, if you plug $X$ into them you get the same value as plugging in a point. Must $X$ be contractible?
### What exactly is wrong with this statement (Lucas-Penrose fallacy)? [on hold]
Statement "For every computer system, there is a sentence which is undecidable for the computer, but the human sees that it is true, therefore proving the sentence via some non-algorithmic method." ...
### $A \wedge A \wedge A$ in Chern-Simons
I am confused with the wedging operations of Lie algebra valued differential forms. Especially, for instance, I have some problems with the Chern-Simons 3-form A \wedge dA + \frac{2}{3}A \wedge A ...
### Graph classes which are not perfect but the stability number = clique cover number?
I have a result for graphs whose stability number = clique cover number, which naturally includes the perfect graphs, but I'm curious whether there are other known and well-definable graph classes ...
### automorphism group of partially ordered by divisibility [on hold]
We define bijection $F: \aleph \to \aleph$ as follows: \begin{array}{l} {a|b\Leftrightarrow F(a)|F(b)} \\ {1\to 1} \end{array} What group is Automorphism group linked to $F$?
### is the minimum envelope of two intersecting convex functions convex? [on hold]
when two convex cost functions intersect, can we say that their minimum envelope is convex? It does not look convex. Again, if it is not convex, then is any relaxation theorem available such ...
Is there a torsion-free group containing two elements $x$ and $y$ and a finite non-empty subset $B$ such that $B=xB \triangle yB$, where $\triangle$ denotes the symmetric difference of two sets and ...
### Example of a $G$-sphere that is not a $G$-representation sphere
Let $G$ be a finite group with the discrete topology. To set terminology: a $G$-sphere is a sphere equipped with a continuous $G$-action a $G$-representation sphere is a $G$-sphere obtained from an ...
# Understanding separable ODES
## Main Question or Discussion Point
I understand how to integrate this: ∫y²dy.
I don't understand how to integrate this:
di(t)/dt = i(t)p(t)
intergrate((di(t)/dt/i(t))*dt = p(t)dt) (see this image: http://i.imgur.com/OdKI309.png)
how do you perform the intergral on the left, seeing as it is not dt, but di(t)?
thanks
Mark44
Mentor
Can you do this integration? $\int \frac{du}{u}?$
BTW, there are no such words in English as "intergrate" and "intergral."
Yes I can do that.
But I do not understand how to integrate (what word do I use?)
∫di(t)/i(t)
Last edited:
Mark44
Mentor
This is essentially the same as what I wrote.
$\int \frac{du}{u}$ is the same as $\int \frac{du(t)}{u(t)}$. The only difference is that in the second integral, it is made explicit that u is a function of t.
Thank you Mark!
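The separation step discussed above is easy to check numerically. Here is a minimal sketch in plain Python, with an illustrative choice p(t) = cos t that is not from the thread; the function name is ours:

```python
import math

def solve_separable(p, i0, t_end, n=100_000):
    """Step di/dt = i(t)*p(t) forward with a simple Euler scheme.

    Separating variables gives di/i = p(t) dt, so the exact solution is
    i(t) = i0 * exp(integral of p from 0 to t), which we compare against.
    """
    dt = t_end / n
    i, t = i0, 0.0
    for _ in range(n):
        i += i * p(t) * dt
        t += dt
    return i

# Illustrative choice: p(t) = cos(t), so exactly i(t) = i0 * exp(sin(t))
numeric = solve_separable(math.cos, i0=1.0, t_end=2.0)
exact = math.exp(math.sin(2.0))
print(abs(numeric - exact) < 1e-3)  # True: Euler tracks the separated solution
```

The point of the comparison is exactly the ∫di/i step: integrating the left side gives ln|i|, so exponentiating yields the exp(∫p dt) form checked above.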
When two species of different genealogy come to resemble each other as a result of adaptation, the phenomenon is termed (from Biology Evolution, Class 12, Nagaland Board)
When two species of different genealogy come to resemble each other as a result of adaptation, the phenomenon is termed
• divergent evolution
• microevolution
• co-evolution
• convergent evolution
D.
convergent evolution
In convergent evolution lineages show similar morphology under the influence of similar environmental factors.
What conditions were created by Miller in his experiment?
Miller created an electric discharge in a closed flask that contained CH4, NH3, H2 and water vapour at a high temperature of about 800 °C.
What is the phenomenon called when the original drifted population becomes founders?
Founder Effect.
Give another name for origin of life from non living matter.
Abiogenesis
Arrange the following substances in a proper sequence with regard to the formation of chemical constituents at the time of origin of life :
Sugar, methane, nucleic acid and amino acid.
Methane—sugar—amino acid—nucleic acid.
When did life appear on earth ?
500 million years ago.
Convert integers to and from the factoradic number system
## Project description
The factorial number system (also known as factoradic) is a way of representing an integer as the sum of multiples of factorials. All integers have a unique representation in the factoradic number system. For example, the number 1337 can be represented as:
1*6! + 5*5! + 0*4! + 2*3! + 2*2! + 1*1! + 0*0!
with coefficients 1 5 0 2 2 1 0. This is the unique factoradic representation of decimal 1337.
Factoradic numbers have uses in combinatorics, particularly in the numbering of permutations. This factoradic library is useful for converting to and from factoradic number representations both in Python and from the command-line.
## Installation
The factoradic package is available on the Python Package Index (PyPI):
The package supports Python 3 only. To install:
$ pip install factoradic

## Python Interface

For full help:

    >>> import factoradic
    >>> help(factoradic)

In the meantime, here are some highlights. To convert from an integer to factoradic use to_factoradic():

    >>> factoradic.to_factoradic(1337)
    [0, 1, 2, 2, 0, 5, 1]

The result is the list of coefficients, where the factorial of each zero-based index gives a place value, and the item at that index is the coefficient by which the place value is to be multiplied. The elements are ordered from least-significant to most-significant. Since the coefficient at any index must be less than or equal to the index, the coefficient at index 0 is always 0.

To convert from factoradic use from_factoradic():

    >>> factoradic.from_factoradic([0, 1, 2, 2, 0, 5, 1])
    1337

## Command-Line Interface

There's also a handy command-line interface. Run factoradic --help to see a list of commands:

    $ factoradic --help
Convert to and from the factorial number system.
Usage:
Options:
-e --expression Show as a mathematical expression.
To convert from an integer to factoradic, use the from-integer subcommand:
    $ factoradic from-integer 1729
    0 1 0 0 2 2 2

The coefficients are reported from least-significant to most-significant. To see the result as a mathematical expression, specify the --expression flag:

    $ factoradic from-integer 1729 --expression
    2*6! + 2*5! + 2*4! + 0*3! + 0*2! + 1*1! + 0*0!
To convert from factoradic representation use the to-integer subcommand, specifying the coefficients from least-significant to most-significant:
    $ factoradic to-integer 0 1 0 0 2 2 2
    1729
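The conversions the library performs can be sketched in a few lines of plain Python. This is an illustrative re-implementation of the idea, not the library's actual source:

```python
import math  # only used for math.factorial in the inverse conversion

def to_factoradic(n):
    """Return factoradic coefficients, least-significant first.

    The digit at index k is the coefficient of k!; since that digit
    must be <= k, the digit at index 0 is always 0.
    """
    if n < 0:
        raise ValueError("n must be non-negative")
    coefficients = []
    radix = 1
    while True:
        coefficients.append(n % radix)
        n //= radix
        if n == 0:
            return coefficients
        radix += 1

def from_factoradic(coefficients):
    """Inverse conversion: sum of coefficients[k] * k!."""
    return sum(c * math.factorial(k) for k, c in enumerate(coefficients))

print(to_factoradic(1337))                      # [0, 1, 2, 2, 0, 5, 1]
print(from_factoradic([0, 1, 0, 0, 2, 2, 2]))   # 1729
```

Both examples reproduce the values shown above for 1337 and 1729.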
## Project details
# Wombat IDE - New semester / bug fixes
We’re about to start up the Fall semester, so it seems like a good time to be updating Wombat for the new semester. With that, a whole slew of bugs have been squashed:
Minor fixes:
• Issue 211 - fixed an issue in bracket matching for the first bracket after a line comment block
• Issue 220 - in find/replace, find starts at the cursor and wraps at the end of the document; replace automatically triggers find
• Issue 221 - when navigating through history you can use the down arrow at the end to clear the REPL
• Issue 222 - the CSV library is now correctly documented
• Issue 226 - the correct document is run even after closing a document and not clicking another
• Issue 230 - (define x) is now equivalent to (define x (void)) rather than a syntax error
• Issue 231 - brackets in character literals and simple strings in the REPL no longer break execution (you can execute just #\) now)
• Issue 233 - Wombat is no longer case sensitive by default; you can manually re-enable it with (case-sensitive #t)
• Issue 234 - Wombat should be called Wombat in the OSX ‘force quit’ dialog rather than Main
• Issue 236 - fixed a conflict between Lambda/Greek mode and the history panel
Slightly bigger fixes:
Issue 22 - It’s been a long time coming (this was by far the oldest issue still open), but there’s now an Emacs mode for Wombat. It’s based on this project from the XNap Commons (licensed LGPL). If anyone else is looking for basic Emacs keybindings for a Java project, this is a great place to start. Theoretically, everything defined in that project should work in any file or the REPL when Emacs mode is enabled (check the options menu), although it still needs some testing. Let me know if anything doesn’t work.
Issue 229 - Essentially, this added a ‘Greek mode’ that acts like lambda mode, only for any Greek letter. This way you can do things like type sigma and get σ. As an added benefit, this takes advantage of Issue 233 (see above) to have upper and lower case Greek characters. So Sigma is Σ and sigma is σ.
That’s it for the time being; if you have any more bugs, feel free to submit an issue to the bug tracker or just email me directly.
Blob Farm
# blob06
The blob in all its glory:
This is an attempt to make a case with some mounting ears, but done purely mathematically. You can see that it was not much of a success. The Mathematica function RegionPlot3D does not handle sharp edges elegantly. To be fair, neither does the GNU Octave function isosurface. I think that is because they both work from a 3D grid.
$\frac{1}{\left(x^{2}+y^{2}+0.25\,(z-1)^{2}\right)^{0.5}}+\frac{1}{\left((x-0.943)^{2}+y^{2}+0.25\,(z+0.333)^{2}\right)^{0.5}}+\frac{1}{\left((x+0.471)^{2}+(y-0.816)^{2}+0.25\,(z+0.333)^{2}\right)^{0.5}}+\frac{1}{\left((x+0.471)^{2}+(y+0.816)^{2}+0.25\,(z+0.333)^{2}\right)^{0.5}}>4.8$
Octave Code:
1; # Prevent Octave from thinking that this is a function
# though one is defined here
function w = f(x2,y2,z2,c,r,e)
x = (x2-c(1))/r(1);
y = (y2-c(2))/r(2);
z = (z2-c(3))/r(3);
# function at origin must be <0, and >0 far enough away. w=0 defines the surface
th = atan2(x,y);
r = sqrt(x.^2+y.^2);
phi = th + (r+r.^2)*0.05;
w = ((x/70).^4+(y/70).^4+(z/30).^4).^(-8.0);
w=w+((1/4)*((x+y+40+134)/(4*1.4))./(1.0+((((x+67)/4).^2+((y+67)/4).^2).^0.5-2).^2))./(1+((z-20)/20).^2);
w=w+((1/4)*((x-y+40+134)/(4*1.4))./(1.0+((((x+67)/4).^2+((y-67)/4).^2).^0.5-2).^2))./(1+((z-20)/20).^2);
w=w+((1/4)*((-x+y+40+134)/(4*1.4))./(1.0+((((x-67)/4).^2+((y+67)/4).^2).^0.5-2).^2))./(1+((z-20)/20).^2);
w=w+((1/4)*((-x-y+40+134)/(4*1.4))./(1.0+((((x-67)/4).^2+((y-67)/4).^2).^0.5-2).^2))./(1+((z-20)/20).^2);
w=w-1;
endfunction;
GNU Octave
This set of Antennas Multiple Choice Questions & Answers (MCQs) focuses on “Broadside Array”.
1. In a broadside array, the maximum radiation is directed with respect to the array axis at an angle ____
a) 90°
b) 45°
c) 0°
d) 180°
Explanation: In a broadside array the maximum radiation is directed along the normal to the axis of the array, i.e. at an angle of 90°. In an end-fire array the maximum radiation is along the axis of the array.
2. What is the phase excitation difference for a broadside array?
a) 0
b) π/2
c) π
d) 3π/2
Explanation: The array factor is maximum when $$\frac{sin \frac{Nφ}{2}}{\frac{Nφ}{2}}$$ is maximum, that is when $$\frac{Nφ}{2}=0,$$ where φ=kdcosθ+β
=> kdcosθ+β=0. For a broadside array, maximum radiation is normal to the axis of the array, so θ=90°
=> β=0
3. Which of the following statements is false regarding a broadside array?
a) The maximum radiation is normal to the axis of the array
b) Must have same amplitude excitation but different phase excitation among different elements
c) The spacing between elements must not equal to the integral multiples of λ
d) The phase excitation difference must be equal to zero
Explanation: Since the phase excitation difference is zero, all elements are equally excited with the same phase. In a broadside array the maximum radiation is directed along the normal to the axis of the array. The spacing between elements is kept unequal to integral multiples of λ to avoid grating lobes.
4. Which of the following cannot be the separation between elements in a broadside array to avoid grating lobes?
a) 4λ/2
b) λ/2
c) 3λ/2
d) 5λ/2
Explanation: The spacing between elements should not be equal to integral multiples of λ, to avoid grating lobes. The option 4λ/2 = 2λ.
So when d=2λ, grating lobes occur, which means maxima are found at other angles as well. This spacing is therefore undesirable.
5. Find the value θn at which null occurs for an 8-element broadside array with spacing d.
a) $$cos^{-1}\frac{λn}{Nd}$$
b) $$sin^{-1}\frac{λn}{Nd}$$
c) $$cos^{-1}\frac{2λn}{Nd}$$
d) $$sin^{-1}\frac{2λn}{Nd}$$
Explanation: Nulls occur when the array factor $$AF=\frac{sin \frac{Nφ}{2}}{\frac{Nφ}{2}} = 0$$
⇨ $$sin\frac{Nφ}{2} = 0 =>\frac{Nφ}{2}=±nπ \,and\, φ=kdcosθ+β=kdcosθ_n=\frac{2π}{λ} dcosθ_n$$
⇨ Null occurs at $$θ_n=cos^{-1}\frac{λn}{Nd}$$
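The null condition above is easy to verify numerically. A small sketch in plain Python (the function name and normalization are ours, not from the question set):

```python
import cmath, math

def array_factor(N, d_over_lambda, theta, beta=0.0):
    """|AF| of an N-element uniform linear array.

    psi = k*d*cos(theta) + beta with k*d = 2*pi*(d/lambda);
    beta = 0 is the broadside case discussed above.
    """
    psi = 2 * math.pi * d_over_lambda * math.cos(theta) + beta
    return abs(sum(cmath.exp(1j * m * psi) for m in range(N)))

# 8-element broadside array with spacing d = lambda/2:
# nulls predicted at theta_n = arccos(lambda*n / (N*d)), here arccos(n/4)
N, d = 8, 0.5
theta_1 = math.acos(1 / (N * d))          # first null, n = 1
print(array_factor(N, d, theta_1))        # ~0 (a null, up to float round-off)
print(array_factor(N, d, math.pi / 2))    # 8.0 (broadside maximum = N)
```

At θ = 90° the phase ψ vanishes, so all N element contributions add in phase, while at θ₁ the N phasors are equally spaced around the unit circle and cancel.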
6. What would be the directivity of a linear broadside array in dB consisting 5 isotropic elements with element spacing λ/4?
a) 9.37
b) 3.97
c) 6.53
d) 3.79
Explanation: Directivity $$D=\frac{2Nd}{λ}=\frac{2×5×\frac{λ}{4}}{λ}=2.5$$
D (dB) = 10log2.5=3.97 dB
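The directivity arithmetic in question 6 can be reproduced directly (a trivial check; the helper name is ours):

```python
import math

def broadside_directivity(N, d_over_lambda):
    """D = 2*N*d/lambda for a uniform broadside array of isotropic elements."""
    return 2 * N * d_over_lambda

D = broadside_directivity(5, 0.25)   # question 6: N = 5, d = lambda/4
print(D)                             # 2.5
print(10 * math.log10(D))            # ~3.98 dB (option b, 3.97, up to rounding)
```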
7. In a broadside array all the elements must have equal ______ excitation with similar amplitude excitations to get maximum radiation.
a) Phase
b) Frequency
c) Voltage
d) Current
Explanation: Since the phase excitation difference is zero, all elements are equally excited with the same phase. So in order to get maximum radiation, the elements should have equal phase excitation along with similar amplitude excitation.
8. The directivity of a linear broadside array with half wave length spacing is equal to _____
a) Unity
b) Zero
c) Half of the number of elements present in array
d) Number of elements present in array
Explanation: The directivity of N isotropic elements with spacing d is given by
Directivity $$D=\frac{2Nd}{λ}$$
⇨ $$D=\frac{2Nd}{λ}=\frac{2Nλ/2}{λ}=N$$
9. Which of the following is false regarding a linear broadside array with 2 elements and spacing λ?
a) Directivity = 6.02 dB
b) No grating lobes are present
c) Nulls occur at $$cos^{-1}\frac{1}{2}$$
d) The maxima occurs normal to the axis of array and also at other angles
Explanation: Since the spacing between elements is an integral multiple of λ (n=1), grating lobes occur.
Directivity $$D=\frac{2Nd}{λ}=4=6.02dB$$
⇨ Null occurs at $$θ_n=cos^{-1}\frac{λn}{Nd}=cos^{-1}\frac{1}{2}$$
10. What is the radiation pattern of a broadside array when the array element axis coincides with the 0° line?
(The answer options a)–d) were radiation-pattern figures and are not reproduced here.)
# Summary of phase transformations
• What microstructural transformations does a hypoeutectoid steel undergo during cooling?
• How does the microstructure of a hypereutectoid steel develop?
• At what temperature is pearlite formed?
## Introduction
In the article Phase transformations in the solidified state the microstructural changes of steels during cooling were explained in great detail. Since these transformations are very complex, a brief overview of the microstructural transformations is given in this summarizing article. More detailed information can be found in the corresponding main article.
## Solidification
The actual solidification process in steels takes place, independently of the carbon content, as in a solid solution alloy (typically a lenticular two-phase region between the liquidus and solidus lines). During solidification, or immediately thereafter, the carbon is completely soluble in the face-centered cubic $$\gamma$$-iron lattice. This solid solution of iron with carbon embedded at the center of the unit cell is called austenite.
In the solidified state, the iron-carbon phase diagram shows the typical horizontal “K” of a crystal mixture, in which the respective components are insoluble in one another (note that the carbon in the iron lattice is actually almost insoluble at room temperature). The phase transformations that the steel undergoes in the process can therefore be considered in analogy to a crystal mixture alloy. However, the phase transformations take place in an already solidified state.
## Phase transformations in solidified state
### Hypereutectoid steels
In hypereutectoid steels with a carbon content of more than 0.8 %, carbon in the form of cementite precipitates at the grain boundaries when the solubility limit is reached (grain boundary cementite). This leads to a depletion of carbon in the remaining austenite. Depletion finally progresses until the retained austenite reaches the eutectoid composition of 0.8 % carbon at 723°C.
Now, at a constant temperature of 723 °C, the face-centered cubic austenite begins to convert completely into the body-centered cubic ferrite structure. Since the carbon in the forming ferrite lattice can no longer be dissolved, it is separated directly from the lattice in the form of cementite lamellae. This eutectoid phase mixture of ferrite grains with the cementite lamellae embedded therein is also known as pearlite.
The microstructure of a hypereutectoid steel at room temperature consists of the previously precipitated grain boundary cementite and the pearlite formed.
### Hypoeutectoid steels
For hypoeutectoid steels with a carbon content of less than 0.8 %, ferrite is precipitated from the austenite lattice when the temperature falls below the $$\gamma$$-$$\alpha$$-transformation line, as the face-centered cubic austenite begins to transform into the body-centered cubic ferrite.
The carbon that can no longer be dissolved in the ferrite lattice formed diffuses into the surrounding austenite lattice, as it can still absorb carbon (under-saturated state). This leads to an accumulation of carbon in the remaining austenite. The enrichment finally progresses until the retained austenite has reached the eutectoid composition of 0.8 % carbon at 723 °C.
Now the residual austenite again transforms into pearlite (the processes of pearlite formation are always identical regardless of the steel).
At room temperature, the microstructure of a hypoeutectoid steel thus consists of the previously separated ferrite grains and the pearlite formed.
### Eutectoid steels
In a eutectoid steel with exactly 0.8 % carbon, the austenite has the eutectoid composition from the outset. Thus, the pearlite can form directly from the austenite without precipitation processes.
The microstructure of a eutectoid steel consists only of pearlite grains at room temperature.
Note that the microstructure of the steel is always composed of the two phases ferrite and cementite, regardless of whether it is a hypoeutectoid (hypopearlitic) steel or a hypereutectoid (hyperpearlitic) steel. This is precisely the characteristic of the metastable system.
The determination of the exact fractions of pearlite and ferrite, or of pearlite and grain boundary cementite, in a microstructure (the microstructure fractions) is explained in the next article.
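Those fractions follow from the lever rule applied at the eutectoid line. A rough sketch, assuming the usual reference compositions for the metastable Fe-C system (0.02 wt% C maximum solubility in ferrite, 6.67 wt% C in cementite); the function and its defaults are illustrative, not from this article:

```python
def microstructure_fractions(c_percent, c_eutectoid=0.8,
                             c_ferrite=0.02, c_cementite=6.67):
    """Lever-rule estimate of microstructure fractions just below 723 °C.

    Assumed reference compositions in wt% C: eutectoid 0.8, maximum
    solubility in ferrite ~0.02, cementite 6.67.
    """
    if c_percent < c_eutectoid:
        # hypoeutectoid: proeutectoid ferrite + pearlite
        pearlite = (c_percent - c_ferrite) / (c_eutectoid - c_ferrite)
        return {"ferrite": 1 - pearlite, "pearlite": pearlite}
    else:
        # hypereutectoid: grain boundary cementite + pearlite
        pearlite = (c_cementite - c_percent) / (c_cementite - c_eutectoid)
        return {"grain boundary cementite": 1 - pearlite, "pearlite": pearlite}

print(microstructure_fractions(0.4))   # ~51 % ferrite, ~49 % pearlite
print(microstructure_fractions(1.2))   # ~7 % cementite, ~93 % pearlite
```

Note that at exactly 0.8 % C the pearlite fraction is 1, matching the eutectoid case described above.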
## The Universal Approximation Power of Finite-Width Deep ReLU Networks
Sep 27, 2018 Blind Submission
• Abstract: We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (Bölcskei et al., 2018) of a wide class of functions, including polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal function which is continuous but nowhere differentiable. Together with the recently established universal approximation result for affine function systems (Bölcskei et al., 2018), this demonstrates that deep neural networks approximate vastly different signal structures generated by the affine group, the Weyl-Heisenberg group, or through warping, and even certain fractals, all with approximation error decaying exponentially in the number of neurons. We also prove that in the approximation of sufficiently smooth functions finite-width deep networks require strictly fewer neurons than finite-depth wide networks.
• Keywords: rate-distortion optimality, ReLU, deep learning, approximation theory, Weierstrass function
# Definition of the Tensor Product
Could anyone explain in basic language what the tensor product is? I am relatively new to matrix algebra and I am completely new to this specific concept. What exactly does it do for me and what properties of it should I know?
bothered using a search engine? math.stackexchange.com/questions/18881/…, math.stackexchange.com/questions/138331/…, math.stackexchange.com/questions/144501/…, and of course any book on advanced linear algebra or commutative algebra, and also online encyclopedias such as Wikipedia, ... – Martin Brandenburg Feb 11 '13 at 19:37
The tensor product is a gadget which allows you to turn bilinear maps into linear maps between vector spaces. I explain this below.
Let $V,W$ and $X$ be vector spaces. We say a function $f:V \times W \to X$ is bilinear if $f(\alpha v_1 + v_2, w) = \alpha f(v_1,w) + f(v_2,w)$ and $f(v, \beta w_1 + w_2) = \beta f(v, w_1) + f(v, w_2)$ for all $v_1, v_2, v \in V$, $w_1, w_2, w \in W$, $\alpha, \beta \in \mathbb{R}$ (let's work over $\mathbb{R}$). This is just saying that $f$ is linear in each "slot".
Now a tensor product of $V$ and $W$ is a vector space $T$, together with a bilinear map $\otimes$, such that if $f:V \times W \to X$ is bilinear, then there is a unique linear map $\varphi: T \to X$ such that $f = \varphi \circ \otimes$, i.e. $f(x,y) = \varphi(\otimes(x,y)) = \varphi(x \otimes y)$ ($\otimes(x,y)$ is denoted by $x \otimes y$).
Now given vector spaces $V$ and $W$, we can always construct a tensor product[1]. Here is one example:
$\otimes: \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}^{2 \times 2}$ given by $x \otimes y = x y^t$ ($y^t$ is the transpose) is a bilinear map. One can check (maybe not so easily) that the vector space $T = \operatorname{span}\{x \otimes y \mid x,y \in \mathbb{R}^2\}$, together with this map, is a tensor product of $\mathbb{R}^2$ and $\mathbb{R}^2$.
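That outer-product construction is easy to play with numerically. A small sketch in plain Python (helper names are ours) checking bilinearity in the first slot:

```python
def tensor(x, y):
    """x ⊗ y realized concretely as the outer product x y^T (list of rows)."""
    return [[xi * yj for yj in y] for xi in x]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    return [[c * a for a in row] for row in A]

x1, x2, y = [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]
a = 5.0

# Bilinearity in the first slot: (a*x1 + x2) ⊗ y = a*(x1 ⊗ y) + (x2 ⊗ y)
v = [a * u + w for u, w in zip(x1, x2)]
lhs = tensor(v, y)
rhs = add(scale(a, tensor(x1, y)), tensor(x2, y))
print(lhs == rhs)  # True
```

The analogous check in the second slot works the same way, which is exactly the bilinearity property the universal map $\varphi$ factors through.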
I suggest you pick up a book on multilinear algebra, which will discuss these things. Or read other things on the internet. Dummit and Foote is also good.
[1] Multilinear Algebra, Werner Greub
[2] Abstract Algebra, Dummit and Foote
thank you for your help :) – dreamer Feb 12 '13 at 13:48
No problem. I'm not sure how much my response helped. But now when I think about it, Abstract Algebra by Dummit and Foote have good explanation of the tensor product. I've added this reference to the post. – nigelvr Feb 12 '13 at 17:22
July 2016
Sunday Monday Tuesday Wednesday Thursday Friday Saturday
1
1. 8 am, Building 801/NSLS II
2
1. No events scheduled
3
1. No events scheduled
4
1. No events scheduled
5
1. 12 pm, Bldg. 30 - South Room
6
1. 12 pm, Berkner Hall Auditorium
Paul Schenly, director of Pianofest in the Hamptons, brings a group of young pianist participants in this workshop. Performances may be critiqued on stage. A wide range of compositions is selected, including works for two pianos.
7
1. 3 pm, Small Seminar Room, Bldg. 510
Hosted by: 'Michael Begel'
Matter constitutes 30% of the energy content of the Universe. The remaining 70% is what is called dark energy, which exhibits unusual repulsive gravitational interactions. On the matter side, only 5% is of known nature, i.e. ordinary matter such as is found in atoms, stars, planets, etc. From observations on all astrophysical and cosmological scales we know that most of it, i.e. 25%, is dark matter (DM) of unknown nature. The nature of DM is one of the most important open problems in science. The ongoing hunt for DM is multi-pronged and interdisciplinary, involving cosmology and astrophysics, particle and nuclear physics, as well as detector technology. In this talk we will focus on the direct detection of the dark matter constituents, the so-called weakly interacting massive particles (WIMPs), in underground labs. The detection consists of measuring the energy deposited in the detector by the recoiling nucleus after its elastic collision with a WIMP (spin independent or spin induced). In obtaining the event rates one needs models of the WIMP interaction and of its density in our vicinity, as well as of its velocity distribution. No events have so far been observed; only exclusion plots on the nucleon cross sections have been obtained, which will be discussed. Since the expected rates are very small and the usual experimental signature is not different from that of the backgrounds, we will discuss some special signatures that might aid in the analysis of the experiments, such as the time dependence of the signal (modulation effect) and the option of inelastic scattering, possible in some special targets, by detecting γ-rays following the de-excitation of the nucleus.
8
1. 12 pm, NSLS-II Bldg 744 (LOB 4), room 156
Hosted by: 'L. Carr, S. Chodankar and B. Ocko'
2. 2 pm, Small Seminar Room, Bldg. 510
Hosted by: 'Matthew Sievert'
Fluctuations of conserved charges are important observables that offer insight into the phase structure of strongly interacting matter. Around critical points, such as the chiral critical endpoint of QCD, higher order cumulants of the relevant quantities show universal behavior. The universal behavior of baryon number cumulants can be studied in effective models that lie in the same universality class as QCD. Such a model is for example the Quark Meson model. In my talk I discuss what one can learn from effective field theory studies of fluctuations and present my results obtained using the Functional Renormalization Group method in the Quark Meson model.
3. 2:30 pm, Large Conference Room, Bldg. 535
I will introduce my work on astronomical telescopes, especially on observatory control and imaging systems. From 1998, we began work on LAMOST, including its Observatory Control System (OCS), Survey Strategy System (SSS), and its Instrument Control System (ICS). Based on this work, we developed generic models and a framework for control systems of large astronomy telescopes, including a basic hierarchical structure, a workflow model, telescope control models based on object-oriented analysis, and the main data flow model of a general purpose telescope. A layered and orthogonal architecture with a wide range of adaptability, and a concrete architecture based on a message bus, will be designed and applied to LAMOST and FAST. For the requirement of autonomous control and observation for astronomical telescopes in the Antarctic, we also developed a framework based on RTS2 and EPICS. In the telescope, the imaging system, especially the detector system, is the key component. Adapting it to the requirements of low temperature and stable operation in the Antarctic, we are developing a camera for CSTAR including a vacuum chamber and CCD controller. Now in China, a 2.5-meter optical/infrared telescope, the Kunlun Dark Universe Survey Telescope, is planned with a large focal plane similar to LSST's, which poses even greater challenges for us.
9
1. No events scheduled
10
1. 10 am, Lobby in Berkner
A fabulous day of hands-on family fun with the Science Learning Center and Environmental Extravaganza, both ready for you to explore.
11
1. No events scheduled
12
1. 11 am, John Dunn Seminar Room, Bldg. 463
Hosted by: 'Kerstin Kleese van Dam'
I will discuss a new structured mesh PDE programming library, Grid, developed for QCD within an Intel Parallel Computing Centre. This library uses advanced C++11 template mechanisms to obtain faster-than-Fortran performance on typical operations, delivering as much as 65% of peak performance on modern Intel cores from high-level C++ code. Performance and experience from Intel's Knights Landing processor will be presented. The prospects of applying similar techniques to unstructured FEM codes are discussed, and an example from CFD is given. Finally, I discuss some of the Intel - Alan Turing Institute project codesign goals.
2. 2 pm, John Dunn Seminar Room, Bldg. 463
Hosted by: 'Sushil Sharma and Mary Carlucci-Dayton'
High performance goals of several facilities at BNL (NSLS-II, RHIC, CFN and future eRHIC) require high mechanical stability of their equipment such as magnets, BPMs, mirrors, monochromators, detectors, and microscopes. The mechanical stability of these components can be compromised by site-wide ground vibrations, local vibration sources (pumps, motors, etc.), and fluctuations in air and water temperatures. This presentation highlights the results of several studies that have been conducted at several BNL sites and facilities over the past five years to characterize the mechanical stability issues and to develop mitigation schemes.
13
1. 4 pm, Recreation Hall, Bldg. 317
Hosted by: 'T. Sampieri'
14
1. 12:30 pm, Building 510, Room 2-160
Hosted by: 'Hiroshi Ohki'
Anomalous chiral transport processes, with the notable examples of Chiral Magnetic Effect (CME) and Chiral Magnetic Wave (CMW), are remarkable phenomena that stem from highly nontrivial interplay of QCD chiral symmetry, axial anomaly, and gluonic topology. The heavy ion collisions, in which topological fluctuations generate chirality imbalance, and very strong magnetic fields $|\vec{\bf B}|\sim m_\pi^2$ are present during the early stage of such collisions, provide a unique environment to study these anomalous chiral transport processes. Significant experimental efforts have been made to look for signals of CME and various other signals of anomalous chiral transport effects in heavy ion collisions. Crucial for such efforts, is the theoretical development of quantitative simulations based on hydrodynamics that incorporates chiral anomaly, implements realistic initial conditions and properly accounts for possible backgrounds. We will introduce our recent progress to understand CME qualitatively, based on a 2+1D viscous hydrodynamics framework
2. 3 pm, John Dunn Seminar Room, Bldg. 463
Hosted by: 'Dr. Huilin Li'
BRCA/FA pathway plays a vital role in ensuring the integrity of mammalian genome. Mutations of genes in this pathway lead to either congenital defects or a variety of diseases including blood related diseases and cancers. In my presentation, I will discuss two discoveries made in my lab: (1) BRCA1 promotes the ubiquitination of PCNA and recruits the translesion DNA polymerases to the stalled DNA replication sites; (2) FANCM, BRCA1 and BLM collaboratively alleviate replication stress at the telomeres. Our discoveries may have important implications in finding better treatment strategies for certain cancers.
3. 3 pm, Small Seminar Room, Bldg. 510
Hosted by: 'Michael Begel'
Tau leptons are notoriously difficult particles to work with in the environment of a hadron collider due to their short lifetime and heavy enough mass for semi-hadronic decay. In this talk I will present the physics motivation for working with taus in spite of the challenges. And I will describe the work my group is involved with, from the first measurement of tau polarization at a hadron collider, to Higgs-tagging and searches for heavy, exotic particles. I will also describe the landscape for physics with taus at ATLAS as we look into Run2 and beyond.
4. 4 pm, Small Seminar Room, Bldg. 510
In the intriguing world of subatomic physics, neutrinos form the most bizarre tiny entities known to date. Well, they may be tiny, but the world surrounding them is astonishingly big. Today scientists study these elusive particles to understand the biggest puzzles in the universe, from the structure of the atom to the formation of a star. As the popular saying goes, "Whenever anything cool happens in the universe, neutrinos are usually involved." Although more than a trillion of these little particles pass unnoticed through our bodies every second of the day, neutrinos still remain largely mysterious. These famously shy particles are notoriously difficult to detect given how rarely they interact with normal matter. How rare, you ask? Let's say in your entire lifetime, perhaps one neutrino will interact with an atom in your body; and seriously, you should feel fortunate that it is that way. Also, the weird fact that these ghostly particles can "morph" into one another makes it even more difficult to detect them. Despite all these challenges, researchers have managed to capture a handful of them by building immense and exquisitely sensitive detectors in some of the most remote places of the planet, such as deep in the Antarctic ice, miles under a mine in Canada, and deep under a mountain in Japan. Come for an hour to be mesmerized by the scientific adventures in the wonderful world of neutrinos and how they can help us unlock some of the deepest secrets of the universe.
15
1. 10 am, Small Seminar Room, Bldg. 510
Hosted by: Jyoti Joshi
The past few years have brought several remarkable neutrino-related discoveries and experimental anomalies indicating that these elusive particles might hold clues to some of the most profound questions in particle physics, such as the matter-antimatter asymmetry and the possibility of additional low-mass neutrino states. Further exploration of these clues requires technological advances in neutrino detection. Liquid Argon Time Projection Chambers (LArTPCs) are imaging detectors that present neutrino interactions with the detail of bubble chambers, but with electronic data acquisition and processing. Various efforts are ongoing at Fermi National Accelerator Laboratory (Fermilab) to develop this intriguing technology. MicroBooNE is a 170 ton LArTPC which recently started collecting data with Fermilab's Booster Neutrino Beam. In addition to addressing the low-energy electromagnetic anomaly observed by the MiniBooNE experiment, the exceptional particle identification capability of MicroBooNE will make it possible for the first time to measure low-energy (~1 GeV) neutrino cross-sections in argon with high precision, thereby providing invaluable inputs for developing the nuclear models needed for future long-baseline neutrino oscillation experiments. MicroBooNE is also leading the way for an extensive short-baseline neutrino physics program at Fermilab and serves as an R&D project towards a long-baseline multi-kiloton-scale LArTPC detector. This talk will start with a brief overview of LArTPC efforts at Fermilab, followed by a description of the MicroBooNE experiment, its current status, and first physics results along with some future projections.
2. 12 pm, NSLS-II Bldg 744 (LOB 4), room 156
Hosted by: L. Carr, S. Chodankar and B. Ocko
16
1. No events scheduled
17
1. 10 am, Berkner Hall for Information
Tour the Center for Functional Nanomaterials, where Brookhaven scientists study structures as tiny as a billionth of a meter.
18
1. No events scheduled
19
1. No events scheduled
20
1. No events scheduled
21
1. 3 pm, Small Seminar Room, Bldg. 510
Hosted by: Xin Qian
The IceCube neutrino telescope at the South Pole has measured the atmospheric muon neutrino spectrum as a function of zenith angle and energy. Using IceCube's full detector configuration, we have performed a search for eV-scale sterile neutrinos. Such a sterile neutrino, motivated by the anomalies in short-baseline experiments, is expected to have a significant effect on the $\bar{\nu}_\mu$ survival probability due to matter-induced resonant effects for energies of order 1 TeV. This effect makes this search unique and sensitive to small sterile mixings. In this talk, I will present the results of the IceCube sterile neutrino search.
2. 5:30 pm, BNL Gazebo
All you can eat burgers, hot dogs, snacks, drinks. Beer for those 21+ (bring photo ID). Guests/family welcome. $3 admission, purchase at BERA store in Berkner Hall (open 9am-3pm) by 1 pm Thursday; $5 at the door. (Rain date is Friday, July 22.) Sponsored by the NSLS-II User Community and hosted by the Association of Students and Postdocs (ASAP)
22
1. 2 pm, Small Seminar Room, Bldg. 510
Hosted by: Matthew Sievert
Transport coefficients in two systems are addressed via holographic methods originating from the AdS/CFT correspondence. The first system is a neutral conformal fluid. In linearised hydrodynamics, beyond shear viscosity, the all-order gradient expansion can be efficiently resummed into two momenta-dependent transport coefficient functions. The second system is an e/m current coupled via the chiral anomaly to an axial U(1) current. The anomaly-free all-order transport coefficients are resummed into three momenta-dependent functions: the diffusion function and two conductivities. Anomaly-induced transport, resummed to all orders, generalises the chiral magnetic effect (CME) and related phenomena. Novel, anomaly-induced non-linear effects will be presented too.
23
1. No events scheduled
24
1. 10 am, Berkner Hall for Information
Visit the National Synchrotron Light Source II, where scientists use intense beams of light to see the inner structure of batteries, proteins, space dust, and more.
25
1. 10 am, Room 300, Chemistry Bldg. 555 - 3rd Floor
Hosted by: Miomir Vukmirovic
Synchrotron-based X-ray absorption spectroscopy (XAS) is a non-destructive technique that measures the changes in the x-ray absorption coefficient of a material as a function of energy. The X-rays are highly penetrating and allow studies of gases, solids or liquids at concentrations as low as a few ppm. As an element-specific technique, XAS can resolve the oxidation state of the element, as well as its coordination environment and subtle changes within. Its unique power is found in application to metal clusters, particularly in nanomaterials. It can resolve the inner structure of a nanoparticle composed of two or more elements, i.e. a solid solution, an aggregate mixture, or a core-shell particle in which one metal is present mostly in the center of the particle (core) and the other forms a shell around it. The latter nanoparticle systems are of special interest for electrocatalysts composed of expensive noble metals, because minimizing the noble metal content is the goal of present technology development. The lecture focuses on in-situ characterization of electrochemical systems composed of two or more metal atoms for fuel cell technology. Selected examples show the changes in the inner structure of the catalyst during the oxidation of fuels on anode systems, or oxygen reduction on cathodes, including size, shape and partial oxidation state, and correlate them to the catalyst's activity and stability.
26
1. No events scheduled
27
1. No events scheduled
28
1. 3 pm, Small Seminar Room, Bldg. 510
Hosted by: Thomas Ullrich
The accelerator-based neutrino-oscillation program, aimed at measuring the oscillation parameters and observing leptonic CP violation, is moving full steam ahead. However, recent measurements have revealed unexpected and interesting neutrino interaction physics, and exposed the inadequacy of relativistic Fermi gas (RFG) based Monte-Carlo generators in describing neutrino-nucleus scattering, resulting in large systematic uncertainties. A more detailed and careful neutrino-nucleus modeling, covering the whole experimental kinematical space, is essential in order to achieve the unprecedented precision goals of present and future accelerator-based neutrino-oscillation experiments. In this talk, I will present a microscopic Hartree-Fock (HF) and continuum random phase approximation (CRPA) approach to electroweak scattering off nuclei from low energy (threshold) to the intermediate energy region. As a necessary check of the reliability of this approach, I will first present an electron-nucleus ($^{12}$C, $^{16}$O, $^{40}$Ca) cross-section comparison with data (in the kinematic range of interest) to validate the model. Then, I will present flux-folded (anti)neutrino cross-section calculations and comparisons with the measurements of the MiniBooNE and T2K experiments. I will draw special attention to the contribution emerging from low-energy nuclear excitations in the most forward scattering bins of the MiniBooNE and T2K signals, and their impact on the non-trivial differences between muon-neutrino and electron-neutrino cross sections. These effects remain inaccessible in current relativistic Fermi-gas (RFG) based Monte-Carlo generators.
29
1. 2 pm, Small Seminar Room, Bldg. 510
Hosted by: Matthew Sievert
High-$p_T$ dijet production in ep/eA DIS at small x (high energy) involves the expectation value of a trace of four Wilson lines, i.e. the quadrupole. At leading power the isotropic part can be expressed as the conventional Weizsacker-Williams gluon distribution. On the other hand, the distribution of linearly polarized gluons determines the amplitude of the $\sim\cos(2\phi)$ anisotropy of the transverse momentum imbalance. I shall also discuss the operator that determines the next-to-leading power correction, its expectation value in a Gaussian theory (at large $N_c$), and the resulting ...
30
1. No events scheduled
31
1. 10 am, Berkner Hall for Information
Explore the Relativistic Heavy Ion Collider, where particles are smashed together at near-light-speed to reveal the secrets of our universe. * Facility tour appropriate for ages 10 and over.
1. JUL
28
Thursday
Particle Physics Seminar
"Modeling electron- and neutrino-nucleus scattering in kinematics"
Presented by Vishvas Pandey, Ghent University
3 pm, Small Seminar Room, Bldg. 510
Thursday, July 28, 2016, 3:00 pm
Hosted by: Thomas Ullrich
2. JUL
29
Friday
Nuclear Physics Seminar
"Azimuthal anisotropy and the distribution of linearly polarized gluons in DIS dijet production at high energy"
Presented by Adrian Dumitru, Baruch College
2 pm, Small Seminar Room, Bldg. 510
Friday, July 29, 2016, 2:00 pm
Hosted by: Matthew Sievert
3. JUL
31
Sunday
Summer Sunday
"Atom-Smashing Fun: Relativistic Heavy Ion Collider"
10 am, Berkner Hall for Information
Sunday, July 31, 2016, 10:00 am
4. AUG
1
Monday
Sambamurti Lecture
"Electron-Positron Tomography Seeking Symmetry in the Quark-Gluon Plasma"
Presented by Lijuan Ruan, Brookhaven National Laboratory
4 pm, Large Seminar Room, Bldg. 510
Monday, August 1, 2016, 4:00 pm
Hosted by: John Haggerty
5. AUG
3
Wednesday
BSA Noon Recital
"Pianofest"
12 pm, Berkner Hall Auditorium
Wednesday, August 3, 2016, 12:00 pm
Paul Schenly, Director of Pianofest in the Hamptons, brings a group of young pianists participating in the second session of this workshop. Performances may be critiqued on stage. A wide range of compositions will be selected, including works for two pianos.
6. AUG
4
Thursday
RIKEN Lunch Seminar
"TBA"
Presented by Amir Rezaeian, The Federico Santa Maria Technical University
12:30 pm, Building 510, Room 2-160
Thursday, August 4, 2016, 12:30 pm
Hosted by: Hiroshi Oki
7. AUG
5
Friday
Particle Physics Seminar
"Study of the detection of supernova neutrinos"
Presented by Hanyu Wei, Tsinghua University
10 am, Small Seminar Room, Bldg. 510
Friday, August 5, 2016, 10:00 am
Hosted by: Xin Qian
A core-collapse supernova explosion would release an enormous amount of neutrinos, the detection of which could yield answers to many questions of supernova dynamics and neutrino physics. The collective neutrinos from all past supernovae across the universe (supernova relic neutrinos) are also observable, and their detection would provide insight into stellar evolution and cosmology. In this talk, I will first introduce supernova burst neutrinos as well as supernova relic neutrinos. Then, I will present the design, characteristics, and sensitivity of an online trigger system for supernova burst neutrinos at Daya Bay. I will also present a search for supernova burst neutrinos at Daya Bay using about 600 days of data. Finally, a sensitivity study of the discovery potential for supernova relic neutrinos with a slow liquid scintillator will be presented, an approach well suited to kiloton-scale detectors.
8. AUG
12
Friday
HET/RIKEN Seminar
"TBA"
Presented by Stefano Di Vita, DESY
12:15 pm, Building 510, Room 2-160
Friday, August 12, 2016, 12:15 pm
Hosted by: Pier Paolo Giardino
9. AUG
26
Friday
HET Lunch Discussions
"TBA"
Presented by Taku Izubuchi, BNL
12:15 pm, Building 510, Room 2-160
Friday, August 26, 2016, 12:15 pm
Hosted by: Christoph Lehner
10. SEP
8
Thursday
CFN Colloquium
"TBD"
Presented by Alan Aspuru-Guzik
1:30 pm, CFN, Bldg 735, Seminar Room, 2nd Floor
Thursday, September 8, 2016, 1:30 pm
Hosted by: Qin Wu
11. SEP
8
Thursday
""Open to the Public""
6:30 pm, Berkner Hall, Room B
Thursday, September 8, 2016, 6:30 pm
12. SEP
14
Wednesday
HET
"TBA"
Presented by Gopolang Mohlabeng, University of Kansas
2 pm, Small Seminar Room, Bldg. 510
Wednesday, September 14, 2016, 2:00 pm
Hosted by: Sally Dawson
13. SEP
15
Thursday
BSA Distinguished Lecture
"Solar Driven Water Splitting"
Presented by Professor Harry Gray, California Institute of Technology
4 pm, Berkner Hall Auditorium
Thursday, September 15, 2016, 4:00 pm
Hosted by: Peter Wanderer
14. OCT
5
Wednesday
HET/RIKEN Seminar
"TBA"
Presented by Marco Farina, Rutgers University
2 pm, Small Seminar Room, Bldg. 510
Wednesday, October 5, 2016, 2:00 pm
Hosted by: Pier Paolo Giardino
15. OCT
6
Thursday
Particle Physics Seminar
"Dark Interactions: perspective from theory and experiment"
9 am, Small Seminar Room, Bldg. 510
Thursday, October 6, 2016, 9:00 am
Hosted by: Michael Begel
16. OCT
13
Thursday
6:30 pm, Berkner Hall, Room B
Thursday, October 13, 2016, 6:30 pm
Hosted by: Nora Sundin
17. OCT
26
Wednesday
HET/RIKEN Seminars
"TBA"
Presented by Stefania Gori, University of Cincinnati
2 pm, Small Seminar Room, Bldg. 510
Wednesday, October 26, 2016, 2:00 pm
Hosted by: Pier Paolo Giardino
18. NOV
10
Thursday
6:30 pm, Berkner Hall, Room B
Thursday, November 10, 2016, 6:30 pm
Hosted by: Nora Sundin
19. DEC
1
Thursday
PACCD Workshop (Precision Astronomy with Fully Depleted CCDs)
8 am, Large Seminar Room, Bldg. 510
Thursday, December 1, 2016, 8:00 am
Hosted by: Andrei Nomerotski
20. DEC
2
Friday
PACCD Workshop (Precision Astronomy with Fully Depleted CCDs)
8 am, Large Seminar Room, Bldg. 510
Friday, December 2, 2016, 8:00 am
Hosted by: Andrei Nomerotski
21. DEC
8
Thursday
6:30 pm, Berkner Hall, Room B
Thursday, December 8, 2016, 6:30 pm
Hosted by: Nora Sundin
22. JAN
12
Thursday
6:30 pm, Berkner Hall, Room B
Thursday, January 12, 2017, 6:30 pm
Hosted by: Nora Sundin
23. FEB
9
Thursday
6:30 pm, Berkner Hall, Room B
Thursday, February 9, 2017, 6:30 pm
Hosted by: Nora Sundin
24. MAR
9
Thursday
6:30 pm, Berkner Hall, Room B
Thursday, March 9, 2017, 6:30 pm
Hosted by: Nora Sundin
25. APR
13
Thursday
6:30 pm, Berkner Hall, Room B
Thursday, April 13, 2017, 6:30 pm
Hosted by: Nora Sundin
26. MAY
11
Thursday
6:30 pm, Berkner Hall, Room B
Thursday, May 11, 2017, 6:30 pm
Hosted by: Nora Sundin
27. JUN
8
Thursday
# Tag Info
20
This is a physical rather than a mathematical justification - ignore my answer if that isn't what you wanted! All systems have some thermal motion so they explore the phase space in their immediate vicinity. If there is a nearby point with a free energy lower by some amount $\Delta G$ then the relative probability of finding the system at that point will be ...
12
Here's another way of looking at it. Let M1, M2, M3 be our three masses. In the three body problem we're considering, the whole frame containing M1, M2 and M3 is rotating. You're right to think that if that frame was fixed then the points L4 and L5 would not be stable. After all if you perturb M3 from L4 or L5 then it should just roll down the potential ...
11
Of course it has something to do with the liquid water entering the gas phase just above the cup of tea, but how does that give the bag of tea a directed motion to one side? Nope. The teabag is dangled by a string. Remember that the string is made of wound up threads: Now, the threads stay wound up because they fit well and they have a knack of ...
11
The "simplest" classical explanation I know is the van der Waals interaction described by Keesom between two permanent dipoles. Let us consider two permanent dipoles $\vec{p}_1$ (located at $O_1$) and $\vec{p}_2$ located at $O_2$. Their potential energy of interaction is: U(\vec{p}_1,\vec{p}_2,\vec{O_1 O_2}) = -\vec{p}_1\cdot \vec{E}_2 = ...
10
The formula you quote does not contain the potential energy; it is valid for a free particle (i.e. a particle which is not affected by an external potential). You can link it to classical mechanics by evaluating it for small values of $p$ (more precisely: $p \ll mc$): $$E = \sqrt{\left(mc^2\right)^2 + p^2 c^2} = c \sqrt{m^2c^2 + p^2} = \cdots$$ $$\cdots = mc^2 + \frac{p^2}{2m} + \ldots$$
9
The energy in your equation is for a free rigid body in the absence of a potential. We can see this if we start with a Lagrangian with a scalar function, $\Phi(q)$, and remember $\gamma$ is a function of $\dot{q}$: $$L=T-V=-\gamma^{-1}(\dot{q})\, mc^2-\Phi(q)$$ Then if we find the momentum $$\pi=\frac{\partial L}{\partial \dot{q}} = \ldots$$
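(Editorial sketch, not part of the quoted answer.) The truncated expansion above can be checked numerically: for $p \ll mc$ the exact free-particle energy should differ from $mc^2 + p^2/2m$ only at order $p^4/(8m^3c^2)$. The function names below are illustrative, and units are chosen with $c = 1$:

```python
# Check the small-p expansion of E = sqrt((m c^2)^2 + (p c)^2) numerically.
# Editorial sketch: function names are illustrative; units chosen with c = 1.
import math

def exact_energy(m, p, c=1.0):
    """Exact relativistic free-particle energy."""
    return math.sqrt((m * c**2) ** 2 + (p * c) ** 2)

def small_p_expansion(m, p, c=1.0):
    """Rest energy plus the Newtonian kinetic term."""
    return m * c**2 + p**2 / (2 * m)

m, p = 1.0, 1e-3          # p << m*c, so the expansion should be excellent
err = small_p_expansion(m, p) - exact_energy(m, p)
# the leading neglected term is p**4 / (8 m**3 c**2), about 1.25e-13 here
```

The expansion always slightly overestimates the exact energy, since the next term in the series is negative.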
8
You're right that if you take Newton's law of gravity as is and apply it to a 2D universe, you'll get an infinite result. So you do need to use a modified theory in two dimensions, or indeed in any number of dimensions other than three. The proper way to do this is using general relativity, and if you apply GR to 2+1D spacetime, you get something that looks ...
8
Gravity is doing that work! If you observe, the domino is in a position of unstable equilibrium. Edit: as pointed out in the comments, this position is one of metastable, not unstable, equilibrium. This means that the domino is in a state where it hasn't achieved the minimum possible energy state yet. The energy I'm talking about here is the ...
8
The potential energy only being defined up to a constant does not imply that potential energy differences only depend on differences in position. To see this mathematically, assume that a function $U$ has the property that $U(x_2)-U(x_1) = f(x_2-x_1)$ for some function $f$. Then if we take $x_2 = x+\Delta x$ and $x_1 = x$, and divide both sides by ...
8
Note the title of the link you give: Minimum total potential energy principle (bold mine). The only answer to "why" questions about principles in physics is "because the theoretical models dependent on it have been found to describe all known data and can predict new ones". "Why" questions in physics, when they hit postulates and laws, are like asking why ...
8
There are various ways to decide which of the assumptions are primary and which of them are their consequences but $E=VQ$ may be most naturally interpreted as the definition of the potential. The potential energy is a form of energy and the potential (and therefore voltage, when differences are taken) is defined as the potential energy (or potential energy ...
8
You say: Imagine a book that we lift it with a force that is exactly equal to the force of gravity so the forces cancel out and the book moves with a constant velocity. so I'm guessing your reasoning is that the net force on the book is zero so the amount of work done on the book is zero. And you are absolutely correct - no work is done on the book and ...
8
Most of the electromagnetic energy is in x-rays which means it is deposited in the bulk material of the bomb and in the surrounding air over a few meters (or tens of meters at most). All that stuff heats up, needs to expand but piles up against other stuff also trying to expand. You get a massively energetic shock-wave of very hot material which is in ...
7
Gravitational potential energy is usually measured as a negative value. We do this because an object that is so far away from a gravity well that it practically is unaware of it shouldn't be considered as having any potential energy. So as $r\to\infty$, $PE\to0$. As an object falls into a gravity well, it loses potential energy, so gravitational PE is a ...
7
Actually, your expression for the potential $\Phi(r)$ is incorrect. The expression $\Phi(r) = -\frac{GM(r)}{r}$ is only valid outside the sphere. As an explicit demonstration of its invalidity, note that $$\underset{r\rightarrow0}{\text{lim}}\,\Phi(r)=\underset{r\rightarrow0}{\text{lim}}\,\left[-\frac{G}{r}\int_0^r4\pi r'^2\rho(r')\,dr'\right]=0$$ assuming ...
7
Let $E$ denote a quantity that does not change over time (from the first principle). Consider a ball with mass $m$ dropped from a height $h$. As the ball drops, its speed changes due to the gravitational acceleration $g$, reaching a final value $v$ at impact. Thus, we can infer that the quantity $E$ depends on these 4 parameters: $$E(m,H,g,V)$$ where $H$ ...
7
This effect is called capillarity and is not that straightforward. The contact between water and a solid surface is determined by the chemical bonds. It is macroscopically observed in the contact angle that the water/air surface makes with the solid surface. This angle depends on the strength of the bonds between the solid and the water molecules. You can ...
7
While it may be possible to derive a violation of energy conservation due to intersecting equipotentials, there is a much more intuitive and in my opinion a more fundamental reason that equipotentials cannot intersect: Potential is a single-valued function. A good analogy for potential in this case is a map of the ground elevation of the earth; a ...
6
When you look at the dynamics in the rotating reference frame, there are 4 forces acting on the particle: the two gravitational pulls from the massive bodies, the centrifugal push away from the center of rotation (located between the massive objects) and the Coriolis force. The first three forces depend on the position of the particle, and can be derived ...
6
Yes the free body moves outward, but there are two critical things you have to know to interpret this statement correctly. First, this is the effective potential, taking into account gravity and centrifugal force. It has this form because we went into the non-inertial frame co-rotating with the two masses. Mathematically, the potential is ...
6
The question about minimizing potential energy and the replies that such questions do not make much sense is a typical conversation between a physicist and a mathematician: Physicist: - Why do systems tend to minimize potential energy? Mathematician: - Look around, lots of things follow this principle: potential energy, entropy... Physicist: - OK, I can see ...
6
Schrödinger's Wave Equation is an application of Hamiltonian Mechanics. Unlike Newtonian Mechanics, Hamiltonian Mechanics relies on knowing about the things that contribute to the energy of the system. If you know the things which contribute to the energy of a system, then you can determine things like forces, accelerations, and positions. (All through the ...
6
Think about the work-kinetic energy theorem, which states that the net work done on an object is equal to its change in kinetic energy: $$W_{net}=\Delta\mathrm{KE}.$$ You are right that when lifting an object of mass $m$ by a height $h$, in a uniform gravitational field, the work you do is $W_{you}=mgh$ (assuming, as you said, that you're applying a force ...
6
Yes, quantum tunnelling in the double well potential can be solved in a Wick-rotated Euclidean formulation $$S_E[x]~=~\int \! dt_E \left[ \frac{1}{2}\left(\frac{dx}{dt_E}\right)^2 - (-V) \right],$$ see e.g. Ref 1. Here $t_E=it_M$ denotes Euclidean time. The Euclidean action is in turn interpreted as the usual kinetic minus potential term with a potential ...
5
If the particle moves from the point $x$ to $x+dx$, and assume $dx\gt 0$ for simplicity, then its potential energy increases by $$dU = \frac{dU}{dx}dx$$ Well, it increases if $dU$ is positive and decreases if $dU$ is negative. So far I have only used the definition of the derivative – pure mathematics. However, the total energy is conserved. The sum of ...
5
It isn't possible to measure potential energy because it has a (global) gauge symmetry. It's like trying to measure the height of a mountain - this could be the height above sea level, the height relative to the deepest sea trench, the height relative to the centre of the earth and so on. Any measurement can only measure the change in potential energy, and ...
5
For forces that change along the way, displacement is not the thing to calculate work with. Let $\gamma : [0,1] \rightarrow \mathbb{R}^3$ be the (closed or open) path that the particle the force is exerted on follows. Then, the work done along that path is $$W[\gamma,F] = \oint_\gamma \vec{F}(\vec{x})\cdot \mathrm{d}\vec{x}$$ which is a line integral. If ...
5
Well, we can do a simple counter-example. Let $$\vec{F}(\vec{x}) = F_0 \cdot \varrho(\vec x)$$ where $\varrho$ is the function that rotates vectors by 90° counter-clockwise (in matrix form $\left(\begin{smallmatrix}0 & -1\\1 & 0\end{smallmatrix}\right)$ if you prefer that). Clearly, for the closed path $$\vec{\gamma}\colon\quad [0, 2\pi]\ \to\ \ldots$$
5
Your teacher's explanation is incorrect. A simple counterexample can be constructed to illustrate this by considering what happens when the role of your arm is replaced by that of a rubber band. When a weight is suspended from the ceiling by a rubber band, the band stretches and its polymer chains become more ordered, in exact analogy to your teacher's ...
5
It's valid in the sense that it does tell you the rest energy of a 200-pound person, but it does not tell you how much energy you could get by splitting all those atoms. As a matter of fact, most of the atoms in a human body are carbon, nitrogen, and oxygen; splitting these atoms takes energy, it doesn't produce it. Your character would need to tap into a ...
Only top voted, non community-wiki answers of a minimum length are eligible
# Significance of the experiment
In their 1914 experiment, James Franck and Gustav Hertz demonstrated that in collisions between accelerated electrons and gaseous atoms, energy was only transferred from the electrons to the atoms if the electron had gained an energy before the collision equal to or greater than the energy required to excite the atom from its ground state to its next lowest energy level. If it had more energy than needed, the electron only transferred the amount corresponding to the energy difference between the two atomic energy levels. That is, atoms have discrete, quantized energy states, providing early evidence for the validity of quantum theory. Franck and Hertz won the 1925 Nobel Prize in recognition of this result.
# Conceptual Introduction
Review the introduction to the Franck-Hertz effect provided by the excellent HyperPhysics website. There are 5 key panels on this website that you should read:
In addition, familiarize yourself with the more detailed description in (Melissinos 2003) on pages 10 - 19.
# Questions for first lab class
A few short questions about this experiment and the apparatus are listed below. Review the sketch of the Franck-Hertz apparatus — particularly the functions of the cathode, anode, and grid.1 Write short answers to these questions in your lab book; bring the lab book to class.
1. The gas of atoms is contained inside a glass chamber. Electrons are emitted from a metal surface called the cathode. How are electrons emitted from the cathode?
2. The electrons gain kinetic energy (between collisions) by being accelerated towards a grid. How are the electrons accelerated towards the grid(s)?
3. Are the atoms comprising the gas also accelerated? If so, when, and under what conditions?
4. Some electrons accelerated towards the grid actually pass through it, and are then collected at the anode. A device — typically an electrometer or current preamp — is used to measure the current carried by these electrons. Why does the transfer of energy from an electron to an atom result in a dip in the electron beam current?
5. What is the purpose of the negative voltage (counter voltage) between the anode and the grid?
1. Sometimes there are two grids, one to initially accelerate the electrons away from where they are emitted towards the gas and one to accelerate the electrons towards the anode after their collisions.
# Activities for first lab class
Your lab instructor will introduce you to apparatus used to observe the Franck-Hertz effect for collisions of electrons with Ne atoms. Discuss your answers to the questions above (which you answered in preparation for this class) and use the answers to explain how the apparatus works.
Note that for the neon and argon apparatuses, there are two grids: $G_1$ and $G_2$.1 An example of this two-grid style apparatus and the circuit used to control it are shown in Figure 1.7 of (Melissinos 2003) on page 15; an example of just the apparatus tube is shown below in Fig. \ref{fig:TwoGridFHtube}.
1. In the mercury apparatus, there is only one grid. The $$G_1$$ (control) grid is missing; the one remaining grid plays the role of $$G_2$$
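Before class it may help to think about what the current-versus-voltage curve should look like. The sketch below is our illustration, not part of the manual: it assumes the simplest picture, in which current dips appear near integer multiples of the atomic excitation energy (roughly 4.9 eV for mercury; the exact value for your gas is something you will measure).

```python
# Sketch: expected accelerating voltages of the current dips in a
# Franck-Hertz curve, assuming dips fall near integer multiples of the
# excitation energy E_exc (a simplification; contact-potential offsets
# shift the whole pattern in the real apparatus).
def dip_voltages(e_exc_ev, n_dips):
    """Return the accelerating voltages (in V) of the first n_dips dips."""
    return [n * e_exc_ev for n in range(1, n_dips + 1)]

# Illustrative value only; use the excitation energy of your gas.
print(dip_voltages(5.0, 3))  # [5.0, 10.0, 15.0]
```

The spacing between successive dips, rather than the absolute position of the first one, is what gives the excitation energy directly.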
• ### Predictions for $p+$Pb Collisions at $\sqrt{s_{NN}} = 5$ TeV: Comparison with Data (1605.09479)
May 31, 2016 hep-ph, nucl-ex, nucl-th
Predictions made in Albacete {\it et al} prior to the LHC $p+$Pb run at $\sqrt{s_{NN}} = 5$ TeV are compared to currently available data. Some of the predictions shown here have been updated by applying the same experimental cuts as the data. Additional predictions, especially for quarkonia, that were provided to the experiments before the data were made public but arrived too late for the original publication are also shown here.
• ### Electron Ion Collider: The Next QCD Frontier - Understanding the glue that binds us all(1212.1701)
Nov. 30, 2014 hep-ph, hep-ex, nucl-ex, nucl-th
This White Paper presents the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community. It was commissioned by the managements of Brookhaven National Laboratory (BNL) and Thomas Jefferson National Accelerator Facility (JLab) with the objective of presenting a summary of scientific opportunities and goals of the EIC as a follow-up to the 2007 NSAC Long Range plan. This document is a culmination of a community-wide effort in nuclear science following a series of workshops on EIC physics and, in particular, the focused ten-week program on "Gluons and quark sea at high energies" at the Institute for Nuclear Theory in Fall 2010. It contains a brief description of a few golden physics measurements along with accelerator and detector concepts required to achieve them, and it benefited from inputs from the users' communities of BNL and JLab. This White Paper offers the promise to propel the QCD science program in the U.S., established with the CEBAF accelerator at JLab and the RHIC collider at BNL, to the next QCD frontier.
• ### Predictions for $p+$Pb Collisions at $\sqrt{s_{NN}} = 5$ TeV (1301.3395)
Jan. 22, 2013 hep-ph, nucl-th
Predictions for charged hadrons, identified light hadrons, quarkonia, photons, jets and gauge bosons in $p+$Pb collisions at $\sqrt{s_{NN}} = 5$ TeV are compiled and compared. When test-run data are available, they are compared to the model predictions.
### 11 How to load a PKCS#12 Digital Certificate with Javascript WebCrypto API
lumee 1 hours 33 minutes ago. 1 answers, 0 views javascript digital-signature digital-certificate webcryptoapi
I'm trying to sign data using the WebCrypto API, but instead of creating a private/public key and exporting it to pkcs#1 or 8, I would really like to use a user's PKCS#12 to sign data. I've read the W3C spec,...
### 59 C# conditional AND (&&) OR (||) precedence
We get into unnecessary coding arguments at my work all the time. Today I asked whether conditional AND (&&) or OR (||) has higher precedence. One of my coworkers insisted that they have the same precedence; I had doubts, so I looked...
### 4 Bigtable do not automatically break lines of text containing dots
Robert Pereira 06/18/2018 at 14:06. 2 answers, 50 views longtable
I'm working with a PDF report from data received from an application and recorded into a formated .tex LaTeX file. The aim of this report is to present data as a long table, which can span over several pages. Until...
### 30 Enable smooth scrolling for my website in all browsers
Ian 06/18/2018 at 12:51. 1 answers, 0 views javascript performance scroll parallax smooth-scrolling
I'm developing a parallax scrolling website using the Stellar and Skrollr libraries. The website behaves perfectly in Firefox because of Firefox's smooth scrolling feature, but in Chrome, scrolling with the mouse wheel is jerky, and the parallax effect is almost...
### 2 Why didn't the linguistic split overlap with the religion split in Prussian Silesia?
Bregalad 06/18/2018 at 12:18. 1 answers, 50 views religion language imperial-germany silesia
The Prussian province of Silesia was split in two "halves", the northwest, centered around Breslau (Wrocław), was mostly Protestant, German-speaking, and was akin to other regions of the kingdom of Prussia. The southeast, centered around Oppeln (Opole) however was very...
### 7 Longest and Shortest won't work in ReplaceList?
Wjx 06/18/2018 at 11:39. 2 answers, 0 views list-manipulation pattern-matching design-patterns
I'm trying to make a pattern that's easy to preceive by human but hard to write out by Mathematica when I came across this problem. (Check the original problem here) Let's check this simple case: I've got a list {5,1,2,1,2,1,2,1,2,4,3,3,3,3,3,3,10}...
### 3 What do you call the “ceiling” of a table?
alex 06/18/2018 at 11:39. 2 answers, 166 views word-request image-identification
In other words, what would you call this?
### 3 texstudio shows no errors [closed]
varindra 06/18/2018 at 07:04. 0 answers, 0 views errors compiling texstudio
so when I hit the compile button or even build&view, texstudio will only display the pdf if there are no errors. It will not however, tell me where the errors are, let alone what they are. Instead, the circular loading...
### 9 Are there any alternatives to the find command on linux for SunOS?
Pratik Mayekar 06/18/2018 at 06:04. 1 answers, 688 views linux shell-script find solaris
The find command on Linux has a lot of options compared to the find command on SunOS or Solaris. I wanted to use the find command like this: find data/ -type f -name "temp*" -printf "%TY-%Tm-%Td %f\n" | sort -r...
### 3 Need help scripting with jar files
user2656801 06/18/2018 at 05:07. 1 answers, 0 views bash java
I need to launch 17 .jar files, one at a time, with a 7 second delay in between each. 3 hours later, I need to kill all java processes, but only those running on surge user. 3 hours later I...
### 12 Why would the color of graph change once the number of vertexes reach 1000?
HMC 06/18/2018 at 03:42. 1 answers, 124 views plotting graphics graphs-and-networks graphics3d
I have created 3 graphs for vertexes number 998, 999 and 1000 as below:- f[m_] := Graph[Range@MeshCellCount[m, 0], MeshCells[m, 1][[All, 1]], EdgeWeight -> PropertyValue[{m, 1}, MeshCellMeasure], VertexCoordinates -> MeshCoordinates[m]]; BlockRandom[SeedRandom[12]; pts998 = RandomReal[100, {998, 3}];] BlockRandom[SeedRandom[12]; pts999 = RandomReal[100, {999,...
### 3 Correctly escaping quotation marks
Anoynmous 06/18/2018 at 03:09. 2 answers, 224 views bash shell python quoting
I have the following command: python -c 'import crypt; print(crypt.crypt("$Password", crypt.mksalt(crypt.METHOD_SHA512)))' where $Password is a shell variable. How do I correctly expand it as a variable, and not have it treated as a literal?...
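The quoting rule behind this question can be shown in a few lines. This is our illustration of the general shell behaviour, not the accepted answer to the question: single quotes suppress variable expansion, double quotes allow it, and passing data as a separate argument avoids embedding it in single-quoted code at all.

```shell
# Sketch of shell quoting behaviour (illustrative variable name).
Password='hunter2'
printf '%s\n' 'literal: $Password'     # single quotes: printed verbatim
printf '%s\n' "expanded: $Password"    # double quotes: variable expands
```

For `python -c`, the usual fix is to keep the Python source single-quoted and hand the value over as `sys.argv[1]` or via the environment, rather than splicing it into the code string.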
### 24 If I don't want to patent something, what can be done to ensure the patent office doesn't unintentionally grant the patent to someone else?
Thunderforge 06/18/2018 at 02:13. 3 answers, 6.518 views united-states patents prior-art
Say that I have created a hypothetical new invention. I would like for it to be used by as many people as possible without restrictions, so I deliberately choose not to pursue a patent on it. As described in the...
### -1 Passing one variable by reference and “returning” two variables [duplicate]
Charlie Barber 06/18/2018 at 01:27. 2 answers, 0 views c# pass-by-reference
This question already has an answer here: How can I return multiple values from a function in C#? 24 answers So I am trying to pass in one variable, the money variable, and break it into two variables. I...
### 6 Understanding question
Luke 06/18/2018 at 01:01. 2 answers, 240 views triangle formal-proofs
"Consider finitely many points in the plane such that, if we choose any three points A,B,C among them, the area of triangle ABC is always less than 1. Show that all of these points lie within the interior or on...
### 2 What will happen to swap partitions after upgrading to Ubuntu 18.04?
danthonyd 06/18/2018 at 00:19. 1 answers, 79 views 18.04 swap
According to 18.04 Bionic Beaver release notes: For new installs, a swap file will be used by default instead of a swap partition. Changes since 16.04 LTS So, what will happen to the existing swap partition after the scheduled upgrade...
### 6 Can you get Armor proficiency with the Skilled feat?
kelthor b yesterday at 23:31. 1 answers, 177 views dnd-5e feats skills armor proficiency
I have a PC that wants to play a variant human warlock, but uses heavy armor and has the feat Skilled which states you gain proficiency in any combination of 3 skills or tools. Is heavy or any armor considered...
### 4 Should I pay loan balance down early even if I can't pay the whole thing off?
user73317 yesterday at 23:03. 1 answers, 60 views united-states home-loan
### 4 Number the positive rationals
HAEM yesterday at 11:44. 3 answers, 87 views code-golf rational-numbers
The positive rational numbers can be shown to be numerable with the following process: Zero has the ordinal 0 Arrange the other numbers in a grid so that row a, column b contains a/b Plot a diagonal zig-zag top right...
### 3 Question about the uses of “を” particle besides being a “direct object” and “spatial object” marker
Fishsticks yesterday at 11:32. 2 answers, 49 views grammar word-choice particle-を transitivity
I know that the former is for transitive verbs and the latter is for intransitive verbs (specifically motion verbs like 出る、向く、上る) but how do I explain the following sentence: 明日、会社を休む。 At first, I thought that it's just one of those...
### 6 Integers sorted by their digital roots
Stewie Griffin yesterday at 11:22. 5 answers, 77 views code-golf math number sequence
The digital root (also repeated digital sum) of a positive integer is the (single digit) value obtained by an iterative process of summing digits, on each iteration using the result from the previous iteration to compute a digit sum. The...
### 3 siunitx: Always force two decimal numbers behind decimal marker
Dave yesterday at 10:41. 1 answers, 81 views siunitx zero number decimal-number decimal-marker
I want to display 2.00 m instead of 2 m, even if the leading number is not a decimal number (= is an integer). Unfortunately all \sisetup-commands I tried did not work... Minimum Working Example (MWE): \documentclass{article} \usepackage{siunitx} \begin{document} \SI{2}{\meter}...
# source:trunk/DataCheck/Setup/setup.fact.lp.gate@19476
Last change on this file since 19476 was 19476, checked in by dorner, 4 months ago
updated mysql-setup
• Property svn:executable set to *
File size: 972 bytes
```bash
#!/bin/bash
#
# This is a resource file for the scripts, in which paths, variables
# and setups are defined
#
# This setup file is for the machine gate in La Palma
#

# for db backup
dbnames=( "mysql" "programoptions" "calendar" "systemstatus" "postfix" "horde" "logbook" "factdata" "weather" )

# set only variables which are needed for the scripts running on this machine

# software
export mars=/users/fact/SW.automatic.processing/Mars.svn.2014.05.26

# logging and setup
user=`whoami`
if [ "$user" == "www-data" ]
then
   # processes called from the web
   logpath=/home/factwww/logs.automatic.processing/autologs
   # file with db information
   #sqlrc=/home/fact/sql.rc
   sqlrc=/home/fact/.mysql.pw
else
   # normal processes called from the commandline or cron
   logpath=/users/fact/logs.automatic.processing/autologs
   # file with db information
   #sqlrc=$mars/sql.rc
   sqlrc=/users/fact/.mysql.pw
fi
runlogpath=/users/fact/logs.automatic.processing/autologs
```
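A resource file like this is consumed by sourcing it, so its variables land in the caller's environment. The sketch below is ours, not from the repository; it creates a stand-in file so the example is self-contained, where the real scripts would source the actual setup file.

```shell
# Sketch: sourcing a setup/resource file to pick up its variables.
# The file path and contents here are illustrative stand-ins.
cat > /tmp/setup.demo <<'EOF'
logpath=/tmp/demo-logs
EOF
. /tmp/setup.demo          # "." runs the file in the current shell
echo "logs go to: $logpath"
```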
Note: See TracBrowser for help on using the repository browser.
## Stream: maths
### Topic: kernels of quotient maps
#### Kevin Buzzard (Feb 13 2019 at 09:13):
I was surprised that I couldn't find that the kernel of R -> R / I was I (R a ring, I an ideal)
#### Kevin Buzzard (Feb 13 2019 at 09:13):
Then I was more surprised that I couldn't find kernels at all.
#### Kevin Buzzard (Feb 13 2019 at 09:13):
Kenny says I should bundle linear maps
#### Kevin Buzzard (Feb 13 2019 at 09:13):
import ring_theory.ideal_operations
def module.kernel {R : Type*} [comm_ring R]
[module R M] [module R N] (f : linear_map R M N) :
submodule R M := submodule.comap f ⊥
How am I doing so far?
#### Kevin Buzzard (Feb 13 2019 at 09:14):
What confuses me now is that submodule.quotient.mk is not a linear_map R M M/P for P a submodule
#### Kevin Buzzard (Feb 13 2019 at 09:15):
it's just a map M -> M / P
#### Kevin Buzzard (Feb 13 2019 at 09:15):
@Kenny Lau is what I need already in mathlib? Some analogue of submodule.quotient.mk which produces the structure you want me to use?
#### Kenny Lau (Feb 13 2019 at 09:16):
well it's a ring hom R -> R/I
#### Kevin Buzzard (Feb 13 2019 at 09:17):
I thought I'd do the module case
#### Kenny Lau (Feb 13 2019 at 09:17):
wait submodule.quotient.mk isn't linear...?
#### Kenny Lau (Feb 13 2019 at 09:17):
try submodule.mkq or something
#### Kevin Buzzard (Feb 13 2019 at 09:17):
submodule.quotient.mk is just the map M -> M / N
#### Kevin Buzzard (Feb 13 2019 at 09:17):
No doubt there's a theorem that it's linear
#### Kevin Buzzard (Feb 13 2019 at 09:17):
but my understanding is that you don't want me to use that map in public
#### Kenny Lau (Feb 13 2019 at 09:18):
does submodule.mkq work?
#### Kevin Buzzard (Feb 13 2019 at 09:48):
Yes.
import ring_theory.ideal_operations
def module.kernel {R : Type*} [comm_ring R]
[module R M] [module R N] (f : linear_map R M N) :
submodule R M := submodule.comap f ⊥
lemma module.ker_quotient
{R : Type*} [comm_ring R]
{M : Type*} [add_comm_group M] [module R M]
[N : submodule R M] :
module.kernel (submodule.mkq N) = N := sorry
#### Kevin Buzzard (Feb 13 2019 at 09:49):
I don't know the API for this bundled version but in principle it feels like the right thing to do
#### Mario Carneiro (Feb 13 2019 at 17:18):
the kernel of a linear map should definitely be there, I think it is linear_map.ker
#### Mario Carneiro (Feb 13 2019 at 17:20):
theorem submodule.ker_mkq (p : submodule α β) : p.mkq.ker = p
#### Kevin Buzzard (Feb 13 2019 at 18:31):
Excellent! Thanks. I'm learning my way around the bundled module set-up. I've been reading linear_algebra/basic.lean but I was only up to line 600 when I saw your message :-) Why does this theorem live in the submodule namespace, whereas linear_map.ker lives in the linear_map namespace?
#### Johannes Hölzl (Feb 13 2019 at 19:23):
The quotient is indexed by the submodule, so one can write p.mkq ..., and ker is indexed by the linear map, so one can write f.ker
#### Kevin Buzzard (Feb 13 2019 at 19:25):
"is indexed by..." := "the first input of the function is..."? So this goes back to the dot notation which I was learning about last week.
#### Kevin Buzzard (Feb 13 2019 at 19:26):
I see, so this dot notation trick informs naming conventions.
#### Johan Commelin (Feb 13 2019 at 19:34):
Yes, but it doesn't explain why the theorem is in a particular namespace, right?
#### Kevin Buzzard (Feb 13 2019 at 19:34):
Oh yes! You're right.
#### Johan Commelin (Feb 13 2019 at 19:34):
Well, actually it does... because it is a theorem about submodules...
#### Johan Commelin (Feb 13 2019 at 19:35):
I guess N : submodule is the first argument of the thm
#### Kevin Buzzard (Feb 13 2019 at 19:35):
It's a theorem about the kernel of M -> M / N.
#### Johan Commelin (Feb 13 2019 at 19:35):
But you first need an N, before you can talk about that map
#### Johan Commelin (Feb 13 2019 at 19:35):
It's not a theorem about arbitrary linear maps
#### Kevin Buzzard (Feb 13 2019 at 19:36):
I see, perhaps N is the only input because everything else can be inferred.
#### Kevin Buzzard (Feb 13 2019 at 19:41):
Which somehow makes N the thing which is controlling everything. I guess you know as well as I do Johan that mathematicians never put too much thought into names. I still don't really see what they've got against "Theorem 3.1"...
#### Chris Hughes (Feb 13 2019 at 20:28):
I hate reading notes and seeing "by theorem 7.8". I can never remember what 7.8 is. Mathematicians could definitely learn the habit of better names.
#### Johan Commelin (Feb 13 2019 at 20:30):
But you can infer from the context what the statement of the theorem is! The reference number is there, only if you want to look up the proof.
#### Chris Hughes (Feb 13 2019 at 20:34):
I never attempt to infer. I'll try that in future.
#### Johan Commelin (Feb 13 2019 at 20:37):
Just look at your internal tactic state before and just after the theorem application. The difference is the statement of theorem 7.8.
#### Johan Commelin (Feb 14 2019 at 12:16):
Ok, so this gives us kernels of linear maps between modules. But can I build a linear map from a ring hom?
#### Kevin Buzzard (Feb 14 2019 at 14:09):
If I said in a lecture "now we're done by theorem abs_pow_mul" or whatever then where is the algorithm which takes this name and finds the theorem earlier in your notes? The advantage of our numbering system is that in the absence of hyperlinks, which is what we had to work with until the 1990s, the "theorem 7.8" notation was a solid way of making referencing previous results easy. The issue is that we now do have hyperlinks but mathematicians haven't adapted.
#### Johan Commelin (Feb 14 2019 at 14:09):
You can find it under the "A".
#### Kevin Buzzard (Feb 14 2019 at 14:10):
Ok, so this gives us kernels of linear maps between modules. But can I build a linear map from a ring hom?
linear_algebra/basic.lean is quite an easy read :-) but I don't remember seeing it in there. I think it's a recent addition to mathlib that if A -> B is a ring map then B is an A-module.
#### Kevin Buzzard (Feb 14 2019 at 14:21):
But I don't know where that addition is. Shouldn't be hard to find though, try to make the instance using type class inference and then see what it's called
Last updated: May 18 2021 at 07:19 UTC
Graph Theory and Combinatorics: Question Paper Jun 2014 - Computer Science Engg. (Semester 4) | Visveswaraya Technological University (VTU)
## Graph Theory and Combinatorics - Jun 2014
### Computer Science Engg. (Semester 4)
TOTAL MARKS: 100
TOTAL TIME: 3 HOURS
(1) Question 1 is compulsory.
(2) Attempt any four from the remaining questions.
(3) Assume data wherever required.
(4) Figures to the right indicate full marks.
1 (a) Let G be a simple graph of order n. If the size of G is 56 and the size of its complement $\overline{G}$ is 80, what is n? (5 marks) 1 (b) Define isomorphism of graphs. Show that the following two graphs in Fig. Q1(b) are isomorphic.
(5 marks)
1 (c) Define connected graph. Give an example of a connected graph G where removing any edge e results in a disconnected graph. (5 marks) 1 (d) Discuss the Königsberg bridge problem. (5 marks) 2 (a)
Define Hamilton cycle. If G=(V, E) is a loop-free undirected graph with $|V|=n\geq 3$ and if $|E| \geq \binom{n-1}{2} + 2$, then prove that G has a Hamilton cycle.
(6 marks) 2 (b) Define planar graph. If a connected planar graph G has n vertices, e edges and r regions, then prove that $n-e+r=2$. (7 marks) 2 (c) Define chromatic number. Find the chromatic polynomial for the cycle of length 4; hence find its chromatic number. (7 marks) 3 (a) Define a tree. Prove that in a tree T=(V, E), $|V|=|E|+1$. (6 marks) 3 (b) Define: i) prefix code, ii) balanced tree; give one example of each. Find all the spanning trees of the graph shown in Fig. Q3(b).
(7 marks)
3 (c) Construct an optimal prefix code for the letters of the word ENGINEERING; hence deduce the code for this word. (7 marks) 4 (a) Define: i) edge-connectivity, ii) vertex-connectivity, iii) complete matching. Give an example of each. (6 marks) 4 (b) State Kruskal's algorithm. Apply Kruskal's algorithm to find a minimal spanning tree for the weighted graph shown in Fig. Q4(b).
(7 marks)
4 (c) State the max-flow min-cut theorem. For the network shown below in Fig. Q4(c), find the capacities of all the cut sets between the vertices A and D, and hence the maximum flow.
(7 marks)
5 (a) How many arrangements are there of all the letters in the word SOCIOLOGICAL? In how many of these arrangements are i) A and G adjacent, ii) all the vowels adjacent? (5 marks) 5 (b) In how many ways can one distribute eight identical balls into four distinct containers so that
i) no container is left empty, ii) the fourth container gets an odd number of balls?
(5 marks)
5 (c) Determine the coefficient of $x^2 y^2 z^3$ in the expansion of $(3x-2y-4z)^7$. (5 marks) 5 (d) Using the moves R:(x,y)→(x+1, y) and U:(x,y)→(x, y+1), find in how many ways one can go
i) from (0, 0) to (6, 6) without rising above the line y=x,
ii) from (2, 1) to (7, 6) without rising above the line y=x-1,
iii) from (3, 8) to (10, 15) without rising above the line y=x+5.
(5 marks)
6 (a) Determine the number of positive integers n such that 1 ≤ n ≤ 10 and n is not divisible by 2, 3 or 5. (6 marks) 6 (b) Define derangement. There are eight letters to eight different people to be placed in eight differently addressed envelopes. Find the number of ways of doing this so that at least one letter gets to the right person. (7 marks) 6 (c) Find the rook polynomial for the 3×3 board using the expansion formula. (7 marks) 7 (a) Find the generating functions for the following sequences:
i) $0^2, 1^2, 2^2, 3^2, \ldots$
ii) 0, 2, 6, 12, 20, 30, 42, ........
(6 marks)
7 (b) Find the number of ways of forming a committee of 9 students drawn from 3 different classes so that students from the same class do not have an absolute majority in the committee. (7 marks) 7 (c) Using an exponential generating function, find the number of ways in which four of the letters in the word ENGINE can be arranged. (7 marks) 8 (a) The number of virus-affected files in a system is 1000 (to start with) and this number increases by 250% every two hours. Use a recurrence relation to determine the number of virus-affected files in the system after one day. (6 marks) 8 (b) Solve the recurrence relation:
$a_{n+2} + 3a_{n+1} + 2a_n = 3^n$, $n \ge 0$, given $a_0=0$, $a_1=1$.
(7 marks)
8 (c) Find the generating function for the recurrence relation $C_n = 3C_{n-1} - 2C_{n-2}$ for $n \ge 2$, given $C_1=5$, $C_2=3$. Hence solve it. (7 marks)
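The recurrence in 8(c) can be checked numerically. The sketch below is our addition, not part of the paper: the characteristic equation $r^2 = 3r - 2$ has roots 1 and 2, which suggests the closed form $C_n = 7 - 2^n$ once the initial conditions $C_1=5$, $C_2=3$ are imposed.

```python
# Sketch (not part of the exam paper): verify a candidate closed form
# for the recurrence C_n = 3*C_{n-1} - 2*C_{n-2}, C_1 = 5, C_2 = 3.
def c_recursive(n):
    """Compute C_n directly from the recurrence."""
    vals = {1: 5, 2: 3}
    for k in range(3, n + 1):
        vals[k] = 3 * vals[k - 1] - 2 * vals[k - 2]
    return vals[n]

def c_closed(n):
    """Candidate closed form from characteristic roots 1 and 2."""
    return 7 - 2 ** n

print(all(c_recursive(n) == c_closed(n) for n in range(1, 16)))  # True
```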
Symbol
Problem
(Photographed multiple-choice question sheet; the OCR text is largely illegible. Recoverable items: "Choose the most suitable answer"; 1) the number of relations from A to B when n(A) = 3 and n(B) = 3; 2) identifying identity functions among candidates such as $\sqrt{x^2}$ and $|x|$; 3) a g.c.d. computation; 4) the sum of a geometric series; 5) solving a pair of simultaneous equations for (x, y); 6) a matrix built from $\cos\theta$ and $\sin\theta$; 7) a similar-triangles ratio PR/QR.)
Probability and Statistics
Solution
Qanda teacher - jaqueline
Student
it is first time sir
Qanda teacher - jaqueline
I'm not understood
wt r u saying
Student
sir pleace said me all the one word answer sir
Qanda teacher - jaqueline
give me a time
Student
ok take ur time??
sir
helo sir
# Article
## Enhancing the accuracy of measurement techniques in high Reynolds number turbulent boundary layers for more representative comparison to their canonical representations
Authors: Vinuesa, R., Nagib, H.M. Published in European Journal of Mechanics - B/Fluids 55 (2016), 300-312.
### Abstract
Existing differences between experimental, computational and theoretical representations of a particular flow do not allow one-to-one comparisons, prevent us from identifying the absolute contributions of the various sources of uncertainty in each approach, and highlight the importance of developing suitable corrections for experimental techniques. In this study we utilize the latest Pitot tube correction schemes to develop a technique which improves on the outcome of hot-wire measurements of mean velocity profiles in ZPG turbulent boundary layers over the range $11\,500 < Re_{\theta} < 21\,500$. Measurements by Bailey {\it et al.} ({\it J. Fluid Mech.}, vol. 715, 2013, pp. 642-670), carried out with probes of diameters ranging from 0.2 to 1.89 mm, supplemented by other data with larger diameters up to 12.82 mm, are used first to develop a somewhat improved Pitot tube correction which is based on viscous, shear and near-wall schemes (which contribute around $85\%$ of the effect), together with a turbulence scheme which accounts for $15\%$ of the whole correction. The correction proposed here leads to similar agreement with available high-quality datasets in the same Reynolds number range as the one proposed by Bailey {\it et al.}, but this is the first time that the contribution of the turbulence scheme is quantified. In addition, four available algorithms to correct wall position in hot-wire measurements are tested, using as benchmark the corrected Pitot tube profiles with artificially simulated probe shifts and blockage effects. We find that the $\kappa B-$Musker correction developed in this study produces the lowest deviations with respect to the introduced shifts. Unlike other schemes, which are based on a prescribed near-wall region profile description, the $\kappa B-$Musker is focused on minimizing the deviation with respect to the $\tilde{\kappa} \tilde{B}$ relation, characteristic of wall-bounded turbulent flows.
This general approach is able to locate the wall position in probe measurements of the wall-layer profiles with around one half the error of the other available methods. The difficulties encountered during the development of adequate corrections for high-$Re$ boundary layer measurements highlight the existing gap between the conditions that can be reproduced and measured in the laboratory and the so-called canonical flows.
|
{}
|
Download JEE Mains Physics Practice Sample Paper with Answer and complete Solution. Here H, S, G and K are enthalpy, entropy, Gibbs energy and equilibrium constant, respectively. JEE Main Previous Year Papers Questions With Solutions Physics Heat And Thermodynamics Multiple Choice with ONE correct answer 1.A constant volume gas thermometer works on [1980] [AIEEE-2012] Get Thermodynamics, Physics Chapter Notes, Questions & Answers, Video Lessons, Practice Test and more for CBSE Class 10 at TopperLearning. The final internal pressure, volume and absolute temperature of the gas are $P_{2}$, $\mathrm{V}_{2}$ and $\mathrm{T}_{2}$, respectively. (C), (A) 5.763 (B) 1.013 (C) –1.013 (D) –5.763, Q. $\mathrm{q}=0 ; \mathrm{w}=0 ; \Delta \mathrm{U}=0 ; \mathrm{T}_{1}=\mathrm{T}_{2} ; \mathrm{PV}=\mathrm{constant}$, (C) $\mathrm{P}_{2} \mathrm{V}_{2}=\mathrm{P}_{1} \mathrm{V}_{1}$, $(\mathrm{D}) \mathrm{P}_{2} \mathrm{V}_{2}^{\gamma}=\mathrm{P}_{1} \mathrm{V}_{1}^{\gamma}$, $\mathrm{q}=0 ; \mathrm{w}=0 ; \Delta \mathrm{U}=0 ; \mathrm{T}_{1}=\mathrm{T}_{2} ; \mathrm{PV}=\mathrm{constant}$, Q. If during this process the relation of pressure P and volume V is given by $\mathrm{P} \mathrm{V}^{\mathrm{n}}$ = constant, then n is given by (Here $\mathrm{C}_{\mathrm{P}}$ and $\mathrm{C}_{\mathrm{v}}$ are molar specific heat at constant pressure and constant volume, respectively) :- Here, P , V and T are pressure , volume and temperature , respectively. (3) P, T SHOW SOLUTION These questions are prepared by experts who have spent years in the field of teaching. (1) 1076 R (2) 1904 R (3) Zero (4) 276 R ( 2)$\left(\frac{13}{2}\right) \mathrm{p}_{0} \mathrm{v}_{0}$ Find all the NEET Physics important questions from the chapter Thermodynamics with solutions to perform better in the exam here. 
42 YEAR (1978-2019) IIT JEE ADVANCED PAPER SOLUTION 19 YEAR (2002-2020) JEE MAIN (AIEEE) PAPER SOLUTION Every aspirant must check the JEE Main/Advanced previous year question papers to understand the nature of the exam. The above p-v diagram represents the thermodynamic cycle of an engine, operating with an ideal monoatomic gas. (b) isothermal work. (B) L to M and N to K Q. (A) $\Delta \mathrm{S}_{\mathrm{x} \rightarrow \mathrm{z}}=\Delta \mathrm{S}_{\mathrm{x} \rightarrow \mathrm{y}}+\Delta \mathrm{S}_{\mathrm{y} \rightarrow \mathrm{z}}$ (A) Internal energy (B) The work done on the gas is less when it is expanded reversibly from $\mathrm{V}_{1}$ to $\mathrm{V}_{2}$ under adiabatic conditions as compared to that when expanded reversibly from $\mathrm{V}_{1}$ to $\mathrm{V}_{2}$ under isothermal conditions. For this expansion, One mole of an ideal gas at 300 K in thermal contact with surroundings expands isothermally from 1.0 L to 2.0 L against a constant pressure of 3.0 atm. (A) $\mathrm{q}_{\mathrm{AC}}=\Delta \mathrm{U}_{\mathrm{BC}}$ and $\mathrm{w}_{\mathrm{AB}}=\mathrm{P}_{2}\left(\mathrm{V}_{2}-\mathrm{V}_{1}\right)$ Here are the Physics Topic-wise Previous year question for JEE Main: Alternating Current – JEE Main Previous Year Questions with Solutions; Atomic Structure – JEE Main Previous Year Questions with Solutions; Calorimetry – JEE Main Previous Year Questions with Solutions; Capacitor – JEE Main Previous Year Questions with Solutions (2). The above p-v diagram represents the thermodynamic cycle of an engine, operating with an ideal monoatomic gas. Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...Sol. (2), ( 2)$\left(\frac{13}{2}\right) \mathrm{p}_{0} \mathrm{v}_{0}$, ( 3)$\left(\frac{11}{2}\right) \mathrm{p}_{0} \mathrm{v}_{0}$, Q. The reversible expansion of an ideal gas under adiabatic and isothermal conditions is shown in the figure. 
Assuming the gas to be ideal the work done on the gas in taking it from A to B is :- (3), (1) $\mathrm{n}=\frac{\mathrm{C}-\mathrm{C}_{\mathrm{V}}}{\mathrm{C}-\mathrm{C}_{\mathrm{P}}}$, (2) $\mathrm{n}=\frac{\mathrm{C}_{\mathrm{P}}}{\mathrm{C}_{\mathrm{V}}}$, (3) $\quad \mathrm{n}=\frac{\mathrm{C}-\mathrm{C}_{\mathrm{P}}}{\mathrm{C}-\mathrm{C}_{\mathrm{V}}}$, (4) $\mathrm{n}=\frac{\mathrm{C}_{\mathrm{P}}-\mathrm{C}}{\mathrm{C}-\mathrm{C}_{\mathrm{V}}}$, Q. SHOW SOLUTION Motion's Previous Year Questions with solutions of Physics from JEE Advanced subject wise and chapter wise with solutions Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...Sol. Solving past year questions not only helps students in gauging their own understanding of the topic but also prepares them for level in which examination is conducted. ‘n’ moles of an ideal gas undergoes a process $\mathrm{A} \rightarrow \mathrm{B}$ as shown in the figure. Download JEE Mains Physics Practice Sample Paper with Answer and complete Solution. Download eSaral App for Video Lectures, Complete Revision, Study Material and much more...Sol. One mole of diatomic ideal gas undergoes a cyclic process ABC as shown in figure. Numerical More. SHOW SOLUTION (D) $\Delta \mathrm{U}_{\text {isothemal }}>\Delta \mathrm{U}_{\text {adibaic }}$ U is equal to. Which of the following statement(s) is (are) correct ? The chemistry paper was mostly NCERT based. Concepts like Work , Heat , thermodynamic processes ( adiabatic , isothermal , isochoric , isobaric ) ,the 1st law of thermodynamics and the concept of heat capacities ( Cp and Cv ) are explained with the help of of previous year questions asked in IIT JEE. Answer & Solution : Thermodynamics. Practicing JEE Advanced Previous Year Papers Questions of Chemistry will help the JEE aspirants in realizing the question pattern as well as help in analyzing weak & strong areas. 
Q. Which of the following statement(s) is (are) correct? [Take S as the change in entropy and w as the work done.] Related sub-questions: the pair of isochoric processes among the transformations of states; the succeeding operations that enable this transformation of states. One answer option reads (D) Molar enthalpy.

Sol. $\mathrm{W}_{\mathrm{d}}=-(4 \times 1.5)+(0)-(1 \times 1)-\left(\frac{2}{3} \times 2.5\right)$

The centre of mass of a body or a system of particles is the point at which the entire mass of the body or system can be taken to be concentrated.

Books for Science (Physics, Chemistry, Biology), Mathematics, English, Hindi, and Social Science (History, Civics, Geography, Economics) are available for free PDF download.
Why solve previous year questions, and what to expect in the exam:

- At JEE level, Thermodynamics mostly runs on formulae, and many students find it one of the most challenging sections to score in. The basic thermodynamic terms (entropy, internal energy, and so on) are almost the same in the Physics and Chemistry syllabi.
- Thermodynamics carries significant weightage in JEE Main, and roughly 8-10 marks' weightage in past JEE Advanced exams.
- As per the JEE Main notification, JEE Main will be conducted 4 times a year from 2021.
- Paper 2 of JEE Advanced 2020 was conducted from 2.30 pm to 5.30 pm, and only candidates who appeared for both paper 1 and paper 2 were eligible for ranking and counselling.
- Students have to answer around 25 questions carrying 4 marks each, so knowing the marking scheme and the weightage of each chapter in the syllabus helps on exam day.
- Chapter-wise JEE Main Sep 2020 questions with solutions are available, along with topic-wise sets such as Wave on String and Laws of Motion, all prepared in accordance with elite exams like IIT JEE (Main and Advanced) by experts who have spent years in the field of teaching. eSaral provides complete chapter-wise notes of Class 11th and 12th Physics, revision videos, study material, and video tutorials; Thermodynamics chapter notes, questions and answers, video lessons, and practice tests for CBSE Class 10 are also available at TopperLearning.
- This MCQ test is related to the JEE syllabus, prepared by expert IIT JEE/NEET teachers, and rated positive by 91% of the students preparing for JEE.

More sample questions:

Q. A piece of wood of mass 0.03 kg is dropped from the top of a 100 m high building.

Q. [2017] Calculate (a) the final temperature of the gas and (b) the change in its internal energy. Numeric answer options appearing with these questions include 5.763, 1.013, 0.5, 0.75 and 0.99.

Q. One mole of an ideal gas is taken through a cyclic process in which the temperatures at A, B and C are 400 K, 800 K and 600 K respectively.

Q. 'n' moles of an ideal gas undergo a process in which the molar heat capacity C remains constant.

Q. The shell now undergoes an adiabatic expansion; the relation between T and R is:

Q. For an ideal gas, consider only p-V work in going from an initial state X to the final state Z.

Q. A reversible cyclic process for an ideal gas is shown in the figure, where P, V and T are pressure, volume and temperature respectively. The thermodynamic parameters q, w, H and U denote heat, work, enthalpy and internal energy respectively; an isochoric step means V = constant.

Q. For a Carnot engine with the same exhaust (sink) temperature, the source temperature must be: ... (One listed statement asserts that the efficiency of a Carnot engine cannot be made larger than 50%.)

Reader comments on the page ask for complete solutions ("No solutions for at least one question, post the questions completely and give the solutions") and point out errors ("galat kiya", i.e. "done wrong").
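The Carnot statements above follow directly from $\eta = 1 - T_{\text{sink}}/T_{\text{source}}$ (temperatures in kelvin). A minimal sketch; the reservoir temperatures are illustrative:

```python
# Carnot efficiency: eta = 1 - T_sink / T_source, temperatures in kelvin.
def carnot_efficiency(t_source_K, t_sink_K):
    return 1.0 - t_sink_K / t_source_K

# With the sink at 300 K, a 50% efficiency requires a 600 K source;
# raising the source to 1200 K gives 75%.
print(carnot_efficiency(600.0, 300.0))   # 0.5
print(carnot_efficiency(1200.0, 300.0))  # 0.75
```

For instance, a cycle operating between reservoirs at 800 K and 400 K (the B and A temperatures quoted above) has an ideal limit of $1 - 400/800 = 50\%$, which is why no engine on that cycle can exceed 50% efficiency.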