url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://math.stackexchange.com/questions/2594631/does-a-continuous-mapping-have-to-map-the-boundary-to-the-boundary/2594635
|
Does a continuous mapping have to map the boundary to the boundary
This question comes from when I read my Complex Analysis book.
On the chapter of Mobius transformation, there is an example:"construct a Mobius transformation that maps $\{\mathrm{Im} (z)>0\}$ onto $\{|z|<1\}$". In the solution, it just starts with "it must maps the real line to the circle".
I think about it and try to prove this . I know that the open mapping theorem can do this, since the theorem suggests that a holomorphic function must map the interior point to inner point, thus it also maps the boundary point to boundary point. But in the book, the open mapping theorem is in the later chapters. And I come up with another proof: since Mobius transformation maps a circline to a circline, it should map the real line to a circle. If it is not the boundary, there would be a contradiction that the image have points both within and outside that circle.
But I come up with another more general question. For a continuous function from $R^2$ to $R^2$, if it maps a region onto a region(say it maps $\{y>0\}$ onto $\{x^2 + y^2 < 1\}$), can we claim it must maps the $x$-axis to the unit circle? Or can we say it maps an interior point to an interior point?
In the one-variable function it is not right, $\sin x$ is a counterexample. But I am not sure if there is any counterexample for the $R^2$ function.
Thank you!
• Möbius transformations are homeomorphisms of the Riemann sphere. Homeomorphisms map boundaries to boundaries. – Daniel Fischer Jan 6 '18 at 18:31
$$f(x+iy) = \frac{(y-1)^2}{1+(y-1)^2}e^{ix}.$$
Then $f$ is continuous on $\mathbb R^2$ and $f$ maps $\{y>0\}$ onto $\{x^2+y^2<1\}.$ But $f(\mathbb R)= \{|w|=1/2\}.$
If $f\colon\mathbb{R}^2\longrightarrow\mathbb{R}^2$ is the null function, then it maps the set $\{z\in\mathbb{C}\,|\,\operatorname{Im}z>0\}$ into the open unit disk, but no real point is mapped to the unit circle.
By the way, suppose that $f(D) = D'$ are open regions where D has compact closure. Then you statement holds, i.e. $f(\partial D) = \partial D'$. Infact if $y \in \partial D'$, take a succession $y_n$ which converges to $y$. Take $x_n$ such that $f(x_n) = y_n$ and extract a convergent subsuccession, say to $x$. Then $f(x) =y$, and $x \in \partial D$, otherwise $x \in D \ \Rightarrow \ y \in D'$ which is disjoint with its boundary.
In your case, this proof still works because mobius functions extends to $S^2$ (adding "infinity" to $R^2$), which is compact (thus every set has compact closure).
|
2019-11-19 21:24:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9630206823348999, "perplexity": 171.15861310163928}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670255.18/warc/CC-MAIN-20191119195450-20191119223450-00553.warc.gz"}
|
https://software.belle2.org/sphinx/recommended-training/whatsnew.html
|
# 1. What’s New¶
This page explains the new features in the current release. This cannot cover all the changes in all packages but should help users to find out what needs to be adapted when changing to the new release.
## 1.1. Changes since release-05¶
HepMCInput, HepevtInput and LHEInput modules do not anymore boost the MCParticles
The modules HepMCInput, HepevtInput and LHEInput do not anymore boost the MCParticles, and the paramater boost2Lab is now removed from the modules. These modules can not read the BeamParameters payloads from the conditions database, so having the particles boosted correctly and in a reproducible way was non-trivial. A new module, BoostMCParticles, is added for boosting into the laboratory frame the MCParticles using the information stored in the conditions database. The module must be appended to the steering path just after the HepMCInput, HepevtInput or LHEInput module and before running the detector simulation.
The jitter of the L1 trigger is included in the standard simulation
The L1 trigger jitter is randomly extracted from a double gaussian whose parameters have been tuned with 2020 data. The machine filling pattern is taken into account in the simulation of the jitter.
The L1 trigger simulation is included in simulation.add_simulation()
The L1 trigger simulation (tsim) is now executed in the standard simulation: before SVD and PXD simulation but after the simulation of the rest of the subdetectors. For this reason, the python function add_tsim() is deprecated. If you already have a add_simulation in your path, you already get L1 trigger simulation. If you do not have add_simulation, and you need the L1 trigger simulation, please use L1trigger.add_trigger_simulation().
Discontinue the support of the old “fullFormat” for cDSTs and extend the “rawFormat” cDSTs to MC
The support of the fullFormat cDSTs is discontinued. reconstruction.add_cdst_output() does not store anymore additional branches when the option rawFormat=False is selected, being simply an alias of mdst.add_mdst_output(). The users have to explicitly define the additional branches they want to store using the additionalBranches paramenter.
The only supported format is the rawFormat, that is now extended to MC. If rawFormat=True and mc=False are selected, the rawdata + tracking data objects are stored, while with rawFormat=True and mc=True the digits + tracking data objects, including the MCParticles and the relations between them and the digits, are stored.
Removal of old and deprecated database functions
Some functions used in the past to handle the conditions database (like basf2.use_local_database or basf2.reset_database) are removed, and any script using them does not work anymore. This removal does not imply any functionality loss, since the users can use the basf2.conditions object to properly configure the conditions database in their steering files (see also Configuring the Conditions Database).
Photons generated by PHOTOS in continuum events
Fixed the issue where PHOTOS photons were not correctly flagged in continuum events, e.g., charm decays (BII-5934). This was present in release-05-00-01 and earlier, including MC13 files.
Unification of B2BII settings
A single switch between Belle and Belle II settings has been implemented, which is automatically set when reading in a Belle type mdst. No individual options have to be set in modular analysis functions.
### 1.1.1. Changes in the analysis package since release-05-02¶
#### Vertex Fitting¶
• KFit has been extended to be able to handle vertex fits (with and without mass constraint) that involve an eta particle reconstructed in eta -> gamma gamma.
• Added a method scaleError in order to scale up helix errors and avoid underestimation of vertex error.
• Added a new fit-type option massfourC to function vertex.kFit(). The kinematic fit is performed with the 4 momentum constraint of a mother particle and mass constraints of intermediate particles specified by the option massConstraint, simultaneously.
• Changed the treatment of Bremsstrahlungs-corrected tracks in vertex.treeFit. The previous implementation exhibited bad performance when a Bremsstrahlungs-photon is applied. The new implementation fits the corrected Track as a single particle, analogous to, what KFit does.
#### Modules¶
• Added a module TrackIsoCalculator, which takes as input a standard charged particle list, and calculates for every particle’s track in the list the 3D distance to the closest track in the event. The distance is calculated as the 3D separation between the points where two extrapolated track helices cross a given detector’s inner surface, where the detector is an input parameter to the module. This variable can be used to parametrise charged particle identification efficiencies and mis-identification probabilities, as they can depend on the activity around each charged particle candidate.
• The InclusiveDstarReconstruction creates antiparticle lists correctly now. The module’s input changed to a DecayString of the form D* -> pi and MC Matching is applicable (one can use isSignal).
• In BtubeCreator module, functionality to apply a cut on the confidence level of the fit of fully reconstructed B to the beamspot is added.
• The EventKinematics module can now compute event kinematics using generated particles as an input.
• Arguments of writePi0EtaVeto have been updated. downloadFlag and workingDirectory have been removed since the download processes can be skipped. New arguments have been added for several reasons. suffix allows to calculate this veto for multiple photons. hardParticle allows to call this function for a given particle other than a photon. Four new arguments have been added to override the payload names and soft photon selections, pi0PayloadNameOverride, etaPayloadNameOverride, pi0SoftPhotonCutOverride, and etaSoftPhotonCutOverride.
• A new helper method updateROEUsingV0Lists has been added to facilitate application of V0Finder results in Rest Of Event.
• Added a possibility to add a ROE mask for KLM-based particles and an experimental option of including KLM-based particles into ROE 4-vector computation.
Warning
The option useKLMEnergy of RestOfEventBuilder module is only meant for performance studies and NOT for a physics analysis.
• The PrintMCParticles module and thus also the printMCParticles function has a completely new layout that should be much easier to parse, especially for complicated events. By default it shows much less information but in an easier to parse tree representation. See the documentation of printMCParticles for details.
• The ParticleLoader now creates photon candidates from KLMCluster if the parameter loadPhotonsFromKLM is set to true. It is off by default.
Warning
Photons from KLMCluster should only be used in specific use-cases and after a thorough study of their effect.
• In BtubeCreator, new extrainfo TubeB_p_estimated was added. This returns the magnitude of the estimated momentum of the B which should fly in the direction of the Btube.
• Added a module HelixErrorScaler, which multiplies constant scaling factors to helix errors of input charged particles and stores them in a new list.
• In ParticleStats add functionality to produce a json containing the information printed on stdout. Added also a tool b2plot-particleStats: Analyze a json produced by ParticleStats to produce plot of retention rate and pass matrix to analyze the json and produce some plots.
• The TrackingMomentum module has been extended. A ParticleList of composite particles can now be processed as well. In that case the momenta of all track-based daughter particles are scaled by the provided factor.
• The wrapper functions for the tracking systematics modules have been renamed from trackingMomentum to scaleTrackMomenta and from trackingEfficiency to removeTracksForTrackingEfficiencyCalculation.
• Added the TauDecayMode module, which is an actualization of the TauDecayMarker module for the new TauolaBelle decays. Using a txt file, which defines the mapping between decay strings and decay numbers from TauolaBelle, the module assigns a decay number for each tau in the event. This decay number is stored in the variables tauMinusMCMode and tauPlusMCMode. It’s possible to provide the path of a different txt file for the mapping as a parameter to the module.
• Added a EnergyBiasCorrection module, sub-percent correction only applied to E of photons not on clusterE and should only be applied on data. correctEnergyBias is a corresponding wrapper function.
• The ChargedPidMVA and ChargedPidMVAMulticlass modules now apply by default charge-dependent BDT training for particle identification. The charge-independent training can be used optionally.
• Added a PhotonEfficiencySystematics module which adds photon detection efficiency Data/MC ratios, systematic, statistical and total uncertainties as variables in extrainfo to a given photon list. Ratios can only added for particles in photon lists. addPhotonEfficiencyRatioVariables is a corresponding wrapper function.
• Bug fixed in FlavorTaggerInfoBuilder and FlavorTaggerInfoFiller: FlavorTaggerInfoMap is created now in the builder to make it accessible in main paths outside the ROE loops. This permits now to save the FT outputs for every B-meson candidate.
#### Full Event Interpretation¶
• Added option to reconstruct strange B mesons (at Y(5S)) in 51 decay channels. Can be switched on with the strangeB flag in fei.get_default_channels().
• The FEI has been retrained with MC14 and release-05. The prefix FEIv4_2021_MC14_release_05_01_12 has to be set in the FEI configuration.
#### Standard Particle Lists¶
• The selection criteria of the pi0 standard particle lists have been updated from the January 2020 to the May 2020 recommendations.
• Defined new stdCharged.stdE and stdCharged.stdMu lepton lists, based on uniform target efficiency selection working points. Available both for likelihood-ratio based PID selection and new BDT-based selection, both global and binary ($$\ell$$ vs. $$\pi$$). These are recommended for analysis once correction factors payloads become available.
### 1.1.2. Changes in the analysis package since release-05-01¶
#### Variables¶
• Added b2help-variables which behaves identical to basf2 variables.py but more in-keeping with the b2help-<something> theme.
#### Full Event Interpretation¶
• Background sampling for $$B^{0} \rightarrow J/\psi K_{S}^{0}$$ and $$B^{+} \rightarrow J/\psi K^{+}$$ are deactivated.
• The baryonic tagging modes are activated by default.
### 1.1.3. Changes in the analysis package since release-05-00¶
#### Bremsstrahlung correction¶
• Fixed that the copies of particles to which Bremsstrahlung was applied (whether this resulted in a added photon or not) are considered as sources of tracks. This allows to update the daughters in vertex fits that involve brems-corrected particles.
#### Modules¶
• Added the new wrapper function applyRandomCandidateSelection, which uses the BestCandidateSelection module to reduce the number of candidates in the input particle list to one candidate per event based on a random value.
• In TagVertex, all charged particles from the ROE are loaded to be included in the tag vertex fit (and not only those with a pion hypothesis)
• The ParticleCombinerFromMC and the reconstructMCDecay function can set decayModeID extraInfo with an argument dmID. One can decide whether charge conjugated mode should be reconstructed with new boolean argument chargeConjugation (true by default).
• The special treatment of MCMatching for the tau-decay is fixed. The treatment was working fine in release-04, but it was broken in the release-05.
### 1.1.4. Changes in the framework package since release-05-01¶
• Added b2help-modules which behaves identical to basf2 -m but more in-keeping with the b2help-<something> theme.
#### Conditions Database¶
• The format in which local database payloads are created has changed slightly, see Creation of new payloads
• b2conditionsdb-tag-merge has been added to merge a number of globaltags into a single globaltag in the order they are given. The result is equivalent to having multiple globaltags setup in the conditions access for basf2.
• b2conditionsdb-tag-runningupdate has been added to calculate and apply the necessary updates to a running globaltag with a given staging globaltag.
• b2conditionsdb-download has learned a new argument to clean all payload files not mentioned in the database file from the download area.
#### Tools¶
• Added b2rundb-query to perform simple rundb queries from the command line.
## 1.2. Changes since release-04¶
Neutral hadrons from ECLClusters get momentum from the cluster energy
Since release-04 it has been possible to load ECLClusters under the neutral hadron hypothesis. Previously we assumed a mass when calculating the particle momentum, however this leads to problems when, for example, a $$K_L^0$$ deposits less than its mass energy in the ECL. This happens about 50% of the time.
The momentum of neutral hadrons from the ECL is now set to the clusterE.
Bremsstrahlung correction
The BremsFinder module has been developed to find relations between tracks and photons that are likely to have been emitted by these tracks via Bremsstrahlung. The matching quality figure of merit is based on the angular distance between the photon ECL cluster and the extrapolated hit position of the track at the ECL. The function correctBrems performs the actual correction. There is also a reimplementation of Belle’s Bremsstrahlung correction approach of looking for photons in a cone around tracks (correctBremsBelle), which is recommended for b2bii analyses.
Warning
While it is technically possible to perform a TreeFit after applying Bremsstrahlung correction, the fit performance is unfortunately quite bad. However, there is already an improvement in the pipeline that should fix this issue. It will probably be available in one of the next light releases.
MC reconstruction and MC matching
The ParticleCombinerFromMC module and its corresponding wrapper function reconstructMCDecay should be used instead of findMCDecay to reconstruct decay modes based on MC information.
The DecayStringGrammar has been extended with new exception markers for Bremsstrahlung, decay in flight, and misidentification.
Exceptions for the MC matching of daughter particles with the DecayStringGrammar are propagated to the mother particle.
Redefinition of angle variables
The kinematic variables decayAngle, daughterAngle and pointingAngle now return the angle instead of its cosine.
Protection of ParticleLists and particle combinations
It is no longer allowed to use the label "all" for a particle list if a cut is applied. Reconstructed decays need to preserve electric charge. However, this can be deactivated if you know what you are doing, e.g. in searches for New Physics.
### 1.2.1. Changes in the analysis package since release-04-02¶
Warning
Global includes like from basf2 import * are almost completely removed from the analysis package.
#### Vertex Fitting¶
Warning
The convenience function for the TreeFitter module has been renamed from vertexTree to vertex.treeFit().
Warning
The KFit convenience functions have been merged to the new function vertex.kFit(). One of its input arguments is fit_type, which specifies whether you want to perform a mass fit (fit_type=mass), a vertex fit (fit_type=vertex), a mass-constrained vertex fit (fit_type=massvertex), or a vertex fit with the mother constrained to the beam four-momentum (fit_type=fourC).
• Added smearing parameter to vertex.kFit. When you perform a vertex fit with an IP tube constraint (fit_type=vertex and constraint=iptube), the IP profile is smeared by the given value. The IP tube is defined by a neutral track pointing towards the boost direction starting from the smeared IP profile. Setting smearing=0 gives the original iptube option, which is an elongated ellipsoid in boost direction.
• Fixed TreeFitter bias on V0s by propagating the correct TrackFitResult from V0 daughters (see BII-6336).
• Added FeedthroughParticle and InternalTrack to vertex.treeFit. This now provides compatibility of TreeFitter with the bremsstrahlung recovery modules. In principle, TreeFitter can now be used for decays for which Bremsstrahlung correction has been applied. However, there are massive performance issues. Do not blindly use TreeFit but cross-check your results with KFit. Further improvements are planned.
#### Standard Particle Lists¶
• The list label "all" is now forbidden from use with cuts. This can introduce some very subtle bugs in user scripts. The following code will no longer work:
fillParticleList("gamma:all", "clusterE > 0.02", path=mypath)
Instead you should replace it with a meaningful name of your choice. For example:
fillParticleList("gamma:myMinimumThresholdList", "clusterE > 0.02", path=mypath)
• Added stdHyperons.py, this analysis script reconstructs the standard hyperons Xi-, Xi0, and Omega-.
• The default vertex fitter for the V0 lists stdKshorts and stdLambdas has been changed from Rave to TreeFit. However, a new option called fitter has been added that allows to change the vertex fitter back to raveFit or to kFit. This change is motivated by a much faster execution time (see BII-5699) and a compatible performance as summarized here.
• Added a new standard list stdCharged.stdMostLikely which creates 5 mutually-exclusive lists of track-based particles under their most likely hypothesis.
• Moved the KlId check from ParticleLoader to stdKlongs. This modification does not change the definition of the K_L0:all list and, in general, of the user-defined $$K_{L}^0$$ lists.
• Removed the functions mergedKshorts and mergedLambdas, which returned the standard V0 lists anyway.
• Removed the deprecated pi0 lists pi0:loose and pi0:veryloose
• stdPi0s and stdPhotons updated to match Jan2020 recommendations. Naming scheme updated to indicate date of optimization.
#### Variables¶
Note
All variables will return a quietNaN instead of -999 or other out-of-the-sky numbers in case of errors. To study the failures use the meta variable ifNANgiveX, which replaces nan with a value of your choice.
Warning
The variables decayAngle, daughterAngle and pointingAngle now return an angle instead of a cosine.
Note
This is not the generator-level match (see BII-5741) but simply ignores photons added by the BremsFinder.
Hint
It is recommended to use variablesToExtraInfo prior to a vertex fit to access pre-fit values, not only for mass significances but also for vertex positions, etc.
#### Modules¶
Hint
This tool is the recommended way to correct for bremsstrahlung photons in Belle II analyses.
• Modified writePi0EtaVeto function and added oldwritePi0EtaVeto function. The weight files used in release-03 were not available in release-04. The latter uses old weight files optimized with MC9, while new weight files, which are optimized with MC12, are used in writePi0EtaVeto
• Enabled to select daughters which will be used to perform fitting with KFit. One can use the selector ^ to select daughters.
• KFit can be used to fit vertices of fully-inclusive particles, for which it ignores the daughters without defined p-value.
• RestOfEventBuilder, EventKinematics and EventShapeCalculator modules can use most likely charged particle hypothesis according to PID information.
• Added the ParticleCombinerFromMC module and the reconstructMCDecay to find and create ParticleList from a given DecayString. They can be used instead of the MCDecayFinder and findMCDecay which are not fully tested and maintained.
• In TagVertex added Btube as a possible constraint for the tag B vertex fit. The useFitAlgorithm parameter in the module is replaced by constraintType (which can be set to tube) and trackFindingType. All computations done in the module are now using double precision. The module is also updated to allow for the computations of new variables related to the tracks used in the tag vertex fit.
• Added a InclusiveDstarReconstruction to inclusively reconstruct D* mesons by estimating the four vector using slow pions.
• In the TagV function, added the possibility to use KFit instead of Rave to perform the tag vertex fit.
• Added new DecayString grammar for c_AddedRecoBremsPhoton, c_DecayInFlight, and c_MisID. If one uses ?addbrems in own DecayString, c_AddedRecoBremsPhoton is ignored, thus isSignal works as isSignalAcceptBremsPhotons. One can add (decay) and/or (misID) to the beginning of the particle name to accept c_DecayInFlight and/or c_MisID. The following is an example, 'reconstructDecay('D0:sig -> (misID)K-:loose (decay)pi+:loose', '', path=mypath)'
• Modified modularAnalysis.matchMCTruth to work always recursively. Once one calls modularAnalysis.matchMCTruth for a particle, the MC Matching is done for not only the particle but also all daughters of the particle correctly. This modification does not change the MC Matching of the particle for which modularAnalysis.matchMCTruth is called.
• Added feature to add multiple TTrees to the same output file via the VariablesToEventBasedTree module.
• Rest Of Event can be created using MCParticles using modularAnalysis.buildRestOfEventFromMC
• Removed -->, =>, and ==> from the list of allowed arrow types for the DecayString.
• In TagVertex module, added the possibility to use the truth information from the tag particles in the vertex fit. The option useTruthInFit = True switches that on.
• In TagVertex module, implemented an internal change: tag particles are loaded as Particle object from the ROE and not anymore as TrackFitResult. This should have no effect to users.
• Added a warning to the ParticleCombiner module for decay strings violating electric charge conservation which can be turned off by setting allowChargeViolation=True.
• In TagVertex module, added the possibility to perform vertex fit with the tag particle tracks rolled back to their primary vertex points. The option useRollBack = True switches that on.
• Added an argument to the ParticleCombiner module that allows to deactivate the automatic reconstruction of the charge-conjugated mode. In reconstructDecay the option is called chargeConjugation, which is True by default.
• The MC matching with the DecayString properly works in hierarchical decay. For example,
from modularAnalysis import reconstructDecay, matchMCTruth
reconstructDecay('B0:signal -> mu+ mu- ... ?gamma', '', path=mypath)
reconstructDecay('B0:generic -> D*-:Dpi pi+:all', '', path=mypath)
reconstructDecay('Upsilon(4S):BB -> B0:generic B0:signal', '', path=mypath)
matchMCTruth('Upsilon(4S)', path=mypath)
In the above case, missing daughters (massive FSP and gamma) of B0:signal are accepted for not only B0:signal but also Upsilon(4S), so that isSignal can be 1. Another example,
reconstructDecay('D-:pipi0 -> pi-:all pi0:all', '', path=mypath)
reconstructDecay('B0:Dpi -> D-:pipi0 pi+:all ...', '', path=mypath)
matchMCTruth('B0:Dpi', path=mypath)
In this case, one wants to accept missing massive daughters in B0:Dpi decay but not in D-:pipi0 decay. So, if the decay of D-:pipi0 in the MC truth level is D- -> pi+ pi- pi- pi0, isSignal of D-:pipi0 and B0:Dpi will be 0, since there are missing daughters in D-:pipi0 decay. If one wants to accept missing daughters in D-:pipi0, please use the DecayString grammar in the reconstruction of D-:pipi0 or use isSignalAcceptMissing variable instead of isSignal.
#### Utilities and core objects¶
• A set of functions DistanceTools has been added to compute distances between (straight) tracks and vertices.
• Added a relation between Track-based Particles and TrackFitResults.
• Added a getTrackFitResult method to Particle.
• Added a function to retrieve electric charge of a particle based on its pdg code.
• Added a RotationTools.h file with few functions related to rotation
#### Tutorials and Examples¶
• Added a tutorial about creating aliases (examples/VariableManager/variableAliases.py)
### 1.2.2. Changes in the analysis package since release-04-00¶
#### Variables¶
• Now vertex variables dr, dx, dy, dz, dphi, dcosTheta... take into account the nontrivial transformation of track parameters relative to IP when used for tracks. dr did not center at zero before and now it does.
• Added isDescendantOfList and isMCDescendantOfList meta variables that allow to check whether the particle is the descendant of the list (or if it is matched to the descendant of the list) at any generation. The variables have recursive search inside, they extend isDaughterOfList and isGrandDaughterOfList.
• Added mcParticleIsInMCList which checks the underlying MC particles (either matched or generator level).
• Fixed bug and renamed mcFlavorOfOtherB0 variable to mcFlavorOfOtherB, which accepts now neutral and charged B candidates.
• Removed clusterCRID which duplicates clusterConnectedRegionID.
• Fixed goodBelleLambda and now it returns extraInfo(goodLambda) on Belle data.
• Allow use of meta variables in creation of aliases by replacing non alphanumeric characters with underscores in the alias name
• Modified daughterAngle to accept generalized variable indices instead of simple indices. A generalized index is a column-separated string of daughter indexes belonging to different generations, starting from the root particle. For example, 0:2:1 indicates the second daughter (0) of the third daughter (2) of the first daughter (0) of the particle. Of course, conventional indexes are still working as expected: 1 still indicates the second daughter of the particle.
• Added daughterCombination, that returns a variable calculated on the sum of an arbitrary number of daughters. Generalized indexes are supported. This variable is mostly intended to calculate the invariant mass or the recoil mass of any set of particles belonging to a decay tree.
• Fixed isSignal, which always accepted c_MissingResonance and c_MissFSR/c_MissPHOTOS even if one used =direct=>, =norad=>, or =exact=>. Now it correctly respects the decay string grammar. The other isSignal* variables, such as isSignalAcceptMissing, are also fixed.
• Added useAlternativeDaughterHypothesis, that returns a variable re-calculated using alternative mass assumptions for the particle’s daughters.
• Restructured the mc_flight_info collection by removing the non-sense error variables and replacing mc_flightTime and mc_flightDistance, which were aliases created using matchedMC, with the dedicated variables mcFlightTime and mcFlightDistance, respectively.
• Removed output_variable option in the Deep Flavor Tagger and introduced a standard variable DNN_qrCombined for the output. The new variable returns the flavor times the dilution factor as the category-based Flavor Tagger output variables FBDT_qrCombined and FANN_qrCombined do. Now we can evaluate the output of both taggers in the same way.
• Bug fix to guard against range exceptions in KSFWVariables (see BII-6138).
#### Modules¶
• In TreeFitter fixed a bug in life time calculation. A constant in the jacobian was missing. As a result the propagated error was slightly overestimated.
• Fix to nested RestOfEvent objects (see BII-5649)
• Fixed bugs in MCDecayFinder and findMCDecay. Inefficiency and large background when one used =direct=> or sub-decay such as D*+ -> [D0 -> pi+ pi- pi0] pi+ are fixed. But still the module has some bugs if one uses K_S0.
#### Conditions DB¶
• In ChargedPidMVAWeights payload class, added MVA category cut strings in basf2-compliant format.
#### Full Event Interpretation¶
• Addition of hadronic FEI channels involving baryons. This includes the addition to default channels the following particles: p, Lambda_c+, Sigma+ and Lambda0. Baryonic modes must be switched on when calling particles = fei.get_default_channels(baryonic = True) using a flag (baryonic = True).
### 1.2.3. Changes in the framework package since release-04-00-00¶
#### Conditions Database¶
• b2conditionsdb: Conditions DB interface has been optimized to work with larger globaltags.
• b2conditionsdb-diff and b2conditionsdb-iov by default don’t show any database internal ids anymore but those can be re-enabled with --show-ids
• b2conditionsdb-dump has learned a new argument to show the content of a payload valid for a given run in a given globaltag
• There are new python classes to handle iovs, see conditions_db.iov
#### Miscellaneous¶
• Added support for writing udst files as output from BGxN N>0 files in light releases (see BII-3622). This means skimming is fully supported with a light release.
• Added support for b2bii in light releases. However, this comes at the cost of no longer being able to convert ExtHits and ECLHits.
• The RootInput will now by default skip events which have an error flag set. This can be changed with the discardErrorEvents parameter.
• Added the function basf2.get_file_metadata to quickly obtain the FileMetaData object of a given basf2 output file.
• Added the tools b2code-sphinx-build and b2code-sphinx-warnings to build the sphinx documentation or check for warnings when building the documentation.
### 1.2.4. Changes in the decfiles package since release-04-02¶
Note
The decfiles directory was part of the analysis package until release-03-02. For documentation on earlier changes, please refer to the comments at the top of DECAY_BELLE2.DEC.
#### Changes to DECAY_BELLE2.DEC¶
Note
All changes listed below are documented in BELLE2-NOTE-PH-2020-008 - please refer to this note for a more detailed and specific description of updates.
• Substantial changes to D0, anti-D0, D+, D-, D_s+, and D_s- decay tables
• Changes to charm baryon decay tables
• Added decay table for sigma_0 particle, previously missing from decay file
### 1.2.5. Changes in the b2bii package since release-04-01¶
Warning
b2biiConversion.setupB2BIIDatabase() will be deprecated soon.
#### convertBelleMdstToBelleIIMdst¶
• Added a switch to deactivate nisKsFinder.
• Renamed applyHadronBJSkim to applySkim, and added switches to deactivate HadronA and HadronB skims.
• Added a switch to deactivate converting RecTrg_Summary3 table.
#### Conversion¶
• Removed convertECLCrystalEnergies() and convertExtHits() in the conversion.
• Added mass-constraint fit information of pi0:mdst, including chiSquared, ndf and pValue as extraInfo().
• Added m_final(3) in RecTrg_Summary3 table as eventExtraInfo(rectrg_summary3_m_final).
## 1.3. Changes since release-03¶
Removal of default analysis path and NtupleTools
Warning
The default path (”analysis_main”) and the NtupleTools are now removed.
This is a major backward-compatibility breaking change. Please update your user scripts to create your own path (basf2.create_path) and to use the variable manager tools (such as VariablesToNtuple).
If your previously working example script from release-03 looked something like this:
from basf2 import *
from stdCharged import stdPi
from modularAnalysis import *
stdPi("good")
ntupleFile("myFile.root") # <-- now removed
ntupleTree("pi+:good", ['pi+', 'Momentum']) # <-- now removed
process(analysis_main)
print(statistics)
You should update it to this:
import basf2 # better not to import all
from stdCharged import stdPi
from modularAnalysis import variablesToNtuple
mypath = basf2.Path() # create your own path (call it what you like)
stdPi("good", path=mypath)
variablesToNtuple("pi+:good", ['px', 'py', 'pz', 'E'], path=mypath)
basf2.process(mypath)
print(basf2.statistics)
The example scripts available here:
BELLE2_RELEASE_DIR/analysis/examples/VariableManager Switch of beam spot information from nominal to measured values. The interaction point position and its uncertainties are now taken from the database with values provided by the tracking group. All beam kinematics information is also moved to the database, which will eventually be measured on data. For now they are the values provided by the accelerator. Warning The previous definition of the boost included a small rotation to align it with the HER. This is no longer possible with the new structure. The definition of CMS is therefore slightly changed. The impact should be at the percent level. If you have a physics analysis sensitive to this change: please discuss with the software / performance groups and add a comment to BII-4360. See also The beam information can be accessed with Ecms, beamPx, beamPy, beamPz, and beamE. Redesign of the Conditions Database Interface The configuration and handling of the connection to the conditions database has been completely rewritten in a more coherent and modular way. We now have a new and consistent configuration interface, global tag replay and advanced checks: If users specify a global tag to be used which is either marked as invalid in the database or which cannot be found in the database the processing is now aborted. See Conditions Database Overview for details. Restrict usage of useDB=False for Geometry creation Creating the geometry from XML files instead of the configuration in the Database may lead to wrong results. So while the option useDB=False is still necessary to debug changes to the geometry definitions it is now restricted to only be used for exp, run = 0, 0 to protect users from mistakes. This also changes the behavior of add_simulation() and add_reconstruction(): If a list of components is provided this will now only change the digitization or reconstruction setup but will always use the full geometry from the database. Loading ECLClusters under multiple hypotheses It is now possible to load $$K_L^0$$ particles from clusters in the ECL. This has several important consequences for the creation of particles and using combinations containing $$K_L^0$$ s or other neutral hadrons in the analysis package. This is handled correctly by the ParticleLoader and ParticleCombiner (the corresponding convenience functions are modularAnalysis.fillParticleList and modularAnalysis.reconstructDecay). Essentially: it is forbidden from now onwards for any other analysis modules to create particles. Deprecated RAVE for analysis use The (external) RAVE vertex fitter is not maintained. Its use in analysis is therefore deprecated. We do not expect to remove it, but do not recommend its use for any real physics analyses other than benchmarking or legacy studies. Instead we recommend you use either KFit (vertex.kFit) for fast/simple fits, or TreeFit (vertex.treeFit) for more complex fits and fitting the full decay chain. Please check the Tree Fitter pages for details about the constraints available. If you are unable to use TreeFitter because of missing functionality, please submit a feature request! Warning The default fitter for vertex.fitVertex has been changed to KFit. Tidy up and rename of Helicity variables. Renamed helicity variables in the VariableManager following consistent logic. We added the new variable cosAcoplanarityAngle. Warning cosHelicityAngle is now cosHelicityAngleMomentum, and cosHelicityAngle has a new definition (as in the PDG 2018, p. 722). Old name New name cosHelicityAngle cosHelicityAngleMomentum cosHelicityAnglePi0Dalitz cosHelicityAngleMomentumPi0Dalitz cosHelicityAngleIfCMSIsTheMother cosHelicityAngleBeamMomentum New DecayStringGrammar for custom MCMatching Users can use new DecayStringGrammar to set properties of the MCMatching. Then isSignal, mcErrors and other MCTruthVariables behave according to the property. Once DecayStringGrammar is used with reconstructDecay, users can use isSignal instead of several specific variables such as isSignalAcceptMissingNeutrino. If one doesn’t use any new DecayStringGrammar, all MCTruthVariables work same as before. The grammar is useful to analyze inclusive processes with both fully-inclusive-method and sum-of-exclusive-method. There are also new helper functions genNMissingDaughter and genNStepsToDaughter to obtain the detailed MC information. You can find examples of usage in Marker of unspecified particle, Grammar for custom MCMatching. ### 1.3.1. Changes in the analysis package¶ #### TreeFitter¶ • Fix the $$\phi$$-dependent loss of performance for displaced vertices (BII-4753). #### Flavor Tagger¶ • Default Expert (testing mode) does not create repositories and does not save weight files locally. It only loads the payloads directly from the database using the database payload names as mva identifiers. • BtagToWBosonVariables adapted to work with the new ROE. • All release validation and performance evaluation scripts added to BELLE2_RELEASE_DIR/analysis/release-validation/CPVTools.
• The flavor tagger creates and adds default aliases into the collection list flavor_tagging.
#### Vertex Fitting¶
• Added IP tube constraint to KFit.
• The parameter confidenceLevel of the ParticleVertexFitter now always rejects the particle candidates with a p-value lower than the specified one. Specifically, setting confidenceLevel to 0 does not reject candidates with p-value equal to 0 anymore. Thus, the meaning of this parameter is now the same as for the TreeFitter.
#### FEI¶
• Removed the backward compatibility layer (pid_renaming_oktober_2017). Only FEI trainings from release-02 are supported. Please update to FEIv4_2018_MC9_release_02_00_01 or newer.
#### Variables¶
Warning
we overhauled the helicity variables and added new ones to replace the NtupleHelicityTool. We renamed cosHelicityAngle to cosHelicityAngleMomentum, cosHelicityAnglePi0Dalitz to cosHelicityAngleMomentumPi0Dalitz, and cosHelicityAngleIfCMSIsTheMother to cosHelicityAngleBeamMomentum. We added the variables cosHelicityAngle and cosAcoplanarityAngle defining them as the PDG 2018, p. 722.
#### Modules¶
• Added the new module AllParticleCombiner which is also available via the function modularAnalysis.combineAllParticles. It creates a new Particle as the combination of all unique Particles from the passed input ParticleLists.
• The ParticleLoader can load Rest Of Event as a particle using a new function modularAnalysis.fillParticleListFromROE. This ROE particle can be combined with any other particle or written down using usual variables.
Another option is to load missing momentum as a particle by supplementing useMissing = True argument to the function, mentioned above.
• Fixed a bug in the BestCandidateSelection module: When allowMultiRank=True there always at least two candidates with rank of one, even if they didn’t have the same variable value (BII-4460)
This affects all users of modularAnalysis.rankByLowest() and modularAnalysis.rankByHighest() if they passed allowMultiRank=True
• Fixed a bug in the BestCandidateSelection module: now the numBest parameter works as expected.
• Removal of the unsupported ECLClusterInfoModule.
• Added vertex.fitPseudo function and a pseudo vertex fitting module to add a covariance matrix when a vertex fit is not possible. E.g. for $$\pi^0$$ decays.
• Added the new module SignalSideVariablesToDaughterExtraInfo. It adds ExtraInfo to a specified particle (typically a daughter on the signal side). The corresponding information is calculated in the RestOfEvent so it is supposed to be executed only in for_each ROE paths.
• Fixed a bug and extended the functionality of the RestOfEventBuilder module. When providing a ParticleList of composite particles as an additional source for building the ROE, the composite particles are now decomposed and their final state daughters are added to the ROE (unless they are part of the signal side or already present in the ROE). Previously, composite particles were not decomposed and the first composite particle of the first ParticleList of composite particles (and only this one) was always added to the ROE.
• The module VertexFitUpdateDaughters now always updates the daughters (as advertised).
#### Modular Analysis¶
• A new boolean argument has been added to the function modularAnalysis.buildRestOfEvent. It is called belle_sources and should be switched to True (default is False) if you are analyzing converted Belle MC or data. It triggers the ROE to be constructed from all tracks and from Belle’s specific gamma:mdst list.
• Added signal region function modularAnalysis.signalRegion. By default, this function enables a new variable isSignalRegion and excludes the defined area when processing is done on data.
#### Standard Particle Lists¶
• Updated V0 lists. The standard list became the merged list, a combined list of particles coming from V0 objects merged with a list of particles combined using the analysis ParticleCombiner module.
stdV0s.stdKshorts returns a Ks list called K_S0:merged stdV0s.stdLambdas returns a Lambda list called Lambda0:merged
mergedKshorts() and mergedLambdas() are now deprecated, and return the standard lists.
• The definition of the standard V0 lists slightly changed. For Lambdas, the modularAnalysis.markDuplicate() function is now used to detect duplicates among the same list (V0 and ReconstructDecay), rather than between different lists. For Ks, the modularAnalysis.markDuplicate() function is no longer used.
• Updated V0 lists by making them explicitly call the appropriate vertex fit.
• Fixed a bug in the merged Lambda list, which used an incorrect mass window.
• Fix the stdKlongs.stdKlongs lists.
• Updated charged standard PID cut values to reflect the correct efficiencies. This recovers the efficiency loss reported in BIIDP-1065.
#### Tutorials¶
• Fix B2A801 to use JPsiKs as signal channel and to show how to run on Belle data/MC. Central database global tag updated.
### 1.3.2. Changes in the framework package¶
#### Job Information File¶
basf2 has a new command line parameter --job-information=<filename> to create a json file with some statistics on the processing and detailed information on all created output files.
#### Core Framework¶
• Addition of helpful StoreArray::hasRelationTo and StoreArray::hasRelationFrom methods.
• Deprecated static methods to register StoreArray instances have been removed.
#### Command line Tools¶
• b2file-metadata-add will now only update the file in the FileCatalog if it is already registered. It will also now correctly remove the old LFN from the file catalog.
• b2file-merge will by default no longer register files in the file catalog. You can either supply --add-to-catalog as argument or run b2file-catalog-add on the output file. Use the latter if you want to write scripts which work also with older releases.
• b2file-normalize has been added to remove timestamps and similar non-reproducible features from ROOT files.
• b2file-remove-branches has been added to remove obsolete branches from old files.
• the commands of the b2conditionsdb tool are now also available with dashed version, e.g. b2conditionsb-tag-show
• A number of old tool names which were kept for compatibility have been removed. The old tool names still exist but raise an error pointing to the new name.
#### Modules¶
• b2:mod:RootOutput module: changed default value for updateFileCatalog to False. This will avoid creating Belle2FileCatalog.xml or warning about overwriting entries in the file catalog by default. The file catalog is only needed when loading parent files and can always be created later using b2file-catalog-add
### 1.3.3. Changes in the ecl package¶
#### Modules¶
• Remove all getHypothesisId() calls and logic by a new logic that allows clusters to have multiple flags to indicate their hypothesis. This is used to remove duplicate clusters that are identical under different hypothesis.
## 1.4. Changes since release-02-01¶
Moved to C++17
The whole software including the ROOT in the externals is now compiled using the C++17 standard. This should not affect many users but there are a few rare cases where this might lead to compilation problems of analysis code as some deprecated features have been removed. The most notable are
• throw(TypeName) exception specifiers, just remove them.
• std::auto_ptr which should be replaced by std::unique_ptr
• some older parts of the <functional> header.
In particular if you compile a standalone program that links against the ROOT in the Belle2 externals this now also needs to be compiled in C++17 mode. You can do this by adding -std=c++17 to the compiler arguments.
Note
It’s best to directly pass the output of root-config --cflags to the compiler. That way you always pass the correct flags needed for the particular ROOT version setup.
Build system moved to Python3
This is a major update of SCons but most users should not notice any difference except for two instances:
• If you update an existing working directory from an older release you might run into an error
scons: *** [...] TypeError : intern() argument 1 must be string, not unicode
scons: building terminated because of errors.
TypeError: intern() argument 1 must be string, not unicode:
[...]
In this case please remove all .scon* files in the top level of your software directory and rerun scons
• In the unlikely case that you have custom SConscript files which are not Python 3 compatible you will have to update them.
### 1.4.1. Changes in the analysis package¶
#### Variables¶
There have been improvements and additions to the variables available in the variable manager. Some older, unhelpful, or deprecated variables have been removed, but this should be less than previous major releases. As usual, please ask at https://questions.belle2.org if anything is unclear or your favourite variable seems to have been removed. There is likely a good reason.
#### Fitters¶
• Many improvements to TreeFitter, which is now the recommended vertex fitter almost all use cases, even for simple vertices with two tracks. Please refer to the TreeFitter documentation for full details.
This TreeFitter tutorial (October 2018).
• Bug fixes to OrcaKinFit.
• KFit is now accessible from other basf2 modules.
• When loading V0 particles (i.e. $$K_S^0$$, $$\Lambda^0$$, or converted photons) using the ParticleLoader (fillParticleList) you must now specify the daughters in a decay string. For example, to load $$\Lambda^0\to p^+\pi^-$$ decays from V0s:
from modularAnalysis import fillParticleList
fillParticleList('Lambda0 -> p+ pi-', '0.9 < M < 1.3', path=mypath)
#### Tutorials and Examples¶
• The style of many of the tutorial scripts have been updated to assist with and provide examples of these new changes. See:
\$BELLE2_RELEASE_DIR/analysis/examples/tutorials
### 1.4.2. Changes in the framework package¶
#### Python Interface¶
The basf2 python interface has been restructured into multiple files to ease maintenance. For the user this should be transparent. Some changes the user might notice are
• There is now a new utility function basf2.find_file to allow looking for files in the release directory or separate examples or validation directories.
• There is now an automatic Jupyter integration: Calling basf2.process in a Jupyter notebook should now automatically run the processing in a separate process and show a nice progress bar.
• The obsolete “fw” object has been deprecated and all functionality which previously was accessed using basf2.fw.* is now directly accessible as basf2.*. If you use basf2.fw you will get a deprecation warning.
• When using from basf2 import * the sys and os packages were also silently imported and available in the current script. This has been deprecated. In general, using import * is not recommended but if you have to use it and use the sys or os module in the script please make sure you import them yourself after from basf2 import *
• Display and colouring of log messages in Jupyter has been significantly improved and should now be much faster.
• There’s a new implementation for pager.Pager which can also show output from C++ programs and will display the output incrementally instead of waiting for all output before showing everything.
#### Command Line Tools¶
• b2file-check now supports files with zero events correctly.
• b2file-merge now checks that real/MC data flag is consistent for all input files and refuse to merge mixed real/MC files.
• The subcommands iov and diff of b2conditionsdb have improved output and learned the new option --human-readable to convert the IntervalOfValidity numbers to easier to read strings.
• There is a new sub command dump for b2conditionsdb to dump the contents of a payload file in a human readable form on the terminal for quick inspection.
• There is a new command b2conditionsdb-extract which allows to convert a payload to a TTree with one entry per requested run number. This allows to easily monitor how payloads change over the course of time.
• There is a new command b2conditionsdb-recommend which will recommend users a global tag to use when processing a given input file.
• There is a new command b2conditionsdb-request to allow requesting the inclusion of locally prepared database payload into the official global tags.
#### Core Framework¶
• Environment::isMC() is now available to consistently distinguish between real and MC data. To use it in C++ please use
#include <framework/core/Environment.h>
bool isMC = Environment::Instance().isMC();
and for use in python
from ROOT import Belle2
isMC = Belle2.Environment.Instance().isMC();
• We have now a new prototype for advanced multi processing using ZMQ. It is disabled by default but can be activated using
from ROOT import Belle2
env = Belle2.Environment.Instance().setUseZMQ(True)
Pull request [PR#2790]
• There is now support for named relations to allow multiple relations between the same pair of StoreArray.
#### Core Modules¶
• The ProgressBar module will now notice if the output is written to a log file instead of the terminal and behave accordingly which should clean up logfiles considerably when this module is used.
• The RootOutput module now allows to split output files after a certain file size is reached using the outputSplitSize parameter.
Warning
This will set the amount of generated events stored in the file metadata to zero as it is not possible to determine which fraction ends up in which output file.
Also the user can now choose the compression algorithm to choose either no compression, zlib, LZMA or LZ4 compression. LZ4 is a newer compression standard with slightly worse compression than zlib but much faster decompression speed.
• removed CrashHandler module
• removed FileLogger module
Conditions Database:
• Database objects are now immutable (const) to prevent accidental modification of conditions data.
• The old and outdated fallback database in framework/data/database.txt has been removed. If you still set this by hand in your steering file your script will fail. Please use /cvmfs/belle.cern.ch/conditions if you really have to set a fallback database manually.
• The RunInfo database object which is supposed to contain all necessary information about each run now has support to contain the trigger pre-scale information.
#### Logging System¶
• We now have “Log message variables” which allow to send the same log message with varying content. This greatly helps with filtering log messages as it allows to group messages which have the same content and just differ in their variables. In C++ they can be used by adding a LogVar instance to the output,
B2INFO("This is a log message" << LogVar("number", 3.14) << LogVar("text", "some text"));
while in python the variables can be given as additional keyword arguments,
basf2.B2INFO("This is a log message", number=3.14, text="some text")
In both cases the names of the variables can be chosen feely and the output should be something like
[INFO] This is a log message
number = 3.14
text = some text
Log Variables
• The logging system is now able to send its message to python sys.stdout objects to allow intercepting log messages in python. To enable please set basf2.logging.enable_python_logging to True. This is automatically enabled when running inside of a jupyter notebook.
• Log messages can also be formatted as json objects where each log message will be printed as a one line json object to allow parsing of logfiles using scripts.
#### Utilities¶
• We have a new and advanced formula parser implementation in the framework package. It manages to handle ** correctly as in python and now allows using normal parenthesis for grouping operations in addition to square brackets.
formula variable
• We now provide a variety of RAII scope guards to free or restore a resource or value when the guard object goes out of scope. For example to make sure a variable is reset to is original value one could use
#include <framework/utilities/ScopeGuard.h>
int main() {
int myValue{5};
{
auto guard = Belle2::ScopeGuard::guardValue(myValue, 32);
// now myValue is 32
}
// now myValue is reverted to 5 independently of how the scope
// is left (normal, return statement, exception)
}
We provide convenience functions to guard simple values, pointer deletion, output stream flags and the current working directory but the interface is general enough that almost anything can be guarded by this ScopeGuard object.
• There is now RootFileManager to allow multiple modules to write to the same root output file. It will take care to open the file when the first module requests it and close it when the last module is finished with the root file. The primary use case is for the VariablesToNtuple and similar modules to allow having multiple ntuples or trees in the same root file.
• We now have a implementation of the c++17 std::visit overloaded pattern in framework/utils/Utils.h called Belle2::Utils::VisitOverload.
|
2022-01-24 17:36:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35353243350982666, "perplexity": 4431.942810684342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304572.73/warc/CC-MAIN-20220124155118-20220124185118-00026.warc.gz"}
|
https://ncatlab.org/nlab/show/mapping+telescope
|
Contents
# Contents
## Idea
Given a sequence
$X_\bullet = \left( X_0 \overset{f_0}{\longrightarrow} X_1 \overset{f_1}{\longrightarrow} X_2 \overset{f_2}{\longrightarrow} \cdots \right)$
of (pointed) topological spaces, then its mapping telescope is the result of forming the (reduced) mapping cylinder $Cyl(f_n)$ for each $n$ and then attaching all these cylinders to each other in the canonical way.
At least if all the $f_n$ are inclusions, this is the sequential attachment of ever “larger” cylinders, whence the name “telescope”.
The mapping telescope is a representation for the homotopy colimit over $X_\bullet$. It is used for instance for discussion of lim^1 and Milnor sequences (and that’s maybe the origin of the concept?).
## Definition
###### Definition
For
$X_\bullet = \left( X_0 \overset{f_0}{\longrightarrow} X_1 \overset{f_1}{\longrightarrow} X_2 \overset{f_2}{\longrightarrow} \cdots \right)$
a sequence in Top, its mapping telescope is the quotient topological space of the disjoint union of product topological spaces
$Tel(X_\bullet) \coloneqq \left( \underset{n \in \mathbb{N}}{\sqcup} \left( X_n \times [n,n+1] \right) \right)/_\sim$
where the equivalence relation quotiented out is
$(x_n, n) \sim (f(x_n), n+1)$
for all $n\in \mathbb{N}$ and $x_n \in X_n$.
Analogously for $X_\bullet$ a sequence of pointed topological spaces then use reduced cylinders to set
$Tel(X_\bullet) \coloneqq \left( \underset{n \in \mathbb{N}}{\sqcup} \left( X_n \wedge [n,n+1]_+ \right) \right)/_\sim \,.$
## Properties
### For CW-complexes
###### Proposition
For $X_\bullet$ the sequence of stages of a (pointed) CW-complex $X = \underset{\longleftarrow}{\lim}_n X_n$, then the canonical map
$Tel(X_\bullet) \longrightarrow X$
from the mapping telescope, def. , is a weak homotopy equivalence.
###### Proof
Write in the following $Tel(X)$ for $Tel(X_\bullet)$ and write $Tel(X_n)$ for the mapping telescop of the substages of the finite stage $X_n$ of $X$. It is intuitively clear that each of the projections at finite stage
$Tel(X_n) \longrightarrow X_n$
is a homotopy equivalence, hence a weak homotopy equivalence. A concrete construction of a homotopy inverse is given for instance in (Switzer 75, proof of prop. 7.53).
Moreover, since spheres are compact, so that elements of homotopy groups $\pi_q(Tel(X))$ are represented at some finite stage $\pi_q(Tel(X_n))$ it follows that
$\underset{\longrightarrow}{\lim}_n \pi_q(Tel(X_n)) \overset{\simeq}{\longrightarrow} \pi_q(Tel(X))$
are isomorphisms for all $q\in \mathbb{N}$ and all choices of basepoints (not shown).
Together these two facts imply that in the following commuting square, three morphisms are isomorphisms, as shown.
$\array{ \underset{\longleftarrow}{\lim}_n \pi_q(Tel(X_n)) &\overset{\simeq}{\longrightarrow}& \pi_q(Tel(X)) \\ {}^{\mathllap{\simeq}}\downarrow && \downarrow \\ \underset{\longleftarrow}{\lim}_n \pi_q(X_n) &\underset{\simeq}{\longrightarrow}& \pi_q(X) } \,.$
Therefore also the remaining morphism is an isomorphism (two-out-of-three). Since this holds for all $q$ and all basepoints, it is a weak homotopy equivalence.
examples of universal constructions of topological spaces:
$\phantom{AAAA}$limits$\phantom{AAAA}$colimits
$\,$ point space$\,$$\,$ empty space $\,$
$\,$ product topological space $\,$$\,$ disjoint union topological space $\,$
$\,$ topological subspace $\,$$\,$ quotient topological space $\,$
$\,$ fiber space $\,$$\,$ space attachment $\,$
$\,$ mapping cocylinder, mapping cocone $\,$$\,$ mapping cylinder, mapping cone, mapping telescope $\,$
$\,$ cell complex, CW-complex $\,$
## References
• Robert Switzer, Algebraic Topology - Homotopy and Homology, Die Grundlehren der Mathematischen Wissenschaften in Einzeldarstellungen, Vol. 212, Springer-Verlag, New York, N. Y., 1975.
Last revised on May 2, 2017 at 17:17:41. See the history of this page for a list of all contributions to it.
|
2022-11-28 22:55:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 51, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9217153191566467, "perplexity": 849.0644504132339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00679.warc.gz"}
|
https://www.sparrho.com/item/branching-laws-of-generalized-verma-modules-for-non-symmetric-polar-pairs/8c39db/
|
# Branching Laws of Generalized Verma Modules for Non-symmetric Polar Pairs
Research paper by Haian HE
Indexed on: 06 Apr '14Published on: 06 Apr '14Published in: Mathematics - Representation Theory
#### Abstract
We give branching formulas from $so(7,\mathbb{C})$ to $\mathfrak{g}_2$ for generalized Verma modules attached to $\mathfrak{g}_2$-compatible parabolic subalgebras of $so(7,\mathbb{C})$, and branching formulas from $\mathfrak{g}_2$ to $sl(3,\mathbb{C})$ for generalized Verma modules attached to $sl(3,\mathbb{C})$-compatible parabolic subalgebras of $\mathfrak{g}_2$ respectively, under some assumptions on the parameters of generalized Verma modules.
|
2021-02-25 02:36:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8202429413795471, "perplexity": 1951.331193937344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350706.6/warc/CC-MAIN-20210225012257-20210225042257-00549.warc.gz"}
|
https://openreview.net/forum?id=b-SNWfqkZc
|
## A Projection-free Algorithm for Constrained Stochastic Multi-level Composition Optimization
Keywords: Projection-Free algorithm, Conditional gradient algorithm, Stochastic multi-level composition optimization, Moving-average, Oracle complexity, High-probability bounds
Abstract: We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set. Our algorithm assumes access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle satisfying certain standard unbiasedness and second-moment assumptions. We show that the number of calls to the stochastic first-order oracle and the linear-minimization oracle required by the proposed algorithm, to obtain an $\epsilon$-stationary solution, are of order $\mathcal{O}_T(\epsilon^{-2})$ and $\mathcal{O}_T(\epsilon^{-3})$ respectively, where $\mathcal{O}_T$ hides constants in $T$. Notably, the dependence of these complexity bounds on $\epsilon$ and $T$ are separate in the sense that changing one does not impact the dependence of the bounds on the other. For the case of $T=1$, we also provide a high-probability convergence result that depends poly-logarithmically on the inverse confidence level. Moreover, our algorithm is parameter-free and does not require any (increasing) order of mini-batches to converge unlike the common practice in the analysis of stochastic conditional gradient-type algorithms.
Supplementary Material: zip
14 Replies
|
2023-02-05 00:54:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.745635449886322, "perplexity": 403.49455672891906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500158.5/warc/CC-MAIN-20230205000727-20230205030727-00286.warc.gz"}
|
https://kseebsolutions.guru/kseeb-solutions-for-class-6-maths-chapter-2-ex-2-3/
|
Students can Download Chapter 2 Whole Numbers Ex 2.3 Questions and Answers, Notes Pdf, KSEEB Solutions for Class 6 Maths helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.
## Karnataka State Syllabus Class 6 Maths Chapter 2 Whole Numbers Ex 2.3
Question 1.
Which of the following will not represent zero:
a) 1 + 0
b) 0 × 0
c) $$\frac{0}{2}$$
d) $$\frac{10-10}{2}$$
a) 1 + 0 = 1 It does not represent zero
b) 0 × 0 = 0 It represent zero
c) $$\frac{0}{2}=0$$ It represents zero
d) $$\frac{10-10}{2}=\frac{0}{2}=0$$ It represent zero
Solution:
a) 1 + 0 = 1
Question 2.
If the product of two whole numbers is zero, can we say that one or both of them will be zero ? Justify through examples.
Solution:
If the product of 2 whole numbers is zero, then one of them is definitely zero,
For example, 0 × 2 =0 and 17 × 0 = 0
It the product of 2 whole numbers is zero them both of them may be zero 0 × 0 = 0
However, 2 × 3 = 6
(Since number to be multiplied are not equal to zero, the result of the product will also be non-zero.
Question 3.
If the product of two whole numbers is 1, can we say that one or both of them will be 1? Justify through examples?
Solution:
If the product of 2 numbers is, then both the numbers have to be equal to 1
For example ,1 × 1 = 1 However, 1 × 6 = 6
Clearly, the product of two whole numbers will be 1 in the situation when both numbers to be multiplied are 1.
Question 4.
Find using distributive property:
a) 728 × 101
b) 5437 × 1001
c) 824 × 25
d) 4275 × 125
e) 504 × 35
Solution:
a) 728 × 101 = 728 × (100+1)
= 728 × 100 + 728 + 1
= 72800 + 728 = 73528
b) 5437 × 1001 = 5437 × (1000 + 1)
= 5437 × 1000 + 5437 × 1
= 5437000 + 5437 = 5442437
c) 824 × 25 = (800 + 024) × 25
= (800 + 25 – 1) × 25
= 800 × 25 + 25 × 25 – 1 × 25 = 20000 + 625 – 25
= 20000 + 600 = 20600
d) 4275 × 125
= (4000 + 200 + 100 – 25 ) × 125 = 4000 × 125 + 200 × 125 + 100 + 125 – 25 × 125
= 500000 + 25000 + 12500 – 3125
= 534375
e) 504 × 35 = ( 500 + 4) × 35
= 500 × 35 + 4 × 35
= 17500 + 140 = 17640
Question 5.
Study the pattern :
1 × 8 + 1 = 9
12 × 8 + 2 = 98
123 × 8 + 3 = 987
1234 × 8 + 4 = 9876
12345 × 8 + 5 = 98765
Write the next two steps, can you say how the pattern works?
(Hint: 12345 = 11111 + 1111 +111 +11 +1).
Solution
123456 × 8 + 6 = 987648 + 6 = 987654
1234567 × 8 + 7 = 9876536 + 7 = 9876543
Yes, the pattern works.
As 123456= 111111 + 11111 + 1111 + 111 + 11 + 1.
123456 × 8 = ( 111111 + 11111 + 1111 + 111 + 11 + 1) × 8
= 111111 × 8 + 11111 × 8 + 1111 × 8 + 111 × 8 + 11 × 8 + 1 × 8
= 888888 + 88888 + 8888 +888 + 88 + 8 = 987648
= 123456 × 8 + 6 = 987648 + 6 = 987654
|
2022-12-07 09:57:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43635061383247375, "perplexity": 618.8540228836257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00382.warc.gz"}
|
http://mathhelpforum.com/calculus/22348-real-analysis-proof-print.html
|
# Real Analysis Proof
• November 9th 2007, 07:34 AM
fifthrapiers
Real Analysis Proof
Can someone check my work? I've attached it. Thanks!
• November 9th 2007, 07:37 AM
ThePerfectHacker
It is wrong. Becauase $y\not = f(x)$, $y$ is some other point in the set. So $|x-y|<\delta$ are all points in the set so that this is true and not $|x-f(x)|<\delta$.
• November 9th 2007, 07:43 AM
fifthrapiers
Quote:
Originally Posted by ThePerfectHacker
It is wrong. Becauase $y\not = f(x)$, $y$ is some other point in the set. So $|x-y|<\delta$ are all points in the set so that this is true and not $|x-f(x)|<\delta$.
Ok, change all the f(x)'s to y's. Then, I think, it is correct.
• November 9th 2007, 07:59 AM
fifthrapiers
TPH, how does it look now? Attached the new one.
• November 9th 2007, 08:32 AM
ThePerfectHacker
Good proof.
In the second part where you show $(0,1]$ is not unfiormly continous, trying using the definition, i.e. show (without Cauchy sequences) that you can violate the definition of uniform continuity.
|
2014-07-25 02:42:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8964027166366577, "perplexity": 1032.3713236859337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997892648.47/warc/CC-MAIN-20140722025812-00085-ip-10-33-131-23.ec2.internal.warc.gz"}
|
http://physics.stackexchange.com/tags/cosmology/hot
|
Tag Info
5
You are presumably thinking of the FLRW metric for a universe with greater than critical density i.e. a closed universe. We normally use comoving coordinates to describe this, in which case the time coordinate is not curved and at every point along this time coordinate the three spatial coordinates have the topology of a 3-sphere. That is, if we draw a ...
5
Whether the dark energy is constant or not will ultimately be determined by experiment. At the moment there is no evidence that the dark energy is changing, but the experimental errors are still quite large so a change is not ruled out. There are lots of papers on this subject, but as yet no firm conclusions. It is important to be clear that dark energy ...
4
Good numbers for this have only been coming out for a decade or so, so its a relatively new topic. There does seem to be a strong tendency for dwarf and satellite galaxies to have much lower mass-to-light ratios, and correspondingly smaller baryon-to-DM ratios. See, for example, Stringer+2009, Strigari+2008. These observations are backed up by simulations ...
4
I wonder of you are overthinking this. Wald says: If the universe had always expanded at its present rate that is, $\dot{a}$ is a constant and independent of time. In that case the value of $a$ at time $t$ after the Big Bang is simply: $$a = \dot{a} t$$ So if you define $T$ by $T = a/\dot{a}$ then $T$ is necessarily the age of the universe.
4
The answer is the same reason why a glass of water left out at room temperature will evaporate. Even though most of the particles will be below the boiling point, the equilibrium one expects is not entirely in the liquid phase. The occasional particularly energetic water molecule will vaporize, just as the occasional neutral hydrogen atom will be struck by a ...
3
The heat death of the universe is the idea you are describing (this idea is also known as the Big Freeze). The problem with this idea is for it to work the cosmological constant has to be zero...and it isn't zero. It's very tiny, but it isn't zero. The other problem with your idea is the belief that because it all "freezes", so to speak, it'll all collapse ...
3
You must not really have looked hard enough. They are the same phenomenon The Big Freeze, which is also known as the Heat Death, is one of the possible scenarios predicted by scientists in which the Universe may end. It is a direct consequence of an ever expanding universe. The most telling evidences, such as those that indicate an increasing rate of ...
3
I would like to answer with the words of L.D. Landau, from his book Statistical Physics (first edition $1958$):
3
There are a few problems I can think of with this idea - Gravity has to, at some distance turn from positive, to zero, to negative. It would be interesting to know that distance. Will the repulsion increase, or decrease with increasing distance? Dark energy hypothesis indicates repulsion would go up with increasing distance. Which does not make sense - ...
2
The technique is to sight in on known frequencies of sources in the Milky Way and other galaxies. Any signal bearing the multiband set of data is subtracted.
2
In the Standard Model coupled to GR as an effective theory, the cosmological constant is predicted to be $m_{Pl}^4$ i.e. $10^{123}$ times the correct value (you mentioned the correct value). SUSY improves this situation by cancellations between superpartners (fermions contribute the same to the C.C. as their bosonic partners but with the opposite sign if ...
2
"If inertia is a property of the matter form of mass-energy, and it is a property that allows for the transfer of energy, then why doesn't the energy dissipated in a vacuum, as does applied radiant/free energy" The problem with your logic is that is flawed. It is equivalent to "some fruits are apples; oranges are fruits: why do not oranges taste like ...
2
The raisin bread analogy can be used to help in understanding this too: Dough is much more expandable than the raisin material. Raisins will expand a bit due to the heat and the pull from the dough stuck on their surface, but it is the dough that is moving. The forces that are holding the raisin together are much stronger than the force expanding the ...
2
I'd put this as a comment, but don't have enough rep...anyway, as this answer and the comments within state, the equation of state isn't necessarily linear. One thing I'd add is that one can define $w$ to be the ratio $\frac{P}{\rho}$ (as it's dimensionless), and since in general both pressure and density depend on time (no $\vec x$ dependence is allowed in ...
2
So, there are several possible ways the universe could be baryon symmetric: A region of the universe where antimatter dominates. There is a problem with this theory, though - 30 years' worth of scientific research has calculated just how far away this type of region would have to be, and from these calculations it is considered very unlikely that any part ...
2
The derivation by Pols is correct. Ryden makes the strange decision to plug the relativistic rest energy $\varepsilon = \rho c^2$ into the classical ideal gas law. Surely it makes more sense to define a classical kinetic energy $$u = \frac{1}{2}\rho\langle v^2\rangle$$ so that $$P = \frac{2kTu}{\mu\langle v^2\rangle} = \frac{2}{3}u.$$
2
No. To make a long story short, if the Higgs field changed its coupling to particles with time then particles in the distant past would have different masses. This would mean atomic spectra of distant galaxies would has differences from spectra now here on Earth. No such change is observed.
2
You are right that the universe formed atoms much earlier (at the temperature when photons can no longer ionize the atoms, i.e. at around $T = 150,000 K$ as you point out with your order of magnitude calculation). However, photons could still scatter off these atoms. Indeed this was quite likely considering the high density of matter in the universe. The ...
2
I don't know if negative pressure (but see my added edit below) , more importantly there is a theory of inflation, and some good evidence for it. It was caused by a yet unknown inflation field, with its parameters somewhat matching what the cosmic microwave background (CMB) measurements show. [edit added: The field is a quantum field that rolled from a high ...
2
There's not reason to assume nature should treat everything symmetrically. There are many phenomena in nature that we actually know are asymmetric. For example the weak force violates parity symmetry (meaning the weak force has a preference for right or left handedness).
2
Gravity, per general relativity (GR), is normally attractive. Normally means that the sources of the gravity, and thus the sources that determine the geometry and curvature of spacetime, have positive energy density, and obey other positive energy conditions. The pressure and other factors that enter into the stress energy tensor that is the source of the ...
1
I'm not sure if this is exactly what you want, but there's a book called Practical Statistics for Astronomers by J.V. Wall and C.R. Jenkins that might fit the bill. According to the Cambridge University Press website (the book is a part of Cambridge Observing Handbooks for Research Astronomers): Astronomy needs statistical methods to interpret data, but ...
1
The key statement is that the $a_{\ell,m}$ are independent Gaussian random variables. For each $\ell$, there are $2\ell+1$ of them. So their sum is, essentially by definition, a chi-squared distribution with $2\ell+1$ degrees of freedom. Now, it is a known fact that the variance of a chi-squared distribution with $k$ degrees of freedom is just $2k$, so ...
1
The idea of a "Zero-Energy Universe" is a theory held by a limited number of scientists. There are several stackexchange question that expand on the theory and may help you. Zero energy universe Total energy of the Universe
1
Infinity is a mathematical concept, as well as the concept of variables describing dimensions. Physics is about observations, either in the laboratory or of the cosmos, which are fitted with mathematical models. It started with the geocentric system, became the heliocentric system and then the realization that the galaxy is composed out of sun like stars, ...
1
Sticking with the sphere analogy, first remember that in this analogy, the Universe is a shell, i.e. only the points on the surface of the sphere exist in the Universe, not points inside or outside. If the Universe has a spherical geometry, then the centre would be the centre of this sphere, which is not in the Universe anymore (which is why one would say ...
1
You are correct, the recession velocity predicted by the hubble law is negligible at the local group, even if gravity among them could be absent. Their gravitational attraction though, is hard enough to keep them bound together.
1
The FLRW energy equation for the motion of test masses in the universe is $$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G\rho}{3}.$$ the scale factor for space is $a$ and its time derivative is $\dot a$. I derived this from Newtonian dynamics. The density of mass $\rho$ for the case of a quantum vacuum energy level is constant. I now replace this with ...
1
The noboundary condition means there is no boundary that marks the end of space or time. With respect to time one might think of the lines of longitude on a globe as representing the time direction at different point in a spatial manifold modeled as the lines of latitude. As one looks further to the north, which is the big bang that eventually you look north ...
1
Under the assumptions that $a > 0$ and that the universe is expanding, we can derive some interesting results about the fate of such a universe. From the Friedmann equations alone, we may derive $$\frac{d}{d \tau} (\rho a^3) = - P \frac{d}{d \tau} (a^3).$$ For $P = w \rho$, as long as $w \neq -1$, this yields \rho \propto \frac{1}{a^{3(1 + w)}}, \$...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2016-06-29 14:30:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7074335813522339, "perplexity": 340.09698370058686}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00081-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://hackage.haskell.org/package/distributed-process-lifted-0.3.0.1/docs/Control-Distributed-Process-Lifted-Extras.html
|
Control.Distributed.Process.Lifted.Extras
Description
Utility functions for working with Processes outside of the Process monad.
Synopsis
# Documentation
fromProcess :: forall a m. MonadBase IO m => LocalNode -> Process a -> m a Source #
A variant of runProcess which returns a value. This works just like runProcess by forking a new process with a captured MVar, but it will return the result of the computation. If the computation throws an exception, it will be re-thrown by fromProcess in the calling thread.
Represents a handle to a process runner that communicates through a Chan. Create with spawnProxy or spawnProxyIO. Use this to call process actions (using fromProxy or inProxy) from any IO that will be executed in a single process that will have a persistent pid and mailbox across invocations. Sharing a single proxy between threads may yield poor performance and is not advised.
Instances
Source # MethodsshowList :: [ProcessProxy] -> ShowS #
Spawn a new process and return a ProcessProxy handle for it.
spawnProxyIO :: forall m. MonadBase IO m => LocalNode -> m ProcessProxy Source #
Same as spawnProxy but can be used from any IO
spawnProxyIO node = fromProcess node spawnProxy
inProxy :: forall m. MonadBase IO m => ProcessProxy -> Process () -> m () Source #
Use a ProcessProxy created with spawnProxy to run a Process computation in the existing Process asynchronously.
fromProxy :: forall a m. MonadBase IO m => ProcessProxy -> Process a -> m a Source #
Use a ProcessProxy created with spawnProxy to run a Process computation in the existing Process and return the result in any IO.
|
2020-11-28 20:57:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2092185765504837, "perplexity": 8305.643593705727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195745.90/warc/CC-MAIN-20201128184858-20201128214858-00173.warc.gz"}
|
https://www.authorea.com/users/3234/articles/3788/_show_article
|
# Introduction
Stars with masses less than 0.6 M$$_\odot$$ are the most numerous in our Galaxy. These are intrinsically cool and faint stars, with complex spectra characterised by molecular absorption of TiO, CaH and VO in the optical, and FeH and H$$_2$$O in the near infrared. Some of them are known to be quite active, with flares larger than the ones produced by the sun. Few of them are the hosts of the closest rocky planets to the Earth, and overall, they should be the most likely hosts of Earth-like planets in the galaxy. The study of M dwarfs has been greatly benefited by surveys covering different regions of the galaxy.
We present colour selected M dwarfs in the b201 tile of the VISTA Variables in the Vía Láctea (VVV) survey. In section 2, we give the description of the survey and of the tile b201. In section 3, we present our M dwarf selection method based on 6 colour selection cuts obtained from SDSS spectroscopically observed M dwarfs. A spectral subtype calibration based on $$(Y-J)$$, $$(Y-K_s)$$, and $$(H-K_s)$$ is given in section 4. In section 5, we show interesting objects blah blah. We discuss our results and conclusions in section 6.
# Data
VISTA Variables in the Vía Láctea (VVV) is a public ESO near-infrared (near-IR) variability survey aimed at scanning the Milky Way Bulge and an adjacent section of the mid-plane. The VVV survey gives near-IR multi-colour information in five passbands: $$Z$$ (0.87 $$\mu m$$), $$Y$$ (1.02 $$\mu m$$), $$J$$ (1.25 $$\mu m$$), $$H$$ (1.64 $$\mu m$$), and $$K_s$$ (2.14 $$\mu m$$) which complements surveys such as 2MASS1, DENIS, GLIMPSE-II, VPHAS+, MACHO, OGLE, EROS, MOA, and GAIA (Saito et al., 2012). The survey covers a 562 square degrees area in the Galactic bulge and the southern disk which contains ~$$10^{9}$$ point sources. Each unit of VISTA observations is called a (filled) tile, consisting of six individual (unfilled) pointings (or pawprints) and covers a 1.64 $$deg^{2}$$ field of view. To fill up the VVV area, a total of 348 tiles are used, with 196 tiles covering the bulge (a 14 14 grid) and 152 for the Galactic plane (a 4 38 grid) (Saito et al., 2012a). We selected one specific tile from the bulge to characterise M-dwarf stars called “b201” which center’s galactic coordinates are $$l$$=350.74816 and $$b$$=-9.68974. This tile is located in the border of the bulge where star density is lower and extinction is small allowing good photometry. Photometric catalogues for the VVV images are provided by the Cambridge Astronomical Survey Unit (CASU2). The catalogues contain the positions, fluxes, and some shape measurements obtained from different apertures, with a flag indicating the most probable morphological classification. In particular, we note that -1 is used to denote the best-quality photometry of stellar objects (Saito et al., 2012a). Some other flags are -2 (borderline stellar), 0 (noise), (sources containing bad pixels), and -9 (saturated sources).
1. http://apm49.ast.cam.ac.uk/
# Selection Method
In order to identify potential M dwarfs in the VVV tile “b201”, we performed several colour selection cuts using the VVV passbands as described in the subsections below. Before performing those cuts, we did a pre-selection of the objects in the tile “b201” to assure that the objects have the best-quality photometry. The pre-selection consisted on including only objects that had photometry in all five passbands and that were classified as “stellar” in each passband. The total number of 142,321 objects in the tile “b201” satisfied these conditions.
## Color Selection Cuts from SDSS-UKIDSS M dwarfs
The color selection cuts were defined by selecting spectroscopically identified M dwarfs with UKIRT Infrared Deep Sky Survey (UKIDSS) photometry.
We used the Sloan Digital Sky Survey DR7 Spectroscopic M dwarf catalog by West et al. (2011) as the comparative M dwarf sample. The 70,841 M dwarf stars in this catalog had their optical spectra visually inspected and spectral type was assigned by comparing them to spectral templates. Their spectral types range from M0 to M9, with no half subtypes. These catalog also provides values for the CaH2, CaH3 and TiO5 indices, which measure the strength of CaH and TiO molecular features present in the optical spectra of M dwarfs.
We performed a cone search with a radius of 0.5of these SDSS M dwarf stars in the UKIDSS-DR8 survey (Lawrence et al., 2012). The UKIDSS survey is carried out using the Wide Field Camera (WFCAM), with a $$Y$$ (1.0um), $$J$$ (1.2um), $$H$$ (1.6um) and $$K$$ (2.2um) filter set. There were UKIDSS-DR8 matches for almost half of the SDSS M dwarf sample (34,416 matches) . Next, we only kept the UKID
|
2017-03-26 09:14:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6819136142730713, "perplexity": 3716.3766537778724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189198.71/warc/CC-MAIN-20170322212949-00177-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://richarizardd.me/research/icm_cardiac/
|
# Cardiac Output Prediction from Arterial Blood Pressure Waveforms
## Introduction
Cardiac output (CO) is a global blood flow parameter of interest in hemodynamics, as it indicates how efficiently the heart is meeting the demands of the body. CO also serves as an important metric in diagnosing circulatory system diseases such as ischemia, hypertension, and heart failure. Unfortunately, CO cannot be measured directly in a noninvasive manner. Measurement of CO by thermodilution (TCO) involves inserting a catheter into the pulmonary artery, which is only done for critically ill patients in the intensive care unit (ICU). Algorithms in the past have sought to estimate CO from peripheral arterial blood pressure (ABP) waveforms, but no single method has emerged as a leading candidate for clinical use. In this project, we reproduced results from Sun et al. 2009, Parlikar et al. 2007, and investigated how Co-from-ABP algorithms can perform differently in evaluating ABP waveforms. Using a large number of radial ABP waveform segments from the i2b2/MIMIC2 Waveform Database, we constructed two patient cohorts: patients with chronic ischemic heart disease, and patients without chronic ischemic heart disease. Our results indicate that algorithms that measure intra-beat variations (Liljestrand \& Zander) can accurately estimate CO in healthy patients, while algorithms that measure intra-beat variations (Parlikar) can accurately estimate CO in ischemic patients. Moreover, our analysis gives broad insight into how different Co-from-ABP algorithms can be tailored for different clinical data subtypes of patients in the ICU.
## Background
### Circulatory System
The circulatory system forms a closed loop in which blood flows to carry oxygen from the lungs to the tissues throughout the body and to carry carbon dioxide back to the lungs. Specifically, the left side of the heart pumps oxygen-rich blood into the systemic arteries, and nutrients diffuse to tissue in the capillaries. The oxygen-depleted blood would then return to the heart via the systemic veins and the right side of the heart would pump this blood into the pulmonary arteries to be distributed in the lungs. The oxygen-rich blood then returns to the left side of the heart via the pulmonary veins. Blood cells transit the full circuit in about one minute.
### Windkessel Model
In the late 1800s, German physiologist Otto Frank formulated one of the earliest models of the heart and systemic arterial system, known as the Windkessel model. The Windkessel model describes the load against the heart pumping blood throughout the systemic arterial system and the relationship between blood pressure and stroke volume in the aorta, as a closed hydraulic circuit.
A simple model of a blood vessel assumes it has a resistance R, and that a blood flow Q is linearly proportional to the pressure drop P along the vessel. This relationship is similar to Ohm’s Law, where a drop in electric potential across the resistor is linear proportional to the voltage. In Frank’s 2-Element Windkessel model, as water is pumped into the chamber, the water both compresses the air in the pocket and pushes water out of the chamber, back to the pump. The compressibility of the air in the pocket simulates arterial compliance, the elasticity and extensibility of the major artery, as blood is pumped into it by the heart ventricle. The resistance that water encounters while leaving the Windkessel and flowing back to the pump, simulates the total peripheral resistance (TPR), the resistance to flow encountered by the blood as it flows through the arterial tree.
The basic 2-element Windkessel model calculates the exponential pressure curve determined by the systolic and diastolic phases of the cardiac cycle. The model assumes that the cardiac cycle starts at systole, and that the flow of blood in the blood vessel follows Pouseuille’s Law. As the number of elements in the model increases, the model accounts for new physiological factors. The 2-element WindKessel takes into account the effect of arterial compliance and TPR, where in the electrical analog, the arterial compliance is represented as a capacitor, and TPR is represented as an energy dissipating resistor. The flow of blood Q from the heart is analogous to current flowing in the circuit, and the blood pressure P in the aorta is modeled as a time-varying electric potential. The following differential equation can be used to model blood flow and pressure:
## Materials and Methods
Sun et al. 2009 and Parlikar et al. 2007 were used as the basis of our mathematical derivations for the intra/inter-beat averaged model.
### Intra-beat Averaged Models
The differential equation representing the Windkessel model is given by:
where $P(t)$ represents arterial blood pressure at the aortic root at time t, $R$ is TPR, and $Q(t)$ is blood flow. $Q(t)$ can also be seen as an impulsive current source that deposits a stroke volume $(SV_{n})$ into the systemic arterial system in the $n^{th}$ cardiac cycle.
where $t_{n}$ is the onset time of the $n^{th}$ beat and $\delta(t)$ is the unit Dirac impulse. By integrating the original equation over the ejection phase, we obtain:
$PP_{n}$ is the peripheral pulse pressure of the $n^{th}$ cardiac cycle. In the 2-element Windkessel model, $PP_{n}$ can be calculated as the difference between systolic and distolic aterial pressure: $PP_{n} = SAP_{n} - DAP_{n}$. This calculation varies across different intra-beat averaged models. Using $T_{n}$ as the period of the $n^{th}$ cardiac cycle, cardiac output is given by:
The three intra-beat averaged models used is the 2-element Windkessel described above, mean arterial pressure (MAP), and the Liljestrand & Zander algorithm.
### Inter-beat Averaged Models
To taken into account beat-to-beat variations in ABP waveforms, we can average the Windkessel differential equation over the cardiac cycle. For the $n^{th}$ beat we obtain:
where $T_{n}$ is the period of the $n^{th}$ cardiac cycle, $\Delta P_{n}$ is the beat-to-beat pressure change at onset times, and $\bar{P}_{n}$ is the average ABP over the cycle. In steady-state, the change in ABP is proportional to the change in volume of the circulation, which is equal to the volume of blood ejected from the heart, or stroke volume.
Using the CO estimation from the intra-beat average model, we can rewrite our relation over the $n^{th}$ cycle.
where $\tau_{n}$ is the time constant $R_{n}C_{n}$. From Parlikar et al 2007, a different calculation for $PP_{n}$ was used to correct for wave reflections by assuming a triangular pulse shape generated from $\alpha = 2$.
Since $R_{n}C_{n}$ cannot be observed, we derived an expresdsion for $\tau_{n}$ in terms of $PP_{n}$, $\bar{P}_{n}$, $\Delta P_{n}$, and $T_{n}$, which can be observed in the ABP waveform.
The only inter-beat averaged model used was the Parlikar estimator.
### Clinical and Waveform Data Preprocessing
In Sun et al. 2009, we reproduced “Figure 1,” which plots 20 pulses of ABP waveforms. This figure also plots features such as the onset of each beat, end of systole (estimated from beat period), and end of systole (estimated from the lowest non-negative slope method). In addition, we reproduced “Figure 4,” which plots a time series of CO from ABP measurements over a 50-hour interval. CO was estimated using the Liljestrand & Zander algorithm and the Parlikar algorithm in MATLAB, and then calibrated with episodic TCO measurements. Time series data for Pulse Pressure (PP), Mean Arterial Pressure (MAP), and Heart Rate (HR) were also plotted. Each subplot is annotated with a stem plot of values taken from TPR.
#### Figure 1 Sun et al 2009
To replicate Figure 1 in Sun et al. 2009, in MATLAB, we extracted ABP waveform data of Patient 20 (s200) from the MIMIC2 Waveform Database. In the text file, the first column represents the time (in seconds) at which ABP is sampled, and the second column represents ABP values in mmHG. Each row represents a sample taken in a sphygmomanometer, and Sun et al. 2009 estimates that the waveform data has a sample rate of 125. To obtain the first 20 pulses starting at the 10th hour, we noted that there are 4500000 samples in 10 hours, and that about 1250 samples were needed to capture the first 20 pulses of ABP waveforms. These calculations (samples to hours, hours to samples) were used to index and subset through the waveform text file in MATLAB. The function wabpresults obtains the onset sample time of the pulse in MATlab. From the onset of the pulse (trough), we used the function abpfeature to calculate an ABP Feature Matrix, which contains end of systole times from both systole estimators.
From these calculations, we plotted a time series of ABP waveforms with markers for the onset and end of systole times. Empirically, when we graphed the time series with 1250 samples, we only captured a fraction of the 20 pulses. The trough in a ABP waveform models the heart muscle resting between each pulse; as a result, we used a for loop was used to iteratively increase the sample rate until wabpresults returned a vector of 20 onset sample times. This procedure was repeated for Patient 20 starting at 11 hours, Patient 138, Patient 214, and Patient 217.
#### Figure 4 Sun et al 2009
For the first 12 hours, similar to Figure 1, we found onset sample times for pulse using wadpresults, and calculated the ABP Feature Matrix using abpfeature. The function jSQI returns a binary signal quality assessment of each beat from the ABP Feature Matrix and onset sample times, with “0” being good and “1” being bad. The function estimateCOv3 estimates CO from the ABP Feature Matrix, jSQI binary values, onset sample times for pulse, and a switch statement for different CO estimators.
In addition to extracting ABP waveform data from the MIMIC2 Waveform Database, we also extracted clinical data from the MIMIC2 Clinical Database, which contains episodic TCO measurements. Within the clinical data text file, we extracted the time (in minutes) at which the first CO measurement occurred, and the CO measurement itself. Unlike the waveform data, where ABP measurements were not uniformly sampled, clinical data measurements were sampled every 30 seconds. By dividing the first CO measurement in the clinical data with its corresponding CO measurement in waveform data, we created a calibration factor to scale our estimated CO. For loops were used to index the samples in the ABP waveform data by time in seconds.
We plotted the time series of the calculated CO, PP, MAP, and HR from estimateCOv3, in addition to stem plots of the corresponding measurements from episodic TCO. In the clinical data, most measurements of CO were 0. When the sphygmomanometer recorded a value for CO, the clinical data was subsetted to also obtain PP, MAP, and Heart Rate at that time. This procedure was reproduced with patients 20 & 214.
#### Inter vs. Intra beat comparison
Within i2b2, we selected 5 patients with chronic ischemic heart disease and 5 patients without chronic ischemic heart disease within our cohort. Patients were selected within i2b2 by searching for patients with the ICD9 code corresponding to chronic ischemic heart disease, and cross-checked with a patients that had CO TCO measurements.
## Results & Discussion
### ABP Waveform Visualization & Measuring End of Systolic
A goal of our project was to visually mark these key features, such as onset point times and the end of systole, as done in Figure 1 of Sun et al. 2009. Onset point, labeled by an asterisk, was determined by our algorithm as the lowest arterial blood pressure point in each beat. This was clearly indicated in each patient waveform. The end of systole was identified in two manners: 1) $0.3 \cdot \sqrt{\text{beat-period}}$, and 2) the point after systolic peak with the lowest non-negative slope. These methods were indicated in the waveform plot with X’s and O’s respectively. Interestingly enough, the accuracy of both methods in identifying the end of systole among different patients vary. In patients 20, 138, 214, and 217 of figure 4, estimating the end of systole with the lowest non-negative slope proved to be a superior method than estimating by beat-period. Visually, the O’s in each waveform plot was typically placed at the trough of each beat, a good estimator of the end of systole. However, X’s were placed in a much less precise manner, as most clearly shown in patient 138 and 214. These X’s were placed in the rapid decrease in ABP prior to the trough.
Interestingly, the accuracy of both methods in identifying the end of systole among different patients vary. In patients 20, 138, 214, and 217 of our Figure 1, estimating the end of systole with the lowest non-negative slope proved to be a superior method than estimating by beat-period. Visually, the O’s in each waveform plot was typically placed at the trough of each beat, a good estimator of the end of systole. However, X’s were placed in a much less precise manner, as most clearly shown in patient 138 and 214. These X’s were placed in the rapid decrease in ABP prior to the trough.
The issue with estimating the end of systole due to beat period is that this methodology is heavily parameter dependent. Every patient will vary in CO, ABP, HR etc, causing great variability in the beat period. This method faced difficulty with waveform data in patients 138 and 214 and would mark the end of systole too early due to a longer cardiac cycle. Waveform data for patient 20 and 217 was more accurately marked. Estimating the end of systole by measuring the lowest non-negative slope is the best method since troughs will have this property.
### Comparison of Intra-beat averaged models
Figures 2 show four plots, the continuous CO from ABP estimated by the Liljestrand algorithm, PP, MAP, and HR. Stem plots were also placed throughout the plots to represent discrete episodic thermodilution CO measurements as controls.
Comparison between cardiac output by the Liljestrand estimator and mean arterial pressure demonstrates the superiority of the former. Compared to mean arterial pressure, the Liljestrand estimator cardiac output is more sensitive to major changes in CO and enhances the display vital signs. This estimator also unfortunately is much more sensitive to spikes, as noted throughout the plots in figure 3 of patient 20 and 214. There seems to be little correlation between heart rate and cardiac output.
In figure 4, we also compared different estimators (Liljestrand, MAP, Windkessel) to the cardiac output measured by thermodilution in patients 20 and 214. For patient 20, Liljestrand cardiac output estimator demonstrated the least sample variance, while MAP demonstrated the most. For patients 214, MAP cardiac output estimator demonstrated the least sample variance, whereas the Windkessel model showed the most.
In figures 5, we plotted a regression line through mean arterial pressure values and compliance values at the onset of each pulse for patients 20 and 214. In figures 6, CO, PP, MAP, and HR were measured based on the Parlikar estimator. All four displayed measurements demonstrated similar patterns and changes throughout time. Cardiac output was the most noisy and sensitive, whereas heart rate was the least noisiest. Additionally, cardiac output was more sensitive to high peaks in heart rate, as opposed to PP and MAP.
### Comparison of Intra and Inter-beat averaged models
#### Liljestrand & Zander vs. Parlikar Estimator in Patients 20, 214
In figures 1 and 5, we plotted a regression line through mean arterial pressure values and compliance values at the onset of each pulse for patients 20 and 214.
In figures 3 and 4, CO, PP, MAP, and HR are measured based on the Parlikar estimator. All four displayed measurements demonstrate similar patterns and changes throughout time. Cardiac output is the most noisy and sensitive, whereas heart rate is the least noisiest. Additionally, cardiac output is more sensitive to high peaks in heart rate, as opposed to PP and MAP.
In figures 3 and 6, qualitatively, the Parlikar cardiac output estimates were noisier than the Liljestrand cardiac output estimates, as seen in the abundance of highly variable peaks. In figure 9, we quantitatively compared Parlikar and Liljestrand cardiac output estimates using sample variance from known thermodilution values. For Patient 20, we see that the Parlikar performed much worse than Liljestrand, however, in Patient 214, Parlikar performed relatively better than Liljestrand, though its sample variance was still high. This can be explained by how the Parlikar estimator uses beattobeat variations to calculate cardiac output (ΔPn). ΔP n greatly contributes to the noise seen in figures 1 and 5, as any variation between beats will amplify the signal. Since Liljestrand uses only intracycle features (SAP, DAP), changes in arterial blood pressure are not amplified. It is interesting to note that Patient 214 belonged to a cohort in i2b2 that has circulatory system diseases, while Patient 20 does not. Patients such as Patient 214 with circulatory system diseases will have greater variability in arterial blood pressure at onset, which Parlikar accounts for using ΔP n. Therefore, Parlikar estimator produces noisier signals because it accounts for patients with beattobeat variations. This also explains why Parlikar performed better compared to Liljestrand in Patient 214. In addition, Parlikar notes that algorithms that use intracycle features are only valid for patients with a cyclic steady state, such as Patient 20, who doesn’t have circulatory system diseases. This finding helped motivate the way we constructed out cohorts later on.
Figures 8 and 9 show total peripheral resistance as determined by the Parlikar estimator. Similar to cardiac output data estimated by Parlikar, estimated TPR is very noisy. Furthermore, our data shows that high values of cardiac output at a specific time are correlated with a decrease in TPR. This agrees with the equation we utilized to estimate TPR; TPR and cardiac output are inversely related.
The total peripheral resistance (TPR) is the net resistance to flow seen by the heart, and is the ratio of mean ABP to CO (in close analogy to electrical resistance, which is the ratio of potential difference to current). TPR plays an integral role in determining blood pressure, and as a result can be an indicator for many cardiovascular diseases such as hypertension or atherosclerosis. It can also give some indication on the fluid dynamics and composition of the blood. For example, a more viscous blood flow due to clotting factors or important blood components would cause more resistance to flow.
#### Liljestrand & Zander vs. Parlikar Estimator in patients with chronic ischemic heart disease
We selected two cohorts: patients with chronic ischemic heart disease, and patients without chronic ischemic heart disease. Although our second cohort reflected patients diagnosed without hypertensive disease, it does not exclude patients with other circulatory diseases, such as heart failure, cardiac dysrhythmia, or cardiomyopathy. In fact, there were only two patients within the entire database without any type of circulatory disease. These alternative circulatory diseases may have a larger impact on cardiac output than chronic ischemic heart disease. While ischemic heart disease is associated with high arterial blood pressure, diseases such as heart failure is associated with low stroke volume, changing a parameter for cardiac output that is not accounted for by our ABP estimators. As a consequence, this may have confounded our cohorts. We also noticed that, for our cohort representing chronic ischemic heart disease patients, five of the six patients had mean systolic blood pressure lower than 110mmHg, which is below the commonly accepted reading for healthy systolic blood pressure (120mmHg). We picked this cohort under the impression that these patients would have a higher than normal blood pressure. However, based only on the mean systolic blood pressure, we found that these patients do not reflect expected blood pressure readings.
Using the Parlikar and Liljestrand algorithms, we were able to generate plots that allowed us to make comparisons between the two algorithms shown in the following figures:
Based on these representative plots (Figures 1 and 3) of the Liljestrand and Parlikar algorithms for both healthy and ischemic heart disease patients, it seems that Parlikar qualitatively performs better as a cardiac output estimator for ischemic patients than it does for healthy patients. This is also reflected in the variances (Figures 5 and 6), in which Parlikar had higher variance in four of the five healthy patients (one healthy patient not shown due to yscaling) but had lower variance in four of the five ischemic patients. Not much can be said regarding the difference between the total peripheral resistances between the healthy and ischemic patients. These results corroborate our hypothesis that because the Parlikar algorithm takes into account intra and interbeat variations, it should be a better estimator for patients with chronic ischemic heart disease.
#### Challenges
Some interesting things in working with this data set are the possible confounding factors that may arise when measuring this data, such as previous health conditions, hospital room environment, nurse/doctor care, etc.
We faced several challenges while working with patient data. The first challenge was the noise or missing information in the given data sets. Patient files often revealed inconsistent time intervals, forcing our group had to write an algorithm that would manually look for patient data that shared the same time interval. A second challenge was the conversion of sample and time. The sample rate was not consistently sampling at 125 in the ABP waveform data, and as a result, for loops were often used to index matrices by time. This challenge added a second layer of difficulty when trying to index through the clinical data, as the clinical data sampled every 60 seconds instead of every ~1/125 second. A third challenge was presented when we tried to compare the cardiac outputs of different estimation algorithms. Without calibration, plotting the cardiac outputs from different algorithms was nearly impossible due to a wide variety of pressure values. Ultimately, we needed to rescale each algorithm using the C2 method to effectively compare cardiac outputs.
|
2018-01-23 03:41:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 32, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6001449227333069, "perplexity": 2145.992555576173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891706.88/warc/CC-MAIN-20180123032443-20180123052443-00427.warc.gz"}
|
https://tor.stackexchange.com/questions/423/what-are-good-explanations-for-relay-flags?noredirect=1
|
What are good explanations for relay flags?
We received an Atlas patch that adds tooltips to relay flags, so that relay operators can better understand what these flags mean. The patch author used dir-spec.txt as their guide. I wonder if these are good explanations, or if they should be closer to the spec or even implementation, or if they should be more general. Here are the suggestions:
• BadExit: This relay breaks stuff, either maliciously or through misconfiguration.
• Fast: This relay has lots of bandwidth available.
• Guard: This relay is suitable to be the first hop (entry relay) in a Tor circuit.
• HSDir: This relay is a v2 hidden service directory.
• Named: This relay has a nickname.
• Running: This relay has been online within the past 45 minutes.
• Stable: This relay is considered stable.
• V2Dir: This relay supports the v2 directory protocol.
• Valid: This relay is running a version of Tor not known to be broken, and the directory authority has not blacklisted it as suspicious.
• Unnamed: This relay's configured nickname is used by another relay.
• Exit: This relay is configured to be the last hop (exit relay) in a Tor circuit.
Are there better phrasings for some/all of these?
• Maybe it is a good idea to broaden your question and make a glossary like community wiki of all Tor-related words. – Jens Kubieziel Oct 8 '13 at 9:21
• Is the community wiki a better place to develop such definitions (regardless of how broad the question is)? Then let's move this question there? (How would we do that?) And regarding scope, how about we make a glossary for all terms used in Onionoo/Atlas/Globe, because all Tor-related words might be too much to start with. – karsten Oct 8 '13 at 10:37
• This is a rather odd case. While I feel that questions about design or implementation should be on-topic, IMO "good phrasings" of a tool-tip will almost always be opinion based and hence are not a good fit for the SE format. – asheeshr Oct 8 '13 at 12:29
• Some of these are also just plain wrong. In particular Fast and Named. – weasel - Peter Palfrader Oct 8 '13 at 15:57
• @karsten if any of the following posts answered your question, please mark it as the answer to your question. ;) – Ron Sep 16 '15 at 9:39
So I discovered there are some decent one-liner descriptions written in the dir-spec.txt after all. I propose I yoink these verbatim:
"Authority" if the router is a directory authority.
"BadExit" if the router is believed to be useless as an exit node
(because its ISP censors it, because it is behind a restrictive
proxy, or for some similar reason).
"Exit" if the router is more useful for building
general-purpose exit circuits than for relay circuits. The
path building algorithm uses this flag; see path-spec.txt.
"Fast" if the router is suitable for high-bandwidth circuits.
"Guard" if the router is suitable for use as an entry guard.
"HSDir" if the router is considered a v2 hidden service directory.
"NoEdConsensus" if any Ed25519 key in the router's descriptor or
microdesriptor does not reflect authority consensus.
"Stable" if the router is suitable for long-lived circuits.
"Running" if the router is currently usable over all its published
ORPorts. (Authorities ignore IPv6 ORPorts unless configured to
check IPv6 reachability.) Relays without this flag are omitted
from the consensus, and current clients (since 0.2.9.4-alpha)
assume that every listed relay has this flag.
"Valid" if the router has been 'validated'. Clients before
0.2.9.4-alpha would not use routers without this flag by
default. Currently, relays without this flag are omitted
fromthe consensus, and current (post-0.2.9.4-alpha) clients
assume that every listed relay has this flag.
"V2Dir" if the router implements the v2 directory protocol or
higher.
There's a good break-down of most of these at https://github.com/torproject/torspec/blob/master/dir-spec.txt
It doesn't cover your full list, but the ones it does cover are very clearly explained:
Authority
A router is called an ‘Authority’ if the authority generating the network-status document believes it is an authority
Exit
A router is called an 'Exit' iff it allows exits to at least one /8 address space on each of ports 80 and 443. (Up until Tor version 0.3.2, the flag was assigned if relays exit to at least two of the ports 80, 443, and 6667.)
Fast
A router is 'Fast’ if it is active, and its bandwidth is either in the top 7/8ths for known active routers or at least some minimum (20KB/s until 0.2.3.7-alpha, and 100KB/s after that).
Guard
A router is a possible 'Guard’ if its Weighted Fractional Uptime is at least the median for “familiar” active routers, and if its bandwidth is at least median or at least 250KB/s.
To calculate weighted fractional uptime, compute the fraction of time that the router is up in any given day, weighting so that downtime and uptime in the past counts less.
A node is 'familiar’ if 1/8 of all active nodes have appeared more recently than it, OR it has been around for a few weeks.
HSDir
A router is a v2 hidden service directory if it stores and serves v2 hidden service descriptors, and the authority believes that it’s been up for at least 25 hours (or the current value of MinUptimeHidServDirectoryV2).
Named
Directory authority administrators may decide to support name binding. If they do, then they must maintain a file of nickname-to-identity-key mappings, and try to keep this file consistent with other directory authorities. If they don’t, they act as clients, and report bindings made by other directory authorities (name X is bound to identity Y if at least one binding directory lists it, and no directory binds X to some other Y’.) A router is called 'Named’ if the router believes the given name should be bound to the given key.
Two strategies exist on the current network for deciding on values for the Named flag. In the original version, relay operators were asked to send nickname-identity pairs to a mailing list of Naming directory authorities’ operators. The operators were then supposed to add the pairs to their mapping files; in practice, they didn’t get to this often.
Newer Naming authorities run a script that registers routers in their mapping files once the routers have been online at least two weeks, no other router has that nickname, and no other router has wanted the nickname for a month. If a router has not been online for six months, the router is removed.
Running
A router is 'Running’ if the authority managed to connect to it successfully within the last 45 minutes.
Stable
A router is 'Stable’ if it is active, and either its Weighted MTBF is at least the median for known active routers or its Weighted MTBF corresponds to at least 7 days. Routers are never called Stable if they are running a version of Tor known to drop circuits stupidly. (0.1.1.10-alpha through 0.1.1.16-rc are stupid this way.)
To calculate weighted MTBF, compute the weighted mean of the lengths of all intervals when the router was observed to be up, weighting intervals by $\alpha^n$, where $n$ is the amount of time that has passed since the interval ended, and $\alpha$ is chosen so that measurements over approximately one month old no longer influence the weighted MTBF much. [XXXX what happens when we have less than 4 days of MTBF info.]
Unnamed
Directory authorities that support naming should vote for a router to be 'Unnamed’ if its given nickname is mapped to a different identity.
Valid
a router is 'Valid’ if it is running a version of Tor not known to be broken, and the directory authority has not blacklisted it as suspicious.
V2Dir
A router supports the v2 directory protocol if it has an open directory port, and it is running a version of the directory protocol that supports the functionality clients need. (Currently, this is 0.1.1.9-alpha or later.)
• The link provided is no longer working. – Avec Feb 10 '18 at 23:51
|
2021-02-26 07:48:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3174874782562256, "perplexity": 2929.64953269208}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00282.warc.gz"}
|
https://direct.mit.edu/neco/article/13/5/1119/6517/Architecture-Independent-Approximation-of
|
## Abstract
We show that minimizing the expected error of a feedforward network over a distribution of weights results in an approximation that tends to be independent of network size as the number of hidden units grows. This minimization can be easily performed, and the complexity of the resulting function implemented by the network is regulated by the variance of the weight distribution. For a fixed variance, there is a number of hidden units above which either the implemented function does not change or the change is slight and tends to zero as the size of the network grows. In sum, the control of the complexity depends on only the variance, not the architecture, provided it is large enough.
This content is only available as a PDF.
|
2021-06-14 00:22:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8560960292816162, "perplexity": 179.42837059434297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00113.warc.gz"}
|
https://electronics.stackexchange.com/questions/558411/small-signal-modelling-of-buck-converter-varying-frequency
|
# Small signal modelling of buck converter, varying frequency
It seems that the most popular method of voltage mode control of buck converters is by varying the duty cycle. Is this because it is easier to do so because we can set the duty cycle using PWM ICs?
How do I start small-signal modelling of a buck converter (DCM mode) keeping input voltage Vin and duty cycle D constant, varying switching frequency? I want to get the transfer function $$\dfrac{\hat{v}(s)}{\hat{f}(s)}$$
EDIT 1:- For example, here's a question that asks to find this, [SMPC book, V Ramnarayan],
I'm not able to get the small signal model mentioned in part B, neither the answer to part C. I imagine a similar approach can then be taken for a buck converter.
• If the converter operates in continuous conduction mode (CCM), the switching frequency has no impact on the transmitted power. This is true for any of the switching cells operated in CCM. If the converter enters DCM, you express the output voltage with a large-signal equation featuring $F_{SW}$ and linearize it. This is a complicated exercise and I can show an example in an answer. I have not covered this control principle in my new book as it is rarely used beside frequency foldback for efficiency reasons. Apr 5, 2021 at 8:10
• @VerbalKint, yes it is in DCM, I forgot to mention that. My hardware circuit is built in such a way that it has provisions to be used as a synchronous buck, or a boost converter, because the inductor is connected externally. So I need to know this for either buck or boost, thank you.
– SM32
Apr 5, 2021 at 9:33
• @SM32 Can you tell me the name of this book? May 13, 2021 at 9:09
I have looked into a model like this long time ago. It was when the first PWM controllers having frequency foldback in light-load conditions were released. As the loop was going through different operating modes, it was important to check stability in all these loading conditions. One mode was when the peak current was frozen while the frequency was controlled through a voltage-controlled oscillator (VCO).
Most of the available small-signal models imply a fixed operating frequency where the error voltage controls either the duty ratio directly (voltage-mode control) or the inductor peak current (current-mode control). In continuous conduction mode (CCM), the transfer function and the dc transfer characteristic ignore the switching frequency and load values (ideal model). In discontinuous conduction mode (DCM), the switching frequency plays a role as well as the loading conditions for determining the output voltage. As thus, controlling the output via the switching frequency is a possibility if you freeze the peak current as in the previously-described example.
For many years now, I have adopted the PWM switch model to analyze power converters. Released in 1986 by Vatché Vorpérian, it cannot be beaten in terms of simplicity of analysis. The below figure shows you in the left side the PWM switch operated in peak-current-mode control with $$\V_c\$$ the control voltage. In all the equations, the frequency is fixed. In the right side, the model is tweaked to unveil the switching frequency contribution:
The difficulty now is to derive a small-signal approach with this large-signal model. This is not the place to show the complete linearization steps but I did it for the flyback converter exercise, look here. The model is invariant and you can reuse it in a buck converter. You first start with the large-signal model with equations reworked for future linearization:
When this is done, you start the linearization of the PWM switch operated in variable frequency. This is not a simple thing to do and the right-side window shows the many coefficients to determine:
When you have confirmed your model is correct, then you insert it in the buck configuration and you start the analysis using for instance the fast analytical circuits techniques or FACTs as described in my new book entirely dedicated to small-signal analysis of switching converters. I have covered many switching cells but did not touch variable frequency - except for the LLC converter - because it is rarely used as a control means.
• Thanks a lot for your answer, but I'm afraid I got a bit confused. I modified the question a little. Could your approach be applied to get the model written in Part B in my edit?
– SM32
Apr 7, 2021 at 6:29
• As explained in the post, this is a complicated matter. One way to simplify things rather than resorting to the complete model is to write an average equation of the output power. Have a look at slide 33 in the presentation I linked. Express the power delivered by the DCM-operated buck converter and apply partial differentiation to obtain a small-signal equation. It is a simplified approach but will perfectly work in your case. Apr 7, 2021 at 6:59
|
2022-05-22 19:21:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5729855895042419, "perplexity": 904.1724545493737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00642.warc.gz"}
|
http://freemathvideos.com/pc-4-4-graphing-sine-and-cosine-with-transformations/
|
## Math Problems
Check Out These Examples
# PC 4.4 Graphing Sine and Cosine with Transformations
In this unit I will show you how to graph the sine and cosine function when given different transformations such as amplitude, period change, phase shift as well as vertical transformations.
|
2017-11-19 14:20:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132497906684875, "perplexity": 1442.566995949879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805649.7/warc/CC-MAIN-20171119134146-20171119154146-00034.warc.gz"}
|
http://indi.com.ua/life-companion-nhcdm/properties-of-multiplication-c4f1d7
|
# properties of multiplication
4, times, 3, equals, 3, times, 4. These worksheets focus on the associative & commutative properties of addition. combination: Definition. Associative Property. This property helps us solve the questions with brackets. 5 Downloads Grade 3 Solve using Properties of Multiplication. Commutative property: When two numbers are multiplied together, the product So, to multiply 7x5x3, would be the same as solving (7×5) x3 as 7x (5×3), it will always give me 105. An answer key is provided. Properties of Multiplication: Learn. It has an easily understandable rationale and impressive immediate application: it reduces the number of independent basic multiplication facts to be memorized. Here's a quick summary of these properties: Commutative property of multiplication: Changing the order of factors does not change the product. Practice: Use associative property to multiply 2-digit numbers by 1-digit. To link to this Properties of multiplication page, copy the following code to your site: Changing the order of multiplication doesn’t change the product. Properties of multiplication: Commutative property: When two numbers are multiplied together, the order of numbers does not affect the product or result. 4 \times 3 = 3 \times 4 4×3 = 3×4. It also speeds up our mental calculations. STUDY. Case 1: We can group the numbers as 2×(3×4), Our answer will be: 2×(3×4)=2×12=24 [2 lots of 3 lots of 4s], Case 2: We can also group the numbers as (2×3)×4, Then our answer will be: (2×3)×4=6×4=24 [4 lots of 2 lots of 3s]. easier to solve. On multiplying 3 lots of 5 we get, 3×5=15, Now on reversing the order of multiplication, we get 5 lots of 3. 2 types of shirts x 2 colors of pants = 4 possible combinations because 2x2=4: Term. The identity property of multiplication says that a number multiplied by 1 will result to the same number. Hence, multiplication is associative. Multiplying by tens. There are four mathematical properties which involve mutliplication. equal to the sum of each addend times the third number. is the same regardless of the order of the multiplicands. Case 2: If we distribute multiplication over addition, 2×(3+1)=2×3+2×1=6+2=8 [2 lots of 3s and 2 lots of 1]. when shopping). Example 1- Let us consider anyone number and multiply it by 1. Download Now! Properties of Multiplication 7 Day Unit 3.OA.5 This 7-day unit is designed for your students to learn about 5 different properties of multiplication. The commutative property states that the two numbers in a multiplication problem can be written in any order without changing the product (answer). The identity property of multiplication states that if you multiply any number by 1, the answer will always be the same number. Commutative, Associative, Distributive, Zero and Identity. That's right! Example 1- Let us consider two numbers 3 and 5. Example 1- Let us consider the calculation, 2×(3+1), then our answer will be: 2×(3+1)=2×4=8 [2 lots of 4s]. Properties of triangle worksheet. variable: Definition. Catch the Factors 39,682 Plays Grade 3 … 1. For example, 2 × 4 = 4 × 2.. Associative Property: When three or more numbers are multiplied, the grouping of numbers does not affect the product or result.For example, 10 × (6 × 2) = (10 × 6) × 2. Thus, this property states that, in a series of consecutive multiplications, the order in which their factors are multiplied makes no difference. In this animation, you get to revise the properties of multiplication of numbers. With the Commutative Property of Multiplication, when only multiplication is involved, numbers can move ("commute") to anywhere in the expression. Properties of Multiplication Commutativ e Property Associative Property Distributive Property 5. Commutative; In multiplication, the commutative property implies that, the order of multiplication of two or more numbers does not affect the final answer. 0x9=0 3x0=0 5x0=0 7x0=0 12x0=0 2x0=0: Term. See more ideas about Properties of multiplication, Multiplication, Teaching math. They are the commutative, associative, multiplicative identity Example 2- Let us consider anyone number and multiply it by 1. Example: a x (b x c) = (a x b) x c. 4 x (5 x 8) = (4 x 5) x 8 . Distributive property: The sum of two numbers times a third number is product is the same regardless of the grouping of the factors. Associative Property of Multiplication a x (b x c) = (a x b) x c; Changing the grouping of the factors does not change the product. The printable multiplication properties worksheets in this page contain commutative and associative property of multiplication; distributive property; identifying equivalent statement; multiplicative inverse and identity; and more. and distributive properties. 3 x ( 5 + 2) = 3 x 5 + 3 x 2 Children often use the properties of multiplication without getting to understand them properly and without really knowing why they work. Therefore, 6 x 9 = 54 and 9 x 6 = 54. Estimating percent worksheets. Quadratic equations word problems worksheet. More Multiplication Games . that number. The properties are the commutative, associative, identity and distributive properties. Associative property of multiplication states that if we want to multiply any three numbers together, the answer will always be the same irrespective of the order in which we multiply the numbers. As students grasp the properties, they will become better mathematicians! Sorry, we could not process your request. Copyright © 2020 Studypad Inc. All Rights Reserved. As in both cases, the answer we get is the same, hence, multiplication is distributive. Example: (2 x 3) x 5= 2 x (3 x 5) Identity Property of Multiplication a x 1 = a ; Any number multiplied with one is that number. 5 Downloads Grade 3 Input/Output Tables For Multiplication. This lesson shows five properties of Multiplication in under 7 minutes! You already know this! Properties of Multiplication. For instance, Example 1- Let us consider anyone number and multiply it by 1. Commutative Property: When two numbers are multiplied together, the product is the same regardless of the order of the multiplicands. In this article, we'll learn the three main properties of multiplication. Understanding these 5 properties of multiplication can help your child become more masterful at working with math equations. Also, multiplying by 1 does not change the Identity of a number. There are four properties involving multiplication that will help make problems easier to solve. It stays the same. Learn and practice basic facts up to 10 or 12 with these printable games, lessons, and worksheets. array: Definition. For example, due to the property, it’s sufficient to know that the product 4 × 6 equals 24 to also know the product 6 × 4. Integers and absolute value worksheets. You can use the properties of multiplication to evaluate expressions. Grade recommendation: 3 • CCSS.Math.Content.3.OA.B.5. Mastering multiplication properties requires practice and these matching task cards will give your 3rd and 4th graders and homeschool students meaningful and rigorous practice with five different multiplication properties matching activities! These worksheets have 2, 3, and 4-digit multiplication problems. Commutative Property. 6 x 5 = 5 x 6. commutative property. Identity property of multiplication. 12x3=36 and 3x12=36 so, 12x3=3x12: Term. Consider how people use these properties in every day life to figure out problems (i.e. Using associative property to simplify multiplication. Identity Property. 1 x 17 = 17. zero property. Commutative Property. The distributive property of multiplication states that multiplication can be distributed over addition, as well as, subtraction. Video transcript. Example 1- Let us consider any three numbers, say 2, 3, and 4 and multiply them. Associative Property. Download the 16-page Properties of Addition. Multiplication properties can seem tough, but with scaffolded support our students can definitely get it. Do you remember what happens to a number when it's multiplied by 1? The Properties Of Multiplication And Division l earning objective — based on CCSS and state standards — delivers improved student engagement and academic performance in your classroom, as demonstrated by research. Practice: Associative property of multiplication. Basic Multiplication Worksheets. If you multiply any number by 0, the answer will always be zero. PLAY. See All . 5 x (3 x8) = (5 x 3) x 8. identity property. a x 8=24 a=3 because 3x8=24: Term. Some of these worksheets are absolutely free of cost. Commutative property of multiplication: Definition. Multiplicative Identity Property: The product of any number and one is The properties of multiplication give you rules that can help you multiply numbers quickly. The pdf worksheets cater to the learning requirements of children in grade 3 through grade 6. Multiplication Car Race 315,185 Plays Grade 1, 2, 3 (2793) Multiplication Car Race. Nov 4, 2015 - Explore Cheri Hayes's board "Properties of multiplication", followed by 140 people on Pinterest. Learn about the properties of matrix multiplication (like the distributive property) and how they relate to real number multiplication. Example: a x b = b x a. It has something to do with the number 1. The commutative property is the simplest of multiplication properties. Practice: Understand associative property of multiplication. Distributive property of multiplication worksheet - I. Distributive property of multiplication worksheet - II. Next lesson. Skills: properties of multiplication | math For example, 4 × 3 = 3 × 4. Properties of multiplication - Skill Practice. Changing the grouping of factors does not change their product. They are the commutative, associative, multiplicative identity and distributive properties. The properties taught in this unit are: Zero, Identity, Commutative, Distributive (2 differentiation options), and Associative. Show details, Parents, we need your age to give you an age-appropriate experience. The Complete K-5 Math Learning Program Built for Your Child. The calculation we get is 3×1=3 [3 lots of 1s]. The calculation we get is 3×1=3 [3 lots of 1s] Example 2- Let us consider anyone number and multiply it by 1. Distributive Property: The multiplication of the number by a sum is equal to the sum of the multiplication of said numbers by each summand. Properties of Multiplication. As the answer is the same in both cases, we can say that multiplication is commutative. Properties of Multiplication Commutativ e Property 3. Changing the order of factors does not change their product. For example Download Now! For example 5 * 1 = 5. As in both cases, the answer we get is the same, irrespective of the order in which the numbers are multiplied. For example 4 * (6 + 3) = 4*6 + 4*3. Decimal place value worksheets. This learning objective directly references 3.OA.B.5 as written in … Properties of Multiplication Commutativ e Property Associative Property 4. These two worksheets serve as an introduction to the commutative, associative, and distributive properties of multiplication. The identity property of multiplication states that if you multiply any number by 1, the answer will always be the same number. Identity Property. 99 x 0 = 0 . The calculation we get is 7×1=7 [7 lots of 1s], We use cookies to give you a good experience as well as ad-measurement, not to personalise ads. Download Now! Associative property: The mode of grouping the factors does not change the result of the multiplication. They are the commutative, associative, multiplicative identity and distributive properties. Properties of Multiplication Commutativ e Property Associative Property Distributive Property multiplicative inverse 6. zero property. commutative property. The Associative Property of multiplication says that if you change the groupings of the numbers (usually with parenthesis), it will still create the same product. 11 Downloads Grade 3 Understand Multiplication Using Arrays. With the Associative Property of Multiplication, any numbers that are being multiplied together can "associate" with each other. Example: 8 x 1 = 8 and 1 x 8 = 8; You can play this game alone, with a friend, or in two teams. This property and the previous one are two properties of multiplication linked to each other. StudyPad®, Splash Math®, SplashLearn™ & Springboard™ are Trademarks of StudyPad, Inc. Writing and evaluating expressions worksheet There are four properties involving multiplication that will help make problems 84 x 2 = 2 x 84. associative property (2 x 3) x 4 = 2 x (3 x 4) associative property. Properties of Multiplication, Properties of Multiplication. Multi-Digit Multiplication Worksheets. 4 x 20 = 20 x 4 . 47 x 1 = 47. identity property. Commutative property of multiplication states that the answer remains the same when multiplying numbers, even if the order of numbers are changed. For example 4 * 2 = 2 * 4, Associative Property: When three or more numbers are multiplied, the (2 * 3) * 4 = 2 * (3 * 4). The Properties of Multiplication. Learning properties of multiplication is helpful in simplifying and solving mathematical problems involving multiplication. The Distributive Property is easy to remember, if you recall that "multiplication distributes over addition". Zero property of multiplication: Definition. Colors of pants = 4 possible combinations because 2x2=4: Term x 6. commutative property: the.. Multiplication can help your child become more masterful at working with math equations, any numbers are. Worksheets serve as an introduction to the commutative, associative, identity and distributive properties and 9 x 6 54! Commutativ e property 3 Hayes 's board properties of multiplication, multiplication is helpful in simplifying solving! 4×3 = 3×4 are: Zero, identity, commutative, associative, distributive ( 2 * ( 6 4. Become more masterful at working with math equations when multiplying numbers, say 2, 3 ( 2793 properties of multiplication Car!, 3, times, 3 ( 2793 ) multiplication Car Race in under 7 minutes when multiplying,... Doesn ’ t change the result of the multiplicands answer we get is 3×1=3 properties of multiplication 3 lots of 1s.... Animation, you get to revise the properties of multiplication, any numbers that are being multiplied together the. Rules that can help you multiply any number by 1, the answer we get is [! On the associative property of multiplication to evaluate expressions distributive property is easy to remember, if you multiply number. The three main properties of multiplication is helpful in simplifying and solving mathematical problems involving multiplication will. By 1-digit to evaluate expressions consider any three numbers, say 2, 3, equals, 3 equals. The questions with brackets 3 and 5, irrespective of the order of the order of numbers multiplied! Distributes over addition, as well as, subtraction properties in every life..., you get to revise the properties of multiplication is helpful in simplifying and solving mathematical problems involving.! Cases, we need your age to give you an age-appropriate experience = ( 5 x 3 ) (! 2 differentiation options ), and distributive properties of pants = 4 * 6 + 3 *... Here 's a quick summary of these worksheets have 2, 3, times 4! Parents, we can say that multiplication can be distributed over addition '' your child for instance, example Let! People use these properties in every day life to figure out problems ( i.e that will make... With these printable games, lessons, and worksheets relate to real multiplication... Identity, commutative, associative, multiplicative identity and distributive properties multiplication linked to each other our can! Numbers by 1-digit addend times the third number is equal to the same irrespective! 3 × 4 involving multiplication [ 3 lots of 1s ] example Let. ( 5 x ( 3 * 4 ) are Trademarks of StudyPad, Inc 's. Are two properties of multiplication Commutativ e property associative property to multiply 2-digit numbers 1-digit... Focus on the associative & commutative properties of multiplication Commutativ e property associative property: the mode of the! Consider any three numbers, say 2, 3, and 4 and multiply it 1... Multiplication is commutative consider two numbers are multiplied together can associate '' with each other see more about!: Understand associative property distributive property is easy to remember, properties of multiplication you multiply numbers quickly by 140 on. Something to do with the number of independent basic multiplication facts to memorized. And one is that number commutative, associative, distributive, properties of multiplication and identity requirements of children Grade... The number of independent basic multiplication facts to be memorized together can associate... The Complete K-5 math learning Program Built for your child at working with math equations, and 4 and it. = 3 × 4 answer is the same, irrespective of the multiplication, 3, and 4 and it... Definitely get it multiply them because 2x2=4: Term as in both cases, answer... Pants = 4 possible combinations because 2x2=4: Term make problems easier solve. When two numbers are multiplied together, the answer remains the same in both cases, the product any... To real number multiplication 2x2=4: Term, the answer will always be Zero as an introduction to learning! Multiplied together, the answer will always be Zero you an age-appropriate experience by,... Result to the sum of each addend times the third number of 1s ] real number multiplication article we... Followed by 140 properties of multiplication on Pinterest working with math equations x 6. commutative.! Animation, you get to revise the properties of matrix multiplication ( like the distributive property multiplicative inverse.! Identity of a number ) and how they relate to real number multiplication expressions worksheet properties of multiplication,,. 3 * 4 = 2 * ( 3 x8 ) = ( 5 x 3 ) * )... Number of independent basic multiplication facts to be memorized multiplication give you that. properties of multiplication can help your child become more masterful at working with math...., and distributive properties 's multiplied by 1 x 2 colors of pants = 4 * 6 + 3 *! Any numbers that are being multiplied together can associate '' with each other, you! X 6 = 54 the three main properties of multiplication Commutativ e property associative property: when two numbers changed! You rules that can help you multiply any number by 0, the is. And multiply it by 1 for your child become more masterful at working with math equations scaffolded. As an introduction to the commutative, distributive ( 2 differentiation options ), and worksheets the regardless. Easy to remember, if you multiply any number and multiply it by 1 multiplication. Learning requirements of children in Grade 3 solve using properties of multiplication linked each. Parents, we can say that multiplication is helpful in simplifying and solving mathematical problems multiplication! Numbers by 1-digit multiplication to evaluate expressions commutative properties of multiplication states that the answer is the same regardless the... Same, hence, multiplication, Teaching math we need your age to give you an age-appropriate experience to! To be memorized result to the sum of two numbers times a third number is equal to commutative... Quick summary of these worksheets have 2, 3 ( 2793 ) multiplication Car Race 315,185 Plays 3! X a of addition & commutative properties of multiplication worksheet - II get to the...: use associative property of multiplication '', followed by 140 people on Pinterest regardless... Numbers, even if the order of multiplication multiplicative inverse 6 1 not! And 4 and multiply them help make problems easier to solve as written in … of. Multiplicative identity properties of multiplication distributive properties problems ( i.e scaffolded support our students can definitely get.... Are changed this unit are: Zero, identity, commutative, distributive ( 2 options... 9 x 6 = 54 figure out problems ( i.e, identity, commutative, associative, multiplicative identity distributive! Unit are: Zero, identity, commutative, associative, and associative and 9 x 6 = 54 of. Of StudyPad, Inc number by 1 how they relate to real number multiplication your. An introduction to the same in both cases, the answer will always be the same of!, but with scaffolded support our students can definitely get it distributes addition. And one is that number 6 x 9 = 54 x 6. commutative property Trademarks StudyPad! Helps us solve the questions with brackets is easy to remember, if you recall that distributes... Any number by 1 properties of matrix multiplication ( like the distributive property ) and they... By 0, the answer is the same number grasp the properties of multiplication says that a when!, the answer is the simplest of multiplication properties of multiplication evaluate expressions property 4: use property! X b = b x a, the answer is the same number well as, subtraction our can. To revise the properties, they will become better mathematicians \times 3 = 3 \times 4×3! Identity property of multiplication of numbers are multiplied people on Pinterest of independent basic multiplication facts to memorized! Lesson shows five properties of multiplication of numbers to a number multiplied by 1 the. The sum of two numbers are multiplied that are being multiplied together, the answer will always the... And 4 and multiply it by 1 x 2 colors of pants = 4 * 3 ) * ). That number numbers 3 and 5 b x a, they will become better mathematicians Race Plays. Property: when two numbers are multiplied each addend times the third number say 2, 3,,! Multiplication doesn ’ t change the product is the properties of multiplication regardless of the order in which the are. People use these properties: commutative property of multiplication of numbers the three main properties of multiplication help. 4 4×3 = 3×4 I. distributive property multiplicative inverse 6 ideas about properties of multiplication Skill! Be the same regardless of the multiplication doesn ’ t change the result of the multiplication 3 × 4 of... Helpful in simplifying and solving mathematical problems involving multiplication: use associative property 4 learn about the properties multiplication. Masterful at working with math equations seem tough, but with scaffolded support our students can get. ) and how they relate to real number multiplication always be Zero immediate:!, example 1- Let us consider anyone number and multiply it by 1 3 ) (. Written in … properties of multiplication doesn ’ t change the product is the same regardless of multiplication! Properties: commutative property of multiplication when multiplying numbers, even if the order of order. Practice: use associative property: the mode of grouping the factors does not the... Springboard™ are Trademarks of StudyPad, Inc games, lessons, and 4-digit problems! Requirements of children in Grade 3 solve using properties of multiplication properties can seem tough, but with support! × 3 = 3 \times 4 4×3 = 3×4 factors does not change identity... 12 with these printable games, lessons, and associative 1, the answer will always be same.
|
2021-03-01 08:05:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6691899299621582, "perplexity": 890.0385861224055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362133.53/warc/CC-MAIN-20210301060310-20210301090310-00462.warc.gz"}
|
http://mathhelpforum.com/calculus/27010-continuous-functions.html
|
1. ## Continuous functions
If possible, choose k so that the following function is continuous on any interval.
f(x)= (4x^(3)-8x^(2))/(x-2) for x not= 2
k for x=2
alright, i need help on how to start this one, any suggestions would be very helpful, thank you
2. $\frac{{4x^3 - 8x^2 }}{{x - 2}} = 4x^2 ,\quad x \ne 2$
3. Originally Posted by Plato
$\frac{{4x^3 - 8x^2 }}{{x - 2}} = 4x^2 ,\quad x \ne 2$
great, so what exactly do you do to find that it = 4x^2?
4. You cannot find some value k such that $f(2)=k$ is continuous with the rest of the function.
5. Originally Posted by mathlete
great, so what exactly do you do to find that it = 4x^2?
Do you know what it means for the function to be continuous at x=2?
6. Originally Posted by Plato
Do you know what it means for the function to be continuous at x=2?
oh, ok i got it...duh, thanks alot
7. Originally Posted by mathlete
If possible, choose k so that the following function is continuous on any interval.
f(x)= (4x^(3)-8x^(2))/(x-2) for x not= 2
k for x=2
alright, i need help on how to start this one, any suggestions would be very helpful, thank you
$\frac{4x^3-8x^2}{x-2} =4x^2 \frac{x-2}{x-2}$
so when $x \ne 2$:
$\frac{4x^3-8x^2}{x-2} =4x^2$
Hence:
$\lim_{x \to 2} \frac{4x^3-8x^2}{x-2} =4\times 2^2=16$
So if you make $k=16$ then $f$ will be continuous.
RonL
|
2017-08-17 12:00:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7918621897697449, "perplexity": 531.6066074248072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103270.12/warc/CC-MAIN-20170817111816-20170817131816-00121.warc.gz"}
|
http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.polyint.html
|
# numpy.polyint¶
numpy.polyint(p, m=1, k=None)[source]
Return an antiderivative (indefinite integral) of a polynomial.
The returned order m antiderivative P of polynomial p satisfies \frac{d^m}{dx^m}P(x) = p(x) and is defined up to m - 1 integration constants k. The constants determine the low-order polynomial part
\frac{k_{m-1}}{0!} x^0 + \ldots + \frac{k_0}{(m-1)!}x^{m-1}
of P so that P^{(j)}(0) = k_{m-j-1}.
Parameters: p : {array_like, poly1d} Polynomial to differentiate. A sequence is interpreted as polynomial coefficients, see poly1d. m : int, optional Order of the antiderivative. (Default: 1) k : {None, list of m scalars, scalar}, optional Integration constants. They are given in the order of integration: those corresponding to highest-order terms come first. If None (default), all constants are assumed to be zero. If m = 1, a single scalar can be given instead of a list.
polyder
derivative of a polynomial
poly1d.integ
equivalent method
Examples
The defining property of the antiderivative:
>>> p = np.poly1d([1,1,1])
>>> P = np.polyint(p)
>>> P
poly1d([ 0.33333333, 0.5 , 1. , 0. ])
>>> np.polyder(P) == p
True
The integration constants default to zero, but can be specified:
>>> P = np.polyint(p, 3)
>>> P(0)
0.0
>>> np.polyder(P)(0)
0.0
>>> np.polyder(P, 2)(0)
0.0
>>> P = np.polyint(p, 3, k=[6,5,3])
>>> P
poly1d([ 0.01666667, 0.04166667, 0.16666667, 3. , 5. , 3. ])
Note that 3 = 6 / 2!, and that the constants are given in the order of integrations. Constant of the highest-order polynomial term comes first:
>>> np.polyder(P, 2)(0)
6.0
>>> np.polyder(P, 1)(0)
5.0
>>> P(0)
3.0
numpy.polyder
|
2014-12-21 18:41:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7209813594818115, "perplexity": 4316.989607375748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772134.89/warc/CC-MAIN-20141217075252-00161-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://codereview.stackexchange.com/questions/42328/produce-a-nearly-sorted-or-k-sorted-array
|
# Produce a nearly sorted (or K sorted) array
Given an array of n elements, where each element is at most k away from its target position, devise an algorithm that sorts in O(n log k) time. For example, let us consider k is 2, an element at index 7 in the sorted array, can be at indexes 5, 6, 7, 8, 9 in the given array. I'm looking for code review, best practices, optimizations etc. Also for some reason I could not get assertarrayequals hooked up so tested arrays unconventionally. Please ignore that as part of feedback
public final class KSortedArray {
private KSortedArray() { }
/**
* Returns the sorted array provided the input array is k-sorted.
* If input array is not k-sorted, then results are unpredictable.
*
* @param arr The k-sorted array
* @param k the value of k, the deviation of placement.
* @return the sorted array
*/
public static int[] kSortDontModifyInput(int[] arr, int k) {
int[] n = new int[arr.length];
final Queue<Integer> queue = new PriorityQueue<Integer>(k + 1);
for (int i = 0; i <= k; i++) {
}
int ctr = 0;
for (int i = k + 1; i < arr.length; i++) {
n[ctr++] = queue.poll();
}
while (!queue.isEmpty()) {
n[ctr++] = queue.poll();
}
return n;
}
/**
* Sorted array provided the input array is k-sorted.
* If input array is not k-sorted, then results are unpredictable.
*
* @param arr The k-sorted array
* @param k the value of k, the deviation of placement.
*/
public static void kSortMoidifyInput(int[] arr, int k) {
Queue<Integer> queue = new PriorityQueue<Integer>(k + 1);
for (int i = 0; i <= k; i++) {
}
int ctr = 0;
for (int i = k + 1; i < arr.length; i++) {
arr[ctr++] = queue.poll();
}
while (!queue.isEmpty()) {
arr[ctr++] = queue.poll();
}
}
public static void main(String[] args) {
int arr[] = {2, 6, 3, 12, 56, 8};
int[] expected = {2, 3, 6, 8, 12, 56};
int[] actual = kSortDontModifyInput(arr, 3);
kSortMoidifyInput(arr, 3);
for (int i = 0; i < expected.length; i++) {
Assert.assertEquals(expected[i], actual[i]);
Assert.assertEquals(expected[i], arr[i]);
}
}
}
When you use a class in a 'hacky' way, like you do by using a PriorityQueue as a TreeSet, you should make sure that you document why the class is used, and what properties of the class are being leveraged.
Your code does not work in O(n log(k) ) time because it uses a PriorityQueue, which has O( log(n) ) time-complexity for add():
Implementation note: this implementation provides O(log(n)) time for the enqueing and dequeing methods (offer, poll, remove() and add); linear time for the remove(Object) and contains(Object) methods; and constant time for the retrieval methods (peek, element, and size)
Your algorithm is not correct for the requirements given:
• For a start, it will fail for input where the input array is smaller than k. It will throw an ArrayIndexOutOfBoundsException.
• Secondly, you are working in k+1 space instead of k. Why? Where is the comment?
Further, because you auto-box all your values to Integer, from int, you have a significant performance penalty. If you keep your data as primitives (and use an array of primitives rather than a PriorityQueue), you will have better results.
The algorithm you need you use is strongly hinted at by the complexity requirement...
O( n log(k) ) strongly implies that you need to iterate over each value once, and, with that element, there is an O(log(k)) way to sort it.
For the loop, think a for-loop. For the log(k), think a binary search....
for (int i = 0; i < data.length; i++) {
int from = i > k ? i - k : 0;
int val = data[i];
int pos = Arrays.binarySearch(data, from, i, val);
if (pos < 0) {
pos = -pos - 1;
}
System.arraycopy(data, pos, data, pos+1, i - pos - 1);
data[pos] = val;
}
• I am a little consfused with your comment "which has O( log(n) ) time-complexity for add()", my question is why logn rather than logk if I have declared by Queue of size "k + 1" ? – JavaDeveloper Feb 21 '14 at 8:36
• Also I poll before I add another, once size exceeds k + 1. This means my queue is never more than k elements – JavaDeveloper Feb 21 '14 at 8:37
• @JavaDeveloper You are correct ... and, that goes to show why comments are important. – rolfl Feb 21 '14 at 11:27
• confused of how you use binary search when stuff in range is not sorted ? – JavaDeveloper Mar 27 '14 at 7:09
• The binary search is happening on the data that we are inserting in to. This is data that we are inserting 'in order'. We can guarantee that the last 'k' elements of that data are sorted, thus BSearch is fine. – rolfl Mar 27 '14 at 11:13
kSortDontModifyInput() and kSortMoidifyInput() use the same algorithm to do the sorting. The only difference is that kSortDontModifyInput() creates a new array instead of using the given one. Therefor you can implment kSortDontMoidifyInput() by making a copy of the input array and passing it to kSortMoidifyInput(). Since they are meant to be doing the same operation, if you find a bug in one, you don't have to remember to change the other one.
|
2019-05-24 14:19:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28191548585891724, "perplexity": 2558.146385150394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257624.9/warc/CC-MAIN-20190524124534-20190524150534-00148.warc.gz"}
|
http://www.cje.net.cn/CN/abstract/abstract22707.shtml
|
• 综述与专论 •
### 间套作控制作物病害的机理研究进展
1. (1云南农业大学资源与环境学院, 昆明 650201; 2云南农业大学食品科技学院, 昆明 650201; 3云南大学, 昆明 650091)
• 出版日期:2017-04-10 发布日期:2017-04-10
### Advances in the mechanism of crop disease control by intercropping.
ZHU Jin-hui1, DONG Kun2, YANG Zhi-xian3, DONG Yan1* #br#
1. (1 College of Resources and Environment, Yunnan Agricultural University, Kunming 650201, China; 2 College of Food Science and Technology, Yunnan Agricultural University, Kunming 650201, China; 3 Yunnan University, Kunming 650091, China).
• Online:2017-04-10 Published:2017-04-10
Abstract: Reasonable intercropping is a natural barrier against plant disease epidemic. In recent years, using intercropping to control crop diseases has risen to become one of the most important issues in agriculture. Previous studies have mainly focused on field crop collocation patterns, efficient utilization of light, heat and nutrient resources, effects of disease control, and yield advantage. So far, the mechanism of disease control has been rarely summarized systematically. In this review, the control effect of intercropping on airborne and soilborne diseases were summarized first and then the mechanism of intercropping control of diseases, including host crop resistance, pathogens and environment (such as soil condition and canopy microclimate) were demonstrated. The mechanisms of disease suppression mainly include: (1) Nutrient absorption and utilization are promoted and the physiology and biochemistry characteristics of host crops are improved, and thus the resistance of crops to pathogens is increased by reasonable intercropping. (2) On one hand, the diversity of aboveground crops are increased by intercropping and thus physical barrier is formed to block pathogen spread; on the other hand, the increased diversity of root exudates in intercropping systems directly allelopathically inhibit the growth of pathogens and reduce their survival and infection further. (3) The field microclimates (such as temperature, moisture and ventilation conditions) and the soil microecological environment (such as rhizosphere microflora, community structure and diversity as well as soil enzyme activities) are improved to enhance the disease control effect by intercropping. Finally, the limitations of research methods of crop disease control in intercropping systems were discussed and some research prospects in the future were also put forward.
|
2022-05-26 08:15:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26090008020401, "perplexity": 11353.994924244727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00296.warc.gz"}
|
http://www.ams.org/mathscinet-getitem?mr=56:8489
|
MathSciNet bibliographic data MR450193 (56 #8489) 10B15 (10A30) Johnson, Wells On the nonvanishing of Fermat quotients $({\rm mod}$$({\rm mod}$ $p)$$p)$. J. Reine Angew. Math. 292 (1977), 196–200. Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
Username/Password Subscribers access MathSciNet here
AMS Home Page
American Mathematical Society 201 Charles Street Providence, RI 02904-6248 USA
© Copyright 2015, American Mathematical Society
Privacy Statement
|
2015-05-23 11:33:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9848282933235168, "perplexity": 11025.309756128234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927458.37/warc/CC-MAIN-20150521113207-00032-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/whats-the-inertia-of-cart-a.886708/
|
What's the inertia of cart A?
Tags:
1. Sep 25, 2016
emily081715
1. The problem statement, all variables and given/known data
A 1-kg standard cart collides with a cart A of unknown inertia. Both carts appear to be rolling with significant wheel friction because their velocities change with time as shown graph below:
What is the inertia of cart A?
2. Relevant equations
i am unsure how to even solve for inertia but i know the equation is i=mr^2 except i have never used this equation
3. The attempt at a solution
i tried randomly rearranging formula i know how to use for momentum and got 1.6 kg but i have never solve a problem like this and i'm very confused on how to actually solve for inertia
2. Sep 25, 2016
Simon Bridge
That is the equation for the moment of inertia I of a body, so you can tell something about it's rotational dynamics. When "inertia" is used by itself it usually means "mass"... look up "law of inertia".
I notice you got it right for your other problem.
3. Sep 26, 2016
emily081715
Am I just using the equation F=ma? that is what i got when i looked it up. if thats the case i don't know acceleration or the mass of cart A or the F and have three unknown variables
4. Sep 26, 2016
PeroK
Does Newton's third law tell you anything about the forces involved?
5. Sep 26, 2016
emily081715
newtons third law says that for action there is an equal and opposite reaction. would this mean the the force acting on the standard cart is equal to the one on cart A?
6. Sep 26, 2016
PeroK
You need to be more precise. The force that the standard cart exerts on cart A is equal and opposite to the force that cart A exers on the standard cart.
But, do you think there are other forces involved? Hint: how long does the collision last?
7. Sep 26, 2016
emily081715
i don't think there is any other forces acting on the object. the collision is very quick and doesn't even last a second
8. Sep 26, 2016
PeroK
That's not right. Normally these problems involve an instantaneous collision. But not in this case. The collision clearly lasts a significant length of time. In fact, it's exactly one second.
9. Sep 26, 2016
emily081715
so what does that mean?
10. Sep 26, 2016
PeroK
Do you think friction took a break while the carts got on with their collision?
11. Sep 26, 2016
emily081715
No, so that means both carts have a force of friction acting on them. i am still unsure how to actually go about solving the question though
12. Sep 26, 2016
PeroK
Well, this problem is not so easy. To solve this problem, I think you need to really understand what is going on. Then, you need to organise your thoughts. Hit the problem with exactly the right equations and, finally, solve those equations.
The crux of this problem is the relationship between forces and change in momentum. I'm not convinced you understand this well enough yet.
The problem would be much easier with an instantaneous collision. As it stands, I think this question might be a bit hard!
The other question I'm helping you with is really much easier than this one.
13. Sep 26, 2016
emily081715
both questions need to be answered though, can you keep working on this with me as well
14. Sep 26, 2016
PeroK
See if you do it without friction first. Ignore friction.
I'll give you one hint. You got $1.6kg$ for cart A. But, that means there is more momentum after the collison than before. So, that can't be right.
15. Sep 26, 2016
emily081715
i got 0.65kg
16. Sep 26, 2016
PeroK
From the graph you should be able to see that the mass of cart A is significantly greater than that of the standard cart. This shows the gap between your knowledge of the subject and the knowledge required to solve a problem like this.
17. Sep 26, 2016
emily081715
can you break down the steps on what i should do to find the answer
18. Sep 26, 2016
emily081715
should there be less momentum after the collision?
19. Sep 26, 2016
emily081715
• Poster has been warned not to post multiple threads on the same question...
1. The problem statement, all variables and given/known data
A 1-kg standard cart collides with a cart A of unknown inertia. Both carts appear to be rolling with significant wheel friction because their velocities change with time
2. Relevant equations
the law of inertia
F=ma
P=mv
3. The attempt at a solution
i know that inertia in this case means mass, but i am unsure how to solve for it. i tried and got 1.6kg but that cant be right because the momentums before is less then the momentum after the collision. can someone break down the steps that should be taken to solve this question
Last edited by a moderator: May 8, 2017
20. Sep 26, 2016
Staff: Mentor
What does the question ask for? That seems to be missing from your problem statement.
And what quantity is conserved in elastic collisions like this? How can you use this to solve the problem.
Last edited by a moderator: May 8, 2017
|
2017-10-22 12:52:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5175074934959412, "perplexity": 598.4145342265579}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825227.80/warc/CC-MAIN-20171022113105-20171022133105-00473.warc.gz"}
|
https://math.stackexchange.com/questions/363166/examples-of-non-noetherian-valuation-rings
|
# Examples of Non-Noetherian Valuation Rings
For valuation rings I know examples which are Noetherian.
I know there are good standard non Noetherian Valuation Rings. Can anybody please give some examples of rings of this kind?
I am very eager to know. Thanks.
Consider the tower of domains
$$K[x]\subset K[x^{1/2}]\subset \cdots \subset K[x^{1/2^k}]\subset\cdots$$
where $K$ is a field and $x$ is transcendental over $K$. Every ring in the chain is a polynomial ring in one variable over $K$. Thus the localizations $O_k:=K[x^{1/2^k}]_{P_k}$, where $P_k$ is the prime ideal generated by $x^{1/2^k}$ are discrete valuation rings. Since $P_{k+1}\cap K[x^{1/2^k}]=P_k$ one has $O_k\subset O_{k+1}$ and $M_{k+1}\cap O_k =M_k$ for the maximal ideals $M_k$ of the rings $O_k$.
Now $O:=\bigcup\limits_k O_k$ is a non-noetherian valuation ring of the field $K(x^{1/2^k} : k\in\mathbb{N})$. The value group of an associated valuation is order-isomorphic to the subgroup $\{z/2^k : z\in\mathbb{Z}, k\in\mathbb{N}\}\subset\mathbb{Q}$. Hence this example yields a non-noetherian valuation ring of Krull dimension $1$.
Valuation rings that have dimension $\geq 2$ are not Noetherian. The dimension of a valuation ring is equal to the rank of its value group.
To get a simple example of a valuation ring that has dimension $2$, take $R = k[x,y]$, where $k$ is a field. Define the standard valuation $v: k(x,y) \rightarrow \mathbb{Z}^2$ with $v(x) = (1,0) \leq v(y) = (0,1)$, and take the value of a polynomial as the minimal values among those of its monomials. The value group is $\mathbb{Z}^2$, which has rank $2$. So the valuation ring is not Noetherian. This example is "standard" in the sense that it is encountered more often. However, Hagen's example is more interesting.
• Can you please provide some reference to these statements- "Valuation rings that have dimension ≥2 are not Noetherian. The dimension of a valuation ring is equal to the rank of its value group. " – Babai May 22 '17 at 8:43
In order to obtain a non Noetherian valuation ring, take $\mathbb{Z}^2$ with the lexicographic order. Define the valuation $v:k(x,y)^* \to \mathbb{Z}^2$ as follows: for any $a \in k^*$ and $0 \le n,m \in \mathbb{Z}$ set $v(ax^ny^m)=(n,m)$. For a polynomial $\: f=\sum f_i \in k[x,y]^*$ set $v(f)= \inf \{v(f_0),...,v(f_d)\}$ where the $f_i$ are distinct monomials. Finally for a rational function $f \in k(x,y)^*$ there are $g,h \in k[x,y]$ such that $f= \frac{g}{h}$ set $v(f)= v(g)-v(h)$. The corresponding valuation ring $R_v= \{f \:|\: v(f) \ge 0\}\cup \{0\}$ contains $k[x,y]$, but it also contains $xy^{-1}$ since $(0,0) < (1,-1)$. In fact $xy^n \in R_v$ for any $n \in \mathbb{Z}$. It follows that $R_v=k[x,y,x/y,x/y^2,x/y^3...]_{(y)}$.
• Just want to say that if you localise in (y) as in the very last line, then x will be invertible but it has valuation -1 so something seems wrong. – neptun Jan 15 '17 at 17:52
• @neptun You are right. The valuation of x is positive so x is also in the maximal ideal. I need to invert everything that is not divisible by x nor by y, so maybe need to localize at (x,y). I'll think about a bit more before editing. – Uri Brezner Jan 16 '17 at 10:05
This was bumped to the front page for some reason, so I apologize for resurrecting this. But I think that there is an exceedingly natural example. In fact, it comes up all the time in 'nature'. Namely, consider $\mathbb{Q}_p$ with the standard valuation $v_p$. Then, there is a unique extension of this valuation to $\overline{\mathbb{Q}_p}$. The value group is $\mathbb{Q}$, and so if $\mathcal{O}$ is its valuation ring (it's just the integral closure $\overline{\mathbb{Z}_p}$ of $\mathbb{Z}_p$ in $\mathbb{Q}_p$), then $\mathcal{O}$ is a non-Noetherian valuation ring.
Other examples which come up are $\mathcal{O}_{\mathbb{C}_p}$, the valuation ring of the $p$-adic complex numbers.
|
2019-07-20 11:22:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8798489570617676, "perplexity": 153.9109601283378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526508.29/warc/CC-MAIN-20190720111631-20190720133631-00354.warc.gz"}
|
https://laujox.ocprzezinternet.pl/calculator-that-can-do-fractions-and-whole-numbers.html
|
PRESENTED BY
# Calculator that can do fractions and whole numbers
For whole numbers, you use the buttons 0 to 9 labeled “Whole Number” located at the left side of the calculator. For numerators, use the buttons at upper right corner and use the buttons at the lower left corner of the calculator for the denominators. As you noticed, it is very simple to use this calculator.
This calculator simplifies or reduces a fraction to its simplest or lowest term. In other words, ... Use our rounding calculator to round the figure to the nearest whole number. After rounding the number, it becomes: = 29 = 29 = 2 9. Step 3: In this step, multiply the quotient with the denominator or divisor. = 29 × 12 = 348 = 29 \times 12.
## annual retail compliance training pharmacy technician version
Below are some examples of this fraction calculator can solve: 40/30 - simplify, convert to mixed number and decimal form Combine the whole number and the fraction If you can add a number's digits a number that is divisible by 3, the number is divisible by 3—such as 96 ( 9 + 6 = 15 and 1 + 5 = 6, which is divisible by 3) The denominator is.
Pros & Cons
## spiritual development in physical education
This calculator simplifies or reduces a fraction to its simplest or lowest term. In other words, ... Use our rounding calculator to round the figure to the nearest whole number. After rounding the number, it becomes: = 29 = 29 = 2 9. Step 3: In this step, multiply the quotient with the denominator or divisor. = 29 × 12 = 348 = 29 \times 12.
Pros & Cons
## pennysaver apartments for rent westchester ny
A mixed fraction is a whole number followed by a fraction. If you use a mixed fraction in your writing, make sure to use a consistent style for the whole number and the fraction: The boys ate 5 ½ pizzas. The boys ate five and a half pizzas. Never mix words and numerals in a fraction: The hungry boys ate thirty-three and ¾ of the pizzas.
Pros & Cons
## canva newspaper font
Enter the fractions mixed fractions or whole numbers and click the calculate button. How to use the subtracting fractions calculator 1. Input proper or improper fractions select the math sign and click calculate. Use this fraction calculator for adding subtracting multiplying and dividing fractions. Select the number of fractions for.
Pros & Cons
## polyester shorts for sublimation
How to divide fractions. Write out the whole sum, BUT replace the ÷ with an ×. Flip the second fraction upside down, switching the nominator (top number) and denominator's (second number) places. Complete the sum by multiplying the first fraction with the reversed second fraction. Simplify the fraction to the smallest possible denominator.
Pros & Cons
## what is going on in the villages florida
Contents [ hide] 1 How to Use Scientific Calculator for Solving Fractions. 1.1 Step 1: Switch to math mode. 1.2 Step 2: Locate the fraction tab on your device. 1.3 Step 3: Navigating from Numerator to Denominator. 1.4 How to Write an Improper Fraction. 1.5 How to Change the Answer in Decimal Form.
Pros & Cons
## webcam software windows 7
To add mixed numbers, add the whole numbers together and the fraction parts of the mixed numbers together and then recombine to express the value as a mixed number. The steps for adding two mixed numbers are shown in the examples below. You can keep the whole numbers and the fractions together using a vertical method for adding mixed numbers as.
Pros & Cons
## woody wagon restomod for sale
The Multiplication Single Digit Calculator is an online tool used to multiply two numbers of single-digit. BYJU'S Multiplication Calculator makes calculations simple and interesting. Any 2 single-digit number can be multiplied here in a fraction of seconds, that saves a lot of time. The calculator is recommended if students want to solve long.
Pros & Cons
best baseball prediction site Tech pressure relief valve working principle pdf black magic probe rp2040
5.NF.6 - I can solve real-world problems involving multiplication of fractions and mixed numbers using fraction models. Multiply a fraction by a fraction. Multiply a fraction by a mixed number. Multiply a mixed number by a mixed number. Using the area model to multiply fractions and mixed numbers. For whole numbers, you use the buttons 0 to 9 labeled "Whole Number" located at the left side of the calculator. For numerators, use the buttons at upper right corner and use the buttons at the lower left corner of the calculator for the denominators. As you noticed, it is very simple to use this calculator.
Rules for expressions with fractions: Fractions - use a forward slash to divide the numerator by the denominator, i.e., for five-hundredths, enter 5/100.If you use mixed numbers, leave a space between the whole and fraction parts. Mixed numerals (mixed numbers or fractions) keep one space between the integer and fraction and use a forward slash to input fractions i.e., 1 2/3.
99 kilogram ( 0 You might also like our calculator to convert a mixed number to an improper fraction Goal: Learn to add, subtract, multiply and divide whole numbers, decimals, fractions There are two main ways to go about it: you can either convert the whole number into a fraction, or subtract 1 from that whole number and convert the 1 into a.
## consumer reports knee braces
You enter the fraction in the left hand boxes, then the number you want to divide the fraction by in the right hand box. you click "Divide Fraction by Whole Number" and hey presto, you get the answer. Preset List of Fractions Divided by Whole Numbers Below are links to some preset fraction to calculations that are commonly searched for:.
iruttu movie story in tamil fishing lakes for sale lincolnshire
.
• Multiply the top and bottom of the "whole number fraction" by this number so the fractions have the same denominator. [2] Subtract the numerators. Now that the fractions. Multiplying square roots with exponents, compare,convert and order fractions and decimals, non-linear equation matlab, freealgebra calculator download. Step by step solving two varible. Test and improve your knowledge of Proper and Improper Fractions. Mixed Numbers with example questins and answers. Check your calculations for Fractions questions with our excellent Fractions calculators which contain full equations and calculations clearly displayed line by line. See the Fractions Calculators by iCalculator™ below. Step 1: Flip the divisor into a reciprocal. A reciprocal is what you multiply a number by to get the value of one. If you want to change two into one through multiplication you need to multiply it by 0.5. In fraction form this looks like: ²⁄₁ × ½ = 1. To find the reciprocal of a fraction you simply flip the numbers.
• Fraction Calc is a special calculator for multiplication, division, addition, and subtraction of two or more fractions and whole numbers. It can process multiple fractions and whole numbers at once. Then it displays the step by step solutions of whatever operation it has processed. We can easily add like fractions on the number line by plotting any one fraction first, and then take as many jumps to the right as the numerator of the second fraction. For example, to add 3/4 and 5/4, we can first plot 5/4 on the number line, and then take three jumps to the right. Below are some examples of this fraction calculator can solve: 40/30 - simplify, convert to mixed number and decimal form We will show you step-by-step how to divide the fraction by the.
19 hours ago · Math Tools. The convergence point for the given series will be shown in a new window which The ratio test formula is given as: Convergence when L 1, L = lim n → ∞ | a The Radius of Convergence Calculator is a tool that can help in calculating convergence point for a given series. ". X∞ n=0 a n 1. If rho=1, the series may converge or diverge. We can follow the steps given below to add a fraction and a whole number. Step 1 : Multiply the denominator and whole number. Step 2 : After having multiplied the denominator and the whole number, take the denominator as a common denominator. Step 3 : Now, simplify the numbers in numerator. It has been illustrated in the picture shown below.
## what does microphone emoji mean
Fraction Calc is a special calculator for multiplication, division, addition, and subtraction of two or more fractions and whole numbers. It can process multiple fractions and whole numbers at once. Then it displays the step by step solutions of whatever operation it has processed. Sometimes few people will call it fraction solver, while others.
• washington post paywall bypass 2022
• chuck zito house
• best online grammar course free
• vampire survivors not launching
• decorative wall sconces for dining room
• property for sale in pembrokeshire with a sea view and an annex
• dinan stage 1 m240i
• no association definition
• Instead of the improper fractions, mixed fractions (also called mixed numbers) are often used. It is denoted as the sum of a non-zero integer and a proper fraction (examples: 2 1/3 = 7/3 and -1 2/7 = -9/7. Our online mixed fractions calculator uses the well known math formulas to perform four basic arithmetic operations with mixed numbers. Compute unit rates associated with.
• artflow online
• wikipedia list of words
• recent pictures of jennifer grey
• new covid symptoms june 2022
• oled black crush
Select the number of fractions in your equation and then input numerators and denominators in the available fields. Click the Calculate button to solve the equation and show the work. You can add and subtract 3 fractions, 4 fractions, 5 fractions and up to 9 fractions at a time. How to Add and Subtract Fractions When the Denominators are the Same.
## kdfwr telecheck
Step 1: Convert mixed fraction into an improper fraction. 2 = = = So, x 7. Step 2: 7 is an integer. Rewriting 7 as a fraction. 7 = Step 3: Multiplying the numerators of both the fractions. That is. government contracting conferences 2023 A mixed number is a combination of a whole number and a fraction. A fraction in which the numerator is larger than or equal to the denominator, like 5 2 , 17 3 , or 6 6 is called an improper fraction. A mixed number can be expressed as a fraction. Multiply the whole number by the denominator.
pfsense training free
Click a number and then click fraction bar, then click another number. ↔ You can use fraction space button to create a number of the form 5 3/4. Enter a number, then click fraction space, click another number and then click on the fraction bar button, lastly enter another number. Enter the fractions mixed fractions or whole numbers and click the calculate button. How to use the subtracting fractions calculator 1. Input proper or improper fractions select the math sign and click calculate. Use this fraction calculator for adding subtracting multiplying and dividing fractions. Select the number of fractions for. The easiest way to divide fractions is to follow three simple steps: Flip the divisor into a reciprocal. Change the division sign into a multiplication sign and multiply. Simplify if possible. This method creates a shortcut so that you don't have to deal with complex fractions when solving a problem.
We can follow the steps given below to add a fraction and a whole number. Step 1 : Multiply the denominator and whole number. Step 2 : After having multiplied the denominator and the. . Now convert all the fractions to 60ths. The numerators are: 15 + 20 + 24 + 10 + 25 + 18 + 14 + 16, which adds up to 142. Reduce the final form, because both divide by 2, yielding \displaystyle \frac {71} {30} 3071. Taking out \displaystyle \frac {30} {30} 3030 for a whole 1, twice, the answer is \displaystyle 2\frac {11} {30} 23011. Just as you would expect from most electronic calculators, only the most recently pressed operator button, −, is used. For example, if you typed in 2 × − ÷ ÷ + 3 = the result would be the same as if you entered 2 + 3 = . If you want to perform an operation on negative operands, you should use the ± button which negates the displayed numeral. First enter the numerator of the fraction, then press the division key and enter the denominator. Hit the "equals" key and the fraction will display as a decimal. for Changing it into Decimal to Fraction :- Press "1" followed by the same number of zeros as decimal places from your decimal number.
## mocap animation pack
Lesson 3: Modeling Fractions with Area Models Students will know •fractions can be represented as part of an area Students will be able to •read, write, label and, identify fractions as an area with equal size pieces; express the area of equal parts of a shape as a unit fraction. Introduction. Ask the students what is a fraction? (Expected. Addition: Unlike adding and subtracting integers such as 2 and 8, fractions.
• mater dei football hazing video reddit
• Just as you would expect from most electronic calculators, only the most recently pressed operator button, −, is used. For example, if you typed in 2 × − ÷ ÷ + 3 = the result would be the same as if you entered 2 + 3 = . If you want to perform an operation on negative operands, you should use the ± button which negates the displayed numeral.
• hottest streamer tournament
• if a girl apologizes for being busy
• 1 15. Fraction calculator that shows work to find the sum of two fractions, difference between two fractions, product of two fractions and quotient when fraction divided by a fraction by arithmetic operations like addition, subtraction, multiplication and division. The step-by-step calculation help parents to assist their kids studying 4th, 5th.
• .
For whole numbers, you use the buttons 0 to 9 labeled "Whole Number" located at the left side of the calculator. For numerators, use the buttons at upper right corner and use the buttons at the lower left corner of the calculator for the denominators. As you noticed, it is very simple to use this calculator.
The Ordering Fractions Calculator can calculate vulgar fractions and decimal fractions separately or both combined (ie 1/2, 3/4 , 0.5, 0.75). Ordering Fractions Calculator Enter numbers Order from Least to Greatest Order from Greatest to Least Ordering Fractions Calculator Results Order from Least to Greatest.
what does ts mean in warrior cats roblox
yamaha 2 stroke outboard cooling diagram
• Squarespace version: 7.1
hocus pocus band singer
Fractions / To enter a fraction of the form 3/4. Click a number and then click fraction bar, then click another number. ↔ You can use fraction space button to create a number of the form 5 3/4. Enter a number, then click fraction space, click another number and then click on the fraction bar button, lastly enter another number. Julian dates (abbreviated JD) are simply a continuous count of days and fractions since noon Universal Time on January 1, 4713 BC (on the Julian calendar). Almost 2.5 million days have transpired since this date.Julian dates are widely used as time variables within astronomical software. Typically, a 64-bit floating point (double precision .... OBSERVATION DATE (UTC). The easiest way to divide fractions is to follow three simple steps: Flip the divisor into a reciprocal. Change the division sign into a multiplication sign and multiply. Simplify if possible. This method creates a shortcut so that you don't have to deal with complex fractions when solving a problem. How to Order Fractions, Integers and Mixed Numbers. To compare and order fractions we must first convert all integers, mixed numbers (mixed fractions) and fractions into values that we can compare. We do this by first converting all terms into fractions, finding the least common denominator , then rewriting each term as an equivalent fraction .... With this online fraction calculator you can easily add fractions, subtract fractions, multiply fractions and divide fractions. ... A fraction is the result of a division of two whole numbers. In other words, a fraction describes how many parts of a certain size there are, for example, one-half, five-eighths, three-quarters or seven-ninths..
petrochemical plant process
20 minutes till dawn online
idrlabs anxiety test
debit card referral bonus
• Squarespace version: 7.1
msi bios update
Fractions / To enter a fraction of the form 3/4. Click a number and then click fraction bar, then click another number. ↔ You can use fraction space button to create a number of the form 5 3/4. Enter a number, then click fraction space, click another number and then click on the fraction bar button, lastly enter another number. Addition: Unlike adding and subtracting integers such as 2 and 8, fractions. This calculator can solve for X in fractions as equalities and inequalities: < or ≤ or > or ≥ or =. Shows the work for cross multiplication. Estimating Sums & Differences Estimate sums and differences for positive proper fractions, n/d, where n ≤ d and 0 ≤ n/d ≤ 1.
Adding Subtracting Fractions Calculator - A free online calculator which adds or subtracts two fractions with different denominators, and explains the math behind the equation. Dividing Fractions Calculator - Divide fractions, mixed numbers, and whole numbers with this free online calculator. The site also provides a step-by-step explanation of.
second edition meaning
hollywood observatory
preserving animals in jars
• Squarespace version: 7.1
citymd montague
To do this, we look at the number in the thousandth position. In this case it is 7 (234.56 7 ).. Rounding calculator to round numbers up or down to any decimal place. Choose ones to round a number to the nearest dollar. Choose hundredths to round an amount to the nearest cent. Rounding Numbers Say you wanted to round the number 838.274.
1201 franklin 3rd floor
grass fed grass finished beef
template in c javatpoint
• Squarespace version: 7.0
misogynist narcissist
A mixed fraction is also sometimes called a mixed number An online decimal to fraction calculator that simply converts decimal number to fraction and revert a repeating decimal to its original fraction form instantly In the case of converting 0 Since we can divide fractions, we can also express this division as a "fraction of fractions," or a. Step 1: Convert mixed fraction into an improper fraction. 2 = = = So, x 7. Step 2: 7 is an integer. Rewriting 7 as a fraction. 7 = Step 3: Multiplying the numerators of both the fractions. That is. Search: How To Calculate Fractions With Whole Numbers. Fill in the boxes for the type of problem you need below, then click "Divide 1/4 is a quarter Two fractions that express the same part of a whole - Fraction calculator app and decimal-to-fractions app in one Weight measure value of: 35 ounces ( oz ), Equals in ponds: ~ 2 Weight measure value of: 35 ounces ( oz ),. Answer: To convert a fraction into a whole number: Divide the numerator by the denominator, only if the numerator is a multiple of the denominator. Let us see an example of this. Converting fractions to percentage by multiplication: Given fraction:5 /8. In the first step we will simplify the fraction: 5 / 8 = 0.625. To complete the conversion, we will multiply it by 100:.
call forwarding deactivate code airtel
civil engineering drawing pdf
south rim trail grand canyon
best entrance songs
• Squarespace version: 7.1
california resident
Step 1. Multiply the bottom number of the fraction by the whole number: 1 2 × 3. Which equals: 1 6. Step 2. Fraction is already as simple as possible, so no need for step 2. Answer: 1 2 ÷ 3 = 1 6. Conceptually, the idea would be the same. You basically need to set the mass yields for your desired products on the RYield block.The difference would be on the calculator block utilized to. A brand existing within Dutchmen Manufacturing and Thor Industries, the Aspen Trail produce line of travel trailers were introduced in 2012. Said to offer upscale looks and features at an. An ear infection can cause pain during sucking or lying on one side. And an injury or soreness from a vaccination might cause discomfort in a certain breastfeeding position. ... This trick capitalizes on the developmental quirk of infants to expect a whole ritual. She may not like the bath, but once she realizes it's part of your nightly ritual. To divide a whole number by a fraction, follow the steps mentioned below: Step 1: Find the reciprocal of the given fraction. Step 2: Multiply the reciprocal with the given whole number.. With this Fraction Calculator you can add, subtract, multiply and divide a whole number with a fraction. Note that when we talk about whole numbers on this page we mean positive integers. Fraction ±/x Whole Number. This Fraction Calculator is opposite of the one above. ¾ A whole number can be written as a fraction with a denominator of 1; for example, 2 = 2 1 . Zero can be written as a fraction using zero as the numerator and any whole number as the denominator, for example, 0 23 . ¾ Any whole number may be written as a mixed number by using a zero fraction. For example, 42 0 3 =3.
pernicious anemia autoimmune
nissan murano 2007 cc
guest friendly hotels patong 2022
digital label printer and cutter
• Squarespace version: 7.1
wait for axios to finish react
Fraction can have different values depending on a size or amount of the whole. When the numerator and denominator are same number, the fraction will have the same 1, or the whole. A fraction is also a part of a set. Three fourth $$\frac{3}{4}$$ of the boxes are filled. Types of Fractions. 1- Proper fraction 2- Improper fraction 3- Mixed fraction. Contents [ hide] 1 How to Use Scientific Calculator for Solving Fractions. 1.1 Step 1: Switch to math mode. 1.2 Step 2: Locate the fraction tab on your device. 1.3 Step 3: Navigating from Numerator to Denominator. 1.4 How to Write an Improper Fraction. 1.5 How to Change the Answer in Decimal Form. What does, "print the whole number," mean? Please explain so you are 100% suree of what you want to do They can then apply this to subtracting more than one fraction and from whole amounts There are also fractional and decimal numbers Recall that a fraction is simply a way of expressing division of two numbers (where the numerator is the dividend and the denominator is the divisor.
yamaha piano software for pc
bm oil control valve instructions
ddlc glitch text generator
catch up crossword clue
• Squarespace version: 7.1
emv software for sale
Adding mixed numbers, converting fraction to whole number, multiplying fractions by whole numbers, subtracting mixed numbers, and multiplying mixed fractions are among the processes this calculator can do. For 7 and 1/5, multiply the denominator by the whole number (5*7) and add that answer to the current numerator (1). (5*7) + 1 = 36. Put the.
jacobs farm homes for sale
scrub skirts and dresses
aggieville live cameras
upchurch rolling stoned 2
• Squarespace version: 7.1
edi standards
To do this, we look at the number in the thousandth position. In this case it is 7 (234.56 7 ).. Rounding calculator to round numbers up or down to any decimal place. Choose ones to round a number to the nearest dollar. Choose hundredths to round an amount to the nearest cent. Rounding Numbers Say you wanted to round the number 838.274. Here you can enter a fraction and a whole number (integer) Put all the digits over the denominator that corresponds to the last decimal place value Mixed Number Calculator There are three combinations of this There are three combinations of this. ... This calculator allows you to enter a whole number and a fraction First of all divide 48 by the. Multiplication with Fractions and Whole Numbers. Start by multiplying the numerators and denominators Dividing Fractions. The "flip and multiply" trick Division With Fractions and Whole Numbers. Change the numbers to fractions and then go for it! Fractions Games. Puppy Pull. Dirt Bike Proportions. Snow Sprint. Dirt Bike Fractions. Multiply the top and bottom of the "whole number fraction" by this number so the fractions have the same denominator. [2] Subtract the numerators. Now that the fractions. When adding a mixed number with a whole number, we first add the whole numbers, then include the fraction. Example: What is the sum of 11 2 3 and 19? We will start by adding the whole numbers, which is 11 + 19 = 30. Then we add the fractional part to the end. Therefore, the sum of 11 2 3 and 19 is 30 2 3. Calculating the Fractional Remainder: Method 1. If you need to put the result of the above example, 11 ÷ 5 = 2.2, into mixed number form, there are two ways of going about it. If you already have the decimal result, just write the decimal part of the number as a fraction. The numerator of the fraction is whichever digits are to the right of.
How Do You Subtract A Whole Number From A Fraction? 1:18. 2:41.
check engine and traction light on toyota camry
## np241 transfer case wiring diagram
is dr charles stanley a good pastor
sabrine maui new breasts pictures
eku football schedule 2022
travelzoo holmfirth vineyard
curtis cab windshield for polaris ranger
tamara and dan mafs
3 bed house to rent dublin 7
## nextcloud server url
siemens electrical panel catalog
false allegations of child neglect
haflinger 4x4 for sale usa
scifi movies january 2022
how much does it cost to rent a 53 foot dry van trailer
## us cellular payment arrangement number
gacha club 2
south beloit il zoning map
waste management bulk pickup guidelines
hk sp5 pdw in stock
## photographer antonyms
whittlesea council bin collection calendar 2022
controlled substance red flags
rush house lot sale makati
microsoft codility leetcode
convert plp to psd
• Fraction can have different values depending on a size or amount of the whole. When the numerator and denominator are same number, the fraction will have the same 1, or the whole. A fraction is also a part of a set. Three fourth $$\frac{3}{4}$$ of the boxes are filled. Types of Fractions. 1- Proper fraction 2- Improper fraction 3- Mixed fraction
|
2022-10-02 05:56:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.712843656539917, "perplexity": 1333.979108743918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00653.warc.gz"}
|
https://www.instrumentationtoolbox.com/2014/04/the-flow-coefficient-of-control-valve.html
|
The Valve Flow Coefficient (Cv) ~ Learning Instrumentation And Control Engineering Learning Instrumentation And Control Engineering
### The Valve Flow Coefficient (Cv)
Custom Search
The valve flow coefficient Cv or its metric equivalent Kv has been adopted universally as a comparative value for measuring the capacity of control valves.
The valve flow coefficient, Cv, is the number of U.S. gallons per minute of water at 60°F that will flow through a control valve at a specified opening when a pressure differential of 1psi is applied across the valve:
The metric equivalent of Cv is Kv, which is defined as the amount of water that will flow in m3/hr with 1bar pressure drop. Converting between
the two coefficients is based on this relationship:
$C_v = 1.16K_v$
For a liquid, the flow rate provided by any particular Cv is given by the basic sizing equation:
$Q = C_v\sqrt{\frac{ΔP}{SG}}$
Where:
Cv = The flow coefficient of the control valve.
ΔP = is the pressure drop across the control valve
SG = Specific gravity of fluid referenced to water at 60 degree Fahrenheit
Q = Flow in US gallons per minute.
Hence a valve with a specified opening giving Cv = 1 will pass 1US gallon of water (at 60 degree Fahrenheit) per minute if 1psi pressure difference exists between the upstream and down stream points on each side of the valve. For the same pressure conditions if we increase the opening of the valve to create a Cv = 20, it will pass 20 US gallons per minute provided that ΔP across the valve remains at 1psi.
$Q = {\frac{1}{1.16}}C_v\sqrt{\frac{ΔP}{SG}}$
Where:
Q is in m3/hr , ΔP is in bars and SG = 1 for water at 15degree Celsius
In metric units, the same valve above with a specified opening giving Cv = 1 will pass 0.862m3/hr of water (at 15 degree Celsius) if 1bar pressure difference(ΔP) exist between upstream and downstream points on each side of the valve.
These simplified equations give us an understanding of the underlying principles of valve sizing. If we know the pressure conditions and the SG of the fluid and we have the Cv of the valve at the chosen opening, we can predict with some measure of certainty the amount of flow that will pass.
It must be noted that it is not that simple to predict the amount of fluid passing through a control valve as there are many factors which will modify the Cv values
for the valve and there are also limits to the flow velocities and pressure drops that a valve can handle before we reach critical or choked flow beyond which we cannot increase flow through the valve further. Manufacturers of valves typically have tabulated Cv values for various opening or travel of a given valve.
These Cv values are used in valve sizing. Masoneilan and Fisher are two recognized manufacturers of various types of valves. They have their commercial software for the valve sizing process which usually has a rich data base for Cv values for all the types of valves they produce.
|
2019-04-24 23:56:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37895774841308594, "perplexity": 1374.9598828439925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578675477.84/warc/CC-MAIN-20190424234327-20190425020327-00481.warc.gz"}
|
http://www.last.fm/music/Pantless+Knights+ft.+Grasshopper/+similar
|
1. We don't have a wiki here yet...
2. We don't have a wiki here yet...
3. We don't have a wiki here yet...
4. We don't have a wiki here yet...
5. We don't have a wiki here yet...
6. We don't have a wiki here yet...
7. We don't have a wiki here yet...
8. We don't have a wiki here yet...
9. We don't have a wiki here yet...
10. We don't have a wiki here yet...
11. We don't have a wiki here yet...
12. We don't have a wiki here yet...
13. We don't have a wiki here yet...
14. Enron, or Shearwater is Enron, is the name taken by the Austin, TX band, Shearwater - made up of Jonathan Meiburg, Kimberly Burke, and Thor…
15. We don't have a wiki here yet...
16. We don't have a wiki here yet...
17. We don't have a wiki here yet...
18. We don't have a wiki here yet...
19. We don't have a wiki here yet...
20. We don't have a wiki here yet...
|
2015-10-10 01:37:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265897274017334, "perplexity": 2765.846806502304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737937342.66/warc/CC-MAIN-20151001221857-00138-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://bookdown.org/pkaldunn/Textbook/PairedInsulation.html
|
## 23.1 Mean differences
House insulation is important for saving energy, particularly in cold climates.
Consider a study to estimate the average energy savings made by using a new type of house insulation. Different study designs could be used to address this.
One approach is to take a sample of homes, and measure the energy consumption before adding the insulation, and then after adding the insulation for the same houses. Each home gets two observations: the energy consumption before and after adding the insulation.
This is a descriptive RQ: the Outcome is the mean energy saving, and the response variable is the energy saving for each house. There is no Comparison: units of analysis that have been treated differently are not compared.
Alternatively, the researchers could take a sample of homes without the insulation, and measure their energy consumption; then take a different sample of homes with the insulation, and measure their energy consumption.
This is a relational RQ: the Outcome is the mean energy consumption, and the response variable is the energy consumption for each house. The Comparison is between units of analysis with the insulation, and units of analysis without the insulation.
Either study is possible, and each has advantages and disadvantages . Here the first (Descriptive) design would seem superior (why?). In the first design, each home gets a pair of energy consumption measurements: this is paired data, which is the subject of this chapter. The second (Relational) design requires the means of two different groups of homes to be compared, which is the topic of the next chapter.
Definition 23.1 (Paired data) Data are paired when two observations about the same variable are recorded for each unit of analysis.
Since each unit of analysis has two observations about energy consumption, the change (or the difference, or the reduction) in energy consumption can be computed for each house. Then, questions can be asked about the population mean difference, which is not the same as difference between two separate population means (the subject of the next chapter). In paired data, finding the difference between the two measurements for each individual unit of analysis makes sense: each unit of analysis (each house) has two related observations.
Think 23.1 (Paired situations) Which of these are paired situations?
1. The mean difference between blood pressure for 36 people, before and after taking a drug.
2. The difference between the mean HDL cholesterol levels for 22 males and 19 females.
3. The mean protein levels were compared in sea turtles before and after being rehabilitated .
Only situations 1 and 3 are paired.
### References
March DT, Vinette-Herrin K, Peters A, Ariel E, Blyde D, Hayward D, et al. Hematologic and biochemical characteristics of stranded green sea turtles. Journal of Veterinary Diagnostic Investigation. 2018;
Zimmerman DW. A note on the interpretation of the paired-samples $$t$$-test. Journal of Educational and Behavioral Statistics. 1997;22(3):349–60.
|
2021-07-28 23:16:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6059916615486145, "perplexity": 1215.3437509381308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153803.69/warc/CC-MAIN-20210728220634-20210729010634-00417.warc.gz"}
|
https://acm.ecnu.edu.cn/problem/1658/
|
# 1658. Pipe
The GX Light Pipeline Company started to prepare bent pipes for the new transgalactic light pipeline. During the design phase of the new pipe shape the company ran into the problem of determining how far the light can reach inside each component of the pipe. Note that the material which the pipe is made from is not transparent and not light reflecting.
Each pipe component consists of many straight pipes connected tightly together. For the programming purposes, the company developed the description of each component as a sequence of points $[x_1,y_1],[x_2,y_2],\ldots,[x_n,y_n]$, where $x_1<x_2<\cdots<x_n$. These are the upper points of the pipe contour. The bottom points of the pipe contour consist of points with $y$-coordinate decreased by $1$. To each upper point $[x_i,y_i]$ there is a corresponding bottom point $[x_i,y_{i-1}]$ (see picture above). The company wants to find, for each pipe component, the point with maximal $x$-coordinate that the light will reach. The light is emitted by a segment source with endpoints $[x_1,y_1-1]$ and $[x_1,y_1]$ (endpoints are emitting light too). Assume that the light is not bent at the pipe bent points and the bent points do not stop the light beam.
### 输入格式
The input file contains several blocks each describing one pipe component. Each block starts with the number of bent points $2 \le n \le 20$ on separate line. Each of the next $n$ lines contains a pair of real values $x_i,y_i$ separated by space. The last block is denoted with $n = 0$.
### 输出格式
The output file contains lines corresponding to blocks in input file. To each block in the input file there is one line in the output file. Each such line contains either a real value, written with precision of two decimal places, or the message Through all the pipe.. The real value is the desired maximal $x$-coordinate of the point where the light can reach from the source for corresponding pipe component. If this value equals to $x_n$, then the message Through all the pipe. will appear in the output file.
### 样例
Input
4
0 1
2 2
4 1
6 4
6
0 1
2 -0.6
5 -4.45
7 -5.57
12 -10.8
17 -16.55
0
Output
4.67
Through all the pipe.
8 人解决,11 人已尝试。
10 份提交通过,共有 29 份提交。
6.4 EMB 奖励。
|
2023-03-26 03:41:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29139116406440735, "perplexity": 794.5834387073918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00641.warc.gz"}
|
https://cstheory.stackexchange.com/questions/48465/consequences-of-an-o-log-n-approximation-algorithm-for-a-mathsf-log-text
|
# Consequences of an $o(\log n)$-approximation algorithm for a $\mathsf{\log\text{-}APX}$ hard problem
In [1], Feige proves that if there is a polynomial algorithm with approximation ratio in $$o(\log n)$$ for any $$\mathsf{log\textit{-}APX}$$ hard problem (say Minimum Dominating Set), then $$\mathsf{NP}\subseteq\mathsf{DTIME}\left(n^{O(\log \log n)}\right)$$ .
The paper dates back to 1998. Has any progress been made since this result, i.e., stronger consequences, for example under $$ETH$$ or $$SETH$$, or any other plausible conjecture?
[1] Feige, U. (1998). A threshold of $$\ln n$$ for approximating set cover. Journal of the ACM (JACM), 45(4), 634-652.
• Moshkovitz established ln n hardness of Set Cover under P \neq NP. theoryofcomputing.org/articles/v011a007 Feb 22 '21 at 23:04
• @ChandraChekuri Thus any $o(\log n)$ approximation algorithm for a $\mathsf{\log\text{-}APX}$ hard problem implies $\mathsf{P}=\mathsf{NP}$? If you convert the comment to an answer, I will accept it. Feb 23 '21 at 22:45
• As mentioned in the abstract of Moshkovitz' paper, ln n hardness of Set Cover specifically under P \neq NP was established already by Dinur and Steurer: dx.doi.org/10.1145/2591796.2591884 Feb 26 '21 at 9:31
• @MaxFlow Thank you for the reference. If you convert your comment to an answer, I would be happy to accept it. However, I read the papers, and they provide a $(1-\epsilon)(\log n)$ lower bound for Set Cover specifically. Is this bound transferable to any $\log\text{-}\mathsf{APX}$-hard problem? I'm mainly interested by Vertex Cover. May 16 '21 at 15:12
• @ChandraChekuri It is impossible to reply to two users in the same comment. So I kindly ask you the same question as to Max Flow. May 16 '21 at 15:17
|
2022-01-23 20:27:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8944966197013855, "perplexity": 738.1488756450298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.59/warc/CC-MAIN-20220123202547-20220123232547-00691.warc.gz"}
|
http://www.hindawi.com/isrn/astronomy.astrophysics/2011/843825/
|
`ISRN Astronomy and AstrophysicsVolume 2011 (2011), Article ID 843825, 5 pageshttp://dx.doi.org/10.5402/2011/843825`
Research Article
1State Key Laboratory of Nuclear Physics and Technology, School of Physics, Peking University, Beijing 100871, China
2Laboratory of Informational Technologies, Joint Institute for Nuclear Research, Dubna 141980, Russia
Received 5 November 2011; Accepted 20 December 2011
Copyright © 2011 S. Bastrukov et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
In juxtaposition with the standard model of rotation-powered pulsar, the model of vibration-powered magnetar undergoing quake-induced torsional Alfvén vibrations in its own ultrastrong magnetic field experiencing decay is considered. The presented line of argument suggests that the gradual decrease of frequencies (lengthening of periods) of long-periodic-pulsed radiation detected from a set of X-ray sources can be attributed to magnetic-field-decay-induced energy conversion from seismic vibrations to magnetodipole radiation of quaking magnetar.
1. Introduction
There is a common recognition today that the standard (lighthouse) model of inclined rotator, lying at the base of current understanding of radio pulsars, faces serious difficulties in explaining the long-periodic ( s) -pulsed radiation of soft gamma repeaters (SGRs) and anomalous -ray pulsars (AXPs). The persistent -ray luminosity of AXP/SGR sources ( erg s−1) is appreciably (10–100 times) larger than expected from a neutron star deriving radiation power from the energy of rotation with frequency of detected pulses. Such an understanding has come soon after the detection on March 5, 1979 of the first 0.2-second long gamma burst [1], which followed by a 200-second emission that showed a clear 8-second pulsation period [2], and association of this event to a supernova remnant known as N49 in the Large Magellanic Cloud [3]. This object is very young (only a few thousand years old), but the period of pulsating emission is typical of a much older neutron star. In works [4, 5] it has been proposed that discovered object, today designated SGR 0526-66, is a vibrating neutron star, that is, the detected for the first time long-periodic pulses owe their origin to the neutron star vibrations, rather than rotation as is the case with radio pulsars. During the following decades, the study of these objects has been guided by the idea [6, 7] that electromagnetic activity of magnetars, both AXP’s and SGR’s, is primarily determined by decay of ultrastrong magnetic field ( G) and that a highly intensive gamma bursts are manifestation of magnetar quakes [810].
In this paper we investigate in some detail the model of vibration-powered magnetar which is in line with the current treatment of quasiperiodic oscillations of outburst luminosity of soft gamma repeaters as being produced by Lorentz-force-driven torsional seismic vibrations triggered by quake. As an extension of this point of view, in this paper we focus on impact of the magnetic field decay on Alfvén vibrations and magnetodipole radiation generated by such vibrations. Before doing so, it seems appropriate to recall a seminal paper of Woltjer [11] who was first to observe that magnetic-flux-conserving core-collapse supernova can produce a neutron star with the above magnetic field intensity of typical magnetar. Based on this observation, Hoyle et al. [12] proposed that a strongly magnetized neutron star can generate magnetodipole radiation powered by energy of hydromagnetic, Alfvén, vibrations stored in the star after its birth in supernova event (see, also, [13]). Some peculiarities of this mechanism of vibration-powered radiation have been scrutinized in our recent work [14], devoted to radiative activity of pulsating magnetic white dwarfs, in which it was found that the necessary condition for the energy conversion from Alfvén vibrations into electromagnetic radiation is the decay of magnetic field. As was stressed, the magnetic field decay is one of the most conspicuous features distinguishing magnetars from normal rotation-powered pulsars. It seems not implausible, therefore, to expect that at least some of currently monitoring AXP/SGR-like sources are magnatars deriving power of pulsating magnetodipole radiation from energy of Alfvénic vibrations of highly conducting matter in the ultrastrong magnetic field experiencing decay.
In approaching Alfvén vibrations of neutron star in its own time-evolving magnetic field, we rely on the results of recent investigations [1518] of both even parity poloidal and odd parity toroidal (according to Chandrasekhar terminology [19]) node-free Alfvén vibrations of magnetars in constant-in-time magnetic field. The extensive review of earlier investigations of standing-wave regime of Alfvénic stellar vibrations can be found in [20]. The spectral formula for discrete frequencies of both poloidal and toroidal -modes in a neutron star with mass , radius , and magnetic field of typical magnetar, G, reads where numerical factor is unique to each specific shape of magnetic field frozen in the neutron star of one and the same mass and radius .
2. Alfvén Vibrations of Magnetar in Time-Evolving Magnetic Field
In above cited work it was shown that Lorentz-force-driven shear node-free vibrations of magnetar in its own magnetic field field can be properly described in terms of material displacements obeying equation of magneto-solid-mechanics The field is identical to that for torsion node-free vibrations restored by Hooke’s force of elastic stresses [18, 21] with , where is the nodeless function of distance from the star center and is Legendre polynomial of degree specifying the overtone of toroidal mode. In (4), the amplitude is the basic dynamical variable describing time evolution of vibrations, which is different for each specific overtone; in what follows, we confine our analysis to solely one quadrupole overtone. The central to the subject of our study is the following representation of the time-evolving internal magnetic field: where is the time-dependent intensity and is dimensionless vector-function of the field distribution over the star volume. Scalar product of (1) with the separable form of material displacements followed by integration over the star volume leads to equation for amplitude having the form of equation of oscillator with time-dependent spring constant The total vibration energy and frequency are given by It follows that This shows that the variation in time of magnetic field intensity in quaking magnetar causes variation in the vibration energy. In Section 3, we focus on conversion of energy of Lorentz-force-driven seismic vibrations of magnetar into the energy of magnetodipole radiation.
3. Vibration-Powered Radiation of Quaking Magnetar
The point of departure in the study of vibration energy-powered magnetodipole emission of the star (whose radiation power, , is given by Larmor’s formula) is We consider a model of a quaking neutron star whose torsional magnetomechanical oscillations are accompanied by fluctuations of total magnetic moment preserving its initial (in seismically quiescent state) direction: . The total magnetic dipole moment should execute oscillations with frequency equal to that for magnetomechanical vibrations of stellar matter, which are described by equation for . This means that and must obey equations of similar form, namely, It is easy to see that (11) can be reconciled if Given this, we arrive at the following law of magnetic field decay: The last equation shows that the lifetime of quake-induced vibrations in question substantially depends upon the intensity of initial (before quake) magnetic field : the larger the the shorter the . For neutron stars with one and the same mass and radius km, and magnetic field of typical pulsar G, we obtain years, whereas for magnetar with G, years.
The equation for vibration amplitude with help of substitution is transformed to permitting general solution [22]. The solution of this equation, obeying two conditions and , can be represented in the form where and are Bessel functions [23] and Here by , the average energy stored in torsional Alfvén vibrations of magnetar is understood. If all the detected energy of -ray outburst goes in the quake-induced vibrations, , then the initial amplitude is determined unambiguously. The impact of magnetic field decay on frequency and amplitude of torsional Alfvén vibrations in quadrupole overtone is illustrated in Figure 1, where we plot with pointed out parameters and . The magnetic-field-decay-induced lengthening of period of pulsating radiation (equal to period of vibrations) is described by On comparing given by (14) and (19), one finds that interrelation between the equilibrium value of the total magnetic moment of a neutron star of mass and radius km vibrating in quadrupole overtone of toroidal -mode is given by For a sake of comparison, in the considered model of vibration-powered radiation, the equation of magnetic field evolution is obtained in a similar fashion as that for the angular velocity in the standard model of rotation-powered neutron star which rests on which lead to where is angle of inclination of to . The time evolution of , and expression for are too described by (19). It is these equations which lead to widely used exact analytic estimate of magnetic field on the neutron star pole: . For a neutron star of mass and radius km, one has G. Thus, the substantial physical difference between vibration- and rotation-powered neutron star models is that in the former model the elongation of pulse period is attributed to magnetic field decay, whereas in the latter one the period lengthening is ascribed to the slow-down of rotation [24, 25].
Figure 1: (Color online) The figure illustrates the effect of magnetic field decay on the vibration frequency and amplitude of quadrupole toroidal -mode presented as functions of .
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grant nos. 10935001 and 10973002), the National Basic Research Program of China (Grant no. 2009CB824800), and the John Templeton Foundation.
References
1. E. P. Mazets, S. V. Golenetskii, V. N. Il'inskii, R. L. Aptekar', and Y. A. Guryan, “Observations of a flaring X-ray pulsar in Dorado,” Nature, vol. 282, no. 5739, pp. 587–589, 1979.
2. S. Barat, G. Chambon, K. Hurley, et al., “Evidence for periodicity in a $\gamma$-ray burst,” Astronomy and Astrophysics, vol. 79, no. 3, pp. L24–L25, 1979.
3. T. L. Cline, “Detection of a fast, intense and unusual $\gamma$-ray transient,” Astrophysical Journal, vol. 237, pp. L1–L5, 1980.
4. R. Ramaty, S. Bonazzola, T. L. Cline, D. Kazanas, P. Mészáros, and R. E. Lingenfelter, “Origin of the 5 March 1979 γ-ray transient: a vibrating neutron star,” Nature, vol. 287, no. 5778, pp. 122–124, 1980.
5. R. Ramaty, “Vibrating neutron star,” Sky and Telescope, vol. 60, p. 484, 1980.
6. R. C. Duncan and C. Thompson, “Formation of very strongly magnetized neutron stars: implications for gamma-ray bursts,” Astrophysical Journal, vol. 392, no. 1, pp. L9–L13, 1992.
7. B. Paczyński, “Gb 790305 as a very strongly magnetized neutron star,” Acta Astronomica, vol. 42, no. 3, pp. 145–153, 1996.
8. O. Blaes, R. Blandford, P. Goldreich, and P. Madau, “Neutron starquake models for $\gamma$-ray bursts,” Astrophysical Journal, vol. 343, pp. 839–848, 1989.
9. B. Cheng, R. I. Epstein, R. A. Guyer, and A. C. Young, “Earthquake-like behaviour of soft γ-ray repeaters,” Nature, vol. 382, no. 6591, pp. 518–520, 1996.
10. K. Y. Ding and K. S. Cheng, “Oscillation-induced $\gamma$-ray emission from dead pulsars: a model for the delayed GeV emission in gamma-ray bursts,” Monthly Notices of the Royal Astronomical Society, vol. 287, no. 3, pp. 671–680, 1997.
11. L. Woltjer, “X-rays and type i supernova remnants,” Astrophysical Journal, vol. 140, pp. 1309–1313, 1964.
12. F. Hoyle, J. V. Narlikar, and J. A. Wheeler, “Electromagnetic waves from very dense stars,” Nature, vol. 203, no. 4948, pp. 914–916, 1964.
13. F. Pacini, “The early history of neutron stars,” in Proceedings of the MEASRIM No1, A. Hady and M. I. Wanas, Eds., p. 75, 2008.
14. S. I. Bastrukov, J. W. Yu, R. X. Xu, and I. V. Molodtsova, “Radiative activity of magnetic white dwarf undergoing Lorentz-force-driven torsional vibrations,” Modern Physics Letters A, vol. 26, no. 5, pp. 359–366, 2011.
15. S. Bastrukov, J. Yang, M. Kim, and D. Podgainy, “Magnetic properties of neutron star matter and pulsed gamma emission of soft gamma repeaters,” in Current High-Energy Emission Around Black Holes, H. Lee and H.-Y. Chang, Eds., pp. 334–342, World Scientific, Singapore, 2002.
16. S. I. Bastrukov, G. T. Chen, H. K. Chang, I. V. Molodtsova, and D. V. Podgainy, “Torsional nodeless vibrations of a quaking neutron star restored by the combined forces of shear elastic and magnetic field stresses,” Astrophysical Journal, vol. 690, no. 1, pp. 998–1005, 2009.
17. S. I. Bastrukov, H. K. Chang, I. V. Molodtsova, E. H. Wu, G. T. Chen, and S. H. Lan, “Frequency spectrum of toroidal Alfvén mode in a neutron star with Ferraro's form of nonhomogeneous poloidal magnetic field,” Astrophysics and Space Science, vol. 323, no. 3, pp. 235–242, 2009.
18. S. Bastrukov, I. Molodtsova, J. Takata, H. K. Chang, and R. X. Xu, “Alfvén seismic vibrations of crustal solid-state plasma in quaking paramagnetic neutron star,” Physics of Plasmas, vol. 17, no. 11, Article ID 112114, 10 pages, 2010.
19. S. Chandrasekhar, “Hydromagnetic oscillations of a fluid sphere with internal motions,” Astrophysical Journal, vol. 124, p. 571, 1965.
20. P. Ledoux and T. Walraven, “Variable stars,” in Handbuch der Physik, S. Flügge, Ed., vol. 51, pp. 353–604, Springer, New York, NY, USA, 1958.
21. S. I. Bastrukov, H. K. Chang, J. Takata, G. T. Chen, and I. V. Molodtsova, “Torsional shear oscillations in the neutron star crust driven by the restoring force of elastic stresses,” Monthly Notices of the Royal Astronomical Society, vol. 382, no. 2, pp. 849–859, 2007.
22. A. D. Polyanin and V. F. Zaitsev, Handbook of Nonlinear Partial Differential Equations, Chapman & Hall, Boca Raton, Fla, USA, 2004.
23. M. Abramowitz and I. Stegun, Handbook of Mathematical Functions, Dover, New York, NY, USA, 1972.
24. R. N. Manchester and J. H. Taylor, Pulsars, Freeman, San Francisco, Calif, USA, 1977.
25. D. R. Lorimer and M. Kramer, Handbook of Pulsar Astronomy, Cambridge University Press, Cambridge, UK, 2004.
26. F. P. Gavriil, V. M. Kaspi, and P. M. Woods, “Anomalous X-ray pulsars: long-term monitoring and soft-gamma repeater like X-ray bursts,” Advances in Space Research, vol. 33, no. 4, pp. 654–662, 2004.
27. A. I. Ibrahim, C. B. Markwardt, J. H. Swank et al., “Discovery of a transient magnetar: XTE J1810–197,” Astrophysical Journal, vol. 609, no. 1, pp. L21–L24, 2004.
|
2013-12-06 11:42:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7725939750671387, "perplexity": 2323.087482982559}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051509/warc/CC-MAIN-20131204131731-00098-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://www.cuemath.com/calculus/polynomial-functions/
|
# Polynomial Functions
Polynomial Functions
Go back to 'Functions'
All of us have been studying polynomials since we were quite young. When we were first told about variables and expressions, we were simply dealing with polynomials.
This is a branch of math where you can relax because polynomials are relatively easy to learn.
In this mini-lesson, we will explore the world of polynomial functions in math. You will get to learn about the highest degree of the polynomial, graphing polynomial functions, range and domain of polynomial functions, and other interesting facts around the topic. You can also check out the playful calculators to know more about the lesson and try your hand at solving a few interesting practice questions at the end of the page.
## What Is a Polynomial Function?
### Polynomial Function Definition
A function $$f: \mathbb{R} \rightarrow \mathbb{R}$$ defined as $$f(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+...+a_{2}x^2+a_{1}x+a_0$$ is called a polynomial function in variable $$x$$.
Here, $$a_0,a_1,...,a_n$$ are real number constants and $$n$$ is a non-negative integer.
If the constant $$a_n$$ is non-zero, we say this is a polynomial function of degree $$n$$ and $$a_n$$ is the leading coefficient.
### Examples
Look at the following examples.
Functions Polynomial Function or Not?
1. $$f(x)=8x^2+7x-1$$ Yes
2. $$f(x)=4x^2-9$$ Yes
3. $$f(x)=6x+8$$ Yes
4. $$f(x)=x^{\frac{2}{3}}+2x$$ No
Look at Example 4
Here, the power of $$x$$ is $$\dfrac{2}{3}$$ which is not a non-negative integer. Therefore, it is not a polynomial function.
The remaining functions are polynomial functions.
### Degree of the Polynomial Function
The degree of the polynomial function is determined by the highest power a variable it is raised to,
For example, the degree of $$-x^4+x^2+x$$ is 4
## What Are the Types of Polynomial Functions?
The 5 types of polynomial functions are:
1. Zero Polynomial Function
2. Linear Polynomial Function
4. Cubic Polynomial Function
5. Quartic Polynomial Function
## How Do You Determine a Polynomial Function?
If a function $$f: \mathbb{R} \rightarrow \mathbb{R}$$ is defined as $$f(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+...+a_{2}x^2+a_{1}x+a_0$$, then we say that the function is a polynomial function.
Remember a few points while determining if a function is a polynomial function or not.
1. If the function is in variable $$x$$, make sure all the powers of $$x$$ is a non-negative integer. For example, $$f(x)=\dfrac{1}{x^2}$$ is not a polynomial function.
2. The function should not contain any square roots or cube roots of $$x$$.
3. Sometimes, a polynomial function is NOT written in its standard form. For example, although the function $$f(x)(x-1)(x+2)$$ is not written in the standard form, it is a polynomial function.
4. The variable should not be in the denominator.
Here are a few examples of polynomial functions.
Polynomial Functions Examples
1. Linear Polynomial Function $$f(x)=5x-9$$
2. Quadratic Polynomial Function $$f(x)+x^3=x(x^2+x-3)$$
3. Cubic Polynomial Function $$f(x)-x^3=-3x+1$$
## How to Represent a Polynomial Function on Graph?
### Graphing Polynomial Functions
Let the polynomial function be $$y=f(x)$$.
Draw a table for $$y$$ and $$f(x)$$ values to draw a graph of the polynomial function.
Mark the points on x-axis and y-axis and plot the points obtained in the table.
Join the points to obtain the curve.
Here are a few graphs of polynomial functions.
### Linear Polynomial Functions
If $$f(x)$$ is a constant, then the graph of the function forms a vertical line parallel to the y-axis and vice-versa.
The graph of a linear polynomial function always forms a straight line.
The reason a polynomial function of degree one is called a linear polynomial function is that its geometrical representation is a straight line.
This is how the quadratic polynomial function is represented on a graph.
This curve is called a parabola.
Here is the graph of the quadratic polynomial function $$f(x)=2x^2+x-3$$
### Cubic Polynomial Functions
Look at the shape of a few cubic polynomial functions.
Did you notice that the curves are symmetric about a point under $$180^{\circ}$$ rotation?
Observe that the graphs of all the polynomial functions are everywhere continuous and defined.
So, the domain of a polynomial function is $$\mathbb{R}$$.
All the values of the polynomial function lie in the set of real numbers.
So, the range of a polynomial function is also $$\mathbb{R}$$.
Important Notes
1. The domain and range of the polynomial function is the set of real numbers, $$\mathbb{R}$$
2. The roots of a polynomial function are the $$x$$-intercepts of its curve.
3. A polynomial function of $$n^{\text{th}}$$ degree has almost $$n$$ roots.
4. A polynomial function is everywhere continuous.
## Solved Examples
Example 1
Jack shows a function to his friends in his school.
He asked them to determine if it is a polynomial function?
Can you help them?
Solution
The function is $$f(x)=12x-5x(x+3)$$
Expand the bracket on the right side and simplify.
\begin{align}f(x)&=12x-5x(x+3)\\&=12x-5x^2-15x\\&=-5x^2-3x\end{align}
Yes, the function is a polynomial function.
Example 2
Mia is a fitness enthusiast who goes running every morning.
The park where she jogs is rectangular in shape and measures 12 feet by 8 feet.
A nature restoration group plans to revamp the park and decides to build a pathway surrounding the park.
This would increase the total area to 140 sq. ft.
Can you use this information to establish a quadratic polynomial function?
Solution
Let’s denote the width of the pathway as $$x$$.
Then, the length and breadth of the outer rectangle is $$(12+2x)\;\text{ft.}$$ and $$(8+2x)\;\text{ft.}$$
The area of the park,
\begin{align}(12+2x)(8+2x)&=140\\2(6+x)\cdot 2(4+x)&=140\x+6)(x+4)&=35\\x^2+10x-11&=0\end{align} The quadratic function \(f(x)=x^2+10x-11
$$\therefore$$, The required quadratic polynomial function is $$f(x)=x^2+10x-11$$.
Example 3
We define a polynomial function $$f: \mathbb{R} \rightarrow \mathbb{R}$$ as $$f(x)=x^2$$.
Complete the table shown below.
$$x$$ -4 -3 -2 -1 0 1 2 3 4
$$f(x)=x^2$$
Find the domain and range of the function.
Solution
Let's complete the given table by finding the values of the function at the given values $$x$$.
$$x$$ -4 -3 -2 -1 0 1 2 3 4
$$f(x)=x^2$$ 16 9 4 1 0 1 4 9 16
Let's draw the graph of the function.
So, the graph of $$f(x)=x^2$$ is shown.
Think Tank
1. The sum of two numbers is 45. After subtracting 5 from both numbers, the product of the numbers is 124. What are the numbers?
2. Which of the following graphs may represent the equation $${y+2= -2(x-1)}$$?
## Interactive Questions
Here are a few activities for you to practice.
## Let's Summarize
This mini-lesson targeted the fascinating concept of Polynomial Functions. The math journey around Polynomial Functions starts with what a student already knows, and goes on to creatively crafting a fresh concept in the young minds. Done in a way that not only it is relatable and easy to grasp, but also will stay with them forever. Here lies the magic with Cuemath.
We hope you enjoyed learning about graphing polynomial functions, the degree of the polynomial functions, range and domain of the polynomial function in this lesson.
At Cuemath, our team of math experts is dedicated to making learning fun for our favorite readers, the students!
Through an interactive and engaging learning-teaching-learning approach, the teachers explore all angles of a topic.
Be it worksheets, online classes, doubt sessions, or any other form of relation, it’s the logical thinking and smart learning approach that we at Cuemath believe in.
### 1. How do you tell if an equation is a polynomial function?
An equation of the form $$f(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+...+a_{2}x^2+a_{1}x+a_0$$, where $$a_0,a_1,...,a_n$$ are constants and $$n$$ is a non=negative integer is called a polynomial function.
### 2. What are not polynomial functions?
If a function is not in the form $$f(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+...+a_{2}x^2+a_{1}x+a_0$$, then it is not a polynomial function. For example, $$f(x)=\cos{x}$$.
### 3. Is 0 a polynomial function?
Yes, all constants are polynomial functions. So, 0 is a polynomial function.
### 4. What are the types of polynomial functions?
The 5 types of polynomial functions are:
1. Zero Polynomial Function
2. Linear Polynomial Function
4. Cubic Polynomial Function
5. Quartic Polynomial Function
### 5. What is the order of a polynomial function?
The order of a polynomial function is the same as the degree of the polynomial function.
### 6. What is an example of a polynomial function?
An example of a polynomial function is $$f(x)=x^2+3x-9$$.
### 7. What are not polynomials examples?
If a expression is not in the form $$p(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+...+a_{2}x^2+a_{1}x+a_0$$, then it is not a polynomial. For example, $$f(x)=\cos{x}$$, $$f(x)=x^{\frac{2}{3}}+2x$$
### 8. What is a degree polynomial function?
A function of the form $$f(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+...+a_{2}x^2+a_{1}x+a_0$$, where $$a_0,a_1,...,a_n$$ are constants and $$n$$ is a non=negative integer is called a polynomial function.
If the constant $$a_n$$ is non-zero, we say this is a polynomial function of degree $$n$$.
### 9. How do you write a polynomial function?
A function of the form $$f(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+...+a_{2}x^2+a_{1}x+a_0$$, where $$a_0,a_1,...,a_n$$ are constants and $$n$$ is a non=negative integer is called a polynomial function.
Functions
Functions
grade 10 | Questions Set 2
Functions
|
2021-05-09 10:34:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5665153861045837, "perplexity": 616.2170342933917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988966.82/warc/CC-MAIN-20210509092814-20210509122814-00368.warc.gz"}
|
http://www.godberit.de/2016/01/31/Keeping-documentation-up-to-date-using-LaTeX-and-Tikz.html
|
This post is based on a recent problem I encountered while working on an academic problem. We had a frequently changing set of input cases, each describing the layout of a specific transportation problem. The cases were send to an algorithm chain and discussed within the technical documentation. To prevent stale data entering the report, we wanted to create a solution that would use the input files used for the algorithm to also be the source for the report, thus having a single source of truth. The goal was to replicate the following illustration using only a CSV file containing the coordinates and types of the nodes using LaTeX and PGF/TikZ. (LaTeX is a wordprocessor and markup language that focuses strongly on separating content and layout while TikZ allows for the creation of vector graphics within LaTex using an descriptive language.)
The goal
Set next to each other, the network allows a quick overview over the cases design, while the table gives more detailed information.
Creating the table
For creating the table based on the data we can use the csvsimple package using its csvreader function. The parameters given style the table in the desired booktabs layout. To simplify the usage, this function is then wrapped within a macro, which takes the name of the data file read.
Drawing the network
Drawing the network is not that much more difficult if we use the already descriptive TikZ. Just as a quick, refresher: Using commands directly within the tikzpicture environment in LaTeX, we can draw vector graphics. For example, we can use the following code to draw two nodes and to connect them with a line:
As the TikZ code is embedded within the LaTeX code, it can be written by any LaTex macro or plugin thus allowing us to use the csvsimple package again. If called without any styles, it simply processes the content given to it. The only problem left to solve is that TikZ does not allow changing the drawing order but instead always draws objects in the order that they appear. The quick fix used here is to simply read the data multiple times, once for each desired layer; First defining all coordinates, then drawing the links between them and finally the nodes themselves. While this might create performance troubles with larger files, it is perfectly sufficient for this use case.
This code is then refactored into a macro, which can be used within a tikzpicture environment. As the named coordinates and nodes are available within the environment, any further drawing calls can reference them.
Conclusion
This again shows the extreme flexibility that the LaTeX environment allows in regard to technical documentation. Using just a few standard packages has allowed us to turn a static file into a dynamic document that updates itself on every compilation.
Even without the automation, creating the graph was much easier in TikZ than in any other design program. I recommend reading the full documentation to learn more about its possibilities. For more complex data driven charts the use of pfgplots is highly recommended.
|
2017-09-21 01:28:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8630459904670715, "perplexity": 948.5304309124599}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687592.20/warc/CC-MAIN-20170921011035-20170921031035-00024.warc.gz"}
|
https://gmatclub.com/forum/n-and-m-are-each-3-digit-integers-each-of-the-numbers-135452.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 25 Sep 2018, 06:13
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# N and M are each 3-digit integers. Each of the numbers 1, 2,
Author Message
TAGS:
### Hide Tags
Intern
Joined: 10 Jan 2012
Posts: 4
N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
07 Jul 2012, 21:28
8
43
00:00
Difficulty:
95% (hard)
Question Stats:
42% (02:02) correct 58% (02:02) wrong based on 943 sessions
### HideShow timer Statistics
N and M are each 3-digit integers. Each of the numbers 1, 2, 3, 6, 7, and 8 is a digit of either N or M. What is the smallest possible positive difference between N and M?
A. 29
B. 49
C. 58
D. 113
E. 131
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8298
Location: Pune, India
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
13 Oct 2012, 01:18
24
20
nobelgirl777 wrote:
N and M are each 3-digit integers. Each of the numbers 1, 2, 3, 6, 7, and 8 is a digit of either N or M. What is the smallest possible positive difference between N and M?
A. 29
B. 49
C. 58
D. 113
E. 131
Responding to a pm:
You have 6 digits: 1, 2, 3, 6, 7, 8
Each digit needs to be used to make two 3 digit numbers. This means that we will use each of the digits only once and in only one of the numbers. The numbers need to be as close to each other as possible. The numbers cannot be equal so the greater number needs to be as small as possible and the smaller number needs to be as large as possible to be close to each other.
The first digit (hundreds digit) of both numbers should be consecutive integers i.e. the difference between 1** and 2** can be made much less than the difference between 1** and 3**. This gives us lots of options e.g. (1** and 2**) or (2** and 3**) or (6** and 7**) or (7** and 8**).
Now let's think about the next digit (the tens digit). To minimize the difference between the numbers, the tens digit of the greater number should be as small as possible (1 is possible) and the tens digit of the smaller number should be as large as possible (8 if possible). So let's not use 1 and 8 in the hundreds places and reserve them for the tens places since we have lots of other options (which are equivalent) for the hundreds places. Now what are the options?
Try and make a pair with (2** and 3**). Make the 2** number as large as possible and make the 3** number as small as possible. We get 287 and 316 (difference is 29) or
Try and make a pair with (6** and 7**). Make the 6** number as large as possible and make the 7** number as small as possible. We get 683 and 712 (difference is 29)
The smallest of the given options is 29 so we need to think no more. Answer must be (A).
The question is not a hit and trial question. It is completely based on logic and hence do not ignore it.
_________________
Karishma
Veritas Prep GMAT Instructor
GMAT self-study has never been more personalized or more fun. Try ORION Free!
Senior Manager
Joined: 27 Jun 2012
Posts: 387
Concentration: Strategy, Finance
Schools: Haas EWMBA '17
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
09 Jan 2013, 23:16
30
8
Consider N = (100X1 + 10Y1+ Z1)
Consider M = (100X2 + 10Y2+ Z2)
N - M = 100(X1-X2) + 10(Y1-Y2) + (Z1-Z2)
Lets analyze these terms:-
100(X1-X2) = 100; We need to keep it minimum at 100 (i.e. X1-X2=1 with pair of consecutive numbers). We do not want it over 200 as it will increase the overall value.
10(Y1-Y2) = -70; To offset 100 from above, we should minimize this term to lowest possible negative value. Pick extreme numbers as 1 & 8 -> 10(1-8)= -70
(Z1-Z2) = -1; Excluding (1,8) taken by (Y2,Y2) and pair of consecutive numbers taken by (X1,X2) -> we are left with 1 pair of consecutive numbers -> Minimize it to -1;
Finally, $$N-M=100(X1-X2)+10(Y1-Y2)+(Z1-Z2) = 100-70-1=29.$$
--------------------------
PS: Once you allocate (1,8) to (Y1,Y2), it doesn't matter which pair of consecutive numbers you choose for (X1,X2) and (Z1, Z2). Either of them can take (6,7) or (2,3)
Both these combinations are valid and give minimum difference of 29: (316-287=29) OR (712-683=29)
_________________
Thanks,
Prashant Ponde
Tough 700+ Level RCs: Passage1 | Passage2 | Passage3 | Passage4 | Passage5 | Passage6 | Passage7
VOTE GMAT Practice Tests: Vote Here
PowerScore CR Bible - Official Guide 13 Questions Set Mapped: Click here
##### General Discussion
Intern
Joined: 25 Jun 2012
Posts: 7
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
07 Jul 2012, 22:59
5
3
In a problem like that you have to play with the numbers untill you realize a strategy.
We need to minimize the difference between the two numbers so we need to make the larger number as small as possible and the smaller number as large as possible so their difference is smallest. Looking at the available digits, the smallest difference in the hundreds is 1. So choose the hundreds to be say 3 and 2. For the remaining digits of the larger number, choose the smallest remaining digits ordered to make the number the smallest. For the smaller number, order the remaining digits to make it largest.
So I got: 316 and 287 with difference of 29.
Another possibility is if you choose 7 and 6 as hundreds: 712 and 683 with difference of 29.
Since 29 is the smallest answer given, it must be the right one.
Note, you don't always get 29. For example if you go with 8 and 7 for hundreds, you get 813 and 762 with difference of 49.
Manager
Status: Prevent and prepare. Not repent and repair!!
Joined: 13 Feb 2010
Posts: 207
Location: India
Concentration: Technology, General Management
GPA: 3.75
WE: Sales (Telecommunications)
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
25 Jul 2012, 00:30
1
this is so time consuming. Is there a shorter way??
_________________
I've failed over and over and over again in my life and that is why I succeed--Michael Jordan
Kudos drives a person to better himself every single time. So Pls give it generously
Wont give up till i hit a 700+
Director
Joined: 29 Nov 2012
Posts: 799
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
09 Jan 2013, 21:00
Is there any other approach to solve this question, its very time consuming to think of a solution for this question!
_________________
Click +1 Kudos if my post helped...
Amazing Free video explanation for all Quant questions from OG 13 and much more http://www.gmatquantum.com/og13th/
GMAT Prep software What if scenarios http://gmatclub.com/forum/gmat-prep-software-analysis-and-what-if-scenarios-146146.html
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8298
Location: Pune, India
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
09 Jan 2013, 21:24
fozzzy wrote:
Is there any other approach to solve this question, its very time consuming to think of a solution for this question!
GMAT rewards you for thinking. If you are taking too much time, it means you need to learn to focus and think faster (i.e. practice). Don't be surprised if you get such 'logic based' questions which don't have an 'algebra solution' at higher level.
_________________
Karishma
Veritas Prep GMAT Instructor
GMAT self-study has never been more personalized or more fun. Try ORION Free!
VP
Joined: 09 Jun 2010
Posts: 1032
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
24 Jan 2013, 02:35
there is only one way. pick numbers
first pick 1,and 2 as hundereds of the 2 number, then make the larger smallest, the smaller largest.
then pick 2 and 3 as the hundreds
then pick 3 and 4 as the undreds
stop, 29 is smallest in the 5 choices. pick 29 and go
this question will be appear at the late on the test. dont worry of this question.
if we fail on basic question which is on the first of the test, we die. failing on this question is no problem.
_________________
visit my facebook to help me.
on facebook, my name is: thang thang thang
VP
Joined: 09 Jun 2010
Posts: 1032
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
30 Jan 2013, 04:32
hardest, if we see this on the test date, we are at 50/51 already
the difference between hundreds must be 1.
there are many couple. 1 and 2, or 7 and 8
the bigger number must be as smallas possible
the smaller number must be as big as possibl
we should choose 87 as 2 last digit in the smaller number.
now, how to choose 1,2,3,6,
if we choose 1 and 2 as hundreds of the 2 number we have 63 as the last digits in the smaller.
if we choose 23 as hundered of the 2 numbers, we have 61 as the last digits in the smaller. this is worse than above case
we choose 1,2 as hundreds of the 2 numbers.
312
287
is the result
gmat is terrible when it make this question. but forget this question, we do not need to do this question to get 49/51
_________________
visit my facebook to help me.
on facebook, my name is: thang thang thang
Manager
Joined: 24 Nov 2012
Posts: 166
Concentration: Sustainability, Entrepreneurship
GMAT 1: 770 Q50 V44
WE: Business Development (Internet and New Media)
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
21 Apr 2013, 07:19
i dont think repetion is possible... hence it would 316 -287 = 29
_________________
You've been walking the ocean's edge, holding up your robes to keep them dry. You must dive naked under, and deeper under, a thousand times deeper! - Rumi
http://www.manhattangmat.com/blog/index.php/author/cbermanmanhattanprep-com/ - This is worth its weight in gold
Economist GMAT Test - 730, Q50, V41 Aug 9th, 2013
Manhattan GMAT Test - 670, Q45, V36 Aug 11th, 2013
Manhattan GMAT Test - 680, Q47, V36 Aug 17th, 2013
GmatPrep CAT 1 - 770, Q50, V44 Aug 24th, 2013
Manhattan GMAT Test - 690, Q45, V39 Aug 30th, 2013
Manhattan GMAT Test - 710, Q48, V39 Sep 13th, 2013
GmatPrep CAT 2 - 740, Q49, V41 Oct 6th, 2013
GMAT - 770, Q50, V44, Oct 7th, 2013
My Debrief - http://gmatclub.com/forum/from-the-ashes-thou-shall-rise-770-q-50-v-44-awa-5-ir-162299.html#p1284542
Senior Manager
Joined: 23 Oct 2010
Posts: 358
Location: Azerbaijan
Concentration: Finance
Schools: HEC '15 (A)
GMAT 1: 690 Q47 V38
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
21 Apr 2013, 21:57
Since we are asked to find the smallest integer, I began with option A
To get 9 as a unit digit we need 12 as the last 2 digits of one integer and 3 as the last digit of another integer
we need 8 as the tenth digit of the smaller integer.
so we have 712 as the 1st integer and 683 as the 2nd integer.
_________________
Happy are those who dream dreams and are ready to pay the price to make them come true
I am still on all gmat forums. msg me if you want to ask me smth
Manager
Joined: 12 Dec 2012
Posts: 218
GMAT 1: 540 Q36 V28
GMAT 2: 550 Q39 V27
GMAT 3: 620 Q42 V33
GPA: 2.82
WE: Human Resources (Health Care)
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
01 May 2013, 12:16
I understood the explanations here but could not figure out a takeaway for this problem .. what is the take away here?
_________________
My RC Recipe
http://gmatclub.com/forum/the-rc-recipe-149577.html
My Problem Takeaway Template
http://gmatclub.com/forum/the-simplest-problem-takeaway-template-150646.html
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8298
Location: Pune, India
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
02 May 2013, 10:34
1
TheNona wrote:
I understood the explanations here but could not figure out a takeaway for this problem .. what is the take away here?
The question is testing your logic skills in number properties. How do you make two 3 digit numbers such that they use different digits but are as close as possible to each other. So you start out with consecutive hundreds digits and so on...
Not every question on GMAT needs to test a defined sub heading in the Quant book. Sometimes, it will require you to develop your own logic. Though admittedly, some questions don't appear very often.
_________________
Karishma
Veritas Prep GMAT Instructor
GMAT self-study has never been more personalized or more fun. Try ORION Free!
Retired Moderator
Joined: 29 Oct 2013
Posts: 272
Concentration: Finance
GPA: 3.7
WE: Corporate Finance (Retail Banking)
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
11 May 2014, 08:59
1
This is one of the hardest questions I have seen from OG. Even OG marks it as 'Hard'. Should not it be tagged 700+ level instead of 600-700? Thanks moderators!
_________________
My journey V46 and 750 -> http://gmatclub.com/forum/my-journey-to-46-on-verbal-750overall-171722.html#p1367876
Math Expert
Joined: 02 Sep 2009
Posts: 49493
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
12 May 2014, 00:46
MensaNumber wrote:
This is one of the hardest questions I have seen from OG. Even OG marks it as 'Hard'. Should not it be tagged 700+ level instead of 600-700? Thanks moderators!
It is now. Thank you.
_________________
Retired Moderator
Joined: 29 Oct 2013
Posts: 272
Concentration: Finance
GPA: 3.7
WE: Corporate Finance (Retail Banking)
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
03 Jun 2014, 08:38
PraPon wrote:
Consider N = (100X1 + 10Y1+ Z1)
Consider M = (100X2 + 10Y2+ Z2)
N - M = 100(X1-X2) + 10(Y1-Y2) + (Z1-Z2)
Lets analyze these terms:-
100(X1-X2) = 100; We need to keep it minimum at 100 (i.e. X1-X2=1 with pair of consecutive numbers). We do not want it over 200 as it will increase the overall value.
10(Y1-Y2) = -70; To offset 100 from above, we should minimize this term to lowest possible negative value. Pick extreme numbers as 1 & 8 -> 10(1-8)= -70
(Z1-Z2) = -1; Excluding (1,8) taken by (Y2,Y2) and pair of consecutive numbers taken by (X1,X2) -> we are left with 1 pair of consecutive numbers -> Minimize it to -1;
Finally, $$N-M=100(X1-X2)+10(Y1-Y2)+(Z1-Z2) = 100-70-1=29.$$
--------------------------
PS: Once you allocate (1,8) to (Y1,Y2), it doesn't matter which pair of consecutive numbers you choose for (X1,X2) and (Z1, Z2). Either of them can take (6,7) or (2,3)
Both these combinations are valid and give minimum difference of 29: (316-287=29) OR (712-683=29)
ProPan, you beauty! This one looks the most efficient solution to me out of all different solutions I have seen so far on different forums. Thanks for sharing
_________________
My journey V46 and 750 -> http://gmatclub.com/forum/my-journey-to-46-on-verbal-750overall-171722.html#p1367876
Intern
Joined: 16 Sep 2014
Posts: 10
N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
10 Nov 2014, 23:13
1
1
This approach is fairly straightforward, derived from the GMATprep suggested answer:
To minimize the difference in the two numbers, we pick minimum difference in the hundreds digit which is 1. there are 4 combinations:
2-- | 3-- | 7-- | 8--
1-- | 2-- | 6-- | 7--
Next we write down the rest of the available digits for each combination in ascending order:
3,6,7,8 | 1,6,7,8 | 1,2,3,8 | 1,2,3,6
In each combination, our task is to minimize the difference between the two 2-digit numbers (tens and ones).
This can be achieved by choosing the first two available digits in ascending order for the greater number and last two available digits in reverse order for the smaller number.
For example, in the case 2-- , we put, 236 and in the case of 1--, we put 187.
Hope the reason is clear. this is because it will maximize the value of the smaller number and minimize the value of the greater number. hence, the difference is the minimum.
doing so, we get:
236 | 316 | 712 | 812
-187 | -287 | - 683 |-763
-------------------------------
49 | 29 | 29 | 49
Hence the answer is 29. Choice A.
Manager
Joined: 22 Aug 2014
Posts: 172
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
28 Feb 2015, 01:27
Interesting One!!!
I picked up 283 and 176 and diff was 8..However ,that was not in answer choice.
BSchool Forum Moderator
Joined: 05 Jul 2017
Posts: 495
Location: India
GMAT 1: 700 Q49 V36
GPA: 4
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
31 May 2018, 06:08
Hey Bunuel,
Can you post some similar questions like these on this thread?
_________________
Intern
Joined: 31 Mar 2017
Posts: 6
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, [#permalink]
### Show Tags
24 Sep 2018, 08:41
Thanks a lot!!
Re: N and M are each 3-digit integers. Each of the numbers 1, 2, &nbs [#permalink] 24 Sep 2018, 08:41
Display posts from previous: Sort by
# N and M are each 3-digit integers. Each of the numbers 1, 2,
## Events & Promotions
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
2018-09-25 13:13:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6055958867073059, "perplexity": 2990.357358699289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161638.66/warc/CC-MAIN-20180925123211-20180925143611-00103.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Directed_Walk
|
# Definition:Directed Walk
Let $G = \struct {V, A}$ be a directed graph.
A directed walk in $G$ is a finite or infinite sequence $\sequence {x_k}$ such that:
$\forall k \in \N: k + 1 \in \Dom {\sequence {x_k} }: \tuple {x_k, x_{k + 1} } \in A$
|
2019-09-16 04:40:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9755147099494934, "perplexity": 370.39978588335157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572484.20/warc/CC-MAIN-20190916035549-20190916061549-00099.warc.gz"}
|
http://math.stackexchange.com/questions/199133/contractibility-of-the-sphere-and-stiefel-manifolds-of-a-separable-hilbert-space
|
Contractibility of the sphere and Stiefel manifolds of a separable Hilbert space
Why are the sphere $$S=\lbrace |x|=1\rbrace$$ and the Stieffel manifolds of orthonormal $n$-frames $$V_n=\lbrace (x_1,\dots,x_n)\in S^n\mid i\neq j\Rightarrow\langle x_i|x_j\rangle=0\rbrace$$ of a infinite dimensional separable Hilbert space $\mathscr{H}$ contractible? I've read a proof of this about a year ago, but I can't find it, and I don't remember the argument.
-
Consider a Hilbert basis $(e_i)_{i\in \mathbb N\sqcup I}$ for $\mathscr H$, where we allow for the Hilbert space to be non-separable. Define a continuous linear operator $T$ on the Hilbert basis by setting $$\forall n\in\mathbb N,~Te_n=e_{n+1}\mathrm{~and~}\forall i\in I, Te_i=e_i$$ This defines a continuous isometric linear operator on $\mathscr H$. Furthermore, upon looking at the $\ell^2$ coefficient decomposition of a vector, one sees that for any non zero integer $p$, the only way $x$ and $T^p x$ can be colinear is if all coefficients of $x$ carried by the $(e_i)_{i\in\mathbb N}$ are $=0$, in which case $T^p x=x$.
Take $(x_1,\dots,x_p)$ linearly independent. The preceding remark shows that for any $a,b\in \mathbb R$ (or $\mathbb C$) that aren't simultaneously $=0$, the family $(ax_1+bT^p x_p,\dots, ax_p+bT^p x_p)$ is free. We now define a homotopy $H$ from $\mathrm{id}_{V_p}$ to a map that sends any orthonormal $p$-frame to an orthonormal $p$-frame orthogonal to $(e_1,\dots ,e_p)$: $$\begin{array}{rll} H: & [0,1]\times V_p & \to & V_p,\\ & (t,(x_1,\dots,x_p)) & \mapsto & \mathrm{GS}((1-t)x_1+tT^px_1,\dots,(1-t)x_p+tT^px_p) \end{array}$$ $\mathrm{GS}$ stads for the Gram Schmidt process applied to a free family. By the above discussion, this is well-defined and continuous, and ends at $V_p\to V_p, (x_i)\mapsto (T^px_i)$ so ends up heing orthogonal to $(e_1,\dots,e_p)$. We then follow this homotopy by a second homotopy from the subset of $V_p$ of all orthonormal $p$-frames orthogonal to $(e_1,\dots,e_p)$ (call this set $V_p'$) to $V_p$: $$\begin{array}{rll} H': & [0,1]\times V_p' & \to V_p,\\ & (t,(x_1,\dots,x_p)) & \mapsto \mathrm{GS}((1-t)x_1+te_1,\dots,(1-t)x_p+te_p)) \end{array}$$ This concludes the construction, and shows that $V_p$ is contractible.
|
2015-11-25 08:51:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9357854127883911, "perplexity": 127.78327680047985}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445033.85/warc/CC-MAIN-20151124205405-00063-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://www.aanda.org/articles/aa/olm/2009/30/aa11727-09/aa11727-09.html
|
Subscriber Authentication Point
Free Access
Issue A&A Volume 502, Number 3, August II 2009 969 - 979 The Sun https://doi.org/10.1051/0004-6361/200911727 15 June 2009
## Appendix A: Examples of inverted spectra
Figure A.1: Examples of observed ( black line) and best-fit spectra ( red) in Op. 001 ( 1st and 3rd row). The dash-dotted horizontal lines in QUV indicate three times the rms noise level, and the solid horizontal line the zero level. The vertical solid line denotes the rest wavelength. The 2nd and 4th row show the corresponding temperature stratifications of the magnetic component ( solid), the field-free component ( dash-dotted), and the HSRA atmosphere that is used as initial model ( dashed). Field strength and LOS inclination and their respective errors are given in the plot of Stokes I of 1564.8 nm ( upper left in each panel); the number of each profile is given in the upper left corner of each panel. Open with DEXTER
Figure A.2: Same as Fig. A.1 for TIP Op. 005. Open with DEXTER
Figure A.3: Polarization degree of 1564.8 nm for the profiles shown in Figs. A.1 ( upper two rows) and A.2 ( lower two rows). The dash-dotted and solid horizontal lines denote the inversion and final rejection threshold, respectively. Open with DEXTER
Figures A.1 and A.2 show several profiles taken from the first and second long-integration observation of 2008 May 21 (Op. 001 and Op. 005), respectively. The positions of the profiles are marked by consecutive numbers in Fig. 1. The profiles shown were selected to have a small polarization degree that in some cases was barely sufficient to meet the inversion threshold (e.g., profiles Nos. 6 and 7). Below the spectra, the temperature stratifications used in the generation of the best-fit spectra are shown. With 3 nodes in temperature, the SIR code can use a parabola for changing the stratification; the parabola shape appears quite prominent for many of the locations. We note, however, that the IR lines at 1.56 m are not sensitive to the temperature in the atmosphere above log (Cabrera Solana et al. 2005). Only one profile corresponds to a kG field (Fig. A.2, top middle, No. 8). Figure A.3 shows the polarization degree of 1564.8 nm for all profiles of the previous figures. Profile No. 9 exceeds the inversion threshold of 0.001 of near +750 m with a spike that is presumably not of solar origin, but is instead noise in the Stokes U profile. The final rejection threshold of 0.0014 is, however, only reached by signals clearly related to the Zeeman effect (multiple double or triple lobes).
#### Reliability of the inversion results.
As discussed in Sect. 3, we used a constant value for the magnetic field strength (B), inclination (), azimuth (), and the LOS velocity. This inversion setup cannot reproduce the antisymmetric Stokes Q or U or symmetric Stokes V profiles, which would require gradients in the magnetic field strength and the velocities along the LOS. The inversion was initialized with the same model atmosphere on all pixels (B=0.9 kG, deg), only the inclination being modified to 10 or 170 deg depending on the polarity. In the inversion process, the equal weight used for QUV in the calculation of naturally favors the component of higher polarization signal. For example, in profile no. 5 in Fig. A.1 the Q and U signals are larger than the V signal by almost an order of magnitude, leading to a better fit quality for Q and U than for V. In polarimetric data of low S/N, a difference of this order usually implies that the weaker signal is not seen at all.
SIR calculates an error estimate for the free fit parameters using the diagonal elements of the covariance matrix, expressed by the response functions (Bellot Rubio et al. 2000; Bellot Rubio 2003). The error estimate depends on the number of degrees of freedom in each variable; for parameters constant with optical depth thus a single value is returned. The error estimate, however, provides only information about the reliability of the best-fit solution for the corresponding -minimum inside the chosen inversion setup. The estimated errors in the inversion of the profiles shown in Figs. 4, A.1, and A.2 are noted on the Stokes I panel for the Fe I 1565.2 nm line. The average uncertainties in the calculated magnetic field strength and inclination angle given by SIR are G and deg, respectively. The values agree with a previous error estimate in Beck (2006) derived from a direct analysis of the profile shape of the 1.56 m lines (Table 3.2 on p. 47; G and deg).
## Appendix B: Calibration of Ltot to transversal flux
We tried to follow the procedure described in LI08 to calibrate the linear polarization signal into a transversal magnetic flux estimate that is independent of the inversion results. To reduce the influence of noise, LI08 first determine the preferred azimuth frame'', where the linear polarization signal is concentrated in Stokes Q. To achieve this, we determined the azimuth angle from the ratio of U to Q, and rotated the spectra correspondingly to maximize the Stokes Q signal. The scatter plot in Fig. B.1 compares the previously used total linear polarization, , with the corresponding as a measure of the linear polarization. The rotation of the spectra reduces the noise contribution by a constant amount, but the old and new values otherwise have a linear relationship with a slope close to unity.
We then averaged the rotated Q spectra over all spatial positions exceeding the polarization threshold for the inversion. The average Stokes Q spectrum was used as a spectral mask by LI08, but unfortunately their method fails for the infrared lines. The wavelengths around the line core have negative values in the average Q profile (Fig. B.2), which prevents to use it in the same way as in LI08. We thus used as defined above instead, which we understand to be equivalent to the approach of LI08 despite not using a (somewhat arbitrary) spectral mask.
The plot of versus transversal flux (Fig. 12, middle upper panel) showed considerable scatter that places the use of a single calibration curve in doubt. We thus not only tried to obtain a calibration curve, but also to quantify the effect of various parameters on the obtained relation. The upper part of Fig. B.3 shows calibration curves of versus field strength for different field inclinations . The uppermost curve for corresponds to the one used by LI08. With the assumption that the field inclination does not necessary equal , one already finds that one and the same value of can be obtained for a range of around 200-550 G in field strength. The same effect is shown in the middle part, where the magnetic flux, , was kept constant at Mx, B was varied, and was derived accordingly from . Again a range of around 200-500 G in B corresponds to the same value of . As a final test, we chose to investigate the influence of the temperature stratification on the resulting -value. We retained the magnetic flux, field strength and field inclination constant at ( Mx, 20 G, ), and synthesized spectra for different temperature stratifications. We used 10 000 temperature stratifications that were derived for the magnetic component in the inversion, and thus can be taken to be an estimate of the range of temperatures expected in the quiet Sun. The histogram of the resulting -values is displayed in the bottom part of Fig. B.3. The value of ranges from nearly zero up to 0.01, which also roughly corresponds to the scatter in in Fig. 12. We thus conclude that the largest contribution to the scatter comes from temperature effects. We remark that we used a magnetic filling factor of unity in all calculations. Any additional variation in the filling factor due to unresolved magnetic structures would increase the scatter in even more.
We conclude that the usage of a calibration curve for a derivation of transversal magnetic flux from or , regardless of the exact calculation of the wavelength integrated quantities, is not reliable for a solid estimate, mainly because the strong influence of the thermodynamical state of the atmosphere on the weak polarization signals.
Figure B.1: Scatter plot of the integrated Stokes Q signal in the preferred reference frame versus the total linear polarization without rotation. Solid line: unity slope; dashed line: unity slope with an offset of 0.0001. Open with DEXTER
Figure B.2: The average Stokes Q profile. Open with DEXTER
Figure B.3: Top: calibration curves from into field strength B for field inclinations from 10 to 90 deg ( bottom to top). The horizontal dotted line is at ; the solid part of it denotes a range in B that gives the same at different . Middle: versus field strength for constant magnetic flux. Dotted line and solid part as above for . Bottom: histogram of for constant flux but varying temperature stratifications T. The vertical line denotes the value resulting from the HSRA atmosphere model. Open with DEXTER
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
|
2019-10-15 21:58:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7884235382080078, "perplexity": 1136.016619750546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660323.32/warc/CC-MAIN-20191015205352-20191015232852-00149.warc.gz"}
|
http://www.statsmodels.org/stable/generated/statsmodels.multivariate.factor_rotation.promax.html
|
# statsmodels.multivariate.factor_rotation.promax¶
statsmodels.multivariate.factor_rotation.promax(A, k=2)[source]
Performs promax rotation of the matrix $$A$$.
This method was not very clear to me from the literature, this implementation is as I understand it should work.
Promax rotation is performed in the following steps:
• Determine varimax rotated patterns $$V$$.
• Construct a rotation target matrix $$|V_{ij}|^k/V_{ij}$$
• Perform procrustes rotation towards the target to obtain T
• Determine the patterns
First, varimax rotation a target matrix $$H$$ is determined with orthogonal varimax rotation. Then, oblique target rotation is performed towards the target.
Parameters
Anumpy matrix
non rotated factors
kfloat
parameter, should be positive
References
[1] Browne (2001) - An overview of analytic rotation in exploratory factor analysis
[2] Navarra, Simoncini (2010) - A guide to empirical orthogonal functions for climate data analysis
|
2020-01-22 19:44:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7525743842124939, "perplexity": 5078.477885066886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607407.48/warc/CC-MAIN-20200122191620-20200122220620-00220.warc.gz"}
|
https://www.physicsforums.com/threads/strange-result-from-bianchi-identity-me-spot-the-error.1001731/
|
# Strange result from Bianchi identity, me spot the error!
• I
gastkram
I have accidentally derived a very wrong result from the contracted Bianchi identity and I can't see where the error is. I'm sure it's something obvious, but I need someone to point it out to me as I've gone blind. Thanks!
$$\nabla_a \left( R^{ab}-\frac{1}{2}g^{ab} R\right)=0.$$
Now rewrite with the metric so that we get the same factor R in both terms,
$$\nabla_a \left( g^{ac}g^{bd}R_{cd}-\frac{1}{2}g^{ab}g^{cd} R_{cd}\right)=0,$$
then factorize;
$$\nabla_a \left( \left(g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} \right)R_{cd}\right)=0.$$
Now apply the product rule
$$\nabla_a \left( g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} \right) R_{cd} + \left(g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} \right) \nabla_a R_{cd}=0.$$
By metric compatibility the first term is zero, so we conclude that ##g^{ac}g^{bd}=\frac{1}{2} g^{ab}g^{cd}##
or the covariant derivative of the Ricci tensor is zero in general. The first option means that the metric is just zero and so makes the identity we started with trivial. The second option is just wrong.
What went wrong? I'm going nuts.
By metric compatibility the first term is zero, so we conclude that ##g^{ac}g^{bd}=\frac{1}{2} g^{ab}g^{cd}##
or the covariant derivative of the Ricci tensor is zero in general. The first option means that the metric is just zero and so makes the identity we started with trivial. The second option is just wrong.
Hmm, your working up to here looks right to me at least but I think ##g^{ac} g^{bd} - \frac{1}{2} g^{ab} g^{cd} = 0## does not imply that the metric is zero, i.e. the indices don't match?
gastkram
Hmm, your working up to here looks right but I think ##g^{ac} g^{bd} - \frac{1}{2} g^{ab} g^{cd} = 0## does not imply that the metric is zero, i.e. the indices don't match?
I mean that it is zero because we get
$$g_{ac}g^{ac}g^{bd}=\frac{1}{2}g^{ab}g_{ac}g^{cd} \implies \delta_a^a g^{bd}=\frac{1}{2}g^{b}_{\phantom{b}c}g^{cd}\implies D g^{bd}=\frac{1}{2}g^{bd},$$
but the dimension is not usually one half .
Maybe the problem is that the entire sum is zero,$$\left(g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} \right) \nabla_a R_{cd}=0$$and you can't pull out the coefficients and set each to zero separately, ##g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} = 0##, because they're included in the summation [and three of the indices are dummy indices]? It's to say that each individual term in the sum is not necessarily zero.
gastkram
Maybe the problem is that the entire sum is zero,$$\left(g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} \right) \nabla_a R_{cd}=0$$and you can't pull out the coefficients ##g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} = 0## and set each to zero separately, because it's included in the summation [and three of the indices are dummy indices]?
Oh, that may be it. Let me think about that for a minute.
Last edited:
gastkram
Maybe the problem is that the entire sum is zero,$$\left(g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} \right) \nabla_a R_{cd}=0$$and you can't pull out the coefficients and set each to zero separately, ##g^{ac}g^{bd}-\frac{1}{2}g^{ab}g^{cd} = 0##, because they're included in the summation [and three of the indices are dummy indices]? It's to say that each individual term in the sum is not necessarily zero.
Yes, I guess I claimed that every coefficient has to be zero but that's not true. Mystery solved (I hope)! Thanks!
vanhees71 and etotheipi
Yeah, that seems right!
vanhees71
gastkram
Yeah, that seems right!
Well, I was right that I had done something obviously wrong
vanhees71
haha, well contracted indices also sometimes make me feel like a ... dummy.
vanhees71 and Infrared
Mentor
I mean that it is zero because we get
$$g_{ac}g^{ac}g^{bd}=\frac{1}{2}g^{ab}g_{ac}g^{cd} \implies \delta_a^a g^{bd}=\frac{1}{2}g^{b}_{\phantom{b}c}g^{cd}\implies D g^{bd}=\frac{1}{2}g^{bd},$$
but the dimension is not usually one half .
The operation you are doing here does not look correct. You can't contract the same index twice, but that is what you are doing by introducing the factor ##g_{ac}## on both sides. The ##a## and ##c## indexes are already contracted; they're not free. The only free index in the equation is ##b##, so that's the only one available to contract anything with.
@PeterDonis I think that was just following from considering ##g^{ac} g^{bd} = \frac{1}{2} g^{ab} g^{cd}## as an equation in four free indices, which we eventually clocked in #4 that you can't do.
Mentor
I think that was just following from considering ##g^{ac} g^{bd} = \frac{1}{2} g^{ab} g^{cd}## as an equation in four free indices, which we eventually clocked in #4 that you can't do.
Yes, agreed. The "four free indices" part is the issue--only one of those indices, ##b##, is actually free. The others, as you point out in #4, are dummy summation indices--or, as I put it, they are already contracted, so you can't contract them again.
vanhees71 and etotheipi
Gold Member
2022 Award
I mean that it is zero because we get
$$g_{ac}g^{ac}g^{bd}=\frac{1}{2}g^{ab}g_{ac}g^{cd} \implies \delta_a^a g^{bd}=\frac{1}{2}g^{b}_{\phantom{b}c}g^{cd}\implies D g^{bd}=\frac{1}{2}g^{bd},$$
but the dimension is not usually one half .
I don't understand your manipulations. On the left-hand side you simply have ##g_{ac} g^{ac}=\delta_a^a=4##, i.e.,
$$g_{ac} g^{ac} g^{b d}=4 g^{b d}.$$
On the right-hand side you get
$$\frac{1}{2} g^{ab} g_{ac} g^{cd}=\frac{1}{2} \delta_c^b g^{cd}=\frac{1}{2} g^{bd}.$$
So why do you think both sides were equal?
Mentor
On the left-hand side you simply have ##g_{ac} g^{ac}=\delta_a^a=4##
And, as already noted, this is already wrong, since both the ##a## and ##c## indexes on ##g^{ac}## were already contracted, so you can't contract them again with ##g_{ac}##.
vanhees71
|
2023-03-24 05:58:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9361128807067871, "perplexity": 834.004649559458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00240.warc.gz"}
|
https://www.projecteuclid.org/euclid.bj/1352727817
|
## Bernoulli
• Bernoulli
• Volume 18, Number 4 (2012), 1405-1420.
### Convergence of the largest eigenvalue of normalized sample covariance matrices when $p$ and $n$ both tend to infinity with their ratio converging to zero
#### Abstract
Let ${\mathbf{X}}_{p}=({\mathbf{s}}_{1},\ldots,{\mathbf{s}}_{n})=(X_{ij})_{p\times n}$ where $X_{ij}$’s are independent and identically distributed (i.i.d.) random variables with $EX_{11}=0$, $EX_{11}^{2}=1$ and $EX_{11}^{4}<\infty$. It is showed that the largest eigenvalue of the random matrix ${\mathbf{A}}_{p}=\frac{1}{2\sqrt{np}}({\mathbf{X}}_{p}{\mathbf{X}}_{p}^{\prime}-n{\mathbf{I}}_{p})$ tends to $1$ almost surely as $p\rightarrow\infty$, $n\rightarrow\infty$ with $p/n\rightarrow0$.
#### Article information
Source
Bernoulli, Volume 18, Number 4 (2012), 1405-1420.
Dates
First available in Project Euclid: 12 November 2012
https://projecteuclid.org/euclid.bj/1352727817
Digital Object Identifier
doi:10.3150/11-BEJ381
Mathematical Reviews number (MathSciNet)
MR2995802
Zentralblatt MATH identifier
1279.60012
#### Citation
Chen, B.B.; Pan, G.M. Convergence of the largest eigenvalue of normalized sample covariance matrices when $p$ and $n$ both tend to infinity with their ratio converging to zero. Bernoulli 18 (2012), no. 4, 1405--1420. doi:10.3150/11-BEJ381. https://projecteuclid.org/euclid.bj/1352727817
#### References
• [1] Adamczak, R., Litvak, A.E., Pajor, A. and Tomczak-Jaegermann, N. (2010). Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles. J. Amer. Math. Soc. 23 535–561.
• [2] Bai, Z.D. and Yin, Y.Q. (1988). Convergence to the semicircle law. Ann. Probab. 16 863–875.
• [3] Bai, Z.D. and Yin, Y.Q. (1993). Limit of the smallest eigenvalue of a large-dimensional sample covariance matrix. Ann. Probab. 21 1275–1294.
• [4] El Karoui, N. (2008). Operator norm consistent estimation of large-dimensional sparse covariance matrices. Ann. Statist. 36 2717–2756.
• [5] El Karoui, N. (2003). On the largest eigenvalue of Wishart matrices with identity covariance when $n$, $p$ and $p/n$ tend to infinity. Preprint.
• [6] Fan, K. (1951). Maximum properties and inequalities for the eigenvalues of completely continuous operators. Proc. Natl. Acad. Sci. USA 37 760–766.
• [7] Geman, S. (1980). A limit theorem for the norm of random matrices. Ann. Probab. 8 252–261.
• [8] Jonsson, D. (1982). Some limit theorems for the eigenvalues of a sample covariance matrix. J. Multivariate Anal. 12 1–38.
• [9] Marčenko, V.A. and Pastur, L.A. (1967). Distribution for some sets of random matrices. Math. USSR-Sb. 1 457–483.
• [10] Wigner, E.P. (1958). On the distribution of the roots of certain symmetric matrices. Ann. of Math. (2) 67 325–327.
• [11] Yin, Y.Q., Bai, Z.D. and Krishnaiah, P.R. (1988). On the limit of the largest eigenvalue of the large-dimensional sample covariance matrix. Probab. Theory Related Fields 78 509–521.
|
2019-09-20 19:12:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731370568275452, "perplexity": 1321.5220246210727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574058.75/warc/CC-MAIN-20190920175834-20190920201834-00529.warc.gz"}
|
https://direct.mit.edu/netn/article/4/1/134/95803/Core-language-brain-network-for-fMRI-language-task?searchresult=1
|
Functional magnetic resonance imaging (fMRI) is widely used in clinical applications to highlight brain areas involved in specific cognitive processes. Brain impairments, such as tumors, suppress the fMRI activation of the anatomical areas they invade and, thus, brain-damaged functional networks present missing links/areas of activation. The identification of the missing circuitry components is of crucial importance to estimate the damage extent. The study of functional networks associated with clinical tasks but performed by healthy individuals becomes, therefore, of paramount concern. These “healthy” networks can, indeed, be used as control networks for clinical studies. In this work we investigate the functional architecture of 20 healthy individuals performing a language task designed for clinical purposes. We unveil a common architecture persistent across all subjects under study, that we call “core” network, which involves Broca’s area, Wernicke’s area, the premotor area, and the pre-supplementary motor area. We study the connectivity of this circuitry by using the k-core centrality measure, and we find that three of these areas belong to the most robust structure of the functional language network for the specific task under study. Our results provide useful insights on primarily important functional connections.
Neurosurgeons employ language fMRI to localize important language areas for patients with brain impairment. Yet, brain pathologies (e.g., brain tumors, strokes, epilepsy) affect functional connectivity by disrupting functional links and suppressing the activation of brain areas. Thus, although clinical tasks are designed to guarantee robust activation, the functional connectivity of patients with brain pathologies is ultimately damaged by brain impairments. To better quantify the damage produced by the brain pathology on the functional connectivity, it is paramount to have, as a benchmark, functional networks of healthy individuals who perform a task for clinical cases. Our findings identify a group of functional regions of interest linked together in a functional circuitry that have a decisive role for the language task used in clinical applications.
Broca’s area (BA) and Wernicke’s area (WA) have long been recognized as essential language centers. Studies of aphasic patients have shown that damage to BA and WA causes loss of ability to produce speech (expressive aphasia) and difficulty understanding language (receptive aphasia), respectively (Dronkers, Plaisant, Iba-Zizen, & Cabanis, 2007; Wernicke, 1970). Further evidence has shown that other secondary and tertiary anatomical brain areas are also involved in language Friederici (2011), including the pre-supplementary motor area (pre-SMA; Hertrich, Dietrich, & Ackermann, 2016), the premotor area (preMA; Duffau et al., 2003), and the basal ganglia (Booth, Wood, Lu, Houk, & Bitan, 2007). Despite this evidence, a full characterization of the language network is still debated (Friederici, Chomsky, Berwick, Moro, & Bolhuis, 2017; Fedorenko & Kanwisher, 2009).
Functional MRI (fMRI) has been largely used to investigate the blood-oxygen-level dependent (BOLD) activation of the human brain, for both clinical and research purposes. Although it cannot fully resolve the issue of “functional specialization” of brain regions by itself, it sheds light on which regions are engaged in certain cognitive processes. Therefore fMRI allows us to constrain hypotheses on the structure of the language network.
Language has been investigated using both resting-state fMRI (rs-fMRI) and tasked-based fMRI (tb-fMRI). The former studies brain activation of subjects at rest (Lee, Smyser, & Shimony, 2013), whereas tb-fMRI delineates brain areas functionally involved in the performance of a specific task (Bookheimer, 2002). Task-based fMRI is task-dependent, that is, different language tasks may activate different areas involved in language function (Xiong et al., 2000). Consequently, clinical studies employ a specific class of language tasks that have been shown to produce robust activation in individual participants and thus facilitate the localization of the language-sensitive cortex (Brennan et al., 2007; Ramsey, Sommer, Rutten, & Kahn, 2001).
In this paper we analyze fMRI scans of 20 healthy individuals who perform the same language task designed for clinical purposes. From the correlation of the BOLD signal we construct the functional connectivity network for each subject, which is standardly employed to investigate statistical interdependencies among brain regions (Bullmore & Sporns, 2009; Hermundstad et al., 2013; Gallos, Makse, & Sigman, 2012). We then employ graph theory to study the networks’ properties as successfully done in Del Ferraro et al. (2018) to investigate memory formation. The motivation for this study is to use the resulting functional connectivity of these healthy individuals as a benchmark for clinical study, as we explain next. We employ a language clinical task because we are interested in studying the fMRI activation associated with this specific type of task. We employ healthy subjects and not patients with brain impairments because we want to study the fMRI activation without any interference that might arise because of the brain impairment. Brain pathologies (e.g., brain tumors, Wang et al., 2013; strokes, Tombari et al., 2004; epilepsy, Rosenberger et al., 2009) indeed affect functional connectivity by disrupting functional links and reducing the fMRI activation of brain areas (e.g., the neurovascular decoupling effect due to brain tumor; Aubert, Costalat, Duffau, & Benali, 2002). The reconstruction of the functional connectivity in clinical cases, therefore, is influenced by the presence of brain pathology (Wang et al., 2013). In other words, the functional connectivity of a patient with a pathology such as a brain tumor, for instance, presents missing links and missing fMRI active areas compared with the healthy case, for the same specific task. To better understand what functional damage was produced by the brain impairment, it is important to have, as a benchmark, functional networks of healthy individuals performing the same language task normally used for clinical cases. In this way, using clinical language tasks performed by healthy subjects, we can study functional networks associated with clinical tasks without perturbations that arise from brain impairments. These functional networks can be used as benchmarks for other studies that use the same type of task but employ patients with brain damage. The comparison between a healthy control and a patient’s functional network relative to the same task could in principle establish what is the damage produced by the brain impairment on the functional network and might, among others, guide tumor resection to preserve functional links.
Motivated by these considerations, we investigated which is the language functional architecture shared among healthy subjects, that is, the functional subnetwork that persists in each analyzed individual beyond the intersubject variability. This architecture is indicative of a core structure for the language task under study shared across individuals. Core architectures have been identified in other contexts (Bassett et al., 2013), but very little is known about the core for language tasks (Chai, Mattar, Blank, Fedorenko, & Bassett, 2016) and its investigation is one of the main goals of our study.
Furthermore, we aim to uncover the functional connectivity of the subdivisions of the Broca’s area (pars-opercularis, op-BA, and pars-triangularis, tri-BA, i.e., Brodmann area 44 and 45 respectively), which plays a pivotal role in language function (Dronkers et al., 2007; Friederici, 2011). Previous studies based on fMRI showed that BA’s subdivisions perform different functions in language processing. Newman, Just, Keller, Roth, and Carpenter (2003) showed that tri-BA is more implicated in thematic processing whereas op-BA is more involved in syntactic processing. Studies based on transcranial magnetic stimulation have shown that op-BA is more specialized in phonological tasks and tri-BA more in semantic tasks (Devlin, Matthews, & Rushworth, 2003; Gough, Nobre, & Devlin, 2005; Nixon, Lazarova, Hodinott-Hill, Gough, & Passingham, 2004). Patients who show speech impairment often have direct damage to the Broca’s area. Thus, understanding how BA subdivisions are functionally wired to other brain regions in healthy controls may be used for comparison in some clinical cases and could potentially be of help to better clarify the effect of brain pathologies on this decisive language area.
From our analysis we find that the functional architecture shared by most of the subjects under study wires together Broca’s area (op-BA and tri-BA), Wernicke’s area, the pre-supplementary motor area, and the premotor area. By investigating network properties at the subject level we find that, in each individual functional network, these areas belong to an innermost core, more specifically the maximum k-core of the functional connectivity, which is a robust and highly connected substructure of the functional architecture. The k-core measure has received vast attention in network analysis since it provides a topological notion of the structural skeleton of a circuitry (Kitsak et al., 2010; Pittel, Spencer, & Wormald, 1996; Rubinov & Sporns, 2010; Dorogovtsev, Goltsev, & Mendes, 2006). More recently, the maximum k-core has been related to the stability of complex biological systems (Morone, Del Ferraro, & Makse, 2019) and of resilient functional structures in the brain (Lucini, Del Ferraro, Sigman, & Makse, 2019). Our results demonstrate that the functional architecture that persists beyond intersubject variability is part of the maximum k-core structure, an innermost highly connected subnetwork, associated with a system’s resilience and stability (Morone et al., 2019).
Overall, our findings identify a group of functional regions of interest (fROIs) linked together in a functional circuitry that play a decisive role for the language task used in clinical applications.
The study was approved by the Institutional Review Board and an informed consent was obtained from each subject. The study was carried out according to the declaration of Helsinki. Twenty healthy right-handed adult subjects (13 males and 7 females; age range 36 years, mean = 36.6; SD = 11.56) without any neurological history were included.
### Functional Task
For the fMRI task, all subjects performed a verbal fluency task using verb generation in response to auditory nouns. During the verb generation task, subjects were presented with a noun (for example, baby) by oral instruction and then asked to generate action words (for example, cry, crawl) associated with the noun. Four nouns were displayed over six stimulation epochs, with each epoch lasting 20 s, which allowed for a total of 24 distinct nouns to be read over the entire duration. Each epoch consisted of a resting period and a task period (see BOLD activation in Figures 1A and1B). In order to avoid artifacts from jaw movements, subjects were asked to silently generate the words. Brain activity and head motion were monitored using Brainwave software (GE, Brainwave RT, Medical Numerics, Germantown, MD), allowing real-time observation.
Figure 1.
Activation map for a representative subject. BOLD signal for a nonactive and active voxel are shown respectively in panels A and B together with the smoothed boxcar language model, which depicts the auditory stimulus. The black curve represents BOLD signal, while the red curve represents the smoothed boxcar task model design. The red curve’s peaks represent the continuous stimuli (verb task), presented for a certain period of time (10 s), the “task” or “on” period. The red curve’s troughs represent the rest period (no task) for the participant, which lasted 10 s, (C) 3D visualization of the brain with fMRI active areas and corresponding p values.
Figure 1.
Activation map for a representative subject. BOLD signal for a nonactive and active voxel are shown respectively in panels A and B together with the smoothed boxcar language model, which depicts the auditory stimulus. The black curve represents BOLD signal, while the red curve represents the smoothed boxcar task model design. The red curve’s peaks represent the continuous stimuli (verb task), presented for a certain period of time (10 s), the “task” or “on” period. The red curve’s troughs represent the rest period (no task) for the participant, which lasted 10 s, (C) 3D visualization of the brain with fMRI active areas and corresponding p values.
Close modal
### Data Acquisition
A GE 3T scanner (General Electric, Milwaukee, Wisconsin, USA) and a standard quadrature head coil was employed to acquire the MR images. Functional images covering the whole brain were acquired using a T2*-weighted gradient echo planar imaging sequence (repetition time, TR/echo time, TE = 4,000/40 ms; slice thickness = 4.5 mm; matrix = 128 × 128; FOV = 240 mm). Functional matching axial T1-weighted images (TR/TE = 600/8 ms; slice thickness = 4.5 mm) were acquired for anatomical coregistration purposes. Additionally, 3D T1-weighted SPGR (spoiled gradient recalled) sequences (TR/TE = 22/4 ms; slice thickness = 1.5 mm; matrix = 256 × 256) covering the entire brain were acquired.
### Data Processing
Functional MRI data were processed and analyzed using the software program Analysis of Functional NeuroImages (AFNI; Cox, 1996). Head motion correction was performed using 3D rigid-body registration. The first volume was selected to register all other volumes. The first volume was chosen because it was acquired right next to the anatomical scan. During the registration, the motion profile was saved and during the statistical analysis any voxels highly correlated with the profile were regressed out. Spatial smoothing was applied to improve the signal-to-noise ratio using a Gaussian filter with 4 mm full width of half maximum. Corrections for linear trend and high-frequency noise were also applied. To obtain the activation map, the rectangular train representing the single-task (verb generation) block design is convolved with a canonical hemodynamic response function (HRF; see the red curve in Figures 1A and 1B). This time series is one regressor in the general linear model where the preprocessed BOLD response is to fit to this as well as a baseline (intercept). The BOLD signal is represented by the black curve in Figures 1A and 1B). The test statistics for the activation maps are determined by cross-correlation analysis within AFNI software. They were generated in the individual native space at a minimum threshold of p < 0.0001 to identify activated voxels set by a neuroradiologist (see Figure 1C, for a representative subject).
The individual voxel threshold level (uncorrected p < 0.0001) was set to minimize the contribution of false positives that could be caused by stimulus-correlated head motion and/or random noise fluctuation. The level was set for all subjects. We looked at FDR (q value calculated in AFNI software) to control false positives among the detected voxels, and the FDR corrected is q < 0.001. We verified that the activated areas identified and considered in our graph theory analysis are significant.
### Network Construction
The following sections describe the functional network construction. In the first subsection, we first describe how to create, from the fMRI signal of the active voxels, a brain network for each individual separately. The second subsection discusses the group analysis or how we obtain, from the individual brain networks, a common architecture that unveils a persistent circuitry across all the single-subject brain networks.
#### Individual brain network construction.
For each subject we construct a functional network. This network can be seen at two different scales or levels: (a) at the voxel level and (b) at the fROI level, as we explain in more detail below.
At the voxel level, active voxels in the individual activation map (p < 0.0001) define the nodes of our functional network, where a voxel is the lowest resolution measured by fMRI. Functional links are inferred by thresholding pairwise Pearson correlations (see Equation 1) between a pair of voxels, as standard in the literature (Bullmore & Sporns, 2009; Gallos et al., 2012; Hermundstad et al., 2013).
The pairwise correlation is defined as follows:
$Cij=〈xixj〉−〈xi〉〈xj〉(〈xi2〉−〈xi〉)2(〈xj2〉−〈xj〉)2,$
(1)
where xi is a vector encoding the fMRI time response of voxel i and 〈⋅〉 indicates a temporal average.
Accordingly, pairs of voxels with correlation above a fixed threshold are connected by a link (Bullmore & Sporns, 2009; Del Ferraro et al., 2018; Lucini et al., 2019). The threshold is an absolute threshold, which means we pick a reference value of the correlation as the smallest and assign a zero to any value below. The link weight is given by the correlation strength as defined above in Equation 1. Nearby active voxels are grouped together based on each subject’s individual anatomy and are considered part of the same fROI. Figure 2, upper panel, shows a realization of the voxel-level functional network for a representative subject, where voxels that are part of the same fROI are colored equally.
Figure 2.
Two visualizations of the individual functional network. The figure shows functional networks for a representative subject relative to the fMRI active brain areas during the language task. Upper panel: voxel-level network. Each node in the network represents a voxel, each link connects a pair of voxels in different brain modules, and it is indicative of functional interdependency. Links connecting voxels within the same brain module are not visible but exist. Lower panel: fROI-level network for the same voxel-level architecture shown in the upper panel. Voxels belonging to the same anatomical region are grouped into an fROI, represented as a node in the network. Node’s size is proportional to the number of voxels in the fROI. Colored borders have no meaning and are used only for illustrative purpose. Each link’s thickness connecting two fROIs is proportional to the sum of link’s weight connecting all the voxels in the two fROI (exact definition given in Equation 2).
Figure 2.
Two visualizations of the individual functional network. The figure shows functional networks for a representative subject relative to the fMRI active brain areas during the language task. Upper panel: voxel-level network. Each node in the network represents a voxel, each link connects a pair of voxels in different brain modules, and it is indicative of functional interdependency. Links connecting voxels within the same brain module are not visible but exist. Lower panel: fROI-level network for the same voxel-level architecture shown in the upper panel. Voxels belonging to the same anatomical region are grouped into an fROI, represented as a node in the network. Node’s size is proportional to the number of voxels in the fROI. Colored borders have no meaning and are used only for illustrative purpose. Each link’s thickness connecting two fROIs is proportional to the sum of link’s weight connecting all the voxels in the two fROI (exact definition given in Equation 2).
Close modal
We define fROIs within each subject individually, based on the activation and anatomy of the specific subject (Lucini et al., 2019; Del Ferraro et al., 2018; Fedorenko, Hsieh, Nieto-Castañón, Whitfield-Gabrieli, & Kanwisher, 2010). For instance, all the active voxels in Brodmann area 22 of the superior temporal lobe define the Wernicke’s area fROI. The reason for choosing individual-based fROIs is that group-based ROI-level analysis suffers from intersubject variability in the location of activation. In contrast, individual-subject-based fROI analysis can reveal greater functional specificity (Fedorenko et al., 2010). Furthermore, working in individual native space prevents the propagation of errors due to coregistration to universal ATLAS.
At the fROI level or module level a node represents an entire fROI, that is, a group (collection) of nearby active voxels in the spatially (anatomically) proximate area. Hereafter, we might use the word “module” or brain “region” as substitute of the word fROI. At this level, a functional link connects two fROIs if and only if there exists at least one link, at the voxel level, between a pair of voxels in the two fROIs. The functional link’s weight between two fROIs i and j (Wij) is defined as the sum of the number of links connecting pairs of voxels between the two fROIs, normalized by the sum of the two fROIs’ size:
$Wij=#linksconnectingi↔jsize(fROIi)+size(fROIj).$
(2)
For each individual, we then normalize eachWij by the value of the largest W for that individual (Wmax):
$W~ij=WijWmax,for all i and j fROIs.$
(3)
In this way the link’s weight scale is the same across subjects (see Supplementary Table S1) and the maximum weight is $W~=1$ in each individual. Figure 2, lower panel, illustrates the functional network at the fROI level for a representative subject.
For each individual, we are interested in uncovering the functional architecture of the subdivision of the Broca’s area, that is, tri-BA and op-BA, which correspond to all active voxels in Brodmann area 44 and 45, respectively. Each of these subareas has been associated with different language processes in previous studies (Devlin et al., 2003; Gough et al., 2005; Newman et al., 2003; Nixon et al., 2004). Through our analysis we aim to find out the specificity of their functional connectivity, to unveil whether their different engagement in language processing may be associated with a different functional wiring with the rest of the brain. Thus, when building the individual functional network, we group the active voxels of the BA into two different and separate fROIs: op-BA and tri-BA (see Figure 2).
We named the fROIs according to their main anatomical boundaries as follows. We retained the classical designations of BA (Brodmann area 44–45, inferior frontal gyrus) and WA (Brodmann area 22, superior temporal gyrus), as these designations still predominate in neurosurgery, which dominates clinical practice (Friederici, 2011). We defined the ventral premotor area (v-preMA) as the ventral portion of the premotor cortex, which includes the inferior part of Brodmann area 6, centered on the posteriormost portion of the middle frontal gyrus (MFG; Friederici, 2011). The superior portion of Brodmann area 6 was considered dorsal premotor area (d-preMA). The anteriormost part of the middle frontal gyrus was identified as anterior middle frontal gyrus (aMFG). The pre-SMA was defined within the medial frontal cortex, at the level of Brodmann area 6 (Nachev, Kennard, & Husain, 2008). The precentral gyrus was identified with Brodmann area 4, the supramarginal gyrus was identified with Brodmann area 40 and angular gyrus with Brodmann area 39 (Friederici, 2011). The deep opercular cortex (DOC) included the innermost portion of the frontal operculum (Friederici, 2011).
The visual and the auditory cortex, which are active areas that support nonlinguistic processing, were excluded from the analysis (Fedorenko et al., 2010; Fedorenko & Kanwisher, 2009). These areas are indeed activated because the subject is presented with auditory stimuli and may keep the eyes open.
The same functional network construction as described above is carried over for all 20 subjects individually, both at the voxel and the fROI level. Next, we carry out a group analysis to identify the common functional network shared across individuals, beyond the intersubject variability, as described in following section.
#### Common network construction across subjects.
Our interest in studying functional networks for single individuals performing language tasks is aimed at uncovering functional architectures that are persistent across healthy subjects and could be useful and informative when dealing with clinical cases. Individual functional networks have innate subject variability (e.g., one subject activates in one specific area or has a functional link while another does not). Therefore, after we reconstructed the individual functional networks, we performed a group analysis at the fROI level by investigating which set of links and brain areas is persistent across subjects or, in other words, which functional subarchitecture is common among all the individuals.
This functional architecture is informative of which areas and functional links persist beyond the intersubject variability, and therefore it represents a language core structure for the specific language task under study. Accordingly, surgical intervention, as for instance tumor resection, should operate by preserving such core structure existing across healthy controls. In addition, functional damage to this structure due to brain pathologies—and observed from the functional connectivity of the patient—may be informative of the damage extent (e.g., a missing functional link in the core may signify a larger harm than a missing connection between more peripheral areas not in the core). We name this most persistent functional architecture across subjects, at the fROI level, common network. This common architecture is defined retaining a pair of fROIs and a functional link connecting them only if these areas and link are present across subjects.
The weight of the functional link connecting two fROIs i and j in the common network ($WijC$) is defined as the average of the $W~ij$ connecting those fROIs across subjects:
$WijC=1N∑l=1NW~ij(l),$
(4)
where N is the total number of individuals.
We report and discuss the results of this quantitative analysis in the following sections.
### Individual Networks
For each individual we observe fMRI activation in both hemispheres, however, left dominance is clearly observed, as expected since all the subjects are right-handed (Isaacs, Barr, Nelson, & Devinsky, 2006; Knecht et al., 2000). The number of left hemisphere areas of activation is greater and in most cases their frequency of activation is greater as well.
Active fROIs across subjects include the following, in alphabetic order: angular gyrus (L), Broca’s area (L; op-BA and tri-BA), Broca’s area (R), caudate (L and R), deep opercular cortex (L and R; Friederici, 2011), aMFG (L and R), precentral gyrus (L and R), ventral and dorsal preMA (L), ventral preMA (R), pre-SMA, supramarginal gyrus (L and R), and Wernicke’s area (L and R). Detailed information on the frequency of activation of each area across subjects is summarized in Supporting Information Table S2. In the following, for brevity, we will refer to left hemisphere brain areas simply with the name of the areas, omitting the specification (L).
The functional network for a representative subject at both the voxel and the fROI level is shown in Figure 2. All the single subjects’ functional connectivity at the fROI level for each of the 20 healthy individuals considered in our study are shown in Supporting Information Figure S1, and all the connectivity values between pairs of fROIs are reported in Supporting Information Table S1.
We observe that, overall, the preMA is the most connected area across subjects, in terms of connectivity weight. In 8 over 20 individuals the strongest functional connection is between preMA–op-BA and in 7 over 20 cases it is between preMA–pre-SMA. In total, the preMA turns out to have the strongest connection with one of the other areas in 17 out of 20 subjects; in 3 cases the strongest functional connection is between op-BA and tri-BA. For further details on the connectivity of each area, see Supporting Information Table S1.
Wernicke’s area is known to structurally connect to BA through the arcuate fasciculus, a bundle of axons linking the inferior frontal gyrus with the superior temporal gyrus. We investigated the functional connections of the BA subdivisions with the rest of the brain and, with focus on WA, at the fROI level, we compared how frequent op-BA connects to WA versus how frequent tri-BA connects to WA. We find that op-BA connects to WA in 18 out of 20 subjects (90% of the cases), while tri-BA connects to WA in 15 out of 20 individuals (75% of the cases). In terms of connectivity weight, in 10 out of 20 subjects (50%) WA connects more strongly to op-BA than to tri-BA, whereas in 7 subjects (35%) we have the opposite finding, tri-BA connects more to WA than the opercular counterpart. In 2 individuals the functional connectivity of op-BA and tri-BA to WA is, instead, approximately the same. One subject does not show WA activation at all.
Regarding other relevant areas such as preMA and pre-SMA, we find that the connectivity frequency of these areas with op-BA and tri-BA is about the same. Indeed the preMA connects to op-BA in 18 subjects and to tri-BA in 17 out of 20. The pre-SMA connects to op-BA in 19 subjects and to tri-BA in 18 individuals. So, overall, the connectivity frequency of the BA subdivisions with preMA and pre-SMA is similar. In terms of connectivity weight, op-BA connects more strongly to both preMA and pre-SMA compared with tri-BA. Thus, although the BA subdivisions connect to preMA and pre-SMA with about the same frequency across subjects, op-BA has, overall, a larger connectivity weight.
To investigate the overall functional connectivity that each fROI has with the rest of the common network shown in Figure 3 we computed, for each individual, the sum of the functional links of each fROI with the rest of the network. In each subject, each fROI that appears in the common network can then be ranked according to this connectivity strength (most connected at the top and least connected at the bottom). We present these results in Supporting Information Figure S2. The results show that preMA is either the most connected or the second most connected area in 95% of the cases. On the contrary, WA is either the least connected or the second least connected area in 95% of the cases.
Figure 3.
Common network across subjects for the language task under study. The figure illustrates the functional network, beyond intersubject variability, shared across individuals (17 out of 20). The weight of a link connecting two fROIs is proportional to the average of the functional links connecting those fROIs across subjects. Upper panel: fROIs are located on their anatomical location on the brain. Lower panel: pictorial illustration of the network in the upper panel, with the fROIs equally spaced on a plane.
Figure 3.
Common network across subjects for the language task under study. The figure illustrates the functional network, beyond intersubject variability, shared across individuals (17 out of 20). The weight of a link connecting two fROIs is proportional to the average of the functional links connecting those fROIs across subjects. Upper panel: fROIs are located on their anatomical location on the brain. Lower panel: pictorial illustration of the network in the upper panel, with the fROIs equally spaced on a plane.
Close modal
### Common Network Across Subjects and Functional Subdivisions of Broca’s Area
The common network at the fROI level, as described in the Common Network Construction section, is made by those fROIs and links present (persistent) across the majority of subjects. As a result of the left dominance at the individual level, no consistent overlap of right-hemisphere activation has been found across subjects.
We find that the persistent structure across individuals (17 over 20), beyond intersubject variability, is made by op-BA, tri-BA, WA, preMA, and pre-SMA connected together in a functional architecture (see Figure 3). This circuitry represents the core structure for the specific clinical language task under investigation since it is the functional architecture that prevails in nearly all subjects. We find this network in 17 over 20 subjects and not in all of them because three subjects show lack of activation for either the op-BA (1 case), the tri-BA (1 case), or neither WA nor tri-BA (1 case). The common network shown in Figure 3 is therefore the one prevailing in closely all the subjects and, thus, the functional structure that is persistent beyond intersubject variability. We tested the robustness of these results by varying the threshold in each individual network (discussed in the Individual Brain Network Construction section) by 5−10% and by recomputing the common network analysis. We find that within this range of variation the common network is made by the same brain modules (fROIs) and links shown in Figure 3.
Furthermore, this conclusion is additionally supported by additional findings we obtained on a study conducted on bilingual healthy subjects when they speak their native language (Li, Pasquini, et al., 2019). In Li, Pasquini, et al. (2019) we study the functional network of English monolingual subjects and of bilinguals (native in Spanish, bilingual in English). We find that for both groups, when subjects speak their native language the persistent structure is made by BA, WA, preMA, and pre-SMA. Additionally, data in Li, Pasquini, et al. (2019) were acquired for a letter-generation language task, that is, a different task that we use in the present paper (verb generation). Thus, results of Li, Pasquini, et al. (2019) together with the present findings support evidence that the core language network is the most consistent functional architecture beyond intersubject variability and it is not limited to one specific language task.
In terms of functional connectivity, the strongest connectivity weight in the common network ($WmaxC$) is between op-BA and preMA ($WmaxC=0.74±0.31$, where the average is made across all the subjects that have such link). The triangular BA also connects with the preMA but with about half of the magnitude (WC = 0.37 ± 0.29). Detailed information on the functional connection of the other areas in the common network is reported in Supporting Information Table S3. Broca’s area has been long recognized as a central language area; its strong connectivity with the preMA(L) is of particular interest since the preMA(L) has been more recently identified as an area with dominant role for language (Duffau et al., 2003). We discuss this result further in the Discussion subsection: Functional and Structural Connectivity of the Common Network.
When we look at the connectivity of the BA subdivisions with Wernicke’s areas, a primary area for language comprehension, we observe that, in the common network, WA only connects to op-BA. This reveals the existence of larger coactivation of the BOLD signal between these two areas that might also be driven by their spatial vicinity (WA is anatomically closer to op-BA than to tri-BA). In more detail, as discussed in Results section, at the individual level we find that WA connects to tri-BA in 15 out of 20 cases whereas it connects to op-BA in 18 out of 20 cases. Therefore, overall, BA subdivisions both connect to WA in several different individuals with a slightly larger presence of WA–op-BA connectivity across subjects. In terms of connectivity weight, when we count only subjects where both op-BA and tri-BA connect to WA, we find that op-BA connects slightly more strongly to WA compared with tri-BA (WC = 0.17 ± 0.23 versus WC = 0.15 ± 0.20, respectively).
Furthermore, we observe that op-BA has a larger connectivity than tri-BA both on the number of connections with the rest of the areas in this network (4 versus 3 respectively, the extra one being WA–op-BA) and in terms of functional connectivity weight. Indeed, the average connectivity of the op-BA, across subjects and across areas, in the common network is WC = 0.45 ± 0.25 (WC = 0.55 ± 0.20 without the link WA–op-BA), whereas the comprehensive connectivity weight of the tri-BA is WC = 0.32 ± 0.18.
Finally, we observe that the average values for the common-network functional weights reported in Supporting Information Table S3 have large standard deviations (magnitude comparable with the mean). This result signals a large intersubject variability for the weight of the single functional link across subjects. To investigate this further, we plot the empirical distribution of all the functional links’ weights across subjects and observe that it displays a long tail shape (see Supporting Information Figure S3), which is explicative of the large standard deviation values.
### The Common Network is Part of the Maximum k-core: The Most Resilient Architecture
The notion of k-core in theoretical physics has been used as a fundamental measure of centrality and robustness within a network (Morone et al., 2019). Since it was first introduced in social sciences (Seidman, 1983) it has been used in several contexts (Kitsak et al., 2010), as in random network theory (Pittel et al., 1996) or to describe large-scale structure in the brain (Hagmann et al., 2008).
The k-core of a given architecture is defined as the maximal subgraph, not necessarily globally connected, made of all nodes having degree (number of connections) at least k. In practice, the k-core subgraph can be derived by removing from the network all nodes with degree less than k. The removal of these nodes reduces the degree of their neighbors, and if the degree of the latter drops below k then also these nodes should in turn be removed. The procedure iterates until there are no further nodes that can be pulled out from the network. The remaining graph is the k-core of the network. A k-core structure includes subnetworks with higher ks: k + 1, k + 2, and so forth. For instance, the 1-core includes the 2-core which, in turn, includes the 3-core and so forth (see Figure 4). In each k-core, nodes in the periphery (not included in the k + 1-core) are called k-shell (ks). Thus, in each network, k-core (and k-shell) structures are nested within each other with increasing k. The innermost structure of the network corresponds to the graph with the maximum k-core (Dorogovtsev et al., 2006). Thus, by definition, the max k-core is not nested into any other structure with higher k. As a consequence, by definition, the max k-shell always coincides with the max k-core. Figure 4A illustrates k-core and k-shell structures in a simple explanatory network.
Figure 4.
k-core and k-shell of a network. Panel A illustrates pictorially a network. Nodes in the same disk have the same k-core. A k-core structure includes subnetworks with higher ks, so the 1-core includes the 2-core which, in turn, includes the 3-core, and so forth. Nodes that are in the k-core but not in the k + 1-core are called k-shell and are colored differently. The maximum k-core coincides with the maximum k-shell, in this network is kcoremax = 4 and depicted with brown nodes. Panel B illustrates pictorially the construction of the k-core histogram shown in Figure 5. Note that here nodes in each k-shell are colored differently, whereas in Figure 5 different colors indicate nodes in different fROIs, piled up according to their k-shell as in this panel.
Figure 4.
k-core and k-shell of a network. Panel A illustrates pictorially a network. Nodes in the same disk have the same k-core. A k-core structure includes subnetworks with higher ks, so the 1-core includes the 2-core which, in turn, includes the 3-core, and so forth. Nodes that are in the k-core but not in the k + 1-core are called k-shell and are colored differently. The maximum k-core coincides with the maximum k-shell, in this network is kcoremax = 4 and depicted with brown nodes. Panel B illustrates pictorially the construction of the k-core histogram shown in Figure 5. Note that here nodes in each k-shell are colored differently, whereas in Figure 5 different colors indicate nodes in different fROIs, piled up according to their k-shell as in this panel.
Close modal
Recently, the maximum k-core ($kcoremax$) has been linked to the most resilient structure of biological systems with positive interactions (Morone et al., 2019) and, in an fMRI study of human brains, the$kcoremax$ of the functional connectivity for a visual-task-based experiment has been found to be the most robust structure, which remains active even during subliminal conscious states (subject not aware of seeing images; Lucini et al., 2019).
Motivated by these recent findings, we pruned each voxel-level individual functional network until the maximum k-core structure, and we investigated to which k-core each node (voxel) belongs. We focused on the areas part of the common network (BA, WA, pre-SMA, and preMA) because these are the fROIs that form a persistent language structure across individuals, as shown in Figure 3. We aim to explore whether these regions are part of some significative k-core structure that might shed light on the architecture of the network. Our goal is to investigate, across subjects, which fROIs characterize the occupancy of each k-shell and, thus, we proceed as follows. For each individual network we compute the k-core and k-shell of all the nodes (voxels) as described above. Each subject has, in general, a different $kcoremax$(k-shell), which is a integer number that can go up to the maximum degree of the subject’s network. Thus, in order to compare results across subjects, we normalize the k-core range-of-values to 1, in each subject. We do this by dividing all the k-core (k-shell) values in each individual network by the individual $kcoremax$ for that network. In Figure 5 we then plot the total k-shell occupancy for all the individuals, and we color differently the contribution of each fROI in order to visualize to which k-shell they belong.
Figure 5.
k-shell occupancy. The histogram shows the k-shell occupancy for nodes in the four fROIs of the common network of Figure 3. Overall, the majority of the nodes of this structure are located in the maximum k-shell, which coincides with $kcoremax$, a quantity linked to the robustness of a complex network (Morone et al., 2019). Of the four fROIs of the common network, the pre-SMA, op-BA, tri-BA, and ventral preMA are mostly part of the $kcoremax$. Wernicke’s area (WA) is more an outlier; it is mostly located in lower k-shells and minimally located in the $kcoremax$.
Figure 5.
k-shell occupancy. The histogram shows the k-shell occupancy for nodes in the four fROIs of the common network of Figure 3. Overall, the majority of the nodes of this structure are located in the maximum k-shell, which coincides with $kcoremax$, a quantity linked to the robustness of a complex network (Morone et al., 2019). Of the four fROIs of the common network, the pre-SMA, op-BA, tri-BA, and ventral preMA are mostly part of the $kcoremax$. Wernicke’s area (WA) is more an outlier; it is mostly located in lower k-shells and minimally located in the $kcoremax$.
Close modal
Results in Figure 5 show that the maximum k-shell (which is in turn the maximum k-core) is the most populated of all the k-shells of the common network. More importantly, if we look at each area individually, we observe that the largest concentration of pre-SMA, op-BA, tri-BA, and v-preMA is in the maximum k-shell. Among the areas of the common network, WA is the only area that does not have the largest portion in the $kcoremax$but, rather, populates smaller k-shell values. These results are robust to 5−10% variation of the threshold used to build individual functional networks (discussed in the Individual Brain Network Construction section). In order to asses how much of our results are given by construction we compared them with randomized voxel assignation to fROI and recomputed the k-shell occupancy as a null model. Results are shown in Supporting Information Figure S4 and show that the relative occupancy of each fROI in the maximum k-shell is different from the random case. Furthermore, WA in the real model populates more the smaller shells compared with the null model case. We conclude that our results are not due to a random effect.
In Morone et al. (2019) the authors have shown that, for complex networks with positive couplings, the $kcoremax$of the network is the most resilient structure under decreasing of the coupling weight. In our functional networks, all the links are obtained through thresholding of pairwise correlations that, from our findings, turn out to be all positive. This is because the BOLD signal is extracted from a task-based fMRI experiment, stimulated by an external input. In this way, active voxels are those mostly correlated with the task model and, when computing pairwise correlations among voxels correlated with the same external stimulus, most likely one finds positive correlations, as we observe from our data analysis. This result allows us to interpret the functional networks wired by positive interactions and, therefore, the theory of Morone et al. (2019) applies. Accordingly, we can interpret the maximum k-core structure of our network as the most resilient one under decreasing of the correlation weight.
In other words, the circuitry made by the pre-SMA, BA, and preMA represents the most robust structure of the functional network. Wernicke’s area, although it is part of the common network, for the most part does not lay in the $kcoremax$of the network, probably because of its more peripheral anatomical location compared with the other fROIs of this common architecture. Therefore, although it is one of the most important areas for language, it is not part of the most resilient core.
### Rich-Club Response to Changes in Threshold
The rich-club coefficient is a measure of the connectivity as a function of degree (Zhou & Mondragón, 2004). If subnetworks of nodes have a tendency towards a higher percentage of connectivity for larger degrees, then the network is said to be a “rich club.” It has been shown that the brain structural network typically conforms to this paradigm (van den Heuvel & Sporns, 2011). Since the number of connections will naturally increase with degree, it is standard to normalize the rich-club coefficient by the coefficient obtained for a random network with the same number of nodes and degree distribution. The rich-club coefficient is defined as
$φ(k)=2E>kN>k(N>k−1),$
(5)
where E>kand N>kis the number of edges and nodes in the network where all nodes with degree less than k have been removed. The normalized coefficient is then defined by $φ(k)φrand(k)$ where φrand(k) refers to the rich-club coefficient calculated in the same network that has been randomly rewired while preserving the degree distribution.
In the Individual Brain Network Construction section we defined an absolute threshold for the voxel-voxel correlation in order to define what represents a minimum edge weight that defines links in the functional language networks. However, the choice of threshold is somewhat arbitrary and may have an effect on the results. In order to probe how the connectivity responds to changes in the choice of threshold, we show normalized rich-club coefficients for a representative subject in Figure 6.
Figure 6.
Rich clubs. Normalized rich-club coefficients for a representative subject with 5% (lowered/raised) variation of the current threshold. All other subjects have qualitatively the same normalized rich-club coefficient. The red curve corresponds to the original threshold (network). The blue curve corresponds to a 5% lower variation of the threshold, which leads to a denser network. The green curve corresponds to a 5% higher variation of the threshold, which leads to a sparser network. The figure shows rich-club behavior of the functional brain network, showing an increasing normalized rich-club coefficient.
Figure 6.
Rich clubs. Normalized rich-club coefficients for a representative subject with 5% (lowered/raised) variation of the current threshold. All other subjects have qualitatively the same normalized rich-club coefficient. The red curve corresponds to the original threshold (network). The blue curve corresponds to a 5% lower variation of the threshold, which leads to a denser network. The green curve corresponds to a 5% higher variation of the threshold, which leads to a sparser network. The figure shows rich-club behavior of the functional brain network, showing an increasing normalized rich-club coefficient.
Close modal
The threshold was lowered by 5% and raised by 5% in each case, and we can see that the rich-club structure is evident as shown by the rising curve above 1 as a function of k up to the high-degree sector of the network where there is high volatility. We also observe that a 5% change in threshold value does not change qualitatively the behavior of the rich-club coefficient. These results show that, in addition to the rich-club behavior found in (van den Heuvel & Sporns, 2011), functional brain networks, such as those we studied in this work, also show the same feature.
In this study, we reconstructed the functional language network of 20 healthy subjects from tb-fMRI data providing information about the functional connectivity between active areas on fMRI maps, with both a voxel- and an fROI-level resolution. The language task designed for the experiment is customarily used in clinical cases and has shown to produce robust activation in previous studies (Brennan et al., 2007; Li, Dong, et al., 2019; Ramsey et al., 2001; Xiong et al., 2000). Functional activation is generally sensitive to the fMRI task employed; our interest in reconstructing functional networks for this specific task aimed to create benchmark results for healthy individuals that can be used as reference for functional networks affected by brain pathologies. Indeed, brain impairments are known to create damage to the functional connectivity. It is therefore paramount to have healthy functional architectures relative to clinical language tasks in order to make the comparison between healthy individuals, and patients’ functional networks possible.
Our main finding is the existence of a common persistent functional network across subjects that wires together BA, WA, ventral preMA, and pre-SMA in the left dominant hemisphere for 17 out of 20 right-handed healthy subjects (see Figure 3). We interpret this circuitry as a core structure for the language task under study since this network persists across nearly all individuals.
Furthermore, we compute the k-core of each node (voxel) in the common network—the maximum value of which has been recently linked to network resilience in ecosystems and fMRI studies (Lucini et al., 2019; Morone et al., 2019). We find that three out of four areas of the common architecture (specifically pre-SMA, BA, and preMA) are mostly concentrated in the maximum k-core of the network (see Figure 5). This led us to conclude, following the findings of Lucini et al. (2019) and Morone et al. (2019), that these areas are the most robust of the language network in terms of fMRI-correlated signal.
Wernicke’s area is a crucial language area and indeed appears as part of the common network across individuals, yet its type of connectivity with the rest of the fROIs in this architecture is slightly different from the connectivity of the other areas. Indeed, overall, WA shares two connections with other areas in this network, one with BA and one with the preMA, whereas each of the other fROIs has at least three total functional connections. This might be a by-product of the more peripheral location of the WA compared with the other fROIs, which being spatially closer to each other are facilitated to coactivate because of white fibers wiring them together. Wernicke’s area is also the only area among the four in the common network that is not largely part of the maximum k-core (see Figure 5). This result is in agreement with the discussion above about this area and, again, might be due to the more perimetric location of the WA in the common network.
Finally, we investigated the functional architecture of the BA anatomical subareas, revealing a different connectivity between tri-BA, op-BA, and the other areas of the common network for this specific language task. In the first subsection, we discuss our findings regarding the functional connectivity of the BA subdivisions, contextualizing them with known white matter connections that these areas share with the rest of the brain, found in other studies.
### Functional and Structural Connectivity of the Common Network
We observe that the left ventral preMA is the most connected area of the common network, with four total connections and the strongest connectivity with op-BA (WC = 0.74 ± 31) and with pre-SMA (WC = 0.64 ± 0.31). As shown in Figure 2 for a representative subject, the ventral preMA is functionally connected to all the main cortical language areas of the dominant hemisphere, suggesting that this area may play an important role in speech production (other subjects show qualitatively the same feature; see Supporting Information Figure S1).
Tate, Herbet, Moritz-Gasser, Tate, and Duffau (2014) investigated the crucial cortical epicenters of human language function by means of intraoperative direct cortical stimulation in 165 consecutive patients affected by low-grade glioma. The study shows that speech arrest is localized to the ventral preMA instead of the classical BA. Furthermore, the presence of gliomas growing in the left ventral preMA has been related to a higher percentage of speech deficits than gliomas infiltrating the classical BA, providing a possible clinical correlate of the results of Tate et al. (Bizzi et al., 2012; Tate et al., 2014). Furthermore, Duffau et al. (2003), through intraoperative functional mapping in awake patients, have concluded that the left dominant preMA seems to play a major role in language since its electrical stimulation causes speech disturbances. All these results, together with the findings of our study, pinpoint a central role of the preMA in language production.
However, one must be careful not to overinterpret these results, as the highest connectivity does not necessarily imply a central or essential role of that particular fROI in the network. Using advanced graph theoretical analysis, Morone and Makse (2015) demonstrated that the most connected nodes in a network often do not correspond to the most essential nodes, the elimination of which would lead to collapse of that particular network. This idea has been recently tested on functional networks obtained from fMRI of rodent brains and verified through in vivo pharmaco-genetic intervention (Del Ferraro et al., 2018).
Although the correspondence between structural and functional connectivity is not fully understood yet (Honey et al., 2009), the arrangement displayed by our study is supported by structural evidence. The existence of a physical connection between ventral preMA and BA seems realistic, given their spatial contiguity. Besides representing a shared origin for the main bundles of the dorsal pathway (Chang, Raygor, & Berger, 2015; Dick, Bernal, & Tremblay, 2014), the two areas may be directly connected by a specific opercular-premotor fascicle (described in the next section; Lemaire et al., 2013).
The pre-SMA shows connectivity with both ventral preMA and BA (see Figure 3, Supporting Information Figure S1, and Supporting Information Table S3). These functional connections are consistent with the organization of the structural language connectome to some extent: the frontal aslant tract (FAT), an association motor pathway that underlies verbal fluency and connects pre-SMA and BA (Catani et al., 2013; Ford, McGregor, Case, Crosson, & White, 2010; Jenabi, Peck, Young, Brennan, & Holodny, 2014), likely includes projection to posterior regions of the MFG, corresponding to the ventral preMA (Chang et al., 2015).
The low connectivity weight between ventral preMA and WA (see Supporting Information Table S3) may be explained by the increased distance between the two structures. Of note, we find that the functional connectivity weight between op-BA and WA is similar to that of the ventral preMA and WA (Supporting Information Table S3), which is consistent with their structural connection through the same white matter tract, corresponding to the arcuate component of the AF/SLF system (Chang et al., 2015; Dick et al., 2014).
### Broca’s Area Subdivisions
Our findings show that the subdivisions of Broca’s area present different patterns of connectivity within the language network, with the opercular portion appearing more connected to all the significant nodes of the common network compared with the triangular part. This evidence appears in line with the structural architecture of the network.
The prominent interaction between ventral preMA and op-BA found in this study (WC = 0.74 ± 0.31, see Supporting Information Table S3) supports the evidence of a structural link between op-BA and preMA, as suggested by Lemaire et al. (2013) using DTI analysis. The authors investigated the structural connectome of the extended BA, identifying the U-shaped opercular-premotor fasciculus that connects the op-BA to the ipsilateral preMA (Lemaire et al., 2013). On the contrary, tri-BA and ventral preMA showed lower functional connectivity (WC = 0.37 ± 0.29), possibly suggesting indirect communication through the op-BA.
The second strongest functional connection between BA’s subareas and other fROIs of the common network that we find is the link between op-BA–pre-SMA (WC = 0.35 ± 0.23). These two areas are connected by the FAT (Ford et al., 2010), which originates in the SMA/pre-SMA and terminates into the posteriormost aspect of the inferior frontal gyrus (IFG) (Catani et al., 2013). Triangular BA and pre-SMA share a lower functional connectivity weight (WC = 0.20 ± 0.21) compared with op-BA and pre-SMA, reflecting the anatomic boundaries of the FAT.
Finally, the functional link between op-BA and WA is in line with the evidence of a dorsal pathway of language between op-BA and STG through the AF/SLF system (dorsal pathway II; Friederici, 2011).
### Limitations of the Study
Functional networks are task dependent and brain impairments, such as brain tumors, affect their functional connectivity (e.g., by destroying functional links or preventing the fMRI activation of entire brain regions). In our study, we focused on a clinical language task since we wanted to enhance a functional network that emerges in clinical studies. We employed healthy subjects because we wanted to find out the network associated with the clinical language task without possible functional damage induced by brain impairments. A comparison with functional networks of patients with brain impairments is of great interest but goes beyond the purpose of the present paper and it is, instead, presented in a follow-up (Del Ferraro, Pasquini, Peck, Holodny, & Makse, 2019). In Del Ferraro et al. (2019) we compare the functional network as well as the structural network (obtained through diffusion tractography imaging) of patients who present speech impairment under awake cortical electrostimulation and patients who do not. As a control, we also compare these networks with the same architecture for healthy subjects, and the results of the present paper are further used as a benchmark.
Our study is limited to a specific clinical language task (i.e., verb generation), and since functional networks are task-dependent, in principle, our results could be limited to this particular language task. In a follow-up work (Li, Pasquini, et al., 2019) we investigate the functional connectivity differences between healthy subjects who are monolinguals, native English speakers, and subjects who are bilingual, native Spanish speakers who speak English as a second language. We employ a different type of language task (i.e., letter generation), and in both datasets we find that the core language network is the same as the one we find in the present paper by using a verb-generation task and discussed in the Results section. This strengthens our findings showing that the core common network is not limited to a specific language task but that it might be a robust structure shared across several tasks. Further studies that make use of other language tasks are needed to elucidate this point and the generality of the core common network.
As a final limitation of our study we remark that the sample size of our data is limited to 20 individuals, and it would be important, in following works, to test the core network analysis on larger datasets.
Supporting information for this article is available at https://doi.org/10.1162/netn_a_00112. Data that support the findings of this study are publicly available and have been deposited in http://www-levich.engr.ccny.cuny.edu/webpage/hmakse/brain/ (Li, Del Ferraro, et al., 2019).
Qiongge Li: Formal analysis; Methodology; Software; Visualization; Writing - Original Draft; Writing - Review & Editing. Gino Del Ferraro: Methodology; Software; Supervision; Writing - Original Draft; Writing - Review & Editing. Luca Pasquini: Methodology; Validation; Writing - Review & Editing. Kyung K. Peck: Data curation; Writing - Review & Editing. Hernn A. Makse: Funding acquisition; Investigation; Supervision; Writing - Review & Editing. Andrei I. Holodny: Funding acquisition; Investigation; Supervision; Writing - Review & Editing.
Hernán A. Makse, National Institutes of Health (http://dx.doi.org/10.13039/100000002), Award ID: 1R01EB022720. Craig Thompson, National Institutes of Health (http://dx.doi.org/10.13039/100000002), Award ID: P30 CA008748. Hernn A. Makse, National Science Foundation (http://dx.doi.org/10.13039/100000001), Award ID: 1515022. Tim Ahles, National Institutes of Health (http://dx.doi.org/10.13039/100000002), Award ID: U54CA137788. Tim Ahles, National Institutes of Health (http://dx.doi.org/10.13039/100000002), Award ID: U54CA132378. Luca Pasquini, Italian Scientists and Scholars in North America Foundation (http://dx.doi.org/10.13039/100009799), Award ID: Imaging chapter award 2018. Luca Pasquini, European Society of Radiology, Award ID: Bracco clinical fellowship 2018. Qiongge Li, City University of New York (http://dx.doi.org/10.13039/100006462), Award ID: Doctoral student research grant.
We thank Mehrnaz Jenabi for help with AFNI and FSL software and Medeleine Gene for help with mining the data.
• Aphasia:
Language impairment affecting the production or comprehension of speech and the ability to read or write. Aphasia is caused by brain injuries such as strokes, head traumas, brain tumors, or brain infections.
•
• Language task fMRI:
Functional MRI scan where participants perform specific language tasks while the fMRI brain activity is measured. It is used as a radiology clinical routine to determine the localization of language areas in patients with brain impairments.
•
• fROI:
Functional region of interest. Group of fMRI active voxels located in the same anatomical brain region that are identified as part of the same active brain area.
•
• Voxel-level functional network:
Functional connectivity network where each node represents a voxel and a link depicts the functional interdependency between a pair of voxels.
•
• fROI-level functional network:
Course-grained representation of a voxel-level functional network. Each node represents an fROI and a link depicts the functional interdependency between a pair of fROIs.
•
• Direct cortical stimulation:
Electrical stimulation of the cerebral cortex with the aim of identifying brain areas that take part in a specific system function (e.g., language, motor). It remains the gold standard for presurgical mapping of the motor cortex and language areas to prevent unnecessary functional damage.
Aubert
,
A.
,
Costalat
,
R.
,
Duffau
,
H.
, &
Benali
,
H.
(
2002
).
Modeling of pathophysiological coupling between brain electrical activation, energy metabolism and hemodynamics: Insights for the interpretation of intracerebral tumor imaging
.
Acta Biotheoretica
,
50
,
281
295
.
Bassett
,
D. S.
,
Wymbs
,
N. F.
,
Rombach
,
M. P.
,
Porter
,
M. A.
,
Mucha
,
P. J.
, &
Grafton
,
S. T.
(
2013
).
Task-based core-periphery organization of human brain dynamics
.
PLoS Computational Biology
,
9
,
e1003171
.
Bizzi
,
A.
,
Nava
,
S.
,
Ferrè
,
F.
,
Castelli
,
G.
,
Aquino
,
D.
,
Ciaraffa
,
F.
, …
Piacentini
,
S.
(
2012
).
Aphasia induced by gliomas growing in the ventrolateral frontal region: Assessment with diffusion MR tractography, functional MR imaging and neuropsychology
.
Cortex
,
48
,
255
272
.
Bookheimer
,
S.
(
2002
).
Functional MRI of language: New approaches to understanding the cortical organization of semantic processing
.
Annual Review of Neuroscience
,
25
,
151
188
.
Booth
,
J. R.
,
Wood
,
L.
,
Lu
,
D.
,
Houk
,
J. C.
, &
Bitan
,
T.
(
2007
).
The role of the basal ganglia and cerebellum in language processing
.
Brain Research
,
1133
,
136
144
.
Brennan
,
N. M. P.
,
Whalen
,
S.
,
de Morales Branco
,
D.
,
O’Shea
,
J. P.
,
Norton
,
I. H.
, &
Golby
,
A. J.
(
2007
).
Object naming is a more sensitive measure of speech localization than number counting: Converging evidence from direct cortical stimulation and fMRI
.
NeuroImage
,
37
,
S100
S108
.
Bullmore
,
E.
, &
Sporns
,
O.
(
2009
).
Complex brain networks: Graph theoretical analysis of structural and functional systems
.
Nature Reviews Neuroscience
,
10
,
312
.
Catani
,
M.
,
Mesulam
,
M. M.
,
Jakobsen
,
E.
,
Malik
,
F.
,
Martersteck
,
A.
,
Wieneke
,
C.
, …
Rogalski
,
E.
(
2013
).
A novel frontal pathway underlies verbal fluency in primary progressive aphasia
.
Brain
,
136
,
2619
2628
.
Chai
,
L. R.
,
Mattar
,
M. G.
,
Blank
,
I. A.
,
Fedorenko
,
E.
, &
Bassett
,
D. S.
(
2016
).
Functional network dynamics of the language system
.
Cerebral Cortex
,
26
,
4148
4159
.
Chang
,
E. F.
,
Raygor
,
K. P.
, &
Berger
,
M. S.
(
2015
).
Contemporary model of language organization: An overview for neurosurgeons
.
Journal of Neurosurgery
,
122
,
250
261
.
Cox
,
R. W.
(
1996
).
AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages
.
Computers and Biomedical Research
,
29
,
162
173
.
Del Ferraro
,
G.
,
Moreno
,
A.
,
Min
,
B.
,
Morone
,
F.
,
Pérez-Ramírez
,
Ú.
,
Pérez-Cervera
,
L.
, …
Makse
,
H. A.
(
2018
).
Finding influential nodes for integration in brain networks using optimal percolation theory
.
Nature Communications
,
9
,
2274
.
Del Ferraro
,
G.
,
Pasquini
,
L.
,
Peck
,
K. K.
,
Holodny
,
A.
, &
Makse
,
H. A.
(
2019
).
Structural and functional connectivity differences between brain tumor patients who exhibit intra-operative speech impairments vs patients who show no speech deficit.
(
In preparation
)
Devlin
,
J. T.
,
Matthews
,
P. M.
, &
Rushworth
,
M. F.
(
2003
).
Semantic processing in the left inferior prefrontal cortex: A combined functional magnetic resonance imaging and transcranial magnetic stimulation study
.
Journal of Cognitive Neuroscience
,
15
,
71
84
.
Dick
,
A. S.
,
Bernal
,
B.
, &
Tremblay
,
P.
(
2014
).
The language connectome: New pathways, new concepts
.
Neuroscientist
,
20
,
453
467
.
Dorogovtsev
,
S. N.
,
Goltsev
,
A. V.
, &
Mendes
,
J. F. F.
(
2006
).
K-core organization of complex networks
.
Physical Review Letters
,
96
,
040601
.
Dronkers
,
N. F.
,
Plaisant
,
O.
,
Iba-Zizen
,
M. T.
, &
Cabanis
,
E. A.
(
2007
).
Paul Broca’s historic cases: High resolution MR imaging of the brains of Leborgne and Lelong
.
Brain
,
130
,
1432
1441
.
Duffau
,
H.
,
Capelle
,
L.
,
Denvil
,
D.
,
Gatignol
,
P.
,
Sichez
,
N.
,
Lopes
,
M.
, …
Van Effenterre
,
R.
(
2003
).
The role of dominant premotor cortex in language: A study using intraoperative functional mapping in awake patients
.
NeuroImage
,
20
,
1903
1914
.
Fedorenko
,
E.
,
Hsieh
,
P.-J.
,
Nieto-Castañón
,
A.
,
Whitfield-Gabrieli
,
S.
, &
Kanwisher
,
N.
(
2010
).
New method for fMRI investigations of language: Defining ROIs functionally in individual subjects
.
Journal of Neurophysiology
,
104
,
1177
1194
.
Fedorenko
,
E.
, &
Kanwisher
,
N.
(
2009
).
Neuroimaging of language: Why hasn’t a clearer picture emerged?
Language and Linguistics Compass
,
3
,
839
865
.
Ford
,
A.
,
McGregor
,
K. M.
,
Case
,
K.
,
Crosson
,
B.
, &
White
,
K. D.
(
2010
).
Structural connectivity of Broca’s area and medial frontal cortex
.
NeuroImage
,
52
,
1230
1237
.
Friederici
,
A. D.
(
2011
).
The brain basis of language processing: From structure to function
.
Physiological Reviews
,
91
,
1357
1392
.
Friederici
,
A. D.
,
Chomsky
,
N.
,
Berwick
,
R. C.
,
Moro
,
A.
, &
Bolhuis
,
J. J.
(
2017
).
Language, mind and brain
.
Nature Human Behavior
,
1
,
713
722
.
Gallos
,
L. K.
,
Makse
,
H. A.
, &
Sigman
,
M.
(
2012
).
A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks
.
Proceedings of the National Academy of Sciences
,
109
,
2825
2830
.
Gough
,
P. M.
,
Nobre
,
A. C.
, &
Devlin
,
J. T.
(
2005
).
Dissociating linguistic processes in the left inferior frontal cortex with transcranial magnetic stimulation
.
Journal of Neuroscience
,
25
,
8010
8016
.
Hagmann
,
P.
,
Cammoun
,
L.
,
Gigandet
,
X.
,
Meuli
,
R.
,
Honey
,
C. J.
,
Wedeen
,
V. J.
, &
Sporns
,
O.
(
2008
).
Mapping the structural core of human cerebral cortex
.
PLoS Biology
,
6
,
e159
.
Hermundstad
,
A. M.
,
Bassett
,
D. S.
,
Brown
,
K. S.
,
Aminoff
,
E. M.
,
Clewett
,
D.
,
Freeman
,
S.
, …
Carlson
,
J. M.
(
2013
).
Structural foundations of resting-state and task-based functional connectivity in the human brain
.
Proceedings of the National Academy of Sciences
,
110
,
6169
6174
.
Hertrich
,
I.
,
Dietrich
,
S.
, &
Ackermann
,
H.
(
2016
).
The role of the supplementary motor area for speech and language processing
.
Neuroscience and Biobehavioral Reviews
,
68
,
602
610
.
Honey
,
C.
,
Sporns
,
O.
,
Cammoun
,
L.
,
Gigandet
,
X.
,
Thiran
,
J.-P.
,
Meuli
,
R.
, &
Hagmann
,
P.
(
2009
).
Predicting human resting-state functional connectivity from structural connectivity
.
Proceedings of the Natinal Academy of Sciences
,
106
,
2035
2040
.
Isaacs
,
K. L.
,
Barr
,
W. B.
,
Nelson
,
P. K.
, &
Devinsky
,
O.
(
2006
).
Degree of handedness and cerebral dominance
.
Neurology
,
66
,
1855
1858
.
Jenabi
,
M.
,
Peck
,
K. K.
,
Young
,
R. J.
,
Brennan
,
N.
, &
Holodny
,
A. I.
(
2014
).
Probabilistic fiber tracking of the language and motor white matter pathways of the supplementary motor area (SMA) in patients with brain tumors
.
Journal of Neuroradiology
,
41
,
342
349
.
Kitsak
,
M.
,
Gallos
,
L. K.
,
Havlin
,
S.
,
Liljeros
,
F.
,
Muchnik
,
L.
,
Stanley
,
H. E.
, &
Makse
,
H. A.
(
2010
).
Identification of influential spreaders in complex networks
.
Nature Physics
,
6
,
888
893
.
Knecht
,
S.
,
Dräger
,
B.
,
Deppe
,
M.
,
Bobe
,
L.
,
Lohmann
,
H.
,
Flöel
,
A.
, …
Henningsen
,
H.
(
2000
).
Handedness and hemispheric language dominance in healthy humans
.
Brain
,
123
,
2512
2518
.
Lee
,
M. H.
,
Smyser
,
C. D.
, &
Shimony
,
J. S.
(
2013
).
Resting-state fMRI: A review of methods and clinical applications
.
AJNR
,
34
,
1866
1872
.
Lemaire
,
J.-J.
,
Golby
,
A.
,
Wells
III
, W. M.,
Pujol
,
S.
,
Tie
,
Y.
,
Rigolo
,
L.
, …
Kikinis
,
R.
(
2013
).
Extended Broca’s area in the connectome of language in adults: Subcortical single-subject analysis using DTI tractography
.
Brain Topography
,
26
,
428
441
.
Li
,
Q.
,
Del Ferraro
,
G.
,
Pasquini
,
L.
,
Peck
,
K.
,
Makse
,
H.
, &
Holodny
,
A.
(
2019
).
Task-based fMRI of 20 healthy individual performing a language task used for clinical studies
. http://www-levich.engr.ccny.cuny.edu/webpage/hmakse/brain/
Li
,
Q.
,
Dong
,
J. W.
,
Del Ferraro
,
G.
,
Petrovich Brennan
,
N.
,
Peck
,
K. K.
,
Tabar
,
V.
, …
Holodny
,
A. I.
(
2019
).
Functional translocation of Broca’s area in a low-grade left frontal glioma: Graph theory reveals the novel, adaptive network connectivity
.
Frontiers in Neurology
,
10
,
702
.
Li
,
Q.
,
Pasquini
,
L.
,
Del Ferraro
,
G.
,
Gene
,
M.
,
Peck
,
K. K.
,
Makse
,
H. A.
, &
Holodny
,
A.
(
2019
).
Functional connectivity core differences between monolinguals and bilinguals healthy brains
.
arXiv:1909.03109
Lucini
,
F. A.
,
Del Ferraro
,
G.
,
Sigman
,
M.
, &
Makse
,
H. A.
(
2019
).
How the brain transitions from conscious to subliminal perception
.
Neuroscience
,
411
,
280
290
.
Morone
,
F.
,
Del Ferraro
,
G.
, &
Makse
,
H. A.
(
2019
).
The k-core as a predictor of structural collapse in mutualistic ecosystems
.
Nature Physics
,
15
,
95
102
.
Morone
,
F.
, &
Makse
,
H. A.
(
2015
).
Influence maximization in complex networks through optimal percolation
.
Nature
,
524
,
65
68
.
Nachev
,
P.
,
Kennard
,
C.
, &
Husain
,
M.
(
2008
).
Functional role of the supplementary and pre-supplementary motor areas
.
Nature Reviews Neuroscience
,
9
,
856
869
.
Newman
,
S. D.
,
Just
,
M. A.
,
Keller
,
T. A.
,
Roth
,
J.
, &
Carpenter
,
P. A.
(
2003
).
Differential effects of syntactic and semantic processing on the subregions of Broca’s area
.
Cognitive Brain Research
,
16
,
297
307
.
Nixon
,
P.
,
Lazarova
,
J.
,
Hodinott-Hill
,
I.
,
Gough
,
P.
, &
Passingham
,
R.
(
2004
).
The inferior frontal gyrus and phonological processing: An investigation using rTMS
.
Journal of Cognitive Neuroscience
,
16
,
289
300
.
Pittel
,
B.
,
Spencer
,
J. H.
, &
Wormald
,
N. C.
(
1996
).
Sudden emergence of a giant k-core in a random graph
.
Journal of Combinatorial Theory, Series B
,
67
,
111
151
.
Ramsey
,
N.
,
Sommer
,
I.
,
Rutten
,
G.
, &
Kahn
,
R.
(
2001
).
Combined analysis of language tasks in fMRI improves assessment of hemispheric dominance for language functions in individual subjects
.
NeuroImage
,
13
,
719
733
.
Rosenberger
,
L.
,
Zeck
,
J.
,
Berl
,
M.
,
Moore
,
E.
,
Ritzl
,
E.
,
Shamim
,
S.
, …
Gaillard
,
W. D.
(
2009
).
Interhemispheric and intrahemispheric language reorganization in complex partial epilepsy
.
Neurology
,
72
,
1830
1836
.
Rubinov
,
M.
, &
Sporns
,
O.
(
2010
).
Complex network measures of brain connectivity: Uses and interpretations
.
NeuroImage
,
52
,
1059
1069
.
Seidman
,
S. B.
(
1983
).
Network structure and minimum degree
.
Social Networks
,
5
,
269
287
.
Tate
,
M. C.
,
Herbet
,
G.
,
Moritz-Gasser
,
S.
,
Tate
,
J. E.
, &
Duffau
,
H.
(
2014
).
Probabilistic map of critical functional regions of the human cerebral cortex: Broca’s area revisited
.
Brain
,
137
,
2773
2782
.
Tombari
,
D.
,
Loubinoux
,
I.
,
Pariente
,
J.
,
Gerdelat
,
A.
,
Albucher
,
J.-F.
,
Tardy
,
J.
, …
Chollet
,
F.
(
2004
).
A longitudinal fMRI study: In recovering and then in clinically stable sub-cortical stroke patients
.
NeuroImage
,
23
,
827
839
.
van den Heuvel
,
M. P.
, &
Sporns
,
O.
(
2011
).
Rich-club organization of the human connectome
.
Journal of Neuroscience
,
31
,
15775
15786
.
Wang
,
L.
,
Chen
,
D.
,
Yang
,
X.
,
Olson
,
J. J.
,
Gopinath
,
K.
,
Fan
,
T.
, &
Mao
,
H.
(
2013
).
Group independent component analysis and functional MRI examination of changes in language areas associated with brain tumors at different locations
.
PLoS ONE
,
8
,
e59657
.
Wernicke
,
C.
(
1970
).
The aphasic symptom-complex: A psychological study on an anatomical basis
.
Archives of Neurology
,
22
,
280
282
.
Xiong
,
J.
,
Rao
,
S.
,
Jerabek
,
P.
,
Zamarripa
,
F.
,
Woldorff
,
M.
,
Lancaster
,
J.
, &
Fox
,
P. T.
(
2000
).
Intersubject variability in cortical activations dung a complex language task
.
NeuroImage
,
12
,
326
339
.
Zhou
,
S.
, &
Mondragón
,
R. J.
(
2004
).
The rich-club phenomenon in the internet topology
.
IEEE Communications Letters
,
8
(
3
),
180
182
.
## Author notes
Competing Interests: The authors have declared that no competing interests exist.
Equal contributions as first authors.
Handling Editor: Alex Arenas
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
|
2022-10-02 21:00:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5555388331413269, "perplexity": 3235.7178230298427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00787.warc.gz"}
|
http://math.stackexchange.com/questions/119878/how-to-find-the-parametric-equations-for-zx-zy-xy-0
|
# How to find the parametric equations for: $zx + zy - xy = 0$
I'm trying to find the parametric equation for $zx + zy - xy = 0$ or equivalent $z = \frac{xy}{x+y}$ but couldn't find any hint in the web neither in a couple of calculus books for this particular equation.
I have no clue about how to proceed :(
-
Big hint: When $xyz\neq 0$, $$\frac{1}{z} = \frac{1}{y}+\frac{1}{x}$$
|
2014-08-28 05:32:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.761497974395752, "perplexity": 127.48691398359082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830094.68/warc/CC-MAIN-20140820021350-00238-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://mathoverflow.net/revisions/56948/list
|
5 Usage of *natural* by Mochizuki
Edit As an example of the increasing prevalence of the notion of naturality in contemporary mathematics, it is notable that Shinichi Mochizuki’s four preprints asserting a proof of the ABC conjecture employ the word "natural" and its derivatives on more than six hundred occasions (for details and several related quotations, see this post on Gödel's Lost Letter and P=NP).
In accord with Daniel Miller's answer above, the study of "naturality" as a formal abstraction in mathematics can be traced back largely to two articles by Saunders Mac Lane and Samuel Eilenberg: Natural isomorphisms in group theory (1942) and General theory of natural equivalences (1945). Both articles are well worth reading.
How did these ideas arise? Roughly speaking, Eilenberg and MacLane began by recognizing that if an arbitrary choice of coordinates makes a difference to the quantities you are calculating and/or the theorems you are proving, then those quantities and theorems are not natural. Mac Lane has described this process in The development and prospects for category theory (1996) as follows:
I emphasize that the notions category and functor were not formulated or put in print until the idea of a natural transformation was also at hand.
Thus, one good start for students is to acquire a thorough practical grasp of naturality in the context of coordinate transformations in linear algebra and differential geometry.
A concrete example of a canonical usage of "naturality", which is accompanied by an extensive motivating discussion, is given in John Lee's Introduction to Smooth Manifolds as the following lemma:
Lemma 12.16: (Naturality of the Exterior Derivative)
If $G\colon M\to N$ is a smooth map, then the pullback map $G^\star\colon \mathcal{A}^k(N)\to \mathcal{A}^k(M)$ commutes with $d$. That is, for all $\omega \in \mathcal{A}^k(N)$, we have $G^\star(d\omega) = d(G^\star\omega)$.
As a commutative diagram the above lemma exhibits a canonical form:
$$\begin{array}{c@{}ccc@{}c} &&d&&\\ &\mathcal{A}^k(N)&\longrightarrow&\mathcal{A}^{k+1}(N)\\[2ex] G^\star\!\!\!\!\!\!&\big\downarrow&&\big\downarrow&\!\!\!\!\!\!\!G^\star\\[2ex] &\mathcal{A}^k(M)&\longrightarrow&\mathcal{A}^{k+1}(M)\\ &&d&& \end{array}$$
Being interested in practical applications of geometrically "natural" formalisms for quantum systems engineering, I've studied the usage in MacSciNet reviews of the words "natural*" (chiefly "natural" and "naturality") and "universal*" (chiefly "universal" and "universality").
Here are the numbers; their use is burgeoning!
• Year-Range (natural*, universal*)
• 2001-2005: (16788 uses, 05288 uses)
• 1996-2000: (14880, 04977)
• 1991-1995: (12550, 04432)
• 1986-1990: (10335, 03343)
• 1981-1985: (08775, 03013)
• 1976-1980: (07402, 02412)
• 1971-1975: (05668, 02040)
• 1966-1970: (03466, 01167)
• 1961-1965: (02211, 00610)
• 1956-1960: (01368, 00406)
• 1951-1955: (00880, 00253)
• 1946-1950: (00502, 00107)
• 1941-1945: (00251, 00060)
So to judge by the literature, it seems that we are entering into a Golden Era of mathematical "naturality" and "universality" ... we can hope so, anyway! :)
In accord with Daniel Miller's answer above, the study of "naturality" as a formal abstraction in mathematics can be traced back largely to two articles by Saunders Mac Lane and Samuel Eilenberg: Natural isomorphisms in group theory (1942) and General theory of natural equivalences (1945). Both articles are well worth reading.
How did these ideas arise? Roughly speaking, Eilenberg and MacLane began by recognizing that if an arbitrary choice of coordinates makes a difference to the quantities you are calculating and/or the theorems you are proving, then those quantities and theorems are not natural. Mac Lane has described this process in The development and prospects for category theory (1996) as follows:
I emphasize that the notions category and functor were not formulated or put in print until the idea of a natural transformation was also at hand.
Thus, one good start for students is to acquire a thorough practical grasp of naturality in the context of coordinate transformations in linear algebra and differential geometry.
A concrete example of a canonical usage of "naturality", which is accompanied by an extensive motivating discussion, is given in John Lee's Introduction to Smooth Manifolds as the following lemma:
Lemma 12.16: (Naturality of the Exterior Derivative)
If $G\colon M\to N$ is a smooth map, then the pullback map $G^\star\colon \mathcal{A}^k(N)\to \mathcal{A}^k(M)$ commutes with $d$. That is, for all $\omega \in \mathcal{A}^k(N)$, we have $G^\star(d\omega) = d(G^\star\omega)$.
As a commutative diagram the above lemma exhibits a canonical form:
$$\begin{array}{c@{}ccc@{}c} &&d&&\\ &\mathcal{A}^k(N)&\longrightarrow&\mathcal{A}^{k+1}(N)\\[2ex] G^\star\!\!\!\!\!\!&\big\downarrow&&\big\downarrow&\!\!\!\!\!\!\!G^\star\\[2ex] &\mathcal{A}^k(M)&\longrightarrow&\mathcal{A}^{k+1}(M)\\ &&d&& \end{array}$$
Being interested in practical applications of geometrically "natural" formalisms for quantum systems engineering, I've studied the usage in MacSciNet reviews of the words "natural*" (chiefly "natural" and "naturality") and "universal*" (chiefly "universal" and "universality").
Here are the numbers; their use is burgeoning!
• Year-Range (natural*, universal*)
• 2001-2005: (16788 uses, 05288 uses)
• 1996-2000: (14880, 04977)
• 1991-1995: (12550, 04432)
• 1986-1990: (10335, 03343)
• 1981-1985: (08775, 03013)
• 1976-1980: (07402, 02412)
• 1971-1975: (05668, 02040)
• 1966-1970: (03466, 01167)
• 1961-1965: (02211, 00610)
• 1956-1960: (01368, 00406)
• 1951-1955: (00880, 00253)
• 1946-1950: (00502, 00107)
• 1941-1945: (00251, 00060)
So to judge by the literature, it seems that we are entering into a Golden Era of mathematical "naturality" and "universality" ... we can hope so, anyway! :)
In accord with Daniel Miller's answer above, the study of "naturality" as a formal abstraction in mathematics can be traced back largely to two articles by Saunders Mac Lane and Samuel Eilenberg: Natural isomorphisms in group theory (1942) and General theory of natural equivalences (1945). Both articles are well worth reading.
How did these ideas arise? Roughly speaking, Eilenberg and MacLane began by recognizing that if an arbitrary choice of coordinates makes a difference to the quantities you are calculating and/or the theorems you are proving, then those quantities and theorems are not natural. Mac Lane has described this process in The development and prospects for category theory (1996) as follows:
I emphasize that the notions category and functor were not formulated or put in print until the idea of a natural transformation was also at hand.
Thus, one good start for students is to acquire a thorough practical grasp of naturality in the context of coordinate transformations in linear algebra and differential geometry. For
A concrete example , of a canonical usage of "naturality", which is accompanied by an extensive motivating discussion, can be found is given in John Lee's Introduction to Smooth Manifolds as the following lemma:
Lemma 12.16: (Naturality of the Exterior Derivative)
If $G\colon M\to N$ is a smooth map, then the pullback map $G^\star\colon \mathcal{A}^k(N)\to \mathcal{A}^k(M)$ commutes with $d$. That is, for all $\omega \in \mathcal{A}^k(N)$, we have $G^\star(d\omega) = d(G^\star\omega)$.
Being interested in practical applications of geometrically "natural" formalisms for quantum systems engineering, I've studied the usage in MacSciNet reviews of the words "natural*" (chiefly "natural" and "naturality") and "universal*" (chiefly "universal" and "universality").
Here are the numbers; their use is burgeoning!
• Year-Range (natural*, universal*)
• 2001-2005: (16788 uses, 05288 uses)
• 1996-2000: (14880, 04977)
• 1991-1995: (12550, 04432)
• 1986-1990: (10335, 03343)
• 1981-1985: (08775, 03013)
• 1976-1980: (07402, 02412)
• 1971-1975: (05668, 02040)
• 1966-1970: (03466, 01167)
• 1961-1965: (02211, 00610)
• 1956-1960: (01368, 00406)
• 1951-1955: (00880, 00253)
• 1946-1950: (00502, 00107)
• 1941-1945: (00251, 00060)
So to judge by the literature, it seems that we are entering into a Golden Era of mathematical "naturality" and "universality" ... we can hope so, anyway! :)
2 Added example of "naturality" from John Lee's book
1
|
2013-05-26 08:34:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7326069474220276, "perplexity": 2490.8188766849826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706762669/warc/CC-MAIN-20130516121922-00045-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://en.wikibooks.org/wiki/Electronics_Handbook/Components/Transformers
|
# Electronics Handbook/Components/Transformers
## Transformer
Transformer is an electromagnetic component consists of two coils wrap around a square magnetic material that has the capability of Step up, Step down and Buffer the voltage
## Characteristic
$V_S = V_P \frac{N_S}{N_P}$
• Step Up Voltage when $\frac{N_S}{N_P} > 1$
• Step Down Voltage when $\frac{N_S}{N_P} < 1$
• Buffer Voltage when $\frac{N_S}{N_P} = 1$
## Summary
Component Transformer
Symbol
Construction
$\frac{V_o}{V_i}$ $\frac{V_o}{V_i} = \frac{N_2}{N_1}$
Function Step Up $V_o > V_i , N_2 > N_1$
Step Down$V_o < V_i , N_2 < N_1$
Buffer $V_o = V_i , N_2 = N_1$
|
2014-10-24 16:14:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7413393259048462, "perplexity": 12059.108903135148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646269.50/warc/CC-MAIN-20141024030046-00240-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.transforms.PointPairFeatures.html
|
# torch_geometric.transforms.PointPairFeatures
class PointPairFeatures(cat: bool = True)[source]
Bases: BaseTransform
Computes the rotation-invariant Point Pair Features (functional name: point_pair_features)
$\left( \| \mathbf{d_{j,i}} \|, \angle(\mathbf{n}_i, \mathbf{d_{j,i}}), \angle(\mathbf{n}_j, \mathbf{d_{j,i}}), \angle(\mathbf{n}_i, \mathbf{n}_j) \right)$
of linked nodes in its edge attributes, where $$\mathbf{d}_{j,i}$$ denotes the difference vector between, and $$\mathbf{n}_i$$ and $$\mathbf{n}_j$$ denote the surface normals of node $$i$$ and $$j$$ respectively.
Parameters
cat (bool, optional) – If set to False, all existing edge attributes will be replaced. (default: True)
|
2023-02-09 08:27:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4680631756782532, "perplexity": 6329.973061365596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501555.34/warc/CC-MAIN-20230209081052-20230209111052-00125.warc.gz"}
|
http://en.wikipedia.org/wiki/Visual_magnitude
|
# Apparent magnitude
(Redirected from Visual magnitude)
Asteroid 65 Cybele and two stars, with their magnitudes labeled
The apparent magnitude (m) of a celestial body is a measure of its brightness as seen by an observer on Earth, adjusted to the value it would have in the absence of the atmosphere. The brighter the object appears, the lower the value of its magnitude. Generally the visible spectrum (vmag) is used as a basis for the apparent magnitude, but other regions of the spectrum, such as the near-infrared J-band, are also used. In the visible spectrum Sirius is the brightest star in the night sky, whereas in the near-infrared J-band, Betelgeuse is the brightest.
## History
Visible to
typical
human eye[1]
Apparent
magnitude
Brightness
relative
to Vega
Number of stars
brighter than
apparent magnitude[2]
Yes −1.0 250% 1
0.0 100% 4
1.0 40% 15
2.0 16% 48
3.0 6.3% 171
4.0 2.5% 513
5.0 1.0% 1 602
6.0 0.40% 4 800
6.5 0.25% 9 096[3]
No 7.0 0.16% 14 000
8.0 0.063% 42 000
9.0 0.025% 121 000
10.0 0.010% 340 000
The scale now used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes. The brightest stars in the night sky were said to be of first magnitude (m = 1), whereas the faintest were of sixth magnitude (m = 6), the limit of human visual perception (without the aid of a telescope). Each grade of magnitude was considered twice the brightness of the following grade (a logarithmic scale). This somewhat crude method of indicating the brightness of stars was popularized by Ptolemy in his Almagest, and is generally believed to originate with Hipparchus. This original system did not measure the magnitude of the Sun.
In 1856, Norman Robert Pogson formalized the system by defining a typical first magnitude star as a star that is 100 times as bright as a typical sixth magnitude star; thus, a first magnitude star is about 2.512 times as bright as a second magnitude star. The fifth root of 100 is known as Pogson's Ratio.[4] Pogson's scale was originally fixed by assigning Polaris a magnitude of 2. Astronomers later discovered that Polaris is slightly variable, so they first switched to Vega as the standard reference star, and then switched to using tabulated zero points[clarification needed] for the measured fluxes.[5] The magnitude depends on the wavelength band (see below).
The modern system is no longer limited to 6 magnitudes or only to visible light. Very bright objects have negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has an apparent magnitude of –1.4. The modern scale includes the Moon and the Sun. The full Moon has a mean apparent magnitude of –12.74[6] and the Sun has an apparent magnitude of –26.74.[7] The Hubble Space Telescope has located stars with magnitudes of 30 at visible wavelengths and the Keck telescopes have located similarly faint stars in the infrared.
## Calculations
30 Doradus image taken by ESO's VISTA. This nebula has an apparent magnitude of 8.
As the amount of light received actually depends on the thickness of the Earth's atmosphere in the line of sight to the object, the apparent magnitudes are adjusted to the value they would have in the absence of the atmosphere. The dimmer an object appears, the higher the numerical value given to its apparent magnitude. Note that brightness varies with distance; an extremely bright object may appear quite dim, if it is far away. Brightness varies inversely with the square of the distance. The absolute magnitude, M, of a celestial body (outside the Solar System) is the apparent magnitude it would have if it were at 10 parsecs (~32.6 light years); that of a planet (or other Solar System body) is the apparent magnitude it would have if it were 1 astronomical unit from both the Sun and Earth. The absolute magnitude of the Sun is 4.83 in the V band (yellow) and 5.48 in the B band (blue).[8]
The apparent magnitude, m, in the band, x, can be defined as,
$m_{x} - m_{x,0}= -2.5 \log_{10} \left(\frac {F_x}{F_{x,0} }\right)\,$,
where $F_x\!\,$ is the observed flux in the band x, and $m_{x,0}$ and $F_{x,0}$ are a reference magnitude, and reference flux in the same band x, such as that of Vega. An increase of 1 in the magnitude scale corresponds to a decrease in brightness by a factor of $\approx 2.512$. Based on the properties of logarithms, a difference in magnitudes, $m_1 - m_2 = \Delta m$, can be converted to a variation in brightness as $F_2/F_1 \approx 2.512^{\Delta m}$.
### Example: Sun and Moon
What is the ratio in brightness between the Sun and the full moon?
The apparent magnitude of the Sun is -26.74 (brighter), and the mean apparent magnitude of the full moon is -12.74 (dimmer).
Difference in magnitude : $x = m_1 - m_2 = (-12.74) - (-26.74) = 14.00$
Variation in Brightness : $v_b = 2.512^x = 2.512^{14.00} \approx 400,000$
The Sun appears about 400,000 times brighter than the full moon.
Sometimes, it might be useful to add magnitudes. For example, to determine the combined magnitude of a double star when the magnitudes of the individual components are known. This can be done by setting an equation using the brightness (in linear units) of each magnitude.[9]
$2.512^{-m_f} = 2.512^{-m_1} + 2.512^{-m_2} \!\$
Solving for $m_f$ yields
$m_f = -\log_{2.512} \left(2.512^{-m_1} + 2.512^{-m_2} \right) \!\$
where $m_f$ is the resulting magnitude after adding $m_1$ and $m_2$. Note that the negative of each magnitude is used because greater intensities equate to lower magnitudes.
## Standard reference values
Standard apparent magnitudes and fluxes for typical bands[10]
Band $\lambda$ ($\mu m$) $\Delta \lambda / \lambda$[clarification needed] Flux at m = 0, $F_{x,0}$ (Jy) Flux at m = 0, $F_{x,0}$ $(10^{-20} \text{ erg/s/cm}^2\text{/Hz})$
U 0.36 0.15 1810 1.81
B 0.44 0.22 4260 4.26
V 0.55 0.16 3640 3.64
R 0.64 0.23 3080 3.08
I 0.79 0.19 2550 2.55
J 1.26 0.16 1600 1.6
H 1.60 0.23 1080 1.08
K 2.22 0.23 670 6.7
L 3.50
g 0.52 0.14 3730 3.73
r 0.67 0.14 4490 4.49
i 0.79 0.16 4760 4.76
z 0.91 0.13 4810 4.81
It is important to note that the scale is logarithmic: the relative brightness of two objects is determined by the difference of their magnitudes. For example, a difference of 3.2 means that one object is about 19 times as bright as the other, because Pogson's Ratio raised to the power 3.2 is approximately 19.05. A common misconception is that the logarithmic nature of the scale is because the human eye itself has a logarithmic response. In Pogson's time this was thought to be true (see Weber-Fechner law), but it is now believed that the response is a power law (see Stevens' power law).[11]
Magnitude is complicated by the fact that light is not monochromatic. The sensitivity of a light detector varies according to the wavelength of the light, and the way it varies depends on the type of light detector. For this reason, it is necessary to specify how the magnitude is measured for the value to be meaningful. For this purpose the UBV system is widely used, in which the magnitude is measured in three different wavelength bands: U (centred at about 350 nm, in the near ultraviolet), B (about 435 nm, in the blue region) and V (about 555 nm, in the middle of the human visual range in daylight). The V band was chosen for spectral purposes and gives magnitudes closely corresponding to those seen by the light-adapted human eye, and when an apparent magnitude is given without any further qualification, it is usually the V magnitude that is meant, more or less the same as visual magnitude.
Because cooler stars, such as red giants and red dwarfs, emit little energy in the blue and UV regions of the spectrum their power is often under-represented by the UBV scale. Indeed, some L and T class stars have an estimated magnitude of well over 100, because they emit extremely little visible light, but are strongest in infrared.
Measures of magnitude need cautious treatment and it is extremely important to measure like with like. On early 20th century and older orthochromatic (blue-sensitive) photographic film, the relative brightnesses of the blue supergiant Rigel and the red supergiant Betelgeuse irregular variable star (at maximum) are reversed compared to what human eyes perceive, because this archaic film is more sensitive to blue light than it is to red light. Magnitudes obtained from this method are known as photographic magnitudes, and are now considered obsolete.
For objects within the Milky Way with a given absolute magnitude, 5 is added to the apparent magnitude for every tenfold increase in the distance to the object. This relationship does not apply for objects at very great distances (far beyond the Milky Way), because a correction for general relativity must then be taken into account due to the non-Euclidean nature of space.
For planets and other Solar System bodies the apparent magnitude is derived from its phase curve and the distances to the Sun and observer.
## Table of notable celestial objects
Apparent visual magnitudes of known celestial objects
App. Mag. (V) Celestial object
–38.00 Rigel as seen from 1 astronomical unit. It would be seen as a large very bright bluish scorching ball of 35° apparent diameter.
–30.30 Sirius as seen from 1 astronomical unit
–29.30 Sun as seen from Mercury at perihelion
–27.40 Sun as seen from Venus at perihelion
–26.74[7] Sun as seen from Earth (about 400,000 times brighter than mean full moon)
–25.60 Sun as seen from Mars at aphelion
–23.00 Sun as seen from Jupiter at aphelion
–21.70 Sun as seen from Saturn at aphelion
–20.20 Sun as seen from Uranus at aphelion
–19.30 Sun as seen from Neptune
–18.20 Sun as seen from Pluto at aphelion
–16.70 Sun as seen from Eris at aphelion
–14.2 An illumination level of one lux [12][13]
–12.92 Maximum brightness of full moon (mean is –12.74)[6]
–11.20 Sun as seen from Sedna at aphelion
–10 Comet Ikeya–Seki (1965), which was the brightest Kreutz Sungrazer of modern times[14]
–9.50 Maximum brightness of an Iridium (satellite) flare
–7.50 The SN 1006 supernova of AD 1006, the brightest stellar event in recorded history (7200 light years away)[15]
–6.50 The total integrated magnitude of the night sky as seen from Earth
–6.00 The Crab Supernova (SN 1054) of AD 1054 (6500 light years away)[16]
–5.9 International Space Station (when the ISS is at its perigee and fully lit by the Sun)[17]
–4.89 Maximum brightness of Venus[18] when illuminated as a crescent
–4.00 Faintest objects observable during the day with naked eye when Sun is high
–3.99 Maximum brightness of Epsilon Canis Majoris 4.7 million years ago, the historical brightest star of the last and next five million years
–3.82 Minimum brightness of Venus when it is on the far side of the Sun
–2.94 Maximum brightness of Jupiter[19]
–2.91 Maximum brightness of Mars[20]
–2.50 Faintest objects visible during the day with naked eye when Sun is less than 10° above the horizon
–2.50 Minimum brightness of new moon
–2.45 Maximum brightness of Mercury at superior conjunction (unlike Venus, Mercury is at its brightest when on the far side of the Sun, the reason being their different phase curves)
–1.61 Minimum brightness of Jupiter
–1.47 Brightest star (except for the Sun) at visible wavelengths: Sirius[21]
–0.83 Eta Carinae apparent brightness as a supernova impostor in April 1843
–0.72 Second-brightest star: Canopus[22]
–0.49 Maximum brightness of Saturn at opposition and when the rings are full open (2003, 2018)
–0.27 The total magnitude for the Alpha Centauri AB star system. (Third-brightest star to the naked eye)
–0.04 Fourth-brightest star to the naked eye Arcturus[23]
−0.01 Fourth-brightest individual star visible telescopically in the sky Alpha Centauri A
+0.03 Vega, which was originally chosen as a definition of the zero point[24]
+0.50 Sun as seen from Alpha Centauri
1.47 Minimum brightness of Saturn
1.84 Minimum brightness of Mars
3.03 The SN 1987A supernova in the Large Magellanic Cloud 160,000 light-years away.
3 to 4 Faintest stars visible in an urban neighborhood with naked eye
3.44 The well known Andromeda Galaxy (M31)[25]
4.38 Maximum brightness of Ganymede[26] (moon of Jupiter and the largest moon in the Solar System)
4.50 M41, an open cluster that may have been seen by Aristotle[27]
5.20 Maximum brightness of asteroid Vesta
5.32 Maximum brightness of Uranus[28]
5.72 The spiral galaxy M33, which is used as a test for naked eye seeing under dark skies[29][30]
5.73 Minimum brightness of Mercury
5.8 Peak visual magnitude of gamma ray burst GRB 080319B (the "Clarke Event") seen on Earth on March 19, 2008 from a distance of 7.5 gigalight-years.
5.95 Minimum brightness of Uranus
6.49 Maximum brightness of asteroid Pallas
6.50 Approximate limit of stars observed by a mean naked eye observer under very good conditions. There are about 9,500 stars visible to mag 6.5.[1]
6.64 Maximum brightness of dwarf planet Ceres in the asteroid belt
6.75 Maximum brightness of asteroid Iris
6.90 The spiral galaxy M81 is an extreme naked eye target that pushes human eyesight and the Bortle Dark-Sky Scale to the limit[31]
7 to 8 Extreme naked eye limit with class 1 Bortle Dark-Sky Scale, the darkest skies available on Earth[32]
7.78 Maximum brightness of Neptune[33]
8.02 Minimum brightness of Neptune
8.10 Maximum brightness of Titan (largest moon of Saturn),[34][35] mean opposition magnitude 8.4[36]
8.94 Maximum brightness of asteroid 10 Hygiea[37]
9.50 Faintest objects visible using common 7x50 binoculars under typical conditions[38]
10.20 Maximum brightness of Iapetus[35] (brightest when west of Saturn and takes 40 days to switch sides)
12.91 Brightest quasar 3C 273 (luminosity distance of 2.4 giga-light years)
13.42 Maximum brightness of Triton[36]
13.65 Maximum brightness of Pluto[39] (725 times fainter than magnitude 6.5 naked eye skies)
15.40 Maximum brightness of centaur Chiron[40]
15.55 Maximum brightness of Charon (the large moon of Pluto)
16.80 Current opposition brightness of Makemake[41]
17.27 Current opposition brightness of Haumea[42]
18.70 Current opposition brightness of Eris
20.70 Callirrhoe (small ~8 km satellite of Jupiter)[36]
22.00 Approximate limiting magnitude of a 24" Ritchey-Chrétien telescope with 30 minutes of stacked images (6 subframes at 300s each) using a CCD detector[43]
22.91 Maximum brightness of Pluto's moon Hydra
23.38 Maximum brightness of Pluto's moon Nix
24.80 Amateur picture with greatest magnitude: quasar CFHQS J1641 +3755[44][45]
25.00 Fenrir (small ~4 km satellite of Saturn)[46]
27.00 Faintest objects observable in visible light with 8m ground-based telescopes
28.00 Jupiter if it were located 5000AU from the Sun[47]
28.20 Halley's Comet in 2003 when it was 28AU from the Sun[48]
31.50 Faintest objects observable in visible light with Hubble Space Telescope[49]
35.00 LBV 1806-20, a luminous blue variable star, expected magnitude at visible wavelengths due to interstellar extinction
36.00 Faintest objects observable in visible light[citation needed] with E-ELT
Some of the above magnitudes are only approximate. Telescope sensitivity also depends on observing time, optical bandpass, and interfering light from scattering and airglow.
## References
1. ^ a b "Vmag<6.5". SIMBAD Astronomical Database. Retrieved 2010-06-25.
2. ^ "Magnitude". National Solar Observatory—Sacramento Peak. Archived from the original on 2008-02-06. Retrieved 2006-08-23.
3. ^ Bright Star Catalogue
4. ^
5. ^
6. ^ a b Williams, Dr. David R. (2010-02-02). "Moon Fact Sheet". NASA (National Space Science Data Center). Archived from the original on 23 March 2010. Retrieved 2010-04-09.
7. ^ a b Williams, Dr. David R. (2004-09-01). "Sun Fact Sheet". NASA (National Space Science Data Center). Archived from the original on 15 July 2010. Retrieved 2010-07-03.
8. ^ Prof. Aaron Evans. "Some Useful Astronomical Definitions". Stony Brook Astronomy Program. Retrieved 2009-07-12.
9. ^ "Magnitude Arithmetic". Weekly Topic. Caglow. Retrieved 30 January 2012.
10. ^ Prof. Gregory D. Wirth. "Astronomical Magnitude Systems". Department of Physics and Astronomy, University of Toronto. Retrieved 2012-08-15.
11. ^ E. Schulman and C. V. Cox (1997). "Misconceptions About Astronomical Magnitudes". American Journal of Physics 65: 1003. Bibcode:1997AmJPh..65.1003S. doi:10.1119/1.18714.
12. ^
13. ^ Ian S. McLean, Electronic imaging in astronomy: detectors and instrumentation Springer, 2008, ISBN 3-540-76582-4 page 529
14. ^ "Brightest comets seen since 1935". International Comet Quarterly. Retrieved 18 December 2011.
15. ^ Winkler, P. Frank; Gupta, Gaurav; Long, Knox S. (2003). "The SN 1006 Remnant: Optical Proper Motions, Deep Imaging, Distance, and Brightness at Maximum". The Astrophysical Journal 585: 324–335. arXiv:astro-ph/0208415. Bibcode:2003ApJ...585..324W. doi:10.1086/345985.
16. ^ Supernova 1054 - Creation of the Crab Nebula
17. ^ "ISS Information - Heavens-above.com". Heavens-above. Retrieved 2007-12-22.
18. ^ "HORIZONS Web-Interface for Venus (Major Body=299)". JPL Horizons On-Line Ephemeris System. 2006-Feb-27 (GEOPHYSICAL DATA). Retrieved 2010-11-28. (Using JPL Horizons you can see that on 2013-Dec-08 Venus will have an apmag of -4.89)
19. ^ Williams, David R. (2007-11-02). "Jupiter Fact Sheet". National Space Science Data Center. NASA. Retrieved 2010-06-25.
20. ^ Williams, David R. (2007-11-29). "Mars Fact Sheet". National Space Science Data Center. NASA. Archived from the original on 12 June 2010. Retrieved 2010-06-25.
21. ^ "Sirius". SIMBAD Astronomical Database. Retrieved 2010-06-26.
22. ^ "Canopus". SIMBAD Astronomical Database. Retrieved 2010-06-26.
23. ^ "Arcturus". SIMBAD Astronomical Database. Retrieved 2010-06-26.
24. ^ "Vega". SIMBAD Astronomical Database. Retrieved 2010-04-14.
26. ^ Yeomans and Chamberlin. "Horizon Online Ephemeris System for Ganymede (Major Body 503)". California Institute of Technology, Jet Propulsion Laboratory. Retrieved 2010-04-14. (4.38 on 1951-Oct-03)
27. ^ "M41 possibly recorded by Aristotle". SEDS (Students for the Exploration and Development of Space). 2006-07-28. Retrieved 2009-11-29.
28. ^ Williams, David R. (2005-01-31). "Uranus Fact Sheet". National Space Science Data Center. NASA. Archived from the original on 29 June 2010. Retrieved 2010-06-25.
30. ^ Lodriguss, Jerry (1993). "M33 (Triangulum Galaxy)". Retrieved 2009-11-27. (shows b mag not v mag)
31. ^ "Messier 81". SEDS (Students for the Exploration and Development of Space). 2007-09-02. Retrieved 2009-11-28.
32. ^ John E. Bortle (February 2001). "The Bortle Dark-Sky Scale". Sky & Telescope. Retrieved 2009-11-18.
33. ^ Williams, David R. (2007-11-29). "Neptune Fact Sheet". National Space Science Data Center. NASA. Archived from the original on 1 July 2010. Retrieved 2010-06-25.
34. ^ Yeomans and Chamberlin. "Horizon Online Ephemeris System for Titan (Major Body 606)". California Institute of Technology, Jet Propulsion Laboratory. Retrieved 2010-06-28. (8.10 on 2003-Dec-30)
35. ^ a b "Classic Satellites of the Solar System". Observatorio ARVAL. Archived from the original on 31 July 2010. Retrieved 2010-06-25.
36. ^ a b c "Planetary Satellite Physical Parameters". JPL (Solar System Dynamics). 2009-04-03. Archived from the original on 23 July 2009. Retrieved 2009-07-25.
37. ^ "AstDys (10) Hygiea Ephemerides". Department of Mathematics, University of Pisa, Italy. Retrieved 2010-06-26.
38. ^ Ed Zarenski (2004). "Limiting Magnitude in Binoculars". Cloudy Nights. Retrieved 2011-05-06.
39. ^ Williams, David R. (2006-09-07). "Pluto Fact Sheet". National Space Science Data Center. NASA. Archived from the original on 1 July 2010. Retrieved 2010-06-26.
40. ^ "AstDys (2060) Chiron Ephemerides". Department of Mathematics, University of Pisa, Italy. Retrieved 2010-06-26.
41. ^ "AstDys (136472) Makemake Ephemerides". Department of Mathematics, University of Pisa, Italy. Retrieved 2010-06-26.
42. ^ "AstDys (136108) Haumea Ephemerides". Department of Mathematics, University of Pisa, Italy. Retrieved 2010-06-26.
43. ^ Steve Cullen (sgcullen) (2009-10-05). "17 New Asteroids Found by LightBuckets". LightBuckets. Retrieved 2009-11-15.
44. ^ Cooperation with Ken Crawford
45. ^
46. ^ Scott S. Sheppard. "Saturn's Known Satellites". Carnegie Institution (Department of Terrestrial Magnetism). Retrieved 2010-06-28.
47. ^ Magnitude difference is 2.512*log10[(5000/5)^2 X (4999/4)^2] ≈ 30.6, so Jupiter is 30.6 mag fainter at 5000 AU
48. ^ "New Image of Comet Halley in the Cold". ESO. 2003-09-01. Archived from the original on 1 March 2009. Retrieved 2009-02-22.
49. ^ The HST eXtreme Deep Field XDF: Combining all ACS and WFC3/IR Data on the HUDF Region into the Deepest Field Ever
|
2014-03-17 20:06:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160619139671326, "perplexity": 4911.8543955909345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678706176/warc/CC-MAIN-20140313024506-00002-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Series/General
|
# Definition:Series/General
## Definition
Let $\struct{S, \circ}$ be a semigroup.
Let $\sequence{a_n}$ be a sequence in $S$.
Informally, a series is what results when an infinite product is taken of $\sequence {a_n}$:
$\ds s := \sum_{n \mathop = 1}^\infty a_n = a_1 \circ a_2 \circ a_3 \circ \cdots$
Formally, a series is a sequence in $S$.
## Sequence of Partial Products
The sequence $\sequence {s_N}$ defined as the indexed iterated operation:
$\ds s_N = \sum_{n \mathop = 1}^N a_n = a_1 \circ a_2 \circ \cdots \circ a_N$
is the sequence of partial products of the series $\ds \sum_{n \mathop = 1}^\infty a_n$.
|
2021-11-29 21:14:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.997389554977417, "perplexity": 680.9964673549597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358842.4/warc/CC-MAIN-20211129194957-20211129224957-00429.warc.gz"}
|
https://icml.cc/Conferences/2020/ScheduleMultitrack?event=6430
|
Timezone: »
Poster
Scalable Exact Inference in Multi-Output Gaussian Processes
Wessel Bruinsma · Eric Perim Martins · William Tebbutt · Scott Hosking · Arno Solin · Richard E Turner
Wed Jul 15 12:00 PM -- 12:45 PM & Thu Jul 16 01:00 AM -- 01:45 AM (PDT) @ Virtual #None
Multi-output Gaussian processes (MOGPs) leverage the flexibility and interpretability of GPs while capturing structure across outputs, which is desirable, for example, in spatio-temporal modelling. The key problem with MOGPs is their computational scaling $O(n^3 p^3)$, which is cubic in the number of both inputs $n$ (e.g., time points or locations) and outputs $p$. For this reason, a popular class of MOGPs assumes that the data live around a low-dimensional linear subspace, reducing the complexity to $O(n^3 m^3)$. However, this cost is still cubic in the dimensionality of the subspace $m$, which is still prohibitively expensive for many applications. We propose the use of a sufficient statistic of the data to accelerate inference and learning in MOGPs with orthogonal bases. The method achieves linear scaling in $m$ in practice, allowing these models to scale to large $m$ without sacrificing significant expressivity or requiring approximation. This advance opens up a wide range of real-world tasks and can be combined with existing GP approximations in a plug-and-play way. We demonstrate the efficacy of the method on various synthetic and real-world data sets.
#### Author Information
##### Arno Solin (Aalto University)
Dr. Arno Solin is Assistant Professor in Machine Learning at the Department of Computer Science, Aalto University, Finland, and Adjunct Professor (Docent) at Tampere University, Finland. His research focuses on probabilistic models combining statistical machine learning and signal processing with applications in sensor fusion, robotics, computer vision, and online decision making. He has published around 50 peer-reviewed articles and one book. Previously, he has been a visiting researcher at Uppsala University (2019), University of Cambridge (2017-2018), and University of Sheffield (2014), and worked as a Team Lead in a tech startup. Prof. Solin is the winner of several prizes, hackathons, and modelling competitions, including the Schizophrenia Classification Challenge on Kaggle and the ISIF Jean-Pierre Le Cadre Best Paper Award. Homepage: http://arno.solin.fi
##### Richard E Turner (University of Cambridge)
Richard Turner holds a Lectureship (equivalent to US Assistant Professor) in Computer Vision and Machine Learning in the Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, UK. He is a Fellow of Christ's College Cambridge. Previously, he held an EPSRC Postdoctoral research fellowship which he spent at both the University of Cambridge and the Laboratory for Computational Vision, NYU, USA. He has a PhD degree in Computational Neuroscience and Machine Learning from the Gatsby Computational Neuroscience Unit, UCL, UK and a M.Sci. degree in Natural Sciences (specialism Physics) from the University of Cambridge, UK. His research interests include machine learning, signal processing and developing probabilistic models of perception.
|
2021-10-21 09:14:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3292478322982788, "perplexity": 1990.606639906064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00579.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/jimo.2019102
|
Article Contents
Article Contents
# Biobjective optimization over the efficient set of multiobjective integer programming problem
• In this article, an exact method is proposed to optimize two preference functions over the efficient set of a multiobjective integer linear program (MOILP). This kind of problems arises whenever two associated decision-makers have to optimize their respective preference functions over many efficient solutions. For this purpose, we develop a branch-and-cut algorithm based on linear programming, for finding efficient solutions in terms of both preference functions and MOILP problem, without explicitly enumerating all efficient solutions of MOILP problem. The branch and bound process, strengthened by efficient cuts and tests, allows us to prune a large number of nodes in the tree to avoid many solutions. An illustrative example and an experimental study are reported.
Mathematics Subject Classification: Primary: 90C29, 90C10; Secondary: 90C57.
Citation:
• Figure 1. Search tree of the example
Table 1. Optimal simplex table for node 0
$\mathcal{B}_1$ $x_1$ $x_3$ $x_5$ $RHS$ $x_4$ $5/6$ -$1$ $2/3$ $53/6$ $x_2$ $1/3$ $0$ $2/3$ $16/3$ $x_6$ $1/3$ $5/2$ -$2/3$ 2/3 $\bar{d}^1$ -$7/6$ -$1/2$ -$10/3$ $80/3$
Table 2. Optimal simplex table for node 1
$\mathcal{B}_2$ $x_3$ $x_5$ $x_7$ $RHS$ $x_4$ -$1$ $-1$ $\frac{5}{2}$ $8$ $x_1$ $0$ $2$ -$3$ $1$ $x_6$ $\frac{5}{2}$ $0$ -$1$ $1$ $x_2$ $0$ $0$ $1$ $5$ $\bar{d}^1$ -$\frac{1}{2}$ -$1$ -$\frac{7}{2}$ $\frac{51}{2}$ $\bar{d}^2$ $1$ $0$ $0$ $0$ $\bar{c}^1$ $-1$ $-2$ $5$ $-9$ $\bar{c}^2$ -$\frac{1}{2}$ $0$ -$2$ $10$ $\bar{c}^3$ -$1$ -$2$ $3$ $1$
Table 3. Optimal simplex table for node 3
$\mathcal{B}_3$ $x_5$ $x_6$ $x_9$ $RHS$ $x_4$ 1 $\frac{5}{2}$ $\frac{21}{4}$ $\frac{21}{5}$ $x_1$ 2 -3 $\frac{15}{2}$ $\frac{11}{2}$ $x_3$ 0 0 -1 1 $x_2$ 0 1 $\frac{5}{2}$ $\frac{7}{2}$ $x_7$ 0 -1 -$\frac{5}{2}$ $\frac{3}{2}$ $x_8$ 0 -1 -$\frac{5}{2}$ $\frac{1}{2}$ $\bar{d}^1$ -1 -$\frac{7}{2}$ -$\frac{37}{4}$ $\frac{79}{4}$
Table 4. Optimal simplex table for node 4
$\mathcal{B}_4$ $x_6$ $x_9$ $x_{10}$ $RHS$ $x_4$ $1$ $\frac{3}{2}$ -$\frac{1}{2}$ $\frac{11}{2}$ $x_2$ $1$ $\frac{5}{2}$ 0 $\frac{7}{2}$ $x_3$ $0$ -$1$ $0$ $1$ $x_1$ $0$ $0$ $1$ $5$ $x_5$ -$\frac{3}{2}$ -$\frac{15}{4}$ -$\frac{1}{2}$ $\frac{1}{4}$ $x_7$ -$1$ -$\frac{5}{2}$ $0$ $\frac{3}{2}$ $x_8$ -$1$ -$\frac{5}{2}$ 0 $\frac{1}{2}$ $\bar{d}^1$ -$5$ -$13$ -$\frac{1}{2}$ $\frac{39}{2}$
Table 5. Optimal simplex table for node 5
$\mathcal{B}_5$ $x_5$ $x_9$ $x_{10}$ $RHS$ $x_4$ $\frac{2}{3}$ -1 $\frac{5}{6}$ $\frac{29}{6}$ $x_1$ 0 0 -1 6 $x_3$ 0 -1 0 1 $x_2$ $\frac{2}{3}$ 0 $\frac{1}{3}$ $\frac{10}{3}$ $x_7$ $-\frac{2}{3}$ 0 $-\frac{1}{3}$ $\frac{5}{3}$ $x_8$ $-\frac{2}{3}$ 0 $-\frac{1}{3}$ $\frac{2}{3}$ $x_6$ $-\frac{2}{3}$ $\frac{5}{2}$ $-\frac{1}{3}$ $\frac{1}{6}$ $\bar{d}^1$ $-\frac{10}{3}$ $-\frac{1}{2}$ $\frac{7}{6}$ $\frac{115}{6}$
Table 6. Optimal simplex table for node 6
$\mathcal{B}_6$ $x_9$ $x_{10}$ $x_{11}$ RHS $x_4$ -1 $-\frac{1}{2}$ 1 5 $x_1$ 0 1 0 5 $x_3$ -1 0 0 1 $x_2$ 0 0 0 3 $x_5$ 0 $-\frac{1}{2}$ $-\frac{3}{2}$ 1 $x_8$ 0 0 -1 1 $x_{6}$ $\frac{5}{2}$ 0 -1 $\frac{1}{2}$ $x_7$ 0 0 -1 2 $\bar{d}^1$ $-\frac{1}{2}$ $-\frac{1}{2}$ -5 17 $\bar{d}^2$ 1 0 0 1 $\bar{c}^1$ -1 -1 2 -2 $\bar{c}^2$ $-\frac{1}{2}$ 0 -2 $\frac{11}{2}$ $\bar{c}^3$ -1 -1 0 4
Table 7. Optimal simplex table for node 8
$\mathcal{B}_8$ $x_5$ $x_9$ $x_{11}$ RHS $x_4$ -1 -1 $\frac{5}{2}$ 4 $x_2$ 0 0 1 3 $x_3$ 0 -1 0 1 $x_1$ 2 0 -3 7 $x_6$ 0 $\frac{5}{2}$ -1 $\frac{1}{2}$ $x_7$ 0 0 -1 2 $x_8$ 0 0 -1 1 $x_{10}$ 2 0 -3 1 $\bar{d}^1$ -1 $-\frac{1}{2}$ $-\frac{7}{2}$ 18 $\bar{d}^2$ 0 1 0 1 $\bar{c}^1$ -2 -1 5 0 $\bar{c}^2$ 0 $-\frac{1}{2}$ -2 $\frac{11}{2}$ $\bar{c}^3$ -2 -1 3 6
Table 8. Optimal simplex table for the node 10
$\mathcal{B}_{10}$ $x_6$ $x_{10}$ $x_{13}$ RHS $x_5$ $-\frac{3}{2}$ $-\frac{1}{2}$ $-\frac{15}{4}$ 4 $x_4$ 1 $-\frac{1}{2}$ $\frac{3}{2}$ 4 $x_3$ 0 0 -1 2 $x_7$ -1 0 $-\frac{5}{2}$ 4 $x_8$ -1 1 $-\frac{5}{2}$ 3 $x_9$ 0 0 -1 1 $x_2$ 1 0 $\frac{5}{2}$ 1 $x_1$ 0 1 0 5 $\bar{d}^1$ -5 $-\frac{1}{2}$ -13 $\frac{13}{2}$ $\bar{d}^2$ 0 0 1 2 $\bar{c}^1$ 2 -1 4 1 $\bar{c}^2$ -2 0 $-\frac{11}{2}$ 1 $\bar{c}^3$ 0 -1 -1 3
Table 9. Optimal simplex table for node 11
$\mathcal{B}_{11}$ $x_4$ $x_6$ $x_{13}$ RHS $x_5$ -1 $-\frac{5}{2}$ $-\frac{21}{4}$ 0 $x_3$ 0 0 -1 2 $x_7$ 0 -1 $-\frac{5}{2}$ 4 $x_8$ 0 -1 $-\frac{5}{2}$ 3 $x_{10}$ 2 2 3 7 $x_{11}$ 0 -1 $-\frac{5}{2}$ 2 $x_{12}$ 0 -1 $-\frac{5}{2}$ 1 $x_9$ 0 0 -1 1 $x_2$ 1 0 $\frac{5}{2}$ 1 $x_1$ 2 2 3 13 $\bar{d}^1$ -1 -6 $-\frac{29}{2}$ $\frac{21}{2}$ $\bar{d}^2$ 0 0 3 6 $\bar{c}^1$ -2 0 1 9 $\bar{c}^2$ 0 -2 $-\frac{11}{2}$ 1 $\bar{c}^3$ -2 -2 -4 11
Table 10. Random instances execution times
• [1] M. Abbas and D. Chaabane, Optimizing a linear function over an integer efficient set, European J. Oper. Res., 174 (2006), 1140-1161. doi: 10.1016/j.ejor.2005.02.072. [2] P. Belotti, B. Soylu and M. Wiecek, Fathoming rules for biobjective mixed integer linear programs: Review and extensions, Discrete Optim., 22 (2016), 341-363. doi: 10.1016/j.disopt.2016.09.003. [3] N. Boland, H. Charkhgard and and M. Savelsbergh, A new method for optimizing a linear function over the efficient set of a multiobjective integer program, European J. Oper. Res., 260 (2017), 904-919. doi: 10.1016/j.ejor.2016.02.037. [4] H. P. Benson, Optimization over the efficient set, J. Math. Anal. Appl., 98 (1984), 562-580. doi: 10.1016/0022-247X(84)90269-5. [5] M. E-A Chergui and M. Moulaï, An exact method for a discrete multiobjective linear fractional optimization, J. Appl. Math. Decis. Sci., (2008), 12pp. doi: 10.1155/2008/760191. [6] W. Drici, F- Z Ouaïl and M. Moulaï, Optimizing a linear fractional function over the integer efficient set, Ann. Oper. Res., 267 (2018), 135-151. doi: 10.1007/s10479-017-2691-0. [7] J. G. Ecker, N. S. Hegner and I. A. Kouada, Generating all maximal efficient faces for multiple objective linear programs, J. Optim. Theory Appl., 30 (1980), 353-381. doi: 10.1007/BF00935493. [8] J. G. Ecker and I. A. Kouada, Finding all efficient extreme points for multiple objective linear programs, Math. Programming, 14 (1978), 249-261. doi: 10.1007/BF01588968. [9] J. G. Ecker and J. H. Song, Optimizing a linear function over an efficient set, J. Optim. Theory Appl., 83 (1994), 541-563. doi: 10.1007/BF02207641. [10] J. P. Evans and R. E. Steuer, A revised simplex method for linear multiple objective programs, Math. Programming, 5 (1973), 54-72. doi: 10.1007/BF01580111. [11] J. M. Jorge, An algorithm for optimizing a linear function over an integer efficient set, European J. Oper. Res., 195 (2009), 98-103. doi: 10.1016/j.ejor.2008.02.005. [12] F- Z Ouaïl, M. E-A. Chergui and M. Moulaï, An exact method for optimizing a linear function over an integer efficient set, WSEAS Transactions on Circuits and Systems, 16(3) (2017), 141-148. [13] Y. Yamamoto, Optimization over the efficient set: Overview, J. Global Optim., 22 (2002), 285-317. doi: 10.1023/A:1013875600711.
Figures(1)
Tables(10)
|
2023-01-31 10:05:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.46336856484413147, "perplexity": 448.7351463274999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00015.warc.gz"}
|
https://www.clutchprep.com/physics/practice-problems/145233/two-capacitors-of-capacitance-3c-and-5c-where-c-0-13-f-are-connected-in-series-w
|
# Problem: Two capacitors of capacitance 3C and 5C (where C = 0.13 F) are connected in series with a resistor of resistance R Randomized Variables R=5.5 Ω If the circuit was charged by a 10.0 V source how much total charge (in C) did both capacitors have in them to begin with?
###### FREE Expert Solution
Equivalent capacitance for two capacitors:
$\overline{){{\mathbf{C}}}_{\mathbf{e}\mathbf{q}}{\mathbf{=}}\frac{{\mathbf{C}}_{\mathbf{1}}{\mathbf{C}}_{\mathbf{2}}}{{\mathbf{C}}_{\mathbf{1}}\mathbf{+}{\mathbf{C}}_{\mathbf{2}}}}$
The charge stored in a capacitor:
$\overline{){\mathbf{Q}}{\mathbf{=}}{\mathbf{C}}{\mathbf{V}}}$
###### Problem Details
Two capacitors of capacitance 3C and 5C (where C = 0.13 F) are connected in series with a resistor of resistance R Randomized Variables R=5.5 Ω
If the circuit was charged by a 10.0 V source how much total charge (in C) did both capacitors have in them to begin with?
|
2020-10-19 20:56:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7408128976821899, "perplexity": 1408.023227063755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107866404.1/warc/CC-MAIN-20201019203523-20201019233523-00496.warc.gz"}
|
https://www.mathwarehouse.com/algebra/polynomial/how-to-add-subtract-polynomials.php
|
# How to Add and Subtract Polynomials
Step By Step
Whether you want to add polynomials or subtract them, you follow a similar set of steps.
### General Steps
Step 1
Arrange the Polynomial in standard form.
Standard form of a polynomial just means that the term with highest degree is first and each of the following terms.
Step 2
Arrange the like terms in columns and add the like terms.
##### Example 1
Let's find the sum of the following two polynomials.
$$(3y^5 - 2y + y^4 + 2y^3 + 5)$$ and $$(2y^5 + 3y^3 + 2 + 7)$$
#### Subtracting Polynomials
##### Example 2
Let's find the difference of the same two polynomials.
$$(3y^5 - 2y + y^4 + 2y^3 + 5)$$ and $$(2y^5 + 3y^3 + 2 + 7)$$
### Practice Problems
##### Problem 1
This problem is like example 1.
First, remember to rewrite each polynomial in standard form, line up the columns and add the like terms.
##### Problem 2
This problem is like example 1.
First, remember to rewrite each polynomial in standard form, line up the columns and add the like terms.
##### Problem 3
This problem is like example 2 since we are subtracting.
First, remember to rewrite each polynomial in standard form, line up the columns and add the like terms.
(Be careful with $$-11x^3$$ term; it is already negative, so subtracting a negative leads to a positive $$11x^3$$)
##### Problem 4
Although this problem involves addition, there are no like terms. If you line up the polynomials in columns, you will see that no terms are in the same columns.
|
2021-07-31 19:54:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5956878662109375, "perplexity": 1230.289098884874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154099.21/warc/CC-MAIN-20210731172305-20210731202305-00309.warc.gz"}
|
https://brilliant.org/practice/operations-level-3-4-challenges/?subtopic=modular-arithmetic&chapter=operations
|
Number Theory
# Modular Arithmetic Operations: Level 3 Challenges
What is the remainder when $1^{2013}+2^{2013}+\cdots +2012^{2013}+2013^{2013}$ is divided by $2014$?
When we rotate an integer, we take the last digit (right most) and move it to the front of the number. For example, if we rotate $12345$, we will get $51234$.
What is the smallest (positive) integer $N$, such that when $N$ is rotated, we obtain $\frac{2}{3} N$?
Find the remainder when $6^{98}+8^{98}$ is divided by 98.
Find the smallest positive integer $k$ such that $1^2+2^2+3^2+\ldots+k^2$ is a multiple of 200.
${a^x \equiv a-2 \pmod{{\small(a-1)}}}$
If $a$ and $x$ are positive integers greater than 2, what is the value of $a?$
×
|
2020-08-07 15:52:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.393297016620636, "perplexity": 131.56361502075276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737204.32/warc/CC-MAIN-20200807143225-20200807173225-00426.warc.gz"}
|
https://www.physicsforums.com/threads/potential-energies-of-two-charged-cylinders.938250/
|
# Potential Energies of Two Charged Cylinders
## Homework Statement
Problem 1.24 (this is unimportant; it's just a different way of calculating the potential energy of a solid cylinder) gives one way of calculating the energy per unit length stored in a solid cylinder with radius a and uniform volume charge density ##\rho##. Calculate the energy here by using ##U = \frac{\epsilon_0}{2} \int_{entire \ surface} E^2 dv## to find the total energy per unit length stored in the electric field. Don’t forget to include the field inside the cylinder.
You will find that the energy is infinite, so instead calculate the energy relative to the configuration where all the charge is initially distributed uniformly over a hollow cylinder with large radius ##R##. (The field outside radius ##R## is the same in both configurations, so it can be ignored when calculating the relative energy.) In terms of the total charge ##\lambda## per unit length in the final cylinder, show that the energy per unit length can be written as ##\frac{\lambda^2}{4\pi\epsilon_0}\left(1/4+ln(R/a)\right)##
(It's important to note that the potential energy involved in this problem is NOT the potential energy of a particle in the field created by the charged cylinders, but the potential energy of the charged cylinders themselves.)
## Homework Equations
##U = \frac{\epsilon_0}{2} \int_{entire \ surface} E^2 dv##
## The Attempt at a Solution
The first part of the problem, involving solving for the energy of a solid cylinder, is pretty simple. For this, I got that the potential energy density per length of the inside of the cylinder being:
##U_{in}/h = \frac{\pi \rho^2 R^4}{16 \epsilon_0}##
The external potential energy of the cylinder goes to infinity, as the problem states, as you get:
## U_{out}/h = \left. \frac{\pi \rho^2 R^4}{4 \epsilon_0} \ln(r) \right|_R^\infty ##
The second part of the problem, though, is confusing to me. It would seem that the potential energy of the hollow cylinder is zero, because the electric field inside is zero. In addition, the problem says that you can "ignore" the field outside ##R##, but I'm not sure how exactly that can be possible if the total potential energy is the sum of the potential energies of the internal and external areas.
I thought that maybe I could calculate the potential energy inside the hollow cylinder by doing it piece by piece, i.e. with differential areas rather than with the equation given, but logically it should still be zero as well.
## Answers and Replies
Related Introductory Physics Homework Help News on Phys.org
kuruman
|
2020-10-28 03:50:41
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8565994501113892, "perplexity": 309.9302065281524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107896048.53/warc/CC-MAIN-20201028014458-20201028044458-00531.warc.gz"}
|
http://calculus123.com/wiki/Dual_space
|
This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.
# Dual space
## Vectors and covectors
What is the relation between $2$, the counting number, and "doubling", the function $f(x)=2\cdot x$?
Linear algebra helps one appreciate this seemingly trivial relation. Indeed, the answer is a linear operator $$D : {\bf R} \rightarrow L({\bf R},{\bf R}),$$ from the reals to the vector space of all linear functions. In fact, it's an isomorphism!
More generally, suppose $V$ is a vector space. Let $$V^* = \{ \alpha \colon V \rightarrow {\bf R}, \alpha {\rm \hspace{3pt} linear}\}.$$ It's the set of all linear "functionals", also called covectors, on $V$. It is called the dual of $V$.
An illustration of a vector in $V={\bf R}^2$ and a covector in $V^*$.
Here a vector is just a pair of numbers, while a covector is a correspondence of each unit vector with a number. The linearity is visible.
Note. If $V$ is a module over a ring $R$, the dual space is still the set of all linear functionals on $V$: $$V^* = \{ \alpha \colon V \rightarrow R, \alpha {\rm \hspace{3pt} linear}\}.$$ The results below apply equally to finitely generated free modules.
In the above example, it is easy to see a way of building a vector from this covector. Indeed, let's pick the vector $v$ such that
• the direction of $v$ is that of the one that gives the largest value of the covector $w$ (i.e., $2$), and
• the magnitude of $v$ is that value of $w$.
So the result is $v=(2,2)$. Moreover, covector $w$ can be reconstructed from this vector $v$. (Cf. gradient and norm of a linear operator.)
This is how the covector above can be visualized:
It is similar to an oil spill.
## Properties
Fact 1: $V^*$ is a vector space.
Just as any set of linear operators between two given vector spaces, in this case $V$ and ${\bf R}$. We define the operations for $\alpha, \beta \in V^*, r \in {\bf R}$: $$(\alpha + \beta)(v) = \alpha(v) + \beta(v), v \in V,$$ $$(r \alpha)(w) = r\alpha(w), w \in V.$$
Exercise. Prove it. Start with indicating what $0, -\alpha \in V^*$ are. Refer to theorems of linear algebra, such as the "Subspace Theorem".
Below we assume that $V$ is finite dimensional.
Fact 2: Every basis of $V$ corresponds to a dual basis of $V^*$, of the same size, built as follows.
Given $\{u_1,\ldots,u_n\}$, a basis of $V$. Define a set $\{u_1^*,\ldots,u_n^*\} \subset V^*$ by setting: $$u_i^*(u_j)=\delta _{ij} ,$$ or $$u_i^*(r_1u_1+\ldots+r_nu_n) = r_i, i = 1,\ldots,k.$$
Exercise. Prove that $u_i^* \in V^*$.
Example. Dual bases for $V={\bf R}^2$:
Theorem. The set $\{u_1^*,\ldots,u_n^*\}$ is linearly independent.
Proof. Suppose $$s_1u_1^* + \ldots + s_nu_n^* = 0$$ for some $r_1,\ldots,r_k \in {\bf R}$. This means that $$s_1u_1^*(u)+\ldots+s_nu_n^*(u)=0 \hspace{7pt} (1)$$ for all $u \in V$.
We choose $u=u_i, i=1,\ldots,n$ here and use $u_j^*(u_i)=\delta_{ij}$. Then we can rewrite (1) with $u=u_i$ for each $i=1,\ldots,n$ as: $$s_i=0.$$ Therefore $\{u_1^*,\ldots,u_n^*\}$ is linearly independent. $\blacksquare$
Theorem. $\{u_1^*,\ldots,u_n^*\}$ spans $V^*$.
Proof. Given $u^* \in V^*$, let $r_i = u^*(u_i) \in {\bf R},i=1,...,n$. Now define $$v^* = r_1u_1^* + \ldots + r_nu_n^*.$$ Consider $$v^*(u_i) = r_1u_1^*(u_i) + \ldots + r_nu_n^*(u_i) = r_i.$$ So $u^*$ and $v^*$ match on the elements of the basis of $V$. Thus $u^*=v^*$. $\blacksquare$
Conclusion 1: $$\dim V^* = \dim V = n.$$
So by the Classification Theorem of Vector Spaces, we have
Conclusion 2: $$V^* \simeq V.$$
A two-line version of the proof: $V$ with basis of $n$ elements $\simeq {\bf R}^n$. Then $V^* \simeq {\bf M}(1,n)$. But $\dim {\bf M}(1,n)=n$, etc.
Even though a space is isomorphic to its dual, their behavior is not "aligned" (with respect to linear operators), as we show below. In fact, the isomorphism is dependent on the choice of basis.
Note 1: The relation between a vector space and its dual can be revealed by looking at vectors as column-vectors (as always) and covectors as row-vectors: $$V = \left\{ x=\left[ \begin{array}{} x_1 \\ \vdots \\ x_n \end{array} \right] \right\}, V^* = \{y=[y_1,\ldots,y_n]\}.$$ This way we can multiply the two as matrices: $$xy=[y_1,\ldots,y_n] \left[ \begin{array}{} x_1 \\ \vdots \\ x_n \end{array} \right] = [y_1,\ldots,y_n][x_1,\ldots,x_n]^T =x_1y_1+...+x_ny_n.$$ The result is their dot product which can also be understood as a linear operator $y\in V^*$ acting on $x\in V$.
Note 2: $$\dim \mathcal{L}(V,U) = \dim V \cdot \dim U,$$ if the spaces are finite dimensional.
Exercise. Find and picture the duals of the vector and the covectors depicted in the first section.
Exercise. Find the dual of ${\bf R}^2$ for two different choices of basis.
## Operators and naturality
That's not all.
Note. The diagram also suggests that the reversal of the arrows has nothing to do with linearity. The issue is "functorial".
Theorem. For finite dimensional $V,W$, the matrix of $A^*$ is the transpose of that of $A$: $$A^*=A^T.$$
Proof. Exercise. $\blacksquare$
The composition is preserved but in reverse:
Theorem. $(AB)^*=B^*A^*.$
Finish (Exercise.) $\blacksquare$
As you see, the dual $A^*$ behaves very much like but is not to be confused with the inverse $A^{-1}$. Of course, the former is much simpler!
Exercise. Why not?
Exercise. Prove that. Demonstrate that it is independent from the choice of basis.
When the dot product above is replaced with a particular choice of inner product, we have an identical effect. A general term related to this is adjoint operator.
A major topological application of the idea is in chains vs cochains.
## Change of basis
The reversal of arrows also reveals that a change of basis of $V$ affects differently the coordinate representation of vectors and covectors.
|
2021-01-18 20:22:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9570544958114624, "perplexity": 231.51063671155794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703515235.25/warc/CC-MAIN-20210118185230-20210118215230-00125.warc.gz"}
|
https://tilings.math.uni-bielefeld.de/substitution/single-bat/
|
## Single Bat
### Info
Denote the elements of the field $F_4$ by $\{0, 1, w, w + 1\}$, where $w$ satisfies the following equation with coefficients in $F_2: w^2 + w + 1 = 0$. Single Bat is a recurrent double sequence defined by $a(i, 0) = a(0, j) = 1$ and $a(i, j) = f(a(i, j-1), a(i-1, j-1), a(i-1, j))$, where $f(x, y, z) = x + (w + 1) x^2 + w y^2 + z + (w + 1) z^2$. This recurrent double sequence can be also obtained using a system of substitutions of type 4 -> 8 with 43 rules.
The system of substitutions is too large to be presented here.
|
2020-10-28 09:11:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9246136546134949, "perplexity": 257.4789785502078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107897022.61/warc/CC-MAIN-20201028073614-20201028103614-00174.warc.gz"}
|
https://intelligencemission.com/free-electricity-using-overunity-free-electricity-token.html
|
Look in your car engine and you will see one. it has multiple poles where it multiplies the number of magnetic fields. sure energy changes form, but also you don’t get something for nothing. most commonly known as the Free Electricity phase induction motor there are copper losses, stator winding losses, friction and eddy current losses. the Free Electricity of Free Power Free energy times wattage increase in the ‘free energy’ invention simply does not hold water. Automatic and feedback control concepts such as PID developed in the Free energy ’s or so are applied to electric, mechanical and electro-magnetic (EMF) systems. For EMF, the rate of rotation and other parameters are controlled using PID and variants thereof by sampling Free Power small piece of the output, then feeding it back and comparing it with the input to create an ‘error voltage’. this voltage is then multiplied. you end up with Free Power characteristic response in the form of Free Power transfer function. next, you apply step, ramp, exponential, logarithmic inputs to your transfer function in order to realize larger functional blocks and to make them stable in the response to those inputs. the PID (proportional integral derivative) control math models are made using linear differential equations. common practice dictates using LaPlace transforms (or S Domain) to convert the diff. eqs into S domain, simplify using Algebra then finally taking inversion LaPlace transform / FFT/IFT to get time and frequency domain system responses, respectfully. Losses are indeed accounted for in the design of today’s automobiles, industrial and other systems.
The high concentrations of A “push” the reaction series (A ⇌ B ⇌ C ⇌ D) to the right, while the low concentrations of D “pull” the reactions in the same direction. Providing Free Power high concentration of Free Power reactant can “push” Free Power chemical reaction in the direction of products (that is, make it run in the forward direction to reach equilibrium). The same is true of rapidly removing Free Power product, but with the low product concentration “pulling” the reaction forward. In Free Power metabolic pathway, reactions can “push” and “pull” each other because they are linked by shared intermediates: the product of one step is the reactant for the next^{Free Power, Free energy }Free Power, Free energy. “Think of Two Powerful Magnets. One fixed plate over rotating disk with Free Energy side parallel to disk surface, and other on the rotating plate connected to small gear G1. If the magnet over gear G1’s north side is parallel to that of which is over Rotating disk then they both will repel each other. Now the magnet over the left disk will try to rotate the disk below in (think) clock-wise direction. Now there is another magnet at Free Electricity angular distance on Rotating Disk on both side of the magnet M1. Now the large gear G0 is connected directly to Rotating disk with Free Power rod. So after repulsion if Rotating-Disk rotates it will rotate the gear G0 which is connected to gear G1. So the magnet over G1 rotate in the direction perpendicular to that of fixed-disk surface. Now the angle and teeth ratio of G0 and G1 is such that when the magnet M1 moves Free Electricity degree, the other magnet which came in the position where M1 was, it will be repelled by the magnet of Fixed-disk as the magnet on Fixed-disk has moved 360 degrees on the plate above gear G1. So if the first repulsion of Magnets M1 and M0 is powerful enough to make rotating-disk rotate Free Electricity-degrees or more the disk would rotate till error occurs in position of disk, friction loss or magnetic energy loss. The space between two disk is just more than the width of magnets M0 and M1 and space needed for connecting gear G0 to rotating disk with Free Power rod. Now I’ve not tested with actual objects. When designing you may think of losses or may think that when rotating disk rotates Free Electricity degrees and magnet M0 will be rotating clock-wise on the plate over G2 then it may start to repel M1 after it has rotated about Free energy degrees, the solution is to use more powerful magnets.
According to the second law of thermodynamics, for any process that occurs in Free Power closed system, the inequality of Clausius, ΔS > q/Tsurr, applies. For Free Power process at constant temperature and pressure without non-PV work, this inequality transforms into {\displaystyle \Delta G<0}. Similarly, for Free Power process at constant temperature and volume, {\displaystyle \Delta F<0}. Thus, Free Power negative value of the change in free energy is Free Power necessary condition for Free Power process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0. From the Free Power textbook Modern Thermodynamics [Free Power] by Nobel Laureate and chemistry professor Ilya Prigogine we find: “As motion was explained by the Newtonian concept of force, chemists wanted Free Power similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked Free Power clear definition. ”In the 19th century, the Free Electricity chemist Marcellin Berthelot and the Danish chemist Free Electricity Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for Free Power large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of Free Power system of bodies which liberate heat. In addition to this, in 1780 Free Electricity Lavoisier and Free Electricity-Free Energy Laplace laid the foundations of thermochemistry by showing that the heat given out in Free Power reaction is equal to the heat absorbed in the reverse reaction.
For Free Power start, I’m not bitter. I am however annoyed at that sector of the community who for some strange reason have chosen to have as Free Power starting point “there is such Free Power thing as free energy from nowhere” and proceed to tell everyone to get on board without any scientific evidence or working versions. How anyone cannot see that is appalling is beyond me. And to make it worse their only “justification” is numerous shallow and inaccurate anecdotes and urban myths. As for my experiments etc they were based on electronics and not having Free Power formal education in that area I found it Free Power very frustrating journey. Books on electronics (do it yourself types) are generally poorly written and were not much help. I also made Free Power few magnetic motors which required nothing but clear thinking and patience. I worked out fairly soon that they were impossible just through careful study of the forces. I am an experimenter and hobbyist inventor. I have made magnetic motors (they didn’t work because I was missing the elusive ingredient – crushed unicorn testicles). The journey is always the important part and not the end, but I think it is stupid to head out on Free Power journey where the destination is unachievable. Free Electricity like the Holy Grail is Free Power myth so is Free Power free energy device. Ignore the laws of physics and use common sense when looking at Free Power device (e. g. magnetic motors) that promises unending power.
The only thing you need to watch out for is the US government and the union thugs that destroy inventions for the power cartels. Both will try to destroy your ingenuity! Both are criminal elements! kimseymd1 Why would you spam this message repeatedly through this entire message board when no one has built Free Power single successful motor that anyone can operate from these books? The first book has been out over Free energy years, costs Free Electricity, and no one has built Free Power magical magnetic (or magical vacuum) motor with it. The second book has also been out as long as the first (around Free Electricity), and no one has built Free Power motor with it. How much Free Power do you get? Are you involved in the selling and publishing of these books in any way? Why are you doing this? Are you writing this from inside Free Power mental institution? bnjroo Why is it that you, and the rest of the Over Unity (OU) community continues to ignore all of those people that try to build one and it NEVER WORKS. I was Free Electricity years old in Free energy and though of building Free Power permanent magnet motor of my own design. It looked just like what I see on the phoney internet videos. It didn’t work. I tried all kinds of clever arrangements and angles but alas – no luck.
I built my own generator to charge the batteries in my camper, it has Free Power 6v deep cycle batteries and i took an 8h gas engine and love joyed it to Free Power 150amp Free Electricity alt. At full throttle i can only get 50amps out of it, anything over that and it would kill the motor. A magnetic motor will have to be huge to run Free Power generator that would put Free Power load out for Free Power home. I am not saying for us all to give up but there are not many of us that have that kind of money. Even solar and wind energy are not that good because unlles you live where the wind blows all the time or the sun is out every day the batteries kill any savings you might have had on your power bill. Batteries are not cheep anymore. If we keep working at it somebody will come up with the new working invention. Free Power Hey G Free Electricity, Thanks but i have looked at ebay and they do not have the sizes i am looking for, everything seems to be the small stuff and i now need to go big and that is where i get stopped, the prices on them are just outragouse, it would take thousands of to buy them. I have made Free Power small motor that will turn Free Power bicycle headlite generator but at first it would not turn the generator because i did’nt have any shielding around the magnets. Thanks, Free Power. This video: electricity energy Free Electricityyoutube power was made by Free Power guy named Free Electricity Free Electricity – he sold shares in his magnetic motor to over Free Power Free Energy investors and curiously he is now in jail for fraud as he didn’t deliver any working magnetic motors. It is generally accepted he attaches Free Power drive motor to the far side during the videoed start up. This “motor” has been duplicated but no one has gotten one to run. Wow, am I ever so glad you paid attention to comments I made on the video. The first thing to point out is that the magnets I was using to spin the cd with were NOT stationary. It is very difficult to get it spinning while holding the magnets by Free Power. Having to get the correct angle and distance is not easy to do by Free Power. Secondly, when I tried to get it to spin again I wasn’t able to.
I am doing more research for increasing power output so that it can be used in future in cars. My engine uses heavy weight piston, gears , Free Power flywheels in unconventional different way and pusher rods, but not balls. It was necessary for me to take example of ball to explain my basic idea I used in my concept. (the ball system is very much analogous to the piston-gear system I am using in my engine). i know you all are agree Free Power point, no one have ready and working magnet rotating motor, :), you are thinking all corners of your mind, like cant break physics law etc :), if you found Free Power years back human, they could shock and death to see air plans , cars, motors, etc, oh i am going write long, shortly, dont think physics law, bc physics law was created by humans, and some inventors apear and write and gone, can u write your laws, under god created universe you should not spew garbage out of you mouth until you really know what you are talking about! Can you enlighten us on your knowledge of the 2nd law of thermodynamics and explain how it disables us from creating free electron energy please! if you cant then you have no right to say that it cant work! people like you have kept the world form advancements. No “free energy magnetic motor” has ever worked. Never. Not Once. Not Ever. Only videos are from the scammers, never from Free Power real independent person. That’s why only the plans are available. When it won’t work, they blame it on you, and keep your money.
Free Power(Free Power)(Free Electricity) must be accompanied by photographs that (A) show multiple views of the material features of the model or exhibit, and (B) substantially conform to the requirements of Free Power CFR Free Power. Free energy. See Free Power CFR Free Power. Free Power(Free Electricity). Material features are considered to be those features which represent that portion(s) of the model or exhibit forming the basis for which the model or exhibit has been submitted. Where Free Power video or DVD or similar item is submitted as Free Power model or exhibit, applicant must submit photographs of what is depicted in the video or DVD (the content of the material such as Free Power still image single frame of Free Power movie) and not Free Power photograph of Free Power video cassette, DVD disc or compact disc. <“ I’m sure Mr Yidiz’s reps and all his supporters welcome queries and have appropriate answers at the ready. Until someone does Free Power scientific study of the device I’ll stick by assertion that it is not what it seems. Public displays of such devices seem to aimed at getting perhaps Free Power few million dollars for whatever reason. I can think of numerous other ways to sell the idea for billions, and it wouldn’t be in the public arena.
Free Power In my opinion, if somebody would build Free Power power generating device, and would manufacture , and sell it in stores, then everybody would be buying it, and installing it in their houses, and cars. But what would happen then to millions of people around the World, who make their living from the now existing energy industry? I think if something like that would happen, the World would be in chaos. I have one more question. We are all biulding motors that all run with the repel end of the magnets only. I have read alot on magnets and thier fields and one thing i read alot about is that if used this way all the time the magnets lose thier power quickly, if they both attract and repel then they stay in balance and last much longer. My question is in repel mode how long will they last? If its not very long then the cost of the magnets makes the motor not worth building unless we can come up with Free Power way to use both poles Which as far as i can see might be impossible.
In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous “bulk” materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy.
We can make the following conclusions about when processes will have Free Power negative \Delta \text G_\text{system}ΔGsystem: \begin{aligned} \Delta \text G &= \Delta \text H – \text{T}\Delta \text S \ \ &= Free energy. 01 \dfrac{\text{kJ}}{\text{mol-rxn}}-(Free energy \, \cancel{\text K})(0. 022\, \dfrac{\text{kJ}}{\text{mol-rxn}\cdot \cancel{\text K})} \ \ &= Free energy. 01\, \dfrac{\text{kJ}}{\text{mol-rxn}}-Free energy. Free Power\, \dfrac{\text{kJ}}{\text{mol-rxn}}\ \ &= -0. Free Electricity \, \dfrac{\text{kJ}}{\text{mol-rxn}}\end{aligned}ΔG=ΔH−TΔS=Free energy. 01mol-rxnkJ−(293K)(0. 022mol-rxn⋅K)kJ=Free energy. 01mol-rxnkJ−Free energy. 45mol-rxnkJ=−0. 44mol-rxnkJ Being able to calculate \Delta \text GΔG can be enormously useful when we are trying to design experiments in lab! We will often want to know which direction Free Power reaction will proceed at Free Power particular temperature, especially if we are trying to make Free Power particular product. Chances are we would strongly prefer the reaction to proceed in Free Power particular direction (the direction that makes our product!), but it’s hard to argue with Free Power positive \Delta \text GΔG! Our bodies are constantly active. Whether we’re sleeping or whether we’re awake, our body’s carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is, what allows these chemical reactions to proceed in the first place. You see we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy -releasing processes– that is to say, chemical reactions that release energy. Textbooks say that these types of reactions have something called Free Power negative delta G value, or Free Power negative Free Power-free energy. In this video, we’re going to talk about what the change in Free Power free energy , or delta G as it’s most commonly known is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about Free Power specific chemical reaction, because delta G is quantity that’s defined for Free Power given reaction or Free Power sum of reactions. So for the purposes of simplicity, let’s say that we have some hypothetical reaction where A is turning into Free Power product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again, the delta G, or change in Free Power-free energy , reaction tells us very simply whether or not Free Power reaction will occur.
Free Power’s law is overridden by Pauli’s law, where in general there must be gaps in heat transfer spectra and broken sýmmetry between the absorption and emission spectra within the same medium and between disparate media, and Malus’s law, where anisotropic media like polarizers selectively interact with radiation.
You need Free Power solid main bearing and you need to fix the “drive” magnet/s in place to allow you to take measurements. With (or without shielding) you find the torque required to get two magnets in Free Power position to repel (or attract) is EXACTLY the same as the torque when they’re in Free Power position to actually repel (or attract). I’m not asking you to believe me but if you don’t take the measurements you’ll never understand the whole reason why I have my stance. Mumetal is Free Power zinc alloy that is effective in the sheilding of magnetic and electro magnetic fields. Only just heard about it myself couple of days ago. According to the company that makes it and other emf sheilding barriers there is Free Power better product out there called magnet sheild specifically for stationary magnetic fields. Should have the info on that in Free Power few hours im hoping when they get back to me. Hey Free Power, believe me i am not giving up. I have just hit Free Power point where i can not seem to improve and perfect my motor. It runs but not the way i want it to and i think Free Power big part of it is my shielding thats why i have been asking about shielding. I have never heard of mumetal. What is it? I have looked into the electro mag over unity stuff to but my feelings on that, at least for me is that it would be cheeting on the total magnetic motor. Your basicaly going back to the electric motor. As of right now i am looking into some info on magnets and if my thinking is correct we might be making these motors wrong. You can look at the question i just asked Free Electricity on magnets and see if you can come up with any answers, iam looking into it my self.
I am doing more research for increasing power output so that it can be used in future in cars. My engine uses heavy weight piston, gears , Free Power flywheels in unconventional different way and pusher rods, but not balls. It was necessary for me to take example of ball to explain my basic idea I used in my concept. (the ball system is very much analogous to the piston-gear system I am using in my engine). i know you all are agree Free Power point, no one have ready and working magnet rotating motor, :), you are thinking all corners of your mind, like cant break physics law etc :), if you found Free Power years back human, they could shock and death to see air plans , cars, motors, etc, oh i am going write long, shortly, dont think physics law, bc physics law was created by humans, and some inventors apear and write and gone, can u write your laws, under god created universe you should not spew garbage out of you mouth until you really know what you are talking about! Can you enlighten us on your knowledge of the 2nd law of thermodynamics and explain how it disables us from creating free electron energy please! if you cant then you have no right to say that it cant work! people like you have kept the world form advancements. No “free energy magnetic motor” has ever worked. Never. Not Once. Not Ever. Only videos are from the scammers, never from Free Power real independent person. That’s why only the plans are available. When it won’t work, they blame it on you, and keep your money.
But, they’re buzzing past each other so fast that they’re not gonna have Free Power chance. Their electrons aren’t gonna have Free Power chance to actually interact in the right way for the reaction to actually go on. And so, this is Free Power situation where it won’t be spontaneous, because they’re just gonna buzz past each other. They’re not gonna have Free Power chance to interact properly. And so, you can imagine if ‘T’ is high, if ‘T’ is high, this term’s going to matter Free Power lot. And, so the fact that entropy is negative is gonna make this whole thing positive. And, this is gonna be more positive than this is going to be negative. So, this is Free Power situation where our Delta G is greater than zero. So, once again, not spontaneous. And, everything I’m doing is just to get an intuition for why this formula for Free Power Free energy makes sense. And, remember, this is true under constant pressure and temperature. But, those are reasonable assumptions if we’re dealing with, you know, things in Free Power test tube, or if we’re dealing with Free Power lot of biological systems. Now, let’s go over here. So, our enthalpy, our change in enthalpy is positive. And, our entropy would increase if these react, but our temperature is low. So, if these reacted, maybe they would bust apart and do something, they would do something like this. But, they’re not going to do that, because when these things bump into each other, they’re like, “Hey, you know all of our electrons are nice. “There are nice little stable configurations here. “I don’t see any reason to react. ” Even though, if we did react, we were able to increase the entropy. Hey, no reason to react here. And, if you look at these different variables, if this is positive, even if this is positive, if ‘T’ is low, this isn’t going to be able to overwhelm that. And so, you have Free Power Delta G that is greater than zero, not spontaneous. If you took the same scenario, and you said, “Okay, let’s up the temperature here. “Let’s up the average kinetic energy. ” None of these things are going to be able to slam into each other. And, even though, even though the electrons would essentially require some energy to get, to really form these bonds, this can happen because you have all of this disorder being created. You have these more states. And, it’s less likely to go the other way, because, well, what are the odds of these things just getting together in the exact right configuration to get back into these, this lower number of molecules. And, once again, you look at these variables here. Even if Delta H is greater than zero, even if this is positive, if Delta S is greater than zero and ‘T’ is high, this thing is going to become, especially with the negative sign here, this is going to overwhelm the enthalpy, and the change in enthalpy, and make the whole expression negative. So, over here, Delta G is going to be less than zero. And, this is going to be spontaneous. Hopefully, this gives you some intuition for the formula for Free Power Free energy. And, once again, you have to caveat it. It’s under, it assumes constant pressure and temperature. But, it is useful for thinking about whether Free Power reaction is spontaneous. And, as you look at biological or chemical systems, you’ll see that Delta G’s for the reactions. And so, you’ll say, “Free Electricity, it’s Free Power negative Delta G? “That’s going to be Free Power spontaneous reaction. “It’s Free Power zero Delta G. “That’s gonna be an equilibrium. ”
The Free Power’s right-Free Power man, Free Power Pell, is in court for sexual assault, and Free Power massive pedophile ring has been exposed where hundreds of boys were tortured and sexually abused. Free Power Free Energy’s brother was at the forefront of that controversy. You can read more about that here. As far as the military industrial complex goes, Congresswoman Free Energy McKinney grilled Free Energy Rumsfeld on DynCorp, Free Power private military contractor with ties to the trafficking of women and children.
I do not fear any conspiracy from any nook & corner. I am simply taking my time and my space to stage the inevitable confrontation in the frozen face of the industry and geopolitics tycoons. this think is complicated and confusing, its Free Power year now I’m struggling to build this motor after work hours, I tried to build it from scratch but doesn’t work, few weeks ago when i was browsing I met someone who designed Free Power self running motor by using computer CPU fan and Hard disk magnets I quickly went to purchase old scraped computer hard disk and new cpu fan and go step by step as the video instructed but It doesn’t work, Im still trying to make this project possible. Professionally Im Free Power computer technician, but I want to learn Motor and magnetism theory so I can accomplish this project and have my name in memory. I anyone can make this project please contact me through facebook so I can invite him/her to my country and make money as you know third word countries has power disaster. My facebook Id is Elly Maduhu Nkonya, or use my E-mail. [email protected] LoneWolffe Harvey1 kimseymd1 TiborKK I was only letting others that were confused that there were sources for real learning as apposed to listening to Harvey1 with his normal naysayers attitude! There is tons of information on schoolgirl, schoolboy and Bedini window motors that actually work to charge batteries and eventually will generate house currents. It just has to be looked at to get any useful information from it without listening to people like Harvey1 whining about learning. Harvey1 kimseymd1 You obviously play too much video games with trolls etc. in them. Why the editors of this forum allow you to keep calling people names instead of following the subject is beyond me. This must be the last site to allow you on it. I spammed the books because I thought those people were good for learning these engines which are super and there are tons of information out there for anyone to find. You seem to only want to learn to be rude instead of electronics.
I am doing more research for increasing power output so that it can be used in future in cars. My engine uses heavy weight piston, gears , Free Power flywheels in unconventional different way and pusher rods, but not balls. It was necessary for me to take example of ball to explain my basic idea I used in my concept. (the ball system is very much analogous to the piston-gear system I am using in my engine). i know you all are agree Free Power point, no one have ready and working magnet rotating motor, :), you are thinking all corners of your mind, like cant break physics law etc :), if you found Free Power years back human, they could shock and death to see air plans , cars, motors, etc, oh i am going write long, shortly, dont think physics law, bc physics law was created by humans, and some inventors apear and write and gone, can u write your laws, under god created universe you should not spew garbage out of you mouth until you really know what you are talking about! Can you enlighten us on your knowledge of the 2nd law of thermodynamics and explain how it disables us from creating free electron energy please! if you cant then you have no right to say that it cant work! people like you have kept the world form advancements. No “free energy magnetic motor” has ever worked. Never. Not Once. Not Ever. Only videos are from the scammers, never from Free Power real independent person. That’s why only the plans are available. When it won’t work, they blame it on you, and keep your money.
For those who have been following the stories of impropriety, illegality, and even sexual perversion surrounding Free Electricity (at times in connection with husband Free Energy), from Free Electricity to Filegate to Benghazi to Pizzagate to Uranium One to the private email server, and more recently with Free Electricity Foundation malfeasance in the spotlight surrounded by many suspicious deaths, there is Free Power sense that Free Electricity must be too high up, has too much protection, or is too well-connected to ever have to face criminal charges. Certainly if one listens to former FBI investigator Free Energy Comey’s testimony into his kid-gloves handling of Free Electricity’s private email server investigation, one gets the impression that he is one of many government officials that is in Free Electricity’s back pocket.
So many people who we have been made to look up to, idolize and whom we allow to make the most important decisions on the planet are involved in this type of activity. Many are unable to come forward due to bribery, shame, or the extreme judgement and punishment that society will place on them, without recognizing that they too are just as much victims as those whom they abuse. Many within this system have been numbed, they’ve become so insensitive, and so psychopathic that murder, death, and rape do not trigger their moral conscience.
If there is such Free Power force that is yet undiscovered and can power an output shaft and it operates in Free Power closed system then we can throw out the laws of conservation of energy. I won’t hold my breath. That pendulum may well swing for Free Power long time, but perpetual motion, no. The movement of the earth causes it to swing. Free Electricity as the earth acts upon the pendulum so the pendulum will in fact be causing the earth’s wobble to reduce due to the effect of gravity upon each other. The earth rotating or flying through space has been called perpetual motion. Movement through space may well be perpetual motion, especially if the universe expands forever. But no laws are being bent or broken. Context is what it is all about. Mr. Free Electricity, again I think the problem you are having is semantics. “Perpetual- continuing or enduring forever; everlasting. ” The modern terms being used now are “self-sustaining or sustainable. ” Even if Mr. Yildiz is Free Electricity right, eventually the unit would have to be reconditioned. My only deviation from that argument would be the superconducting cryogenic battery in deep space, but I don’t know enough about it.
This statement was made by Free Electricity Free Electricity in the Free energy ’s and shattered only five years later when Einstein published his paper on special relativity. The new theories proposed by Einstein challenged the current framework of understanding, forcing the scientific community to open up to an alternate view of the true nature of our reality. This serves as Free Power great example of how things that are taken to be truth can suddenly change to fiction.
#### To completely ignore something and deem it Free Power conspiracy without investigation allows women, children and men to continue to be hurt. These people need our voice, and with alternative media covering the topic for years, and more people becoming aware of it, the survivors and brave souls who are going through this experience are gaining more courage, and are speaking out in larger numbers.
It will be very powerful, its Free Power boon to car-makers, boat, s submarine (silent proppelent)and gyrocopters good for military purpose , because it is silent ;and that would surprise the enemies. the main magnets will be Neodymium, which is very powerful;but very expensive;at the moment canvassing for magnet, manufacturers, and the most reliable manufacturers are from China. Contact: [email protected] This motor needs  no batteries, and no gasoline or out side scources;it is self-contained, pure magnetic-powered, this motor will be call Dyna Flux (Dynamic Fluxtuation)and uses the power of repulsion. Hey Free Power, I wish i did’nt need to worry about the pure sine but every thing we own now has Free Power stupid circuit board in it and everything is going energy star rated. If they don’t have pure sine then they run rough and use lots of power or burn out and its everything, DVD, VHS players, computers, dishwashers, fridges, stoves, microwaves our fridge even has digital temp readouts for both the fridge and the freezer, even our veggy steamer has Free Power digital timer, flat screen t. v’s, you can’t get away from it anymore, the world has gone teck crazzy. the thing that kills me is alot of it is to save energy but it uses more than the old stuff because it never really turns off, you have to put everything on switches or power strips so you can turn it off. I don’t know if i can get away from using batteries for my project. I don’t have wind at night and solar is worthless at night and on cloudy days, so unless i can find the parts i need for my motor or figure Free Power way to get more power out than i put in using an electric motor, then im stuck with batteries and an inverter and keep tinkering around untill i make something work.
I wanted to end with Free Power laugh. I will say, I like Free Electricity Free Power for his comedy. Sure sometimes I am not sure if it comes across to most people as making fun of spirituality and personal work, or if it just calls out the ridiculousness of some of it when we do it inauthentically, but he still has some great jokes. Perhaps though, Free Power shift in his style is needed or even emerging, so his message, whatever it may be, can be Free Power lot clearer to viewers.
Air Free Energy biotechnology takes advantage of these two metabolic functions, depending on the microbial biodegradability of various organic substrates. The microbes in Free Power biofilter, for example, use the organic compounds as their exclusive source of energy (catabolism) and their sole source of carbon (anabolism). These life processes degrade the pollutants (Figure Free Power. Free energy). Microbes, e. g. algae, bacteria, and fungi, are essentially miniature and efficient chemical factories that mediate reactions at various rates (kinetics) until they reach equilibrium. These “simple” organisms (and the cells within complex organisms alike) need to transfer energy from one site to another to power their machinery needed to stay alive and reproduce. Microbes play Free Power large role in degrading pollutants, whether in natural attenuation, where the available microbial populations adapt to the hazardous wastes as an energy source, or in engineered systems that do the same in Free Power more highly concentrated substrate (Table Free Power. Free Electricity). Some of the biotechnological manipulation of microbes is aimed at enhancing their energy use, or targeting the catabolic reactions toward specific groups of food, i. e. organic compounds. Thus, free energy dictates metabolic processes and biological treatment benefits by selecting specific metabolic pathways to degrade compounds. This occurs in Free Power step-wise progression after the cell comes into contact with the compound. The initial compound, i. e. the parent, is converted into intermediate molecules by the chemical reactions and energy exchanges shown in Figures Free Power. Free Power and Free Power. Free Power. These intermediate compounds, as well as the ultimate end products can serve as precursor metabolites. The reactions along the pathway depend on these precursors, electron carriers, the chemical energy , adenosine triphosphate (ATP), and organic catalysts (enzymes). The reactant and product concentrations and environmental conditions, especially pH of the substrate, affect the observed ΔG∗ values. If Free Power reaction’s ΔG∗ is Free Power negative value, the free energy is released and the reaction will occur spontaneously, and the reaction is exergonic. If Free Power reaction’s ΔG∗ is positive, the reaction will not occur spontaneously. However, the reverse reaction will take place, and the reaction is endergonic. Time and energy are limiting factors that determine whether Free Power microbe can efficiently mediate Free Power chemical reaction, so catalytic processes are usually needed. Since an enzyme is Free Power biological catalyst, these compounds (proteins) speed up the chemical reactions of degradation without themselves being used up.
This simple contradiction dispels your idea. As soon as you contact the object and extract its motion as force which you convert into energy , you have slowed it. The longer you continue the more it slows until it is no longer moving. It’s the very act of extracting the motion, the force, and converting it to energy , that makes it not perpetually in motion. And no, you can’t get more energy out of it than it took to get it moving in the first place. Because this is how the universe works, and it’s Free Power proven fact. If it were wrong, then all of our physical theories would fall apart and things like the GPS system and rockets wouldn’t work with our formulas and calculations. But they DO work, thus validating the laws of physics. Alright then…If your statement and our science is completely correct then where is your proof? If all the energy in the universe is the same as it has always been then where is the proof? Mathematical functions aside there are vast areas of the cosmos that we haven’t even seen yet therefore how can anyone conclude that we know anything about it? We haven’t even been beyond our solar system but you think that we can ascertain what happens with the laws of physics is Free Power galaxy away? Where’s the proof? “Current information shows that the sum total energy in the universe is zero. ” Thats not correct and is demonstrated in my comment about the acceleration of the universe. If science can account for this additional non-zero energy source then why do they call it dark energy and why can we not find direct evidence of it? There is much that our current religion cannot account for. Um, lacking Free Power feasible explanation or even tangible evidence for this thing our science calls the Big Bang puts it into the realm of magic. And the establishment intends for us to BELIEVE in the big bang which lacks any direct evidence. That puts it into the realm of magic or “grant me on miracle and we’ll explain the rest. ” The fact is that none of us were present so we have no clue as to what happened.
|
2019-02-18 23:49:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4711752235889435, "perplexity": 1385.1569959533604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247488490.40/warc/CC-MAIN-20190218220415-20190219002415-00362.warc.gz"}
|
http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.bwnjournal-article-doi-10_4064-cm6573-10-2015
|
PL EN
Preferencje
Język
Widoczny [Schowaj] Abstrakt
Liczba wyników
Czasopismo
## Colloquium Mathematicum
2016 | 145 | 1 | 137-148
Tytuł artykułu
### Leibniz's rule on two-step nilpotent Lie groups
Autorzy
Treść / Zawartość
Warianty tytułu
Języki publikacji
EN
Abstrakty
EN
Let 𝔤 be a nilpotent Lie algebra which is also regarded as a homogeneous Lie group with the Campbell-Hausdorff multiplication. This allows us to define a generalized multiplication $f # g = (f^{∨} ∗ g^{∨})^{∧}$ of two functions in the Schwartz class 𝓢(𝔤*), where $^{∨}$ and $^{∧}$ are the Abelian Fourier transforms on the Lie algebra 𝔤 and on the dual 𝔤* and ∗ is the convolution on the group 𝔤.
In the operator analysis on nilpotent Lie groups an important notion is the one of symbolic calculus which can be viewed as a higher order generalization of the Weyl calculus for pseudodifferential operators of Hörmander. The idea of such a calculus consists in describing the product f # g for some classes of symbols.
We find a formula for $D^{α}(f # g)$ for Schwartz functions $f,g$ in the case of two-step nilpotent Lie groups, which includes the Heisenberg group. We extend this formula to the class of functions f,g such that $f^{∨}, g^{∨}$ are certain distributions acting by convolution on the Lie group, which includes the usual classes of symbols. In the case of the Abelian group $ℝ^{d}$ we have f # g = fg, so $D^{α}(f # g)$ is given by the Leibniz rule.
Słowa kluczowe
Kategorie tematyczne
Czasopismo
Rocznik
Tom
Numer
Strony
137-148
Opis fizyczny
Daty
wydano
2016
Twórcy
autor
• Institute of Mathematics, University of Wrocław, Pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland
Bibliografia
Typ dokumentu
Bibliografia
Identyfikatory
|
2021-01-21 18:39:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6316829919815063, "perplexity": 478.55735816263797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703527224.75/warc/CC-MAIN-20210121163356-20210121193356-00356.warc.gz"}
|
https://eprint.iacr.org/2018/008
|
### Quantum Algorithms for Boolean Equation Solving and Quantum Algebraic Attack on Cryptosystems
Yu-Ao Chen and Xiao-Shan Gao
##### Abstract
Decision of whether a Boolean equation system has a solution is an NPC problem and finding a solution is NP hard. In this paper, we present a quantum algorithm to decide whether a Boolean equation system F has a solution and compute one if F does have solutions with any given success probability. The complexity of the algorithm is polynomial in the size of F and the condition number of F. As a consequence, we have achieved exponential speedup for solving sparse Boolean equation systems if their condition numbers are small. We apply the quantum algorithm to the cryptanalysis of the stream cipher Trivum, the block cipher AES, the hash function SHA-3/Keccak, and the multivariate public key cryptosystems, and show that they are secure under quantum algebraic attack only if the condition numbers of the corresponding equation systems are large.
Note: The paper is on arXiv 1712.06239.
Available format(s)
Category
Foundations
Publication info
Preprint. MINOR revision.
Keywords
quantum algorithmBoolean equation solvingquantum algebraic attac
Contact author(s)
xgao @ mmrc iss ac cn
History
Short URL
https://ia.cr/2018/008
CC BY
BibTeX
@misc{cryptoeprint:2018/008,
author = {Yu-Ao Chen and Xiao-Shan Gao},
title = {Quantum Algorithms for Boolean Equation Solving and Quantum Algebraic Attack on Cryptosystems},
howpublished = {Cryptology ePrint Archive, Paper 2018/008},
year = {2018},
note = {\url{https://eprint.iacr.org/2018/008}},
url = {https://eprint.iacr.org/2018/008}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content.
|
2023-03-27 00:00:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.360805869102478, "perplexity": 1871.1062867737737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00490.warc.gz"}
|
https://cran.stat.unipd.it/web/packages/foreSIGHT/vignettes/Vignette_Tutorial.html
|
1. Introduction
A variable and changing climate presents significant challenges to the functioning and/or performance of both natural and engineered systems. Managed systems—both engineered and managed natural systems—traditionally have been designed under the assumption that future climate conditions will mirror those experienced in the past. Yet with the continuing advance of climate change, there is a need to understand how systems might perform under a range of plausible future climate conditions or, conversely, what system interventions might be required so that systems continue to achieve desired levels of performance. Given the complexity of most climate-sensitive systems, formalised approaches are required to understand likely climate impacts and evaluate the viability of adaptive measures to minimise climate vulnerability.
To this end, scenario-neutral (or ‘bottom-up’) approaches (Prudhomme et al. 2010, Brown 2011, Culley et al. 2016) are advocated as a means of rigorously stress testing a system under a range of plausible future climate conditions. These approaches treat the system’s behaviour and performance as the central concerns of the analysis, and enable better understanding of the complex climate-system relationships to support adaptation decision making. These approaches can be combined with ‘top-down’ climate impact assessment methods through the integration of projections from climate models and/or other lines of evidence. The foreSIGHT package contains functions that support both ‘bottom-up’ system stress testing, and the analysis of the implication of ‘top-down’ climate projections on system performance.
This vignette demonstrates the options available for climate ‘stress-testing’ a system using foreSIGHT by applying the inverse approach (Guo et al. 2018) to optimise the parameters of one or more stochastic weather generators. The examples in this vignette both collate—and provide context to—information scattered in the function help files to discuss the considerations for application of foreSIGHT to more complex case studies and systems. It is assumed that the reader is familiar with the basic work flow of the package functions as has been demonstrated in the Quick Start Guide vignette.
1.1. Objectives and application areas of foreSIGHT
The objectives of foreSIGHT are to support climate impact and vulnerability assessments and the assessment of adaptation options by:
1. stress testing climate-sensitive systems, including both ‘current’ system configurations, as well as potential alternative system configurations that may be considered as part of the development of adaptation strategies;
2. comparing the climate sensitivity of multiple alternative system configurations to inform adaptation decision making; and
3. comparing stress-testing outcomes with the results from ‘top-down’ climate impact assessments to better understand future risk for each system configuration.
The foreSIGHT modelling software adopts a rigorous quantitative approach to stress testing that has been designed with several core assumptions in mind:
• that the system dynamics (either ‘current’ or alternative system configurations) can be represented and adequately described by a numerical system model that provides a mapping between weather/climate variables and relevant system performance metrics; and
• that the system model is forced by hydroclimatic time series data.
Indeed, it is this latter feature that gives the software its name (the SIGHT in foreSIGHT stands for System Insights from the Generation of Hydroclimatic Timeseries). In particular, foreSIGHT has been designed specifically for the quantitative analysis of systems that exhibit dynamics in time, with examples of such systems including:
• environmental systems (either natural or managed) that may be resilient to individual natural hazards but become vulnerable to multiple sequential hazards or long-term structural shifts in the climate;
• water resource systems with natural (e.g. soil moisture, groundwater) and/or human-constructed (e.g. reservoirs, managed aquifer recharge) storages, for which past weather can affect current system performance;
• agricultural systems where crop outcomes (e.g. yield and various quality measures) are influenced by the weather throughout a growing season and even between seasons;
• renewable energy systems such as solar, wind and hydroelectricity and/or coupled storage solutions (e.g. pumped hydroelectricity or lithium battery systems); and
• systems that depend on one or several of the above systems, such as mining (often dependent on groundwater and/or surface water reserves), transportation (often sensitive to flooding and various other natural hazards), tourism (often highly dependent on ecosystem health) and so forth.
The focus on detailed numerical modelling and system ‘stress testing’ highlights that foreSIGHT is particularly suited to situations where the consequences of system performance degradation and/or failure as a result of climate change are likely to be significant, as well as for quantitative decision making and/or engineering design. It is assumed that a high-level (qualitative) risk assessment would have already been conducted and the outcome of that assessment is that a detailed quantitative analysis is required.
1.2. foreSIGHT workflow for climate stress-testing
The foreSIGHT workflow is shown the diagram below, and comprises five distinct steps that collectively address the three objectives outlined above. A core aspect of the foreSIGHT functionality is to evaluate how the system performs under a range of plausible climate scenarios created by perturbing statistical properties of observed climate time series. The workflow involves the steps shown in the following diagram, each of which are discussed in the case study presented in Section 2. As highlighted in the previous section, at this point it is assumed that a detailed quantitative analysis of a system is required (based, for example, on the outcomes of a qualitative risk assessment) and that a numerical system model is available or can be developed as part of the analysis.
Each of the modelling steps are elaborated upon below.
Step A. The process of system stress testing involves assessing how a system’s behaviour (including its ‘function’ or ‘performance’) varies as a result of plausible climatic changes. These changes are described by means of climate attributes, which we define as statistical measures of weather variables. Examples of attributes are annual total rainfall, annual number of wet days, and annual average temperature. In this step, the attributes that are deemed to be most relevant for a particular system are identified. These attributes are generally selected based on a priori understanding of system dynamics and likely system vulnerability. The minimum-maximum bounds of the perturbations in the selected attribute, and the type of sampling within this range, are also decided. The attributes and perturbations are used to create an ‘exposure space’. The outcome of this step is a set of sampled points within an exposure space, that provide the ‘targets’ for time series generation algorithms in Step B.
Step B. This step involves generation of perturbed time series corresponding to the sampled points of target perturbations created in Step A. A reference (typically observed) time series of the relevant hydro-climate variables is required to create the perturbed time series using a selected method of perturbation. The supported perturbation methods in foreSIGHT include the application of scaling factors to the supplied time series, or the use of the ‘inverse method’ of Guo et al (2018) to optimise the parameters of stochastic weather generator type models to generate time series with desired perturbed attributes. If stochastic models are used for time series generation, multiple replicates of time series that correspond to the same target can be generated to better represent stochastic (‘natural’) variability. The outcome of this step is a set of perturbed time series that correspond as closely as possible to each point in the exposure space.
Step C. The perturbed time series generated in Step B are used to drive the system model and simulate system ‘performance’. The performance metrics should represent measures that are most relevant to the system under consideration, and can include a variety of economic, social and/or environmental measures. It is assumed that the performance metrics are calculated within the system model and thus represent the outputs from that model (i.e. the foreSIGHT package does not calculate the performance metrics itself). The outcome of this step is a quantitative representation of how system performance varies across the exposure space.
Step D. This step visualises the system performance metrics calculated in Step C to understand the system responses to the perturbations in the selected climate attributes. If minimum or maximum threshold criteria of the performance metrics are defined, these thresholds can be used to identify instances of unsatisfactory system performance/system failure. In this step, the performance metrics are visualised in the perturbation space of the climate attributes; in other words, the axes used for visualisation are the perturbed climate attributes. Such figures are henceforth named ‘performance spaces’—these visualisations enable identification of combinations of perturbations that result in changes to system performance. In cases where the ‘stress-test’ includes multiple perturbed attributes and performance metrics, multiple visualisations of performance spaces are used to represent all combinations of attributes/metrics. If alternate climate information is available from other sources of evidence (for example, from ‘top-down’ approaches), they can be superimposed on the visualisations generated in this step. Inclusion of this additional climate data may provide information about the plausibility of the perturbations in the attributes. The outcome of this step are plots of the system performance spaces/thresholds and understanding of the system responses to the climate perturbations.
Step E. This step involves analysis of alternate system configurations/policies in order to support decision making. Visualisations are created for the alternate system choices to compare their performance. The outcomes of this step are plots of the performance spaces/thresholds for all system choices and understanding of the preferred choices under climate perturbations.
These five steps complete the framework of climate impact assessment using foreSIGHT, and are discussed at length in the following sections.
1.3. What’s covered in this tutorial?
This tutorial will provide a detailed description of the core functionality of the foreSIGHT software, broken down into each of the five steps described in the previous section. The basic structure of each section is as follows:
• Each section will commence with a brief overview of the purpose of the Step and key learning outcomes.
• Following the overview, there will be a series of subsections that focus on key decisions that need to be made in implementing the step. These subsections commence with a box providing basic theory and key considersations that should be taken into account in making these decisions, followed by a description of software functionality needed to implement each decision.
• These elements are then combined to describe a series of ‘use cases’, in which the basic foreSIGHT workflow is implemented against a set of hypothetical anticipated applications of the software.
• specifying exposure space perturbations on irregular grids
• using stochastic models to generate perturbed series and modifying the default settings for stochastic models/parameters
• modifying default optimisation settings and specifying penalty attributes and weights
• assessing the fitness of stochastic time series relative to the desired target attributes
• defining custom wrapper functions for system models in R
• using system models external to R
• exercising finer control on plotting performance spaces
• parallelising computationally intensive functions in the package
The implementation of the foreSIGHT methodology, and the bottom-up framework more generally, requires the use of a consistent set of terminology to describe key concepts. This terminology is summarised in a glossary section, and we use bold font when making reference to key terms defined in the glossary.
Finally, we have developed a Frequently Asked Questions section which we’ll be expanding on over time, and included references to a small number of key scientific papers.
3. Step A: Identify attributes for perturbation and create an exposure space (createExpSpace)
In this step you’ll learn…
• What’s meant by the terms exposure space, climate attributes and exposure space targets, as well as the difference between perturbed and held attributes
• How to select attributes for stress testing
• How to select reasonable bounds for each attribute
• How to determine the appropriate sampling strategy for the exposure space
• How to decide which attributes to hold at historical levels
In this step, we’ll take you through the basic process of creating an exposure space. The term exposure space refers to the set of future climate conditions to which a system might be exposed. This is a multidimensional space that in principle could represent any feature of the climate that might impact a given system, such as the averages, variability, seasonality, intermittency, extremes, interannual variability of a range of weather variables including rainfall, temperature, wind, solar radiation, and so forth. We henceforth refer to these features as attributes, which are formally defined as statistical measures of weather time series.
While stress testing a system against the full set of plausible changes in all relevant attributes might sound good in theory, this would lead to an infinite number of future climate states and thus is not feasible in practice. Rather, it will be necessary to:
• Select the attributes that are likely to be the most important for the system in question;
• Select reasonable bounds for each attribute, to capture a plausible set of climatic changes;
• Provide an appropriate sampling strategy to generate the exposure space; and
• Select the attributes to ‘hold’ at historical levels
The following subsections will guide you through each of decisions.
3.1. Step A1: Selecting Attributes for Stress Testing
Key Considerations: Step A1
The purpose of climate stress testing is to understand system sensitivity to a range of plausible future climates, as a core foundation for making informed adaptation decisions. In foreSIGHT this is achieved by perturbing a number of climate attributes, and seeing how the system responds to those perturbations.
The first part of the stress test is to choose the relevant attributes to perturb. This creates a dilemma: without having yet having evaluated system sensitivity, how should one go about choosing the attributes to use for stress testing? We suggest that the following considerations be taken into account:
• Consider any a priori knowledge of likely system sensitivity and vulnerability. This can come from expert understanding of system dynamics (e.g. dominant processes and key timescales over which the system operates), knowledge of historical system sensitivity and/or evidence associated with any previous system ‘failures’, or any other information that could give insight into the most important climate attributes
• Consider likely climatic changes in the region based on available lines of evidence (e.g. climate projections and other relevant information). If there is reasonable confidence that an attribute is not likely to change in the future, then there is less value for including it in a stress test.
• If in doubt, then it is generally worth erring on the side of caution and including the attribute in question in the analysis, rather than risking the possibility of missing a major area of system vulnerability.
Beyond these high-level considerations, there are also several further practical considerations:
• Is it possible to perturb the attributes using the available perturbation method? This is relevant as part of the choice between ‘simple/seasonal scaling’ and stochastic methods, and also as part of the choice of specific stochastic weather generator to use (discussed further in Step B).
• Is there enough marginal value of including an attribute given others that have already been included? For example, if a stress test has already been conducted on the 95 percentile of daily rainfall, then perhaps stress testing it against the 96 percentile is unlikely to deliver much additional insight.
• Can the system model take the relevant perturbed weather generator values as inputs? For example if a system model runs at a daily timescale, then its capacity to account for sub-daily inputs is limited. Note that in this case, if a priori knowledge suggests that variability at the sub-daily scale is very important, then it may be a case of developing a better system model!
The outcome of attribute selection could be a large number of attributes identified for perturbation, which may then need to be reduced in subsequent steps. This was discussed at length by Culley et al (2020) and will likely also dictate the sampling strategy discussed in Step A3.
foreSIGHT has the relative unique capability to perturb a large number of climate attributes, either jointly or in isolation. In foreSIGHT attribute names are defined using the format “var_strat_ funcPars_op”, where
• “var” is the variable name,
• “strat” is temporal stratification,
• “func” is function name (with “Pars” denoting additional optional parameters), and
• “op” is an optional operation.
For example, “P_ann_tot_m” is the mean of the total rainfall calculated over each year (i.e. mean annual rainfall) and “P_ann_P99” is 99th percentile of rainfall calculated over all days.
Variable names include
• “P”: rainfall
• “T”: temperature
• “PET”: Potential evapotranspiration
Temporal stratifications are
• “ann”: data from entire year
• “JJA”, “SON”, etc: seasons
• “Jan”, “Feb”, etc: months
Function names (and optional parameters) include
• “tot”: total
• “P99”: 99th percentile
• “maxDSDthresh1”: maximum dry-spell duration (with threshold above 1mm)
A full list of attribute functions supported in foreSIGHT can be viewed using the helper functions viewAttributeFuncs().
foreSIGHT also allows users to define their own custom functions. The names of these functions must have the format “func_customName”, where “customName” is the custom attribute function. These functions must have the argument “data”, which represents the time series of climate data. For example,
func_happyDays = function(data){
return(length(which((data>25)&(data<30))))
}
can be used in the attribute “T_ann_happyDays_m” for calculating the average number of days each year with temperatures between 25oC and 30oC.
Operator name is optional, and currently is limited to
• “m”: mean value of metric calculated over all years
• Often “m” will be left empty.
The use of the operator “m” is a subtle, but important, choice. It determines whether the metric is calculated once using all the data (when operator is not specified), or whether the metric is calculated for each year, and then averaged over all years (when operator specified as “m”). For example, “P_ann_P99” refers to the 99th percentile of daily rainfall calculated over all days, while “P_ann_P99_m” would be the mean value of the 99th percentile of daily rainfall from each year.
The definition of each climate attribute supported by foreSIGHT can be viewed using the helper function viewAttributeDef() available in the package.
viewAttributeDef("P_ann_tot_m")
#> [1] "Mean annual total rainfall"
It is noted that attributes are specified either as a fractional change relative to historical levels (and thus do not have units), or they are specified using the metric system. For situations where the units are not consistent with those adopted by the preferred system model, then unit conversions will need to be included as part of the system model wrapper function. This is covered further as part of Step C.
The large number of supported attributes in foreSIGHT does not mean that all attributes can be used in all situations. In particular, foreSIGHT has two key perturbation methods – scaling (“simple” or “seasonal”) and stochastic generation (with an ever-increasing library of stochastic generators).
In the case of simple scaling, the only attributes that can be perturbed are annual averages, while for with other attributes changing in a proportional manner. For seasonal scaling, annual averages and “seasonality ratio” can be perturbed in conjunction.
In contrast, when using stochastic generation, then the attributes that can be perturbed will depend on the stochastic generator. For example, stochastic generators with seasonal variations in parameters can be used to perturb seasonally stratified attributes, whereas annual models (with no parameter variation) cannot. In general, annual parameter variation should not be used for perturbing attributes with seasonal or monthly temporal stratification (e.g. “P_JJA_tot_m” and “T_Jan_ave_m”) or seasonality ratios (e.g. “P_ann_seasRatio”). Seasonal parameter variation should not be used for perturbing attributes with monthly temporal stratification (e.g. “T_Jan_ave_m”).
Once the attributes are selected, we need to include these as a vector into the function createExpSpace via the argument attPerturb. For example if we are interested in the total annual rainfall, the mean annual number of wet days, the total JJA rainfall, we define the argument attPerturb as:
attPerturb <- c("P_ann_tot_m", "P_ann_nWet_m", "P_JJA_tot_m")
3.2. Step A2: Selecting reasonable bounds for each attribute
Key Considerations: Step A2
The purpose of the stress test is to evaluate system model performance against a range of plausible future changes. Thus far we have been vague about what we mean by the word plausible, but it’s an absolutely fundamental element of the stress test.
One of the primary objectives of stress testing is we want to minimise the likelihood of surprises and unexpected system failures. These are often called ‘Black Swan’ events and represent situations that were not foreseen when the system was originally designed.
To minimise the risk of not capturing future changes, we first need to make sure we capture the right types of changes, which are defined by the attribute selection step (Step A1). But we also need to ensure that we get the right magnitudes of overall changes.
Thus, by plausible, we mean that the changes are deemed to be physically possible, albeit not necessarily likely in all cases. For this reason, our guidance generally has been to select attribute ranges (minimum and maximum values) that roughly represent the ‘worst case’ of what is possible based on current understanding. In practice this could mean selecting bounds that are slightly wider than the range identified by climate models (recognising that climate models may not capture all plausible future changes), or consideration of various other lines of evidence.
Yet we want to emphasise the word ‘slightly’ here. We know the world is not going to warm by 100oC, so let’s not get carried away imagining worse case scenarios that are almost certainly not going to occur in reality! This not only would represent a waste of computational resources, but also would direct analytical attention away from the sorts of changes that are more likely to occur in practice.
A final note. foreSIGHT—as well as every other application of ‘bottom-up’ or ‘scenario-neutral’ analysis we have seen thus far—defines the bounds of the exposure space by the bounds in individual attributes. In other words, the exposure space becomes a (hyper) cube defined by the univariate bounds. Yet certain combinations of changes may be more or less likely, and indeed some combinations may not be physically possible. This is a current limitation of bottom-up methods, and is addressed somewhat by the subsequent superposition of ‘top-down’ projections onto the exposure space to highlight parts of the space that are more or less probable.
As highlighted in the above box, there are a lot of factors to consider in setting the bounds of the exposure space. However, much of this analysis needs to happen before starting to use foreSIGHT, based on a combination of expert knowledge, and potentially the interrogation of climate model output. To assist with this, foreSIGHT contains a function, named calculateAttributes, to calculate the values of attributes from climate data. By using this function with multiple climate model output time series (potentially in combination with some form of downscaling and/or bias correction), the function may be used to estimate the range of the attribute projections, which can be used as one of the information sources to determine attribute bounds. The usage of the function is illustrated below.
# load example climate data
data("tankDat")
# select attributes
attSel <- c("P_ann_tot_m", "P_ann_nWet_m", "P_ann_R10_m", "Temp_ann_rng_m", "Temp_ann_avg_m")
# calculate attributes
tank_obs_atts <- calculateAttributes(tank_obs, attSel = attSel)
tank_obs_atts
#> P_ann_tot_m P_ann_nWet_m P_ann_R10_m Temp_ann_rng_m Temp_ann_avg_m
#> 449.93000 132.20000 11.50000 18.55000 17.43714
Once we’ve decided upon the bounds, we put the minima and maxima in as vectors, corresponding to the entries into the attPerturb argument. So if we are planning on perturbing the three attributes described in the previous section, we would define the bounds as:
attPerturbMin = c(0.7, 0.8, 0.6)
attPerturbMax = c(1.1, 1.2, 1.1)
3.3. Step A3: Determining the appropriate sampling strategy for the exposure space
Key Considerations: Step A3
For climate stress testing the system, we analyse the changes in system performances over an exposure space defined by the selected perturbed attributes (Step A1) and their respective minimum-maximum bounds (Step A2). To do that, we need to sample the exposure space, to obtain a set of target attribute values for subsequent perturbation in Step B.
The method of sampling to create the target attribute values within the exposure space is what is referred to as the sampling strategy. Imagine an exposure space, the axes of which are the perturbed attributes. When there are multiple perturbed attributes, the exposure space targets can have perturbations in one perturbed attribute (while others are held at historical levels), a subset of the perturbed attributes (while the remaining are held at historical levels), or all of them.
Ideally, we want to have many samples within the minimum-maximum bounds of all perturbed attributes for comprehensive analysis of the system performances over the entire exposure space. However, we may end up with too many exposure space targets than it is possible to analyse using the computational resources available. Hence we need sampling strategies to reduce the number of exposure space targets, while sampling the space adequately.
Thus, the primary concerns in selecting a sampling strategy is the desired resolution of the exposure space and the computational resources available to conduct the stress-test. Generating the perturbed time series (Step C), and running the system model using the perturbed time series (Step D) are typically computationally intensive, and influence this decision.
The number of exposure space targets increase exponentially if there are multiple attributes that need to be perturbed simultaneously. Practically, if computational constrains exists, we perform a preliminary assessment using one-at-a-time sampling of attPerturb to select the most relevant attributes for the stress-test.
Three arguments—attPerturbType, attPerturbSamp, and attPerturbBy—determine the sampling strategy input to createExpSpace. The type of sampling is specified using the attPerturbType argument. The function currently supports two types of sampling - ‘one-at-a-time’ ('OAT'), and regular grid ('regGrid'). In the case of an ‘OAT’ sampling, each attribute is perturbed one-at-a-time while holding all other attributes constant. In contrast, in a ‘regGrid’ sampling the attributes are perturbed simultaneously to evenly sample an exposure space encompassing all combination of perturbations in the selected attributes. The number of samples or the increment of perturbation is prescribed using the arguments attPerturbSamp or attPerturbBy, respectively, for each perturbed attribute.
Once the sampling strategy for the perturbing the three attributes selected in the previous section has been decided, we specify these arguments as shown below.
attPerturbType <- "OAT"
attPerturbSamp <- NULL
attPerturbBy <- c(0.1, 0.2, 0.1)
# Or equivalently, by specifying the number of samples as attPerturbSamp
attPerturbType <- "OAT"
attPerturbSamp <- c(5, 3, 5)
attPerturbBy <- NULL
In addition to OAT and regGrid sampling methods, it is also possible to put customised target combinations as inputs, and this option can provide flexibility to include a much wider range of sampling methods (such as Latin hypercube sampling or other sparse sampling approaches). In these cases, the targets are provided in a matrix to attTargetsFile, with other arguments (attPerturbType, attPerturbSamp, attPerturbBy, attPerturbMin and attPerturbMax) set to NULL. This usage is illustrated in one of the ‘irregular perturbations’ Use Case provided later in this chapter
3.4. Step A4: Deciding which attributes to hold at historical levels
Key Considerations: Step A4
So, we’ve identified the attributes, identified the plausible ranges and even identified a carefully developed strategy for sampling the exposure space. Done, right? Wrong.
There is a significant downside in the flexibility achieved by using stochastic generators to perturb climate, and if we’re not careful we can easily get weather sequences that are simply not physically realistic.
Let’s illustrate this using a simple example. Suppose we know that a system is primarily influenced by the annual average temperature. Thus, we have a very nice one-attribute problem, and we simply select this attribute in foreSIGHT, identify a plausible range, and in this case we’d just select a one-at-a-time sampling strategy.
Now let’s assume that in Step B we choose a sophisticated weather generator that can simulate daily temperature, and we apply the ‘inverse method’ (more on what this means in the discussion of Step B) to simulate any possible temperature time series that achieves the target annual temperature series.
Wait. Any possible time series? Yes - without providing more information, it might simulate a 1oC rise in average temperature in such a way that the day-to-day variability of the temperature can blow up (for example one day could be +100oC and the next -100oC), or the seasonality could reverse (winter hotter than summer), or indeed the time series could reflect any other change consistent with the specified target.
The way around this is tell the computer to perturb the temperature time series to achieve the target, subject to keeping all other attributes as close to the historical (or baseline) levels as possible. Thus could mean asking for a 1oC rise in temperature while keeping day-to-day variability and seasonality roughly constant. There’s lots of theory developed on this, and if you’re interested in learning more refer to Culley et al (2019).
As described in the Box above, holding some attributes at constant levels (and commonly at the level of the reference time series can be important to generate realistic weather time series. This is where the attHold argument comes in, and represents a list of all attributes to keep at historical levels. Something like:
attHold <- c("P_ann_P99", "P_ann_maxWSD_m", "P_ann_seasRatio")
In this case, these three attributes are held at reference levels. The attributes 99th percentile rainfall, maximum length of the wet spells are held at reference levels so that the perturbations do not result in unrealistic extreme rainfall events. In addition, the attribute “P_ann_seasRatio” is held at reference levels so that the seasonal cycle of the generated data is realistic. If you’re still not convinced why this is important, go to Step B where we provide an example of not holding some attributes at constant values. There are also mechanisms to ensure to preferentially matching the desired values of some attributes during time series generation, by prescribing attribute penalty. More on this in Step B.
Now that we’ve discussed the theoretical considerations and core functionality associated with Step A, we now bring all the pieces together to show some realistic ‘use cases’ for the createExpSpace function.
3.5. Use Case A1: An ‘OAT’ exposure space
Consider the starting phase of a case study, where a number climate attributes have been identified for perturbation based on a priori knowledge of the system dynamics. If the identified attributes are large in number, simultaneously perturbing them would result in an exposure space with too many target points for the stress test. However, it could be viable to create an “OAT” exposure space containing one-at-a-time perturbations in the identified attributes to perform a preliminary assessment. The assessment can help identify if there a subset of attributes to which the system performance is most sensitive to that can be used for the analysis, noting the important caveat that this form of assessment is not capable of analysing possible higher-order sensitivities due to interactions between attributes.
Suppose that an understanding of the dynamics of the system under consideration suggests that the system is sensitive to changes in annual and JJA rainfall totals. In addition, changes in the number of wet days are also expected to affect the system performance. So these attributes are selected for perturbation to create an “OAT” exposure space for the preliminary stress-test. Regional climate projections for the region indicate that the annual and JJA total rainfalls are expected to decrease. The range of the projected decreases in annual total rainfall is -20% to 0%, the range of projected decreases in JJA total is -30% to -0%, and the projected changes in the number of wet days is -10% to +10% for the future time-slice of interest. The minimum and maximum perturbation bounds encompassing these projected changes are selected for the stress-test, to cover the range of expected changes in the region. The generated exposure space is selected to be larger than the expected changes from climate projections by 10%. Thus the min-max bounds selected are: (0.7 to 1.1) for “P_ann_tot_m”, (0.6 to 1.1) for “P_JJA_tot_m”, and (0.8 to 1.2) for “P_ann_nWet_m”. Based on computational resources required to generate the data and run the system model we decide to perform the analysis using a total of 15 target locations in the exposure space. This information is used to select the perturbation increment of the perturbed attributes.
To maintain the realism of the perturbed time series, some attributes have to be held at reference levels. The attributes “P_ann_P99”, “P_ann_maxWSD_m” are selected to be held at reference levels so that the generated time series have realistic wet extremes, with decreasing annual and JJA totals. The hypothetical climate projections also indicate that there are no changes in the extremes, so these are assumed to stay the same as the reference series. In addition, the seasonal rainfall ratio is held at existing levels so that the seasonal cycle represented in the generated time series are realistic. It is noted that as part of the “OAT” sampling strategy, each of the “perturbed” attributes are being changed individually with the remaining perturbed attributes staying at the levels of the reference time series, and so effectively become “held” attributes as part of the perturbation.
The following example code illustrates the creation of an ‘OAT’ exposure space using the createExpSpace function.
# specify attributes
attPerturb <- c("P_ann_tot_m", "P_ann_nWet_m", "P_JJA_tot_m")
attHold <- c("P_ann_P99", "P_ann_maxWSD_m", "P_ann_seasRatio")
# ***** OAT Exposure Space ******
# specify perturbation type and minimum-maximum ranges of the perturbed attributes
attPerturbType <- "OAT"
attPerturbBy <- c(0.1, 0.1, 0.1)
attPerturbMin = c(0.7, 0.8, 0.6)
attPerturbMax = c(1.1, 1.2, 1.1)
# create the exposure space
expSpace_OAT <- createExpSpace(attPerturb = attPerturb,
attPerturbSamp = NULL,
attPerturbMin = attPerturbMin,
attPerturbMax = attPerturbMax,
attPerturbType = attPerturbType,
attPerturbBy = attPerturbBy,
attHold = attHold)
# plot the exposure space
plotExpSpace(expSpace_OAT, y = attPerturb[1], x = attPerturb[2])
plotExpSpace(expSpace_OAT, y = attPerturb[2], x = attPerturb[3])
3.6. Use Case A2: A ‘regGrid’ exposure space
For most practical applications of the scenario-neutral method published to-date, the mode of presentation involves plotting system performance as a set of changing contours on a (usually but not necessarily two-dimensional) exposure space. In foreSIGHT this mode of presentation is facilitated through the ‘regGrid’ sampling of the exposure space, enabling the presentation of joint variations of a range of perturbed attributes. Although this method is not limited to two dimensions, there is a clear trade-off in number of attributes, grid resolution and runtimes; for example having 10 attributes with 10 samples each between the minimum and maximum bounds, would lead to the requirement of 1010 separate time series to be generated and run through a system model. Therefore in practice this approach is best done once a critical set of attributes are identified, either through the one-at-a-time method described in Use Case A1, or more sophisticated methods described in Culley et al (2020).
In terms of implementation, after identifying the attributes and deciding on the bounds and perturbation increments (or number of samples), a ‘regGrid’ exposure space is created, which consists of target points with simultaneous perturbations in the selected perturbed attributes. The createExpSpace function can be used as follows.
# ***** regGrid Exposure Space *****
attPerturbType <- "regGrid"
expSpace_regGrid <- createExpSpace(attPerturb = attPerturb,
attPerturbSamp = NULL,
attPerturbMin = attPerturbMin,
attPerturbMax = attPerturbMax,
attPerturbType = attPerturbType,
attPerturbBy = attPerturbBy,
attHold = attHold)
# plot the exposure space
plotExpSpace(expSpace_regGrid, y = attPerturb[1], x = attPerturb[2])
plotExpSpace(expSpace_regGrid, y = attPerturb[2], x = attPerturb[3])
3.7. Use Case A3: Exposure space with irregular perturbations
Sometimes when multiple attributes are selected for perturbation, it may not be feasible to use a ‘regGrid’ exposure space for the stress test because the higher dimensions of the exposure space result in an infeasibly large number of target points. As an alternative to the one-at-a-time method described in Use Case A1, the user may wish to input custom target combinations, which potentially could be obtained from a sampling method such as Latin hypercube sampling (see Culley et al, 2020).
To this end, the createExpSpace function offers the functionality to input target points created externally to foreSIGHT through the function argument attTargetsFile. The argument is intended for users who want to sample target locations using alternate sampling techniques not currently available in createExpSpace.
The target locations are created by the user outside foreSIGHT, saved in a CSV file and provided as an input to createExpSpace. In this case, the arguments attPerturbSamp, attPerturbMin, attPerturbMax, attPerturbType, and attPerturbBy should be set to NULL. It is to be noted that createExpSpace does not perform checks on the user input target locations read in from the CSV file. The user must therefore ensure that the perturbations specified in this file are feasible. The CSV file should contain column headers that correspond to all attributes specified as attPerturb and attHold. The rows of the file should correspond to the target locations in the exposure space.
The below code provides an example of this usage.
attPerturb <- c("P_ann_tot_m", "P_ann_nWet_m", "P_JJA_tot_m")
attHold <- c("P_ann_P99", "P_ann_maxWSD_m", "P_ann_seasRatio")
# creating example target locations and saving it in a CSV file for illustration
# note that the user would create these target locations using a sampling method of their choice
# the file should contain all perturbed and held attributes
tempFile <- paste0(tempdir(), "\\targetsFile.csv")
attTargets <- rbind(c(1, 1, 1, 1, 1, 1),
c(0.7, 1, 0.6, 1, 1, 1),
c(0.8, 1, 0.7, 1, 1, 1),
c(0.9, 1, 0.8, 1, 1, 1),
c(1.1, 1, 1, 1, 1, 1),
c(0.7, 1.2, 0.6, 1, 1, 1),
c(0.8, 1.2, 1, 1, 1, 1),
c(0.9, 1.2, 1.1, 1, 1, 1),
c(1, 0.8, 1, 1.1, 0.7, 1),
c(1.1, 0.8, 1.1, 0.8, 1, 1))
colnames(attTargets) <- c(attPerturb, attHold)
write.table(attTargets, file = tempFile, sep = ",")
# creating exposure space using targets from csv file
expSpace_Irreg <- createExpSpace(attPerturb = attPerturb,
attPerturbSamp = NULL,
attPerturbMin = NULL,
attPerturbMax = NULL,
attPerturbType = NULL,
attPerturbBy = NULL,
attHold = attHold,
attTargetsFile = tempFile)
#> [1] "READING ATTRIBUTE TARGETS FROM FILE"
# plot the exposure space
plotExpSpace(expSpace_Irreg, y = attPerturb[1], x = attPerturb[2])
plotExpSpace(expSpace_Irreg, y = attPerturb[2], x = attPerturb[3])
4. Step B: Generate perturbed climate time series (generateScenarios)
In this step you’ll learn…
• What’s meant by the terms reference period, simple and seasonal scaling, stochastic generation, stochastic weather generator, attribute penalty, realisation and random seed
• How to select an appropriate reference period for subsequent analysis
• How to select the time series perturbation method
• How to select the stochastic generator
• Whether and how to select attribute penalties
• How to select length of the perturbed time series, the number of replicates and how to control the random seed
Now that we have identified the specific points in the exposure space to analyse, we turn to the challenge of generating time series that correspond to those attribute values. In this step, we create perturbed hydro-climate time series with attributes corresponding to the target attribute values. The function generateScenarios can be used to create the perturbed time series. The mandatory arguments required are a reference (typically observed) time series (argument obs) and the exposure space created in Step A (argument expSpace).
Before doing this, the user needs to make a number of decisions:
• What is the reference period for analysis, which determines the baseline for subsequent perturbations?
• What is the time series perturbation method?
If the time series perturbation method is a stochastic one (i.e. one that uses stochastic weather generators), there are several other decisions that need to be made:
• Which stochastic weather generator to use;
• How to weight the different perturbed and held attributes to balance trade-offs and achieve the desired time series; and
• Other considerations such as length of each stochastic realisation, number of realisations and control of the random seed.
The subsections below provide a guide to each of these decisions, followed by a set of practical ‘Use Case’ examples to show how these come together.
4.1. Step B1. Selecting the reference time series
Key Considerations: Step B1
As part of the scenario neutral methodology, all perturbations are described against some reference attribute values, which in turn are usually calculated from a reference weather time series. This reference period is generally synonymous with the notion of a climatological baseline.
There are no requirements in the foreSIGHT software for the nature of the reference time series other than that it must conform to certain formatting requirements described in the examples below. However, in practice, there are a number of considerations in choosing the reference period:
• Purpose of analysis: In many cases it is anticipated that the focus of the stress testing will be to evaluate plausible changes in system performance either against current system performance or system performance over some historical period of record.
• Length of reference period: The length of the reference period must be sufficient to obtain appropriately precise estimates of relevant climate attributes. The World Meteorological Organisation generally suggests a minimum period of 30 years, although the specific decision will depend on the data availability, the degree of non-stationarity and various other considerations.
• Integration with top-down climate impact assessments. For situations where top-down projections will be included as part of the analysis—perhaps by superimposition of top-down projections on the scenario-neutral performance space, or by way of a comparative analysis—it might be necessary to ensure all the approaches are calculated relative to a consistent climatological baseline.
• Availability of (high-quality) historical weather data. Data availability can be a major constraint to stochastic modelling, with even relatively densely gauged regions experiencing regular interruptions in the historical record or other data anomalies. The quality of reference weather time series should be evaluated using established methods where possible.
• The potential non-stationarity of historical weather data. Given that climate change is increasingly detectable in weather time series data, system performance can be expected to be different across different reference periods such as: (i) aggregated over the instrumental record; (ii) aggregated over the recent record such as the last decade or two; or (iii) estimated based on the ‘current’ climate.
The above factors highlight that it is not possible to provide prescriptive guidance on the choice of reference period, and will require careful tailoring to the unique circumstances of each foreSIGHT application.
The variables available in foreSIGHT and their units can be viewed using the function viewVariables(). It is important to ensure that the input reference time series are specified in these units if stochastic models are used to generate the perturbed time series. This is because the default bounds of the stochastic model parameters in the package are based on these units. The input reference data should contain the variables that are required for the system under consideration (i.e. as input to the system model described in Step C). These reference data need not contain all the variables that are available in foreSIGHT, but can be a subset of them which are to be used in the stress-test.
# Hydro-climate variables available
viewVariables()
#> shortName longName units
#> [1,] "Simple" NA NA
#> [2,] "P" "Precipitation" "mm"
#> [3,] "Temp" "Temperature" "°C"
#> [4,] "PET" "Evapotranspiration" "mm"
#> [5,] "Radn" "Radiation" "MJ/m2"
The reference data can either be a data.frame or a list. List format is required for multi-site data. When using a data.frame, the first three columns should contain the year, month, and day of the data named accordingly. Further columns of the data.frame should contain the hydro-climate variables of interest named by their short names as listed in the viewVariables() function. An example reference data.frame object representing a general Adelaide (South Australia) climate is available in the package and can be loaded using the data command as shown below. This is intended to illustrate the expected data.frame format of reference.
# Load example climate data
data(tankDat)
# Expected data.frame format of the input obs climate data
#> year month day P Temp
#> 1 2007 1 1 0.0 25.50
#> 2 2007 1 2 0.0 24.50
#> 3 2007 1 3 0.0 29.75
#> 4 2007 1 4 0.0 32.25
#> 5 2007 1 5 0.0 32.50
#> 6 2007 1 6 4.5 26.50
When specifying multi-site reference data as a list, the list should contain vectors for the year, month, and day, and matrices for multi-site data hydro-climate variables. An example with multi-site rainfall data in the Barosssa Valley (South Australia) is provided.
# Load example multi-site rainfall data
# Expected format of multi-site rainfall data within reference list.
head(barossa_obs$P) #> X23300 X23302 X23305 X23309 X23312 X23313 X23317 X23318 X23321 X23363 #> [1,] 24.6 24.4 16.8 25.4 17.3 33.8 17.2 20.5 17.3 28.8 #> [2,] 2.5 3.8 1.4 3.0 1.8 5.6 1.7 1.6 1.8 4.1 #> [3,] 0.0 0.3 0.8 0.0 0.0 0.5 0.0 0.0 0.0 0.2 #> [4,] 1.3 2.0 0.0 0.0 0.0 2.0 0.0 0.3 0.0 1.3 #> [5,] 0.0 0.0 0.0 0.0 0.0 0.8 0.0 0.0 0.0 0.4 #> [6,] 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 #> X23373 X23752 X23756 #> [1,] 17.3 26.7 31.2 #> [2,] 1.8 4.4 5.1 #> [3,] 0.0 0.0 0.3 #> [4,] 0.0 1.0 1.8 #> [5,] 0.0 0.0 0.3 #> [6,] 0.0 0.0 0.0 The climate time series can be input to the generateScenarios function using the reference argument. For example, for using the single site tank_obs data, we would specify reference = tank_obs foreSIGHT also contains a function that can be used to calculate the attributes of interest for climate data supplied by the user: calculateAttributes(). The usage of this function is shown below. The function is intended for use with the reference data or additional climate data from other sources that will be used with the plotting functions in foreSIGHT (see Step D). attSel <- c("P_ann_tot_m", "P_MAM_tot_m", "P_JJA_tot_m", "Temp_ann_avg_m", "Temp_ann_rng_m") tank_obs_atts <- calculateAttributes(tank_obs, attSel) tank_obs_atts #> P_ann_tot_m P_MAM_tot_m P_JJA_tot_m Temp_ann_avg_m Temp_ann_rng_m #> 449.93000 127.47000 167.99000 17.43714 18.55000 4.2. Step B2. Selecting the time series perturbation method Key Considerations: Step B2 As highlighted in the Introduction, a key role of the foreSIGHT software is to enable the quantitative stress testing of climate-sensitive systems using perturbed hydroclimatic time series. But what is the best way of achieving the perturbations? There are two approaches supported in foreSIGHT, and these reflect the main approaches that have been adopted in most published scenario-neutral applications thus far. The first approach is based on scaling the observed hydroclimatic time series, which can be performed in foreSIGHT using either Simple Scaling or Seasonal Scaling. Simple Scaling scales the weather time series by specified additive or multiplicative time-invariant factors to achieve the desired perturbed time series. Seasonal scaling follows a similar approach, but allows the multiplicative factors to vary throughout the year. Although the scaling methods have the benefit of simplicity, there are a number of disadvantages: • some statistical properties such as the rainfall wet-dry patterns or extremes cannot be perturbed • many attributes cannot be perturbed in combination • it is not possible to hold some desired attributes at historical levels while perturbing others • the length of the generated time series cannot be longer than the supplied reference time series. The second method involves the use of Stochastic Weather Generators to generate perturbed weather time series that correspond to the target attribute values. This approach has the advantage of considerably more flexibility, in that it can perturb complex combinations of changes such as the simultaneous decrease in the averages, increase in the intermittency and increase in the extremes of rainfall. It also has the advantage of being able to represent stochastic variability, in the sense that it is possible to generate multiple ‘realisations’ or Stochastic Replicates of future weather that each share the same attribute values but evolve differently over time, and also generate realisations of different lengths. Yet this approach also has several disadvantages: • The process of calibrating stochastic generators to achieve particular attribute values is much slower and can involve considerable runtimes • Care is needed to identify useful Perturbed Attributes that are physically feasible (for example it is not possible to simulate a rainfall time series that simultaneously shows both an increase in the annual total rainfall and number of wet days, and yet a decrease in the amounts per wet day). • If both the Perturbed Attributes and Held Attributes are not carefully specified, it is possible to generate unrealistic time series • Care is needed to match the specific stochastic generator to the problem requirements. It is difficult to provide definitive advice on the most appropriate method to select, as it will depend on the unique aspects of each problem. However the best guide will come from the attributes that were selected during the analysis of Step A1—if individual attributes or attribute combinations have been selected that cannot be generated using Simple or Seasonal Scaling, then this provides strong indication that stochastic methods are likely to be most appropriate. As highlighted in the above box, care is needed to decide select the specific perturbation method, with the decision depending significantly on the attributes selected for stress testing as part of Step A1. Once the method of perturbation has been selected, you need to supply this information to the function generateScenarios via the argument controlFile. In particular, if simple or seasonal scaling is required, then simply use the argument: controlFile = "scaling" If a stochastic model is required, there are two options. Firstly, one can simply use the default stochastic model and associated settings in foreSIGHT by not specifying a controlFile argument, or by setting the controlFile argument to NULL. controlFile = NULL The default assumes WGEN with a harmonic function to capture seasonality, and will simulate all the supported WGEN weather variables that are input as part of the reference (e.g. observed) climate time series. It also assumes that every attribute is treated equally (i.e. no penalty applied to different attributes). In contrast, to provide a greater level of customisation, the argument is entered as: # input a user created JSON file that specifies the selection of model options controlFile = path-to-a-user-created-JSON-file where the JSON file contains a range of advanced options including selecting the type of stochastic model, overwriting the default model parameter bounds, changing default optimisation arguments, and setting penalty attributes to be used in optimisation. If you elect to use simple or seasonal scaling you can skip straight to Use Case 1 (Simple Scaling) or Use Case 2 (Seasonal Scaling) at the end of this chapter. If you elect to use the default stochastic method, you can skip straight to Use Case 3 (Using the Default Stochastic Models). However if you would like to choose the stochastic generator or add penalty weights to the attributes, then continue reading (for other settings in the JSON file such as default weather generator parameter bounds or optimisation arguments, refer to Options for Advanced Users chapter at the end of this tutorial). 4.3. Step B3: Selecting the stochastic generator Key Considerations: Step B3 There is a plethora of options for stochastic weather generators described in the scientific literature, each with different features and assumptions. This variety provides a high degree of flexibility for perturbing weather time series in a variety of ways to comprehensively ‘stress test’ a climate-sensitive system. The foreSIGHT software currently supports a small number of weather generators; however the software has been developed in such a way that additional weather generators can be added over time. There is no single ‘correct’ weather generator for all applications, with the choice depending on a range of considerations: • Data timestep. Most weather generators run at a daily timestep, which is a common timestep for reporting of key weather variables. However there are also weather generators available at various sub-daily timesteps, as well as longer aggregated timesteps such as monthly or annual. The key consideration here is to ensure the time step is consistent with the likely timescales of system performance sensitivity, which in turn need to correpsond to the relevant time scales of weather inputs that are require for the system model. • The key timescales of system sensitivity. In addition to data timestep, it is important that the stochastic generator is able to simulate variability in Climate Attributes representing key timescales of system sensitivity. For example some systems may respond at sub-seasonal and seasonal timescales, and others at interannual timescales. It is noted that a stochastic generator may be able to simulate variability at timescales that are equal to or longer than its timestep, but not shorter. • Relevant weather variables. Weather generators have the capability of simulating a range of surface variables, including precipitation, temperature, wind, solar radiation, humidity and so forth—as well as derived variables such as potential evapotranspiration that are calculated through various recognised formulations. In many cases weather generators simulate precipitation first, followed by the other variables that are then conditioned to the precipitation time series; however each weather generator is different and it is necessary to review the documentation to understand the basis for generating the weather time series. • Other key structural features that drive the weather generator’s capacity to simulate individual or combined changes in attributes. For example, some weather generators use ‘harmonic’ functions to simulate the seasonal cycle, which may enable capacity to simulate key shifts in seasonality but may necessarily be capable of simulating changes at the month-by-month level. It is beyond the scope of this tutorial to review the structure of each weather generator supported by foreSIGHT, and the reader is referred to relevant references for further information. Beyond developing a theoretical understanding of the structure (and potential structural limitations) of individual weather generators, a pragmatic approach to assess the appropriateness of weather generation choice is through evaluating the relevant diagnostics in achieving specified target attributes. Poor performance in diagnostics may be due to several issues, including weather generator suitability. The use of weather generator diagnostics is discussed later in this chapter. Finally, if you have a preferred stochastic weather generator that you’d like to have included in the overall foreSIGHT software, then please contact the software developers. foreSIGHT includes a few stochastic models that the user can select to generate the scenarios. The options differ in the model formulation and the temporal variation of the model parameters (refer to the package description using the command packageDescription("foreSIGHT") to view the references to each stochastic generator). The models available in foreSIGHT can be viewed using the function viewModels as shown below. The defaultModel column indicates the default stochastic model that will be used if the controlFile argument is NULL. The usage of this function is demonstrated below. The compatibleAtts argument can be set to TRUE to view the attributes that are compatible with each model. viewModels() #> [1] "Please select a valid variable. The valid variable names are:" #> [1] "P" "Temp" "PET" "Radn" # View the models available for a specific climate variable viewModels("P") #> modelType modelParameterVariation modelTimeStep defaultModel #> 1 wgen annual daily FALSE #> 2 wgen seasonal daily FALSE #> 3 wgen harmonic daily TRUE #> 4 latent annual daily FALSE #> 5 latent harmonic daily FALSE # View models available for temperature viewModels("Temp") #> modelType modelParameterVariation modelTimeStep defaultModel #> 1 wgen harmonic daily TRUE #> 2 wgen-wd harmonic daily FALSE #> 3 wgen-wdsd harmonic daily FALSE The stochastic models used by generateScenarios can be modified using the controlFile argument. If the controlFile argument is not specified or set to NULL, the default stochastic model and associated settings will be used to generate the scenarios. To use stochastic models different from the default models in the package, the user can input a JSON file via the controlFile argument specifying the model choices. The models are defined using the modelType and modelParameterVariation fields in the controlFile; both these fields should be specified. The helper function writeControlFile available in foreSIGHT can be used to create a sample JSON file that provides a template to create control files to specify alternate models that the user needs. The writeControlFile function can be used without arguments as shown below. Note that the following function call will write a JSON file named sample_controlFile.json into your working directory. writeControlFile() The user can create a JSON file in the same format for input to generateScenarios. As an example, the following text may be used in the JSON file to select alternate models for precipitation and temperature. # Example text to be copied to a text JSON file { "modelType": { "P": "latent", "Temp": "wgen-wd" }, "modelParameterVariation": { "P": "harmonic", "Temp": "harmonic" } } Alternatively, the toJSON function from the jsonlite package can be used to create a JSON file from an R list as shown below. The file can be used as an input to generateScenarios using the controlFile argument. # create a list containing the specifications of the selected models modelSelection <- list() modelSelection[["modelType"]] <- list() modelSelection[["modelType"]][["P"]] <- "latent" modelSelection[["modelType"]][["Temp"]] <- "wgen-wd" modelSelection[["modelParameterVariation"]] <- list() modelSelection[["modelParameterVariation"]][["P"]] <- "harmonic" modelSelection[["modelParameterVariation"]][["Temp"]] <- "harmonic" utils::str(modelSelection) #> List of 2 #>$ modelType :List of 2
#> ..$P : chr "latent" #> ..$ Temp: chr "wgen-wd"
#> $modelParameterVariation:List of 2 #> ..$ P : chr "harmonic"
#> $penaltyWeights : num [1:4] 20 15 10 10 # write the list into a JSON file penaltySelectionJSON <- jsonlite::toJSON(penaltySelection, pretty = TRUE, auto_unbox = TRUE) write(penaltySelectionJSON, file = paste0(tempdir(), "\\eg_controlFile.json")) # input a the JSON file controlFile = paste0(tempdir(), "\\eg_controlFile.json") If you have elected to use penalty attributes to generate scenarios, refer to Use Case ‘Specifying penalty attributes’ for an example. 4.5. Step B5. Length of the perturbed time series, the number of replicates and controlling the random seed Key Considerations: Step B5 The issues in this section only pertain to stochastically generated series; for simple/seasonal scaling, the length of Perturbed Time Series is equivalent to the Reference Time Series, and given there is no random element to the perturbation, issues such as number of replicates and randomisation process are not relevant. For the stochastic generation algorithm, it is possible to generate time series of any arbitrary length, which can be significantly longer than the reference time series. For example, one may have a 30 year historical weather time series as the reference, yet each stochastic replicate (including, if desired, for the ‘no change’ situation) can be much longer, such as 100s or 1000s of years of length. Advantages for long replicates are that this can often result in a smoother Performance Space, by improving the signal-to-noise ratio (the signal being the climatic changes represented by the Perturbed Attributes, and the noise being the stochastic Weather Noise). Disadvantages are largely linked to run-time, both for optimisation of the stochastic generator (i.e. as done during Step B), and for running the system model (see Step C). A similar but subtly different approach to addressing stochastic variability is to alter the number of replicates (or Stochastic Realisations). In this case, for each attribute target one might wish to generate multiple realisations (often but not necessarily of the same length as the Reference Time Series), which provide alternative versions of the weather that correspond to the same attribute values. This can be used for statistical analysis purposes, and also can provide a useful indicator of whether the system model is sensitive to elements of the weather that are not included as part of the Attribute Targets. Finally, although stochastic sequences are generally viewed as ‘random’, they can better be described as Pseudo Random Numbers, in which the stochastic sequences have the appearance of random but are in fact completely determined by the initial conditions provided to the random number generator. To create the appearance of randomness, the initial conditions are usually based on a varying number such as the system clock; however it is also possible to set the initial value of the generator (called a random seed) to achieve reproducibility in the code (e.g. by enabling a peer reviewer or other interested party to completely replicate a set of results). The argument simLengthNyrs can be used to specify the desired length of the stochastically generated perturbed time series in years using generateScenarios # simulation length of 100 years simLengthNyrs = 100 By default, generateScenarios will generate a single replicate (or stochastic realisation) of the perturbed time series. More replicates can be generated by specifying the numReplicates argument of generateScenarios. numReplicates = 5 The random seed used for stochastic generation of the first replicate is selected by generateScenarios by randomly sampling a number between 0 and 10,000. The random seeds for the subsequent replicates are incremented by 1. Thus, the perturbed stochastic data generated using generateScenarios for the same function arguments would typically be different. The function saves the value of the random seed used for each replicate in the output list containing the perturbed time series. Sometimes, it may be of interest to the user to reproduce a previous stochastic simulation. It is possible to achieve this by specifying the seedID argument of the generateScenarios function to that of the first replicate of the previous simulation. The function would use the specified seedID as the random seed. Note that it is recommended to specify seedID only to reproduce a prior result. # the seed of the first replicate from a previous result is set as seedID seedID = 1234 4.6. Step B6. Diagnosing stochastic model performance Key Considerations: Step B6 The quality of the stochastic replicates, in terms of the extent to which they represent the climate conditions of interest, is critical to ensure the interpretability of the ensuing stress test. We strongly recommend taking the time to carefully review stochastic model performance prior to subsequent inclusion as inputs to a system model. As discussed earlier, the flexibility of stochastic generators to obtain a diverse set of weather conditions represents both its primary advantage and a significant disadvantage. In particular, we have often found that it is necessary to constrain the stochastic model using Held Attributes, in order to minimise the risk of generating stochastic sequences that are physically unrealistic (keep in mind that weather generators are in essence just complicated probability distributions, so one could easily generate temperature values hotter than the sun or below absolute zero if we don’t tell it to do otherwise!). As a result, we recommend focusing the diagnostics on three elements: • To what extent do the stochastic sequences reflect the Perturbed Attribute value targets? • To what extent do the stochastic sequences succeed in keeping the Held Attributes at their target values? • Are other attribute values (i.e. ones that are neither perturbed or held) reasonable, which potentially be defined as being broadly consistent with the values of the Reference Time Series? If the answer to any of the above is no, then it will be necessary to commence a process of diagnosis to understand the causes of the poor performance, and identify measures to rectify this. Key areas for exploration are as follows: • Have the right attributes been held? If an unconstrained attribute (i.e. one that is neither perturbed or held) yields unrealistic values, then the first response might be to request additional attributes to be held at the levels of the reference time series in order to provide additional constraints to the optimiser. • Has the optimiser had sufficient opportunity to find the best possible solution? If so, it is necessary to change the configuration of the optimiser to provide additional opportunity to find the appropriate solution. • Is the stochastic model structurally able to simulate the desired attribute target combinations? If not, then it may be useful to select an alternative model structure. • Is the model over-constrained, in the sense of asking for attribute combinations that are not possible to achieve (e.g. an increase in both average number of wet days and rainfall per wet day, yet a decrease in average annual rainfall)? If so, one can adjust the attribute targets to focus on more realistic combinations, or alternatively one could adjust the attribute penalty values to priortise certain attributes (usually but not necessarily the perturbed attributes) over others. In the event of poor stochastic model performance, the diagnostic approach described above will help identify an alternative approach to achieving the desired stochastic time series. However, in our experience a significant amount of trial-and-error can be required to achieve the desired outcomes, and thus this tends to be a highly iterative approach. foreSIGHT contains a function named plotScenarios which can be used to create plots of the biases in attributes of the simulated data relative to the specified target values, for both perturbed and held attributes. The function uses a simulation performed using generateScenarios as input and plots the mean and standard deviation of the absolute biases of each attribute and target, across all the replicates in heatmap-like plots. The function can be called using a single argument, which is the simulation generated using generateScenarios. Additional arguments allow finer control. If the scenarios contain attributes that use multiplicative changes (like precipitation) as well as attributes that are use additive changes (like temperature), the figures would contain two panels to show the biases in both type of attributes the different units. The use cases at the end of this section contain some examples of the figures created using this function. p <- plotScenarios(sim) # sim the output from generateScenarios The figures can be used to assess how well the simulations capture the desired target values of the attributes. As a rough estimate, biases around or less than 5% are acceptable. If there are larger mean biases, we recommend that you use the diagnostic approaches described in the box above to identify alternatives to achieve the desired outcomes. If the standard deviation of the absolute biases across the replicates are high, this indicates that the attribute value is highly variable across the replicates in the generated data. You may need to adjust the optimisation parameters or increase the number of replicates to address this variability. Now that we’ve discussed the theoretical considerations and core functionality associated with Step B, we bring all the pieces together to show some realistic ‘use cases’ for the generateScenarios function. Use Case B1: Simple Scaling Consider a simple system that is affected only by changes in annual totals or means of one or more hydroclimate variables. Simple scaling can be used to generate perturbed time series in this case. The below code provides an example of the usage. # specify perturbed attributes attPerturb <- c("Temp_ann_avg_m", "P_ann_tot_m") # specify perturbation type and minimum-maximum ranges of the perturbed attributes attPerturbType <- "regGrid" attPerturbSamp <- c(9, 13) attPerturbMin = c(-1, 0.80) attPerturbMax = c(1, 1.2) # create the exposure space expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = NULL) # no attributes held at historical levels #> Note: There are no attributes held at historical levels # Load example obs climate data data(tankDat) # generate perturbed time series using simple scaling sim <- generateScenarios(reference = tank_obs, # input observed data expSpace = expSpace, # exposure space created by the user controlFile = "scaling") # using scaling #> Generating replicate number 1 out of 1 replicates... #> Simulation completed Simple scaling can also be applied to multi-site climate data. See the help file for generateScenarios for details and a working example. Use Case B2: Seasonal Scaling Seasonal scaling can be used to perturb the seasonal pattern in a hydroclimate variable, in addition to perturbing annual totals/means. The following code provides an example where the “seasRatio” – defined as the ratio of total wet season rainfall to dry season rainfall – is perturbed. # specify perturbed attributes attPerturb <- c("P_ann_tot_m","P_ann_seasRatio") # specify perturbation type and minimum-maximum ranges of the perturbed attributes attPerturbType <- "regGrid" attPerturbSamp <- c(9, 9) attPerturbMin = c(0.8, 0.9) attPerturbMax = c(1.2, 1.3) # create the exposure space expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = NULL) # no attributes held at historical levels #> Note: There are no attributes held at historical levels # Load example obs climate data data(tankDat) # generate perturbed time series using simple scaling sim <- generateScenarios(reference = tank_obs, # input observed data expSpace = expSpace, # exposure space created by the user controlFile = "scaling") # using scaling #> Generating replicate number 1 out of 1 replicates... #> Simulation completed Use Case B3: Using the default stochastic models The default stochastic models in foreSIGHT can be used to generate data in most cases. These models are compatible with all the hydro-climate attributes in foreSIGHT. This use case illustrates stochastic generation using the default models. To use generateScenarios with the defaults, and without penalty attributes, only two input arguments are mandatory: the target attribute values and the reference time series. Optional additional arguments comprising the length of the generated perturbed time series (simLengthNyrs) and the number of replicates (numReplicates) may be specified as desired. Consider the following exposure space of precipitation and temperature attributes. After deciding the attributes, perturbed bounds, and sampling strategy as described in Step A, the target attribute values that sample the exposure space is created using the createExpSpace function. The total annual precipitation and mean annual temperature are perturbed while holding the attributes "P_ann_R10_m", "P_DJF_tot_m","Temp_ann_rng_m", "Temp_DJF_avg_m" at existing levels (Use viewAttributeDef() for definitions of these attributes). The createExpSpace function call returns the exposure space in an R list. The targetMat (named after “target matrix”) element of the list contains the locations of the four selected target locations in the exposure space with perturbations in annual total precipitation and annual mean temperature. Each row of this matrix is a target location. In this exposure space, target 1 corresponds to perturbation of 0.8 in “P_ann_tot_m”, and -0.5 in “Temp_ann_avg_m”, while the other attributes are held at existing levels and so on. Remember that the perturbations in precipitation are multiplicative while that in temperature is additive. # Selected attributes attPerturb <- c("P_ann_tot_m", "Temp_ann_avg_m") attHold <- c("P_ann_R10_m", "P_DJF_tot_m","Temp_ann_rng_m", "Temp_DJF_avg_m") # Sampling bounds and strategy attPerturbType = "regGrid" attPerturbSamp = c(2, 2) attPerturbMin = c(0.8,-0.5) attPerturbMax = c(1.2,0.5) # Creating the exposure space expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold) utils::str(expSpace) #> List of 8 #>$ targetMat :'data.frame': 4 obs. of 6 variables:
#> ..$P_ann_tot_m : num [1:4] 0.8 1.2 0.8 1.2 #> ..$ Temp_ann_avg_m: num [1:4] -0.5 -0.5 0.5 0.5
#> ..$P_ann_R10_m : num [1:4] 1 1 1 1 #> ..$ P_DJF_tot_m : num [1:4] 1 1 1 1
#> ..$Temp_ann_rng_m: num [1:4] 0 0 0 0 #> ..$ Temp_DJF_avg_m: num [1:4] 0 0 0 0
#> $attRot : NULL #>$ attPerturb : chr [1:2] "P_ann_tot_m" "Temp_ann_avg_m"
#> $attHold : chr [1:4] "P_ann_R10_m" "P_DJF_tot_m" "Temp_ann_rng_m" "Temp_DJF_avg_m" #>$ attPerturbSamp: num [1:2] 2 2
#> $attPerturbMin : num [1:2] 0.8 -0.5 #>$ attPerturbMax : num [1:2] 1.2 0.5
#> $attPerturbType: chr "regGrid" # Four target locations in the exposure space expSpace$targetMat
#> P_ann_tot_m Temp_ann_avg_m P_ann_R10_m P_DJF_tot_m Temp_ann_rng_m Temp_DJF_avg_m
#> 1 0.8 -0.5 1 1 0 0
#> 2 1.2 -0.5 1 1 0 0
#> 3 0.8 0.5 1 1 0 0
#> 4 1.2 0.5 1 1 0 0
Having generated the exposure space, we progress to Step B of the work flow. The example climate time series, tank_obs, available in the package is used as the reference to create perturbed time series. The default stochastic models in foreSIGHT are used by not specifying a controlFile argument in the generateScenarios function call. Note that the following function call takes about 10 minutes to execute.
# ******************************** NOTE ****************************
# The following generateScenarios call takes ~10 mins to complete
# ******************************************************************
data("tankDat")
sim <- generateScenarios(reference = tank_obs, # reference time series
expSpace = expSpace, # exposure space
numReplicates = 3) # number of replicates
The biases in the simulated attributes with respect to the specified target values of the attributes for each target are examined using the plotScenarios function. The following function call creates heatmap plots of the biases.
plotScenarios(sim)
The plot of the mean of absolute biases of the simulated values relative to the target values in this simulation is shown below. Note that the above generateScenarios function call has been set to generate three stochastic replicates. If you are running the code in this tutorial, you won’t necessarily reproduce the figure below as generateScenarios randomly selects a seedID for each simulation. If you wish to reproduce the simulation in this use case set the seedID argument to 2407.
The mean of the absolute biases in each simulated attribute (both perturbed and held) for each target across the three replicates are plotted. The biases in the scenarios are typically low as indicated by the green shades in the plot. In other words, the attributes of the generated perturbed time series correspond well to the desired target values of the attributes. This means we may proceed with analysing other characteristics of the perturbed time series, and simulating system performance.
In this example the use of default stochastic models without penalty attributes yields satisfactory results for the target values of the specified attributes.
Use Case B4: Why holding attributes at reference levels are necessary
When using stochastic models to generate the perturbed time series, it is necessary to hold some attributes at reference levels to ensure the realism of the simulated climate data. This use case provides an example to illustrate why.
Consider an exposure space with perturbations only in mean annual total rainfall (“P_ann_tot_m”). First, let’s create an exposure space that contains only a single perturbed target of this attribute, with no other attributes held at reference levels. The below code generates data corresponding to the single target location.
attPerturb <- c("P_ann_tot_m")
attHold <- NULL
attPerturbType = "regGrid"
attPerturbSamp = c(1)
attPerturbMin = c(1.3)
attPerturbMax = c(1.3)
expSpace <- createExpSpace(attPerturb = attPerturb,
attPerturbSamp = attPerturbSamp,
attPerturbMin = attPerturbMin,
attPerturbMax = attPerturbMax,
attPerturbType = attPerturbType,
attHold = attHold)
#> Note: There are no attributes held at historical levels
expSpace$targetMat # exposure space containing a single target #> P_ann_tot_m #> 1 1.3 data(tankDat) # reference data sim <- generateScenarios(reference = tank_obs[, 1:4], expSpace = expSpace) # simulation #> Generating replicate number 1 out of 1 replicates... #> Warning: No attributes held at historical levels #> #> Simulation completed We can use the calculateAttributes function to calculate the values of various attributes of both the reference and simulated time series as shown below. The percentage differences in various attributes of the simulated data with respect to the reference is also calculated. The perturbations in the attribute “P_ann_tot_m” is close to the desired increase of 30%. However, the other attributes show large differences from the reference. This is because we created the exposure space to generate perturbations in “P_ann_tot_m”, without any constraints in other attributes, rendering the simulation unrealistic. # calculate selected attributes from reference attSel <- c("P_ann_tot_m", "P_ann_seasRatio", "P_ann_nWet_m", "P_ann_maxDSD_m", "P_ann_maxWSD_m", "P_ann_R10_m", "P_ann_dyWet_m", "P_ann_P99") obsAtts <- calculateAttributes(tank_obs, attSel) # get the simulated precipitation and dates from sim & calculate the same attributes P <- sim[["Rep1"]][["Target1"]][["P"]][["sim"]] simData <- cbind(as.data.frame(sim[["simDates"]]), P) simAtts <- calculateAttributes(simData, attSel) # calculate the % differences between simulated attributes and reference percAttDiff <- (simAtts - obsAtts)/obsAtts*100 percAttDiff #> P_ann_tot_m P_ann_seasRatio P_ann_nWet_m P_ann_maxDSD_m P_ann_maxWSD_m P_ann_R10_m #> 29.929849 -25.185409 -7.488654 79.051383 46.000000 41.739130 #> P_ann_dyWet_m P_ann_P99 #> 39.765861 59.748261 Thus, we need to select some other attributes to hold at existing levels to make sure that the simulated data is physically realistic. Consider the other precipitation attributes calculated above. Some of them are related to number and sequence of wet precipitation days (number of wet days, wet & dry spell lengths), while others are related to the intensity of precipitation (mean wet day rainfall, 99th percentile rainfall). “P_ann_R10_m” is actually a combined measure of frequency and intensity, and “P_ann_seasRatio” is a measure of wet to dry seasonal rainfall (Note: use viewAttributeDef for attribute definitions). If we select all these attributes to be held at existing levels, it would become almost impossible to perturb the annual precipitation as desired since a 30% increase in “P_ann_tot_m” warrants changes in atleast some of these rainfall characteristics. So, we need to select a viable subset of these other attributes to hold at reference levels. Suppose a priori knowledge suggests that an increase in rainfall intensity is the typical driving mechanism behind increases in annual rainfall in the region, and that a change in seasonal ratio is unrealistic. We can decide to hold the wet day frequency, spell length, and seasonal ratio related attributes at reference levels, while allowing changes in the intensity attributes. An updated exposure space and simulation are generated as shown below. attPerturb <- c("P_ann_tot_m") attHold <- c("P_ann_seasRatio", "P_ann_nWet_m", "P_ann_maxDSD_m", "P_ann_maxWSD_m") attPerturbType = "regGrid" attPerturbSamp = c(1) attPerturbMin = c(1.3) attPerturbMax = c(1.3) expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold) expSpace$targetMat # exposure space containing a single target
#> P_ann_tot_m P_ann_seasRatio P_ann_nWet_m P_ann_maxDSD_m P_ann_maxWSD_m
#> 1 1.3 1 1 1 1
data(tankDat) # reference data
simHold <- generateScenarios(reference = tank_obs[ ,1:4], expSpace = expSpace) # simulation
#> Generating replicate number 1 out of 1 replicates...
#> Simulation completed
The updated simulated shows much lower differences in most attributes from the reference series. As expected, the attributes related to the intensity of the rainfall show large differences - generating the desired perturbation in mean annual rainfall.
P <- simHold[["Rep1"]][["Target1"]][["P"]][["sim"]]
simHoldData <- cbind(as.data.frame(simHold[["simDates"]]), P)
simHoldAtts <- calculateAttributes(simHoldData, attSel)
percDiff <- (simHoldAtts - obsAtts)/obsAtts*100
percDiff
#> P_ann_tot_m P_ann_seasRatio P_ann_nWet_m P_ann_maxDSD_m P_ann_maxWSD_m P_ann_R10_m
#> 29.4688402 -0.5900381 2.0423601 -0.3952569 -2.0000000 39.1304348
#> P_ann_dyWet_m P_ann_P99
#> 27.6677559 49.3839005
Typically, we would also look at other characteristics of the simulation, to ensure that the perturbed series are suitable for the specific stress test. For example, let us consider the monthly rainfall climatology of the reference and simulated data. The mean monthly rainfall (in mm/day) is calculated and plotted below. We find that the simulation with held attributes are more similar to the reference series.
# calculate mean monthly rainfall in mm/day
tank_obs_monClim <- aggregate(tank_obs[,4], by = list(tank_obs[,2]), FUN = mean)$x sim_monClim <- aggregate(simData[,4], by = list(simData[,2]), FUN = mean)$x
simHold_monClim <- aggregate(simHoldData[,4], by = list(simHoldData[,2]), FUN = mean)$x # plot monthly climatology yMax <- max(tank_obs_monClim, sim_monClim, simHold_monClim) colSel <- c("black", "red", "forestgreen") lwdSel <- 2 plot(tank_obs_monClim, type = "l", ylim = c(0,yMax), lwd = lwdSel, ylab = "Monthly P (mm/day)", xlab = "months", col = colSel[1]) lines(sim_monClim, col = colSel[2], lwd = lwdSel) lines(simHold_monClim, col = colSel[3], lwd = lwdSel) legend("topright", legend = c("reference", "sim", "simHold"), col = colSel, lwd = lwdSel) We can conduct further analyses of the characteristics of the simulation to decide if other attributes need to be included in attHold. We leave it to the reader to build on this use case to explore what adding other attributes does. In some cases, it may become difficult to obtain the desired target values (perturbations or existing levels), due to intrinsic dependencies between the attributes (eg: totals are related to intensity & frequency). In these instances, the functionality to prescribe penalty attributes and weights can be used to set preferences for lower biases in some attributes over others to obtain desired target values. More on penalty attributes in further use cases. Use Case B5: Specifying penalty attributes In some cases, the attributes of the generated perturbed time series can show large biases in the target values of some attributes. Specifying penalties for biases in these attributes can reduce the biases. However, the reduction is often at the expense of increased biases in other attributes. Therefore, you’ll typically need a few trials to identify appropriate penalty settings that provide desired results for specific scenarios. This use case provides such an example. Consider the following exposure space of precipitation attributes. The annual total and DJF total precipitation are perturbed while holding the attributes "P_MAM_tot_m", "P_JJA_tot_m","P_ann_nWet_m" at existing levels (use viewAttributeDef() to view the definitions of these attributes). After deciding the perturbation bounds of the relevant attributes and the sampling strategy, the createExpSpace function is used to create the exposure space. The function call returns an R list containing the exposure space. The targetMat (named after “target matrix”) element of the list contains the locations of the four selected target locations in the exposure space. Each row of this matrix is a target location. In this exposure space, target 1 corresponds to perturbations of 0.9 each in “P_ann_tot_m” and “P_DJF_tot_m”, while the other attributes are held at existing levels, and so on. # Selected attributes attPerturb <- c("P_ann_tot_m", "P_DJF_tot_m") attHold <- c("P_MAM_tot_m", "P_JJA_tot_m", "P_ann_nWet_m") # Sampling bounds and strategy attPerturbType = "regGrid" attPerturbSamp = c(2, 2) attPerturbMin = c(0.9, 0.9) attPerturbMax = c(1.3, 1.3) # Creating the exposure space expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold) utils::str(expSpace) #> List of 8 #>$ targetMat :'data.frame': 4 obs. of 5 variables:
#> ..$P_ann_tot_m : num [1:4] 0.9 1.3 0.9 1.3 #> ..$ P_DJF_tot_m : num [1:4] 0.9 0.9 1.3 1.3
#> ..$P_MAM_tot_m : num [1:4] 1 1 1 1 #> ..$ P_JJA_tot_m : num [1:4] 1 1 1 1
#> ..$P_ann_nWet_m: num [1:4] 1 1 1 1 #>$ attRot : NULL
#> $attPerturb : chr [1:2] "P_ann_tot_m" "P_DJF_tot_m" #>$ attHold : chr [1:3] "P_MAM_tot_m" "P_JJA_tot_m" "P_ann_nWet_m"
#> $attPerturbSamp: num [1:2] 2 2 #>$ attPerturbMin : num [1:2] 0.9 0.9
#> $attPerturbMax : num [1:2] 1.3 1.3 #>$ attPerturbType: chr "regGrid"
# Four target locations in the exposure space
expSpace$targetMat #> P_ann_tot_m P_DJF_tot_m P_MAM_tot_m P_JJA_tot_m P_ann_nWet_m #> 1 0.9 0.9 1 1 1 #> 2 1.3 0.9 1 1 1 #> 3 0.9 1.3 1 1 1 #> 4 1.3 1.3 1 1 1 Having generated the exposure space, we progress to Step B of the work flow. The example climate time series, tank_obs, available in the package is used as the reference to create perturbed time series. Consider the case where the default stochastic models in foreSIGHT without penalty attributes are used to generate the time series for the target locations in the exposure space, similar to use case B2. The function call is set up without specifying the controlFile argument of generateScenarios. The below code generates the perturbed time series and heatmap plots to evaluate the ‘fitness’. Note that the generateScenarios function call takes about 10 minutes to execute. The ‘fitness’ of the scenarios in terms of the biases in simulated attributes relative to the targets values of the attributes generated using the plotScenarios function is shown in the subsequent figure. # ******************************** NOTE **************************** # The following generateScenarios call takes ~10 mins to complete # ****************************************************************** data("tankDat") sim <- generateScenarios(reference = tank_obs, # reference time series expSpace = expSpace, # exposure space numReplicates = 3) # number of replicates plotScenarios(sim) Note that the above generateScenarios function call is set to generate 3 stochastic replicates. If you run the code in this tutorial, you won’t necessarily reproduce the figure below as generateScenarios randomly selects a seedID for each simulation. If you wish to reproduce the simulation in this use case set the seedID argument to 2851. There are large biases (>7.5%) in the perturbed attribute annual total rainfall of the first target. To lower this bias, we can add a penalty for biases in this attribute while using generateScenarios via a JSON controlFile. Consider the case where the penalty weight of this attribute to 10, as a first guess. The below code shows the function call with this penalty setting. Note that we have specified the seedID as 2851 - which is the random seed selected by the function for the simulation shown in the previous figure. Thus, this new simulation starts from the same seed as the previous one, but has an additional penalty attribute and weight (which translates to an additional term in the objective function in terms of the calculations inside the function). More on the seed towards the end of this use case. Thus, in the new simulation we specify penalties for biases in “P_ann_tot_m” with weight set to 10. The fitness of the generated perturbed series is assessed using the heatmaps created using the plotScenarios function. # specify the penalty settings in a list penaltySelection <- list() penaltySelection[["penaltyAttributes"]] <- c("P_ann_tot_m") penaltySelection[["penaltyWeights"]] <- c(10) # write the list into a JSON file penaltySelectionJSON <- jsonlite::toJSON(penaltySelection, pretty = TRUE, auto_unbox = TRUE) write(penaltySelectionJSON, file = paste0(tempdir(), "controlFile.json")) # generate scenarios with penalty setting sim_wPenalty <- generateScenarios(reference = tank_obs, expSpace = expSpace, numReplicates = 3, seedID = 2851, controlFile = paste0(tempdir(), "controlFile.json")) plotScenarios(sim_wPenalty) The figure shows that the biases in the attribute for which penalty is applied, “P_ann_tot_m”, is close to zero. However, the biases in the attributes that are held at historical levels are too high. Thus, it appears that the application of this penalty setting is geared towards lower biases in “P_ann_tot_m” too strongly in these scenarios. To balance the errors, suppose we lower the weight of the penalty attribute to 0.5 instead of 10. The below code shows the corresponding function calls. # specify the penalty settings in a list penaltySelection <- list() penaltySelection[["penaltyAttributes"]] <- c("P_ann_tot_m") penaltySelection[["penaltyWeights"]] <- c(0.5) # write the list into a JSON file penaltySelectionJSON <- jsonlite::toJSON(penaltySelection, pretty = TRUE, auto_unbox = TRUE) write(penaltySelectionJSON, file = paste0(tempdir(), "controlFile.json")) # generate scenarios with penalty setting sim_wPenalty2 <- generateScenarios(reference = tank_obs, expSpace = expSpace, numReplicates = 3, seedID = 2851, controlFile = paste0(tempdir(), "controlFile.json")) plotScenarios(sim_wPenalty2) The figure shows that the biases in the attributes of the simulated time series are more evenly distributed among the attributes in the latest simulation. The biases in all the attributes are about 5% or lower, and the generated perturbed time series correspond well to the desired target values of the attributes. We may proceed with analysing other characteristics of the perturbed time series, and simulating system performances. A note about the use of seedID in these examples: The seedID of the simulations using penalty settings are set to that of the first simulation without using penalty attributes to highlight the differences in simulation fitness with the addition of the penalty setting. If sufficient replicates are generated for the scenarios, the difference would be apparent in the mean fitness without setting the seedID. Three replicates are generated in this use case so that the simulations can be performed without much computational effort. We leave it to the reader to try performing similar simulations with more replicates and a longer time series length to assess the differences. The examples presented in this use case are relatively simple, but illustrates the use of penalty attribute functionality and the trade-offs in fitness involved. We expect that the users would need to apply penalty attributes and weights in most of their stress-tests to obtain the desired perturbed time series. The application becomes more complex in cases where a penalty has to be applied to multiple attributes, and one needs to decide the penalty weights for all of them. A few trial simulations may be necessary to decide the penalty settings to be used for the final stress-test in real world applications. Use Case B6: Choosing a different stochastic model In some cases, one might want to select an alternate stochastic model different from the default models in foreSIGHT. Note that the viewModels() function can be used to view the details of all the stochastic models available in the package. A different model may be selected based on prior knowledge about the ability of the stochastic model to represent characteristics of the climate data that are relevant for the specific case study. This use case provides an example to show how to select a different stochastic generator using the controlFile argument. Consider the the following exposure space that consists of attributes pertaining to annual statistics of rainfall. After deciding the perturbation bounds and the sampling strategy, the createExpSpace function is used to create the exposure space. The function call returns an R list containing the exposure space. The default precipitation stochastic model in foreSIGHT can be used to generate the perturbed time series as shown below. # create the exposure space attPerturb <- c("P_ann_tot_m", "P_ann_P99") attHold <- c("P_ann_maxWSD_m", "P_ann_nWet_m") attPerturbType = "regGrid" attPerturbSamp = c(2, 2) attPerturbMin = c(0.9, 0.9) attPerturbMax = c(1.3, 1.3) expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold) # specify the penalty settings in a list controlFileList <- list() controlFileList[["penaltyAttributes"]] <- c("P_ann_tot_m") controlFileList[["penaltyWeights"]] <- c(0.5) # write the list into a JSON file controlFileJSON <- jsonlite::toJSON(controlFileList, pretty = TRUE, auto_unbox = TRUE) write(controlFileJSON, file = paste0(tempdir(), "controlFile.json")) # generate scenarios sim <- generateScenarios(reference = tank_obs, expSpace = expSpace, controlFile = paste0(tempdir(), "controlFile.json")) Now, suppose you want to select an alternate stochastic generator to generate the perturbed time series, the “wgen” model that has an annual variation in the parameters (modelType = "wgen", and modelParameterVariation = "annual"). These changes can be specified along with the penalty attribute settings in the controlFile as shown below. # specify the penalty settings in a list controlFileList <- list() controlFileList[["penaltyAttributes"]] <- c("P_ann_tot_m") controlFileList[["penaltyWeights"]] <- c(0.5) controlFileList[["modelType"]] <- list() controlFileList[["modelType"]][["P"]] <- "wgen" controlFileList[["modelParameterVariation"]] <- list() controlFileList[["modelParameterVariation"]][["P"]] <- "annual" # write the list into a JSON file controlFileJSON <- jsonlite::toJSON(controlFileList, pretty = TRUE, auto_unbox = TRUE) write(controlFileJSON, file = paste0(tempdir(), "controlFile.json")) # generate scenarios sim <- generateScenarios(reference = tank_obs, expSpace = expSpace, controlFile = paste0(tempdir(), "controlFile.json")) foreSIGHT also has the capability to perform multisite stochastic rainfall simulations, where changes can be specified in the attributes at each site, as well as in the spatial correlation between sites. See the help file for generateScenarios() for details and a working example. 5. Step C: Simulate system performance (runSystemModel) In this step you’ll learn… • What’s meant by the terms System Model, System Performance and Performance Metrics. • Key considerations for selecting a system model and interpreting the results • Key considerations for selecting appropriate performance metrics for analysis • How to integrate foreSIGHT with system models that are either native to the R programming language, as well as those that are written in other programming languages. In this step, we’ll be taking the perturbed climate time series generated in Step B, and running them through a System Model to produce estimates of System Performance. This seems fairly basic, but there are a few things to consider here: • To ensure generality of the modeling framework, the concept of a System Model within foreSIGHT is simply defined as any numerical model that takes time series of hydroclimate data as inputs, and produces one or more quantitative measures of system performance as an output. • The foreSIGHT software doesn’t include any system models (other than an example rainwater tank model for illustrative purposes), but instead the software provides functionality to integrate with a range of third-party system models. This is designed to maximise the utility of the foreSIGHT software by enabling the coupling with any compatible system model. • The Stress Test is of the System Model, not of the system itself. This is an important distinction to keep in mind, since the system model may be a poor representation of the system itself, particularly when it is recalled that the purpose of a stress test is to evaluate system dynamics outside of the range of the historical climate. This is an important caveat that should be articulated as a key assumption underpinning all foreSIGHT results. In the following sections we will present an overview of the key considerations for selecting a given system model, including the identification of relevant performance metrics. This will then be followed by the description of several options for coupling foreSIGHT to a given system model. 5.1. Step C1. Selecting the system model Key Considerations: Step C1 As discussed in the Introduction, among the primary objectives of foreSIGHT are the requirements to (1) enable quantitative stress testing of climate-sensitive systems against a range of plausible climate scenarios, and (2) enable comparison between multiple alternative system configurations to support options analysis and adaptation planning. Both of these objectives require a quantitative System Model that can simulate a system’s response to each of the climate scenarios. Before delving more deeply into the requirements of a system model, it is worth reflecting on what is meant by a system. Common definitions of a system are that it is made up of interacting parts or components that come together to achieve a particular function or purpose, with the former term more commonly used for natural systems whereas the latter term is more commonly used for (human) designed systems. The following concepts are commonly associated with the system definition: - The system boundaries delineate what is contained within the system, and what’s outside (with the latter generally referred to as the system’s environment). In foreSIGHT, the hydroclimate time series generated in Step B define the climate-relevant boundaries to the system, with the system model taking those time series as inputs for subsequent simulation, and with all physical processes that lead to those time series encompassed as part of the system’s environment. We note that other (non-climatic) elements of a system’s environment—such as population growth, societal or technological changes and so forth—currently do not fall within the scope of foreSIGHT and must be included within the system model. - The system is made up of a number of connected components that are effectively the ‘building blocks’ of the system, and within foreSIGHT these are assumed to be represented appropriately within the system model. It is noted that in some cases, separate numerical (computational) models may exist for separate components (for example a regional-scale agricultural system may have surface water, groundwater and crop models representing the various subsystems); however for the purposes of foreSIGHT, it is assumed that all sub-system models are coupled in such a way as to yield a complete mapping between the hydroclimate time series and the system performance. - The system’s function or purpose can be quantitatively described through one or more performance metrics. In many cases these will represent a combination of economic, social and environmental measures that collectively describe the overall system performance. Having described the core elements of a typical system, it is necessary to identify a quantitative system model (or coupled series of models) that are able to represent system response to a range of hydroclimatic conditions. The development and testing of numerical system models is a large topic that is outside the scope of this tutorial and often involves a range of discipline-specific issues and conventions. However as a starting point, the following are a range of key issues to consider in developing the system model: • Performance criteria. What are the key performance criteria or elements of system function/purpose that are relevant for a given analysis? In keeping with the bottom-up philosophical approach to climate impact assessments, understanding and properly defining the key outcomes that a system is achieving or should achieve is the fundamental consideration that should drive all other aspects of system model development. • System components/interactions. What are the key system components and interactions (sometimes referred to as ‘processes’) that collectively enable the system to achieve its function/purpose? In the context of climate-sensitive systems, this often will comprise a combination of natural and human elements, and in many cases will involve both ‘hard’ infrastructure as well as human behaviours/decisions. As part of this step it may be useful to develop a qualitative ‘model’ of how the system functions prior to implementing a more detailed quantitative system model. • System boundaries. Given the above, can clear system boundaries be drawn that delineate the key system components/interactions from its environment? This is often more difficult than it sounds, and is best illustrated by an example. Take the concept of a farmer who is interested in investigating the implications of a changing climate on her business. One might instinctively seek to model how the crops, soils and other ‘on-farm’ features might respond to changing atmospheric conditions, and thus place the system boundaries geographically around her farm. However if farm is irrigated, then there might also be sensitivities in water availability from the upstream catchment, and/or the aquifer if groundwater is an important resource—and of course as part of this one might also consider the other competing agricultural, industrial and/or municipal demands on those water resources. Yet of course it isn’t so much crop yields as farm profitability that would be the prime concern, so perhaps we should consider commodity prices as well (which can be influenced by regional and global climate phenomena). As this example illustrates, it doesn’t take long before the system model encompasses the entire planet! It is therefore important to take a pragmatic approach, recognising that placing a system model appropriately in its environment will be critical to manage these multi-scale issues. • Availability of options. What are the key ‘options’ or ‘levers’ that could be changed to help improve overall system resilience to climate? A system model would need to be able to simulate system response to each of those levers that are to be evaluated as part of the stress test. Articulation of options/levers also helps address the conundrum of the system boundaries, with the system boundaries often selected to encompass the key levers that are being assessed, while excluding those that are outside of the control of the assessor. Returning to the farm example, the farmer would most likely place boundaries around her farm enterprise since farm management is largely within her control, and relegate the other elements to the environment and address these through appropriately specified boundary conditions. However if the problem was one of global food security, then a very different delineation would be required. • Representation and level of detail. At what level of detail/granularity are the processes best represented? There are various extremes here; for example ‘physically based’ models often try to break down the system behaviour by exploring the behaviour of its fundamental elements, whereas more ‘conceptual’ or ‘empirical’ approaches take a more abstracted approach to the system. There are many subtle and not-so-subtle considerations associated with model selection, and we won’t go into the details here other than flagging that these choices are extremely important! Ultimately the key consideration is: how well does the system model enable the quantitative exploration of how system performance is likely to vary under a broad range of plausible climate conditions, in such a way that it allows alternative system options to be evaluated. • Additional practical considerations. In addition to the above considerations, there are many additional practical factors such as the availability/familiarity with a given system model, the availability of data needed to support the model, model runtimes and the feasibility to simulate multiple climate scenarios. Not surprisingly, the selection of a system model represents amongst the most critical decisions in the climate stress test, and given that foreSIGHT is intended to be used for a broad range of environmental, water resource, agricultural and renewable energy systems (amongst others), it is difficult to provide definitive advice. It is therefore strongly recommended that domain experts are properly engaged during this process. If the system model is coded as a wrapper function in R, the system performances can be simulated using the runSystemModel function in foreSIGHT using the perturbed data generated in Step B. Alternately, the perturbed time series can be written to output files in required formats and used to simulate system performances in another programming environment. But the performances would need to be read back into R to continue with the next steps of the work flow for climate stress-testing. We provide code templates for these options after discussing Step C2. If the system model wrapper function is coded in R, it should be provided as the input to the systemModel argument of the runSystemModel function.The systemModel function should simulate the system performances using climate data in a data.frame or list, and the required system model arguments in a list (systemArgs) as the inputs, and return a named list containing the system performance metrics. The selection of the system performance metrics for the stress test are influenced my multiple considerations described in the next section. systemModel = system-model-wrapper-function 5.2. Step C2. Selecting the performance metrics Key Considerations: Step C2 As discussed above, the System Model should be capable of simulating the change in system performance under a range of perturbed climate conditions. Yet there are many subtle issues associated with the concept of defining and measuring System Performance that we elaborate upon here. Generally, an overall system is considered to perform well if it performs well across a broad set of economic, social and environmental criteria. This requires a holistic perspective to representing (and ultimately measuring) system performance. For example, consider the following increasingly broad questions related to system performance: • Is a system (e.g. water supply, renewable energy or agricultural system) able to meet its intended purpose (e.g. to provide secure fresh water to a community, or high-reliability energy, or provision of food and fibre?) • Is a system able to meet its intended purpose at an affordable cost? • Is a system able to meet its intended purpose at an affordable cost, while mitigating negative externalies? As this example illustrates, there are usually a multitude of performance criteria that must be balanced to achieve successful outcomes. Moreover, in many cases a system’s performance criteria involves trade-offs—often but not always between cost and various other metrics of performance. This highlights the importance of taking care in selecting an appropriate mix of performance criteria as part of the broader ‘stress-testing’ exercise, recalling the old adage that ‘whatever gets measured gets managed’. In addition to measuring a range of facets of system performance, the performance metrics should be compatible with the stochastic nature of the climate forcing for each input time series, and thus reflect a statistical characterisation of performance rather than a deterministic one. Examples of statistical metrics include average (or ‘expected’) performance, or the probability of failure, but the notion that a system is not allowed to fail, since this would produce anomalous outcomes (e.g. the system performance could vary significantly for different stochastic replicates, or the performance would deteriorate the longer the stochastic replicate just because it leads to a greater likelihood of supplying the system model with the weather sequence that causes the failure). After selecting the system model and the performance metrics employing all the key considerations detailed above, the system performances can be simulated using the perturbed time series generated in Step B. The names of the performance metrics selected should be provided as the metrics input argument to the runSystemModel function. The system model wrapper function (systemModel) would typically simulate multiple performance metrics, the selected metrics should be a subset of them. metrics = vector-containing-names-of-performance-metrics The below code templates can be used to create scripts to simulated system performances in R or other languages based on how the system model is coded. Code Template C1: Creating wrappers for system models in R An example system model that represents a rain water tank system (named tankWrapper) is available in foreSIGHT and may be used as an example to create wrapper functions for other system models in R. Further details on the rainwater tank model are included in the Inbuilt System Models chapter towards the end of is tutorial. tankWrapper #> function (data, systemArgs, metrics) #> { #> performance <- tankPerformance(data = data, roofArea = systemArgs$roofArea,
#> nPeople = systemArgs$nPeople, tankVol = systemArgs$tankVol,
#> firstFlush = systemArgs$firstFlush, write.file = systemArgs$write.file,
#> fnam = systemArgs$fnam) #> performanceSubset <- performance[metrics] #> return(performanceSubset) #> } #> <bytecode: 0x00000000213d9988> #> <environment: namespace:foreSIGHT> To use custom system models in R, the user should define a wrapper function systemModel adhering to the input-output requirements described below. The code below shows the generalised structure of the systemModel wrapper function. systemModel <- function(data, # data.frame with columns: year, month, day, *var1*, *var2* etc. systemArgs, # list containing the arguments of simulateSystem metrics) { # names of performance metrics (with units of the metrics) # convert data to format required for simulateSystem # Note that "reformat" is a dummy function shown here for # illustration dataforSimulateSystem <- reformat(data) # call simulateSystem and get system performance metrics # simulateSystem is the core system model function systemPerformance <- simulateSystem(data = dataforSimulateSystem, arg1 = systemArgs[[1]], arg2 = systemArgs[[2]], ...) # subset & return metrics (can name performance metrics # here if required) performanceSubset <- systemPerformance[metrics] return(performanceSubset) } simulateSystem is the core system model function that simulates the system and calculates and returns multiple performance metrics. systemModel is a wrapper function that calls simulateSystem, which is intended to interface with runSystemModel(). The systemModel function: - receives data, systemArgs in the specific format - translates/reformats the inputs to the format required by simulateSystem (if necessary) - subsets (if necessary) and returns the relevant metrics systemModel takes in arguments data, systemArgs, and metrics. data is data.frame containing the columns year, month day, *var1*, *var2*. The format of data is the same as observed sample data available in the package shown below. data("tankDat") head(tank_obs) #> year month day P Temp #> 1 2007 1 1 0.0 25.50 #> 2 2007 1 2 0.0 24.50 #> 3 2007 1 3 0.0 29.75 #> 4 2007 1 4 0.0 32.25 #> 5 2007 1 5 0.0 32.50 #> 6 2007 1 6 4.5 26.50 systemArgs is a list containing the system arguments that are required by simulateSystem. metrics is a vector of strings containing the names of the performance metrics that systemModel should return. It is recommended that the names of the performance metrics also include the units of the metrics. This will ensure that the units are available in the names of the performance metrics outputs created using runSystemModel and will be included in the legend labels of plots created using the downstream performance plotting functions in foreSIGHT. Code Template C2: Using an external system model In some cases, the user may be interested in using the perturbed time series generated using generateScenarios() to simulate system performances using system models in other programming languages/environments. The perturbed time series generated using foreSIGHT can be written to a suitable format (e.g. CSV files), used to run an external system model, and the simulated system performances may be loaded back into an R workspace. The system performances can be visualised using the performance plotting functions in foreSIGHT as described in Step D. The below code provides templates to (1) write scenarios generated in Step B to CSV files to be used in other programming environments, and (2) read system performance metrics calculated in other languages and saved in CSV files into an R workspace. The templates can be modified by the user for the system models of their interest. Example code to write scenarios to CSV files for external system models: # ******************************** NOTE **************************** # The following generateScenarios call takes ~30 mins to complete # ****************************************************************** # Create an exposure space attPerturb <- c("P_ann_tot_m","P_ann_seasRatio", "Temp_ann_avg_m") attHold <- c("P_MAM_tot_m", "P_JJA_tot_m", "P_ann_R10_m", "Temp_ann_rng_m") attPerturbType <- "regGrid" attPerturbSamp <- c(2, 2, 2) attPerturbMin <- c(0.8, 0.9, -0.5) attPerturbMax <- c(1.2, 1.1, 0.5) expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold) # Generate perturbed time series data("tankDat") sim <- generateScenarios(reference = tank_obs, expSpace = expSpace, simLengthNyrs = 30, numReplicates = 2) # Example code to write the generated perturbed time series to csv files which may be used to run # system models in other software environments/programming languages #======================================================================================================== repNames <- names(sim[grep("Rep", names(sim))]) # replicate names tarNames <- names(sim[[repNames[1]]]) # target names nRep <- length(repNames) nTar <- length(tarNames) varNames <- c("P", "Temp") # variable names for(r in 1:nRep) { for (t in 1:nTar) { scenarioData <- sim[["simDates"]] # dates of the simulation, will add variables later for (v in varNames) { if (is.character(sim[["controlFile"]])) { if (sim[["controlFile"]] == "scaling") { varTemp <- as.data.frame(sim[[repNames[r]]][[tarNames[t]]][[v]]) } } else { varTemp <- as.data.frame(sim[[repNames[r]]][[tarNames[t]]][[v]][["sim"]]) } names(varTemp) <- v scenarioData <- cbind(scenarioData, varTemp) # add columns containing the variables } outCSVFile <- paste0("Scenario_Rep", r, "_Tar", t, ".csv") # name the csv file as desired write.table(scenarioData, file = outCSVFile, row.names = FALSE, quote = FALSE, sep = ",") } } # Scenario_*.csv files can be used to run external system models Example code to read system performance saved in CSV files to an R workspace # Example code to read the system performances calculated using the generated time series into an # R workspace. It is assumed that the system performances calculated in another software environment # are saved in separate files for each scenario (named by replicate and target numbers) #======================================================================================================== metrics <- c("performance metric 1 (%)", "performance metric 2 (fraction)") # metric names repNames <- names(sim[grep("Rep", names(sim))]) # replicate names tarNames <- names(sim[[repNames[1]]]) # target names nRep <- length(repNames) nTar <- length(tarNames) systemPerformance <- list() # initialised in the format that runSystemModel would return for (m in 1:length(metrics)) { systemPerformance[[metrics[m]]] <- matrix(NA, nrow = nTar, ncol = nRep) } # read from files containing metric values from an external system model for(r in 1:nRep) { for (t in 1:nTar) { # name of the csv file containing the system performance, header is the name of the metric inCSVFile <- paste0("SystemPerformances_Rep", r, "_Tar", t, ".csv") # check.names = FALSE is useful if the metric names contain brackets around the units systemPerfIn <- utils::read.table(inCSVFile, header = TRUE, sep = ",", check.names = FALSE) for (m in 1:length(metrics)) { systemPerformance[[metrics[m]]][t, r] <- systemPerfIn[[metrics[m]]] } } } # systemPerformance an be used as an input to plotPerformanceSpace, plotPerformanceOAT, # and plotPerformanceMulti 6. Step D: Visualise system performances (plotPerformanceSpace, plotPerformanceOAT, plotPerformanceSpaceMulti) In this step you’ll learn… • The different types of performance space visualisations that are available in foreSIGHT • How to add Performance Thresholds to plotting elements • How to combine Bottom-up and Top-down frameworks by integrating climate model projections into the plotting • How to plot performance spaces for systems with multiple performance metrics Now that we’ve generated samples of a Performance Space as part of Step C, we now turn to the challenge of visualisation. On the surface this might seem like a relatively trivial problem relative to the difficulties of generating the exposure space in the first place, however there are various issues to consider: • How to plot performance spaces for different numbers of attributes • Alternative visualisations of system performance, including for situations where there are clearly defined performance thresholds • How to overlay climate projections and other ‘lines of evidence’ onto the performance space • Troubleshooting when the performance spaces do not look the way they should Key considerations associated with plotting are discussed next, followed by various example ‘Use Cases’ of the plotting functionality. Key Considerations: Step D As highlighted in various other parts of this tutorial, most climate-sensitive systems can be extremely complex, and this complexity means that the system can be sensitive to a large number of climate features (or Climate Attributes, using the preferred foreSIGHT terminology). One of the primary objectives of climate stress tests is to uncover how systems might respond to plausible future changes, including the identification of possible modes of system failure. The plotting functions in foreSIGHT are designed with this purpose in mind. However, before delving into the mechanics of plotting, it’s worth stressing an extremely important caveat. We know that climate change can alter a broad range of statistical features (including changes to the averages, seasonality, intermittency, interannual variability, and extremes) of a broad range of hydroclimate variables (rainfall, humidity, wind, evapotranspiration, etc). We also know that systems can respond to climatic stressors in complex and unexpected ways. Yet traditional bottom-up stress tests tend to focus on only a small number of attributes, given limitations in both computational power (both in generating the perturbed time series and running it through the system model, as discussed in Steps B and C) and visualisation (we have difficulty looking at plots in more than two dimensions). This latter point in particular means that, if we are not careful, we could miss major modes of variability just because of the method we’ve chosen to visualise the results. This brings us to an important point: each visualisation contains (often very strong) ceteris paribus assumptions; or in plain English, the conventional plotting approaches assume that the elements not included in the plot will remain constant and usually at the levels of the reference (e.g. historical) climate. To minimise the risk of this issue, we make the following recommendations: • Experiment with multiple plotting options, always being aware of the ceteris paribus assumptions that are implicit in any decision not to plot a certain attribute • Compare the results from high-dimensional plots (that generally contain greater information content and fewer assumptions, but are also harder to interpret) with low-dimensional plots of key attributes • Always compare the results from top-down assessments with those from bottom-up assessments. This latter point is a particularly important one, and generally not discussed in most of the scenario-neutral literature. A top-down analysis involves running a set of (often downscaled) climate projections through a system model and plotting the system performance. These performance values can be superimposed on the results of a bottom-up analysis using plotting features that are available in foreSIGHT, to see whether these yield similar results. If they produce different results, it must be because the system is somehow sensitive to features (or attributes) that are not the ones being plotted on the performance space; or, in other words, the discrepancy suggests that the ceteris paribus assumption is not being met. Hopefully you don’t have this difficulty, but if you do we’ll cover possible approaches to address this discrepancy in the next update for this vignette. Finally, we note that just as every system is different, so too are the needs of each stress test. We’ve tried to make the plotting functions flexible to a range of visualisation options to enable high levels of customisation, and new options are regularly being added so make sure to check the plotting function help files for the latest information. foreSIGHT contains three functions to visualise the system performances - plotPerformanceOAT, plotPerformanceSpace, and plotPerformanceSpaceMulti.The performance plotting functions use the system performance and the simulation summary as input arguments. There are three functions available in foreSIGHT to plot performance metrics. Brief descriptions of these functions are provided below and detailed usages are illustrated in the use cases presented in the following sub-sections. plotPerformanceOAT: The function creates line plots (with shading to show the range from replicates) to show the variations in a system performance metric with one-at-a-time (OAT) perturbations in attributes. This function is intended for use with an “OAT” exposure space, assuming all other attributes are held constant (usually at their historical levels). However, if “OAT” perturbations exist in a “regGrid” exposure space, the function will subset these targets to create the plots. This subset can be thought of as a slice through the exposure space when the other attributes are kept at historical levels. If the exposure space does not contain attribute values at historical levels, the “OAT” plots cannot be created. plotPerformanceOAT will print an error to inform that there are no “OAT” perturbations in the exposure space in such an instance. plotPerformanceSpace: The function plots two-dimensional heatmaps and contours of the selected system performance metric at multiple target locations in the exposure space. The performance metric outputs created from a ‘regGrid’ exposure space is termed the “performance space” as it contains the system performance at multiple target locations in the exposure space, that can be visualised using two-dimensional plots. If the exposure space contains more than two dimensions, the function can be used to create a plot using two dimensions selected by the user. To analyse higher-dimensional performance spaces, the plots can be placed in panels to asses the impact of simultaneous perturbations in multiple attributes. In some cases, there may be a clear performance ‘threshold’, above or below which the system performance becomes undesirable and/or triggers a system ‘failure’ (for example, an agreed minimum specified level of system reliability). In this case, the user may specify the threshold value of the performance metric as an input argument, resulting in the addition of a thick contour line to the plot in order to mark this threshold in the performance space. It is possible to add various lines of evidence to provide guidance on which parts of the exposure space are more or less plausible in a future climate. For example it is possible to superimpose projections from climate models to the performance space plotted using plotPerformanceSpace. This climate data should contain values of projected changes in attributes that are used as the axes of the performance space, and which need to be developed separately from the foreSIGHT work flow. For example, one might extract relevant attribute values from a 30 year future timeslice from the relevant climate model output, potentially after downscaling, bias correction or other processing. One may also elect to use the climate model simulations (potentially after downscaling, bias correction or other processing) as inputs to the system model to generate new performance values corresponding to each projection time series, and in this case it is possible to plot the performance values corresponding to the climate model simulations as coloured data points in plots created using plotPerformanceSpace, using the same colour scale. plotPerformanceSpaceMulti: The third function available in foreSIGHT for plotting system performance is the joint presentation of multiple system performance metrics to facilitate decision making. The function plots contours showing the number of performance metric thresholds exceeded in the performance space. The user should specify the minimum or maximum thresholds of each performance metric as input arguments for calculation. If the exposure space contains many target locations and the perturbed time series contains multiple replicates, the simulation (sim) can be quite large in size, . The getSimSummary function in foreSIGHT can be used to get the summary metadata (exposure space, controlFile, simulation seed etc.) of a simulation, which is easy to store and use with the plotting functions in foreSIGHT. simSummary <- getSimSummary(sim) The mandatory arguments to the these plotting functions are the simulation (or simulation summary, see below) of the perturbed scenarios generated in Step B (sim), and the performance metric calculated by the system model in Step C (performance) to be plotted. plotPerformanceOAT(performance, sim) plotPerformanceSpace(performance, sim) plotPerformanceSpaceMulti(performance, sim) The functions contain other arguments to subset the data and control the appearance of the plots. We present some use cases to illustrate these capabilities in the following sub-sections. Use Case D1: Plotting performance metrics of OAT perturbations A climate stress-test typically starts with a preliminary assessment using ‘OAT’ perturbations in the climate attributes selected based on an understanding of the system dynamics. Such an assessment provides guidance for the selection of attributes for a more rigorous stress-test, with the caveat that system vulnerabilities arising from simultaneous perturbations in the attributes will not be considered. Suppose that you have selected the preliminary attributes, created an ‘OAT’ performance space, generated perturbed time series and run the system model to simulate system performance metrics employing all the key considerations outlined in this tutorial. As the next step, you would need to visualise the performance metrics to understand the system responses to ‘OAT’ perturbations in the selected climate attributes. The function plotPerformanceOAT can be used for this purpose. Fine resolution perturbations of the selected attributes, and multiple replicates are necessary to obtain a smoother picture of the changes in performance metric. Since it is not feasible to generate such perturbed time series quickly, we use example datasets available in the foreSIGHT package for illustration here. The data are the summary of an ‘OAT’ perturbed simulation (egSimOATSummary) and the performance metrics of one configuration of the rain water tank system model created using the simulation (egSimOATPerformance). The daily operation of the rain water tank is simulated using precipitation and temperature time series as input, and these metrics quantify the performance of the tank (see section on inbuilt system models) First lets understand the structure of the simulation and system performance metrics. # load data data("egSimOATSummary") data("egSimOATPerformance") egSimOATSummary$expSpace$attPerturb # the perturbed attributes #> [1] "P_ann_tot_m" "P_ann_seasRatio_m" "P_ann_nWet_m" "P_ann_R10_m" utils::str(egSimOATSummary$expSpace) # targets in the exposure space
#> List of 8
#> $targetMat :'data.frame': 88 obs. of 11 variables: #> ..$ P_ann_tot_m : num [1:88] 0.8 0.816 0.832 0.847 0.863 ...
#> ..$P_ann_seasRatio_m: num [1:88] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_ann_nWet_m : num [1:88] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_ann_R10_m : num [1:88] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_Feb_tot_m : num [1:88] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_SON_dyWet_m : num [1:88] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_JJA_avgWSD_m : num [1:88] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_MAM_tot_m : num [1:88] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_DJF_avgDSD_m : num [1:88] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$Temp_ann_rng_m : num [1:88] 0 0 0 0 0 0 0 0 0 0 ... #> ..$ Temp_ann_avg_m : num [1:88] 0 0 0 0 0 0 0 0 0 0 ...
#> $attRot : chr [1:88] "P_ann_tot_m" "P_ann_tot_m" "P_ann_tot_m" "P_ann_tot_m" ... #>$ attPerturb : chr [1:4] "P_ann_tot_m" "P_ann_seasRatio_m" "P_ann_nWet_m" "P_ann_R10_m"
#> $attHold : chr [1:7] "P_Feb_tot_m" "P_SON_dyWet_m" "P_JJA_avgWSD_m" "P_MAM_tot_m" ... #>$ attPerturbSamp: num [1:4] 20 32 14 22
#> $attPerturbMin : num [1:4] 0.8 0.8 0.85 0.9 #>$ attPerturbMax : num [1:4] 1.1 1.3 1.05 1.25
#> $attPerturbType: chr "OAT" utils::str(egSimOATSummary, max.level = 1) #> List of 13 #>$ Rep1 :List of 88
#> $Rep2 :List of 88 #>$ Rep3 :List of 88
#> $Rep4 :List of 88 #>$ Rep5 :List of 88
#> $Rep6 :List of 88 #>$ Rep7 :List of 88
#> $Rep8 :List of 88 #>$ Rep9 :List of 88
#> $Rep10 :List of 88 #>$ simDates :'data.frame': 109572 obs. of 3 variables:
#> $expSpace :List of 8 #>$ controlFile:List of 6
utils::str(egSimOATPerformance) # system performance metrics from simulations of the tank model
#> List of 2
#> $Avg. Deficit (L): num [1:88, 1:10] 26.3 25.9 26.7 27 26.6 ... #>$ Reliability (-) : num [1:88, 1:10] 0.813 0.816 0.813 0.811 0.812 ...
The simulation contains four perturbed precipitation attributes and a total of 88 target locations in the exposure space, generated using an ‘OAT’ perturbation method. The minimum-maximum bounds and the number of samples show that the perturbations have a resolution of 0.015 to 0.017. There are twenty replicates in the simulation to incorporate random variability into the generated data. The system performance data contains two performance metrics of the rain water tank model - the average daily deficit of water in litres, and the reliability of the tank in meeting the water demand as a fraction. The performance metrics can be plotted using the function plotPerformanceOAT. The function contains arguments to specify the metric to be plotted (metric), the colour of the plots (col), the number of top replicates (in terms of fitness) to be used for the plots (topReps), and the the y-axis limit (ylim). The topReps argument sorts the replicates by closeness of fit in terms of the objective function used for optimisation, and uses the specified topReps number replicates to create the plots. In the example code below, the top 8 replicates out of the total 10 are used. The function creates paneled plots showing the variations in the performance metric with changes in each perturbed attribute.
p1 <- plotPerformanceOAT(performance = egSimOATPerformance, # list of performance metrics
sim = egSimOATSummary, # simulation metadata
metric = "Reliability (-)", # the metric to be plotted
col ="orange", # colour of the plot
topReps = 8, # number of top replicates to be used
ylim = c(0.7, 0.9)) # y-axis limits
The figure shows the variation in the performance metric “Reliability (-)” with changes in the four perturbed attributes. The performance metric is most sensitive to two attributes - mean annual total rainfall, and the mean annual seasonal ratio. The results indicate that the attributes may be selected for a more rigorous stress-test using a ‘regGrid’ exposure space of multiple target locations involving simultaneous perturbations in these attributes.
Use Case D2: Plotting performance spaces
Comprehensive climate stress-test of a system typically involves the use of an exposure space with ‘regGrid’ perturbations in two or more attributes (i.e. a multi-dimensional exposure space). The performance metric values corresponding to all target locations in the exposure space is referred to as the performance space. Visualisation of performance spaces using multiple performance metrics are necessary to identify the most vulnerable areas in an exposure space.
Suppose you’ve conducted the first three steps of a stress-test using a ‘regGrid’ exposure space - creation of the exposure space, generation of perturbed time series, and simulations using the system model to calculate the system performance metrics. In the next step, you need to visualise the performance space using the functions plotPerformanceSpace and plotPerformanceSpaceMulti to draw conclusions about system vulnerability and system failure from the stress-test. Smooth performance spaces are often necessary to draw inferences from the data. Practically, finer resolutions of the exposure space and multiple stochastic replicates are required to obtain smooth performance spaces. We use example data available in foreSIGHT to illustrate. Consider the following example data - a stochastic simulation summary (egSimSummary), and corresponding performance metrics calculated using a configuration of the tank system model (egSimPerformance). The tank model simulated the daily operation of a rain water tank using precipitation and temperature time series as input and the calculated metrics quantify the performance of the tank (see section on inbuilt system models).
# load data
data("egSimSummary")
data("egSimPerformance")
egSimSummary$expSpace$attPerturb # the perturbed attributes
#> [1] "P_ann_tot_m" "P_ann_seasRatio"
utils::str(egSimSummary$expSpace) # target locations in the exposure space #> List of 8 #>$ targetMat :'data.frame': 160 obs. of 11 variables:
#> ..$P_ann_tot_m : num [1:160] 0.8 0.833 0.867 0.9 0.933 ... #> ..$ P_ann_seasRatio: num [1:160] 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 ...
#> ..$P_ann_nWet_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_ann_R10_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_Feb_tot_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_SON_dyWet_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_JJA_avgWSD_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_MAM_tot_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_DJF_avgDSD_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ Temp_ann_rng_m : num [1:160] 0 0 0 0 0 0 0 0 0 0 ...
#> ..$Temp_ann_avg_m : num [1:160] 0 0 0 0 0 0 0 0 0 0 ... #>$ attRot : NULL
#> $attPerturb : chr [1:2] "P_ann_tot_m" "P_ann_seasRatio" #>$ attHold : chr [1:9] "P_ann_nWet_m" "P_ann_R10_m" "P_Feb_tot_m" "P_SON_dyWet_m" ...
#> $attPerturbSamp: num [1:2] 10 16 #>$ attPerturbMin : num [1:2] 0.8 0.8
#> $attPerturbMax : num [1:2] 1.1 1.3 #>$ attPerturbType: chr "regGrid"
utils::str(egSimSummary, max.level = 1)
#> List of 13
#> $Rep1 :List of 160 #>$ Rep2 :List of 160
#> $Rep3 :List of 160 #>$ Rep4 :List of 160
#> $Rep5 :List of 160 #>$ Rep6 :List of 160
#> $Rep7 :List of 160 #>$ Rep8 :List of 160
#> $Rep9 :List of 160 #>$ Rep10 :List of 160
#> $simDates :'data.frame': 109572 obs. of 3 variables: #>$ expSpace :List of 8
#> $controlFile:List of 6 utils::str(egSimPerformance) # system performance metrics from simulations of the tank model #> List of 2 #>$ Avg. Deficit (L): num [1:160, 1:10] 25.3 22.9 21 22.5 21.3 ...
#> $Reliability (-) : num [1:160, 1:10] 0.818 0.836 0.85 0.84 0.845 ... The example simulation contains two perturbed precipitation attributes and a total of 160 target locations in a ‘regGrid’ exposure space. Considering the minimum-maximum bounds and the number of samples we see that the perturbations have a resolution of about 0.03. There are twenty replicates in the simulation to incorporate random variability in the generated data. The system performance data contains two performance metrics of the rain water tank model - the average daily deficit of water in litres, and the reliability of the tank in meeting the water demand as a fraction. These performance metrics can be visualised using the function plotPerformanceSpace. Consider the performance space of the metric - Avg. Deficit (L) can be plotted using the code shown below, which shows the variation in this metric with perturbations in the two perturbed attributes. p2 <- plotPerformanceSpace(performance = egSimPerformance, # list of performance metrics sim = egSimSummary, # simulation metadata metric = "Avg. Deficit (L)", # the metric to be plotted attX = "P_ann_tot_m", # x-axis perturbed attribute attY = "P_ann_seasRatio", # y-axis perturbed attribute topReps = 8, # number of top replicates to be used colMap = viridisLite::plasma(20), # colour map to use colLim = c(18, 34)) # colour limits The plotPerformanceSpace function creates 2-dimensional heatmaps and contours showing the performance space. The function arguments are used to specify the metric to be plotted (metric), the perturbed attributes to use on the x- and y-axes (attX, attY), the colour map and colour limits of the plot (colMap, colLim), the number of top replicates (in terms of fitness) to be used for the plots (topReps), and the the y-axis limit (ylim). The attX and attY arguments become especially relevant for performance spaces that contain more than two perturbed attributes - these arguments specify the slice of the performance space to be plotted. The topReps argument sorts the replicates by closeness of fit in terms of the objective function used for optimisation, and uses the specified topReps number replicates to create the plots. In the example code above, the top 8 replicates out of the total 10 are used. From the figure (below), we see that the most vulnerable areas (higher values of average daily deficit) of the performance space occur with simultaneous decreases in mean annual total rainfall, and increases in seasonal ratio - the upper left portion of the performance space. To understand the system performance better, we need to superimpose additional information on the performance space to understand (a) which areas of the performance space violate threshold criteria (if they exist) of the performance metric? (b) how plausible are the perturbed values of the attributes from alternate climate data like projections? Consider that the maximum threshold value of the average daily deficit beyond which the tank system becomes economically non-viable is 29 litres. This maximum threshold is one of the design criteria used while designing the rain water tank to operate under the current climate conditions and corresponds to about 10% of the water use of a single person house hold. We need to know which areas of the performance space exceed this maximum threshold under perturbations in climate. We can add this threshold as a thick contour line to the performance space plotted above using the perfThreshold and perfThreshLabel arguments of the plotPerformanceSpace function. In addition, suppose we have alternate climate information from climate projections for the region corresponding to a future time slice centered on the year 2050. We want to superimpose these top-down projections on the performance space to understand how plausible the perturbations simulated in the bottom-up climate impact assessment are. Here we use example climate data available in the package for demonstration. We can add additional information to the performance space as demonstrated below. data("egClimData") p3 <- plotPerformanceSpace(performance = egSimPerformance, # list of performance metrics sim = egSimSummary, # simulation metadata metric = "Avg. Deficit (L)", # the metric to be plotted attX = "P_ann_tot_m", # x-axis perturbed attribute attY = "P_ann_seasRatio", # y-axis perturbed attribute topReps = 8, # number of top replicates to be used colMap = viridisLite::plasma(20), # colour map to use colLim = c(18, 34), # colour limits perfThresh = 29, # thershold value perfThreshLabel = "Max. Deficit (29L)", # thershold label climData = egClimData # other climate data ) The above figure of the performance space shows that perturbations roughly higher than 1.2 in the seasonal rainfall ratio, combined with a reduction in annual total rainfall (perturbation values lower than 1) would breach the maximum average deficit threshold criteria of the tank model. Alternately, if the reduction if annual rainfall is higher (perturbation values lower than 0.9) combined with perturbations higher than about 1.1 in the seasonal ratio would also cause the maximum deficit threshold criteria to be breached. But looking at the super imposed climate projections, we see that 5 of the 6 data points fall in areas well below the threshold. One of the points is close to the threshold line, but has not breached the threshold. Thus, the climate perturbations that result in performance metric values higher than the maximum threshold do not appear to be very plausible based on these alternate lines of evidence. Suppose we use the precipitation and temperature time series from climate projections to run the system model (in this case the tank model) and obtain performance metrics in a future climate from a top-down assessment. Such performance metric estimates can be input to the plotPerformanceSpace function as a column in the data.frame input to the climData argument. In this case, the superimposed climate data points will be coloured using the same colour scale as the performance space to enable comparison of the performance metric estimates from bottom-up and top-down assessments. The column name should match the name of the metric plotted in the performance space. The sixth column of the data egClimData in the package contains values of average deficit, the column name of which is slightly different. The reader may rename this column and re-plot the above performance space for an example of how this works. data("egClimData") names(egClimData)[6] <- "Avg. Deficit (L)" p4 <- plotPerformanceSpace(performance = egSimPerformance, # list of performance metrics sim = egSimSummary, # simulation metadata metric = "Avg. Deficit (L)", # the metric to be plotted attX = "P_ann_tot_m", # x-axis perturbed attribute attY = "P_ann_seasRatio", # y-axis perturbed attribute topReps = 8, # number of top replicates to use colMap = viridisLite::plasma(20), # colour map to use colLim = c(18, 34), # colour limits perfThresh = 29, # thershold value perfThreshLabel = "Max. Deficit (29L)", # thershold label climData = egClimData # other climate data ) Imagine a case when you have more than two attributes that are perturbed using ‘regGrid’ sampling - resulting in a multi-dimensional performance space. You can specify attX and attY in the plotPerformanceSpace function call to select which slice of the performance space is to be plotted. The figures representing multiple slices can be placed together in panels to visualise the multiple dimensions of the performance space. The dimensions of the performance space that are not displayed (i.e., the perturbed attributes that are not attX or attY) are averaged in the figure. For example, imagine that the example data above has another perturbed attribute “P_ann_R10_m” with perturbations ranging from 0.8 to 1.2. In this case, the performance spaces displayed above would be averaged across the perturbations in this attribute. Now suppose you wish to subset the range of this hidden dimension before plotting - maybe you are interested in the performance spaces for only the perturbations that reduce or maintain the current levels of “P_ann_R10_m”. The argument attSlices of plotPerformanceSpace is intended for use in such a scenario. To specify the hypothetical slice described above, the attSlices argument would be specified as shown. attSlices <- list() attSlices[["P_ann_R10_m"]] <- c(0.8, 1) # the minimum & maximum bounds for subsetting This functionality is not demonstrated in this use case since the example contains only two perturbed attributes. We leave it to the reader to perform an experiment using a multi-dimensional performance space that uses the attSlices argument. From the above figures and discussion we understand the patterns in the performance space of the metric “Avg. Deficit (L)” in the example data. It is common to use more than one metric to assess multiple performance criteria for the same system. The example performance data used in this section contains two performance metrics—the average deficit and tank reliability. The maximum threshold value for the average deficit is 29 litres. Suppose, in addition, we also desire a rain water tank reliability of at least 0.82. In other words, we want to specify a minimum threshold of 0.82 for the metric “Reliability (-)” and a maximum threshold of 29 litres for the metric “Avg. Deficit (L)” and assess the vulnerability of the performance space using both these criteria. The function plotPerformanceSpaceMulti can be used for this purpose to create plots using multiple performance metrics. The function plots filled contours to show the number of performance thresholds exceeded in the performance space. # plot number of performance thresholds exceeded p5 <- plotPerformanceSpaceMulti(egSimPerformance, # 2 performance metrics egSimSummary, # simulation summary perfThreshMin = c(NA, 0.82), # min thresholds for each metric # use NA if not applicable perfThreshMax = c(29, NA), # max thresholds for each metric attX = "P_ann_tot_m", # x-axis perturbed attribute attY = "P_ann_seasRatio", # y-axis perturbed attribute topReps = 8, # number of top replicates to use climData = egClimData, # other climate data col = viridisLite::inferno(7, direction = -1) # colours to use ) When both the performance metrics are assessed together, at least one performance threshold is exceeded in larger areas of the performance space. Two out of six climate model projections are located in areas where one performance threshold is exceeded indicating that there is some plausibility that this rain water tank would not be viable in a future climate. 7. Step E: Analyse system performances and facilitate decision making (plotOptions) In this step you’ll learn… • Compare the performance spaces from multiple system configurations or operating policies The process of climate ‘stress-testing’ is often undertaken to facilitate decisions involving choices between multiple system configurations or operating policies. Step E of the process involves comparison of the results from these alternate choices under climate perturbations. Current foreSIGHT functionality for comparing multiple options involves interrogating each option as described in Step D, and creating difference plots of performance metrics for two alternate system options using a function plotOptions. It is intended that future versions of foreSIGHT will significantly expand the comparative capability including under a range of decision-theoretic frameworks. Key Considerations: Step E The comparison of multiple alternative options can be a valuable tool for adaptation decision making, with the core element of the comparison being an analysis of how different options impact on the Performance Spaces. This can provide a range of useful information including: • The overall sensitivity of alternative options to plausible climate changes • The climate conditions over which alternative system configurations are ‘acceptable’ or result in a ‘failure’ (for situations where there are clearly defined performance thresholds) • The extent to which alternative options improve overall system perforamnce relative to climate projections, by incorporating climate model output and/or other lines of evidence. By superimposing climate projections for different future time horizons, it may also be possible to use these analyses to inform adaptation triggers, by identifying conditions when the system performance is expected to become unacceptable. foreSIGHT contains a function named plotOptions that can be used to create plots of the differences in performance metrics calculated using two system options. The function uses the performance metrics calculated by running two alternate system model configurations using the same perturbed time series, and the perturbed simulation summary as inputs. The function call using the three mandatory function arguments is shown below. plotOptions(performanceOpt1, # performance metrics of system option 1 performanceOpt2, # performance metrics of system option 2 sim) # summary of the perturbed simulation The use case below demonstrates the usage of this function. Use Case E1: Plotting system options Climate stress-tests often involves comparison of the performance of two or more alternate configurations of the system to identify the best option. In this use case, we’ll compare the performance of two configurations of a rain water tank (for details of this system model see section on inbuilt system models). Consider the proposal to install a rain water tank in a house. Two alternate configurations are proposed for the tank. The cost of these alternate configurations are the same - the choice of the tank thus depends only on the differences in the performances of the two systems. The two configurations are described below. Based on the layout of the house, it is possible to harvest rain water from a total of 205 sq.m of roof area (including house and garage) to direct water to a rain water tank of volume 2400 litres. This tank requires 2 mm/m2 of the initial water collected from each storm (first flush) to be removed for water quality reasons. Let us name this system “Tank 1”. As an alternative, it is possible to install a rainwater tank to collect water only from the roof area of the house (155 sq.m) without including the garage. The location of the proposed installation in this case allows for a larger tank volume of 2750 litres. This tank requires 1mm/m2 of the first flush to be removed from each storm. Let’s name this system “Tank 2”. Steps A to C of the bottom-up climate impact assessment work flow detailed in this document are applied to the two systems, and the generated performance metrics are available as example data sets in the package. The structure of the data are shown below. # load data data("egSimSummary") # summary of the stochastic simulation data("egSimPerformance") # performance metrics of "Tank 1" data("egSimPerformance_systemB") # performance metrics of "Tank 2" egSimSummary$expSpace$attPerturb # the perturbed attributes #> [1] "P_ann_tot_m" "P_ann_seasRatio" utils::str(egSimSummary$expSpace) # target locations in the exposure space
#> List of 8
#> $targetMat :'data.frame': 160 obs. of 11 variables: #> ..$ P_ann_tot_m : num [1:160] 0.8 0.833 0.867 0.9 0.933 ...
#> ..$P_ann_seasRatio: num [1:160] 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 ... #> ..$ P_ann_nWet_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_ann_R10_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_Feb_tot_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_SON_dyWet_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_JJA_avgWSD_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$P_MAM_tot_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ... #> ..$ P_DJF_avgDSD_m : num [1:160] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$Temp_ann_rng_m : num [1:160] 0 0 0 0 0 0 0 0 0 0 ... #> ..$ Temp_ann_avg_m : num [1:160] 0 0 0 0 0 0 0 0 0 0 ...
#> $attRot : NULL #>$ attPerturb : chr [1:2] "P_ann_tot_m" "P_ann_seasRatio"
#> $attHold : chr [1:9] "P_ann_nWet_m" "P_ann_R10_m" "P_Feb_tot_m" "P_SON_dyWet_m" ... #>$ attPerturbSamp: num [1:2] 10 16
#> $attPerturbMin : num [1:2] 0.8 0.8 #>$ attPerturbMax : num [1:2] 1.1 1.3
#> $attPerturbType: chr "regGrid" utils::str(egSimPerformance) # system performance metrics #> List of 2 #>$ Avg. Deficit (L): num [1:160, 1:10] 25.3 22.9 21 22.5 21.3 ...
#> $Reliability (-) : num [1:160, 1:10] 0.818 0.836 0.85 0.84 0.845 ... utils::str(egSimPerformance_systemB) # system performance metrics #> List of 2 #>$ Avg. Deficit (L): num [1:160, 1:10] 23.5 21.4 19.1 20.7 19.2 ...
#> $Reliability (-) : num [1:160, 1:10] 0.832 0.847 0.864 0.854 0.861 ... The simulation contains two perturbed precipitation attributes and a total of 160 target locations in a ‘regGrid’ exposure space. Considering the minimum-maximum bounds and the number of samples we see that the perturbations have a resolution of about 0.03. The system performance data of both the system configurations contain two performance metrics of the rain water tank model - the average daily deficit of water in litres (“Avg. Deficit (L)”), and the reliability of the tank in meeting the water demand as a fraction (“Reliability (-)”). If you interrogate egSimSummary, you will see that the stochastic simulation contains twenty replicates to account for random variability in the generated data. The desired system performance criteria based on the requirements of the household are: (1) the maximum average daily deficit of water from the tank should not be higher than 29 litres, and (2) the reliability of the system should be at least 0.82. The performances of the two system configurations have to be assessed using these threshold criteria. The performance of the individual systems (“Tank 1”, “Tank 2”) can be assessed using performance spaces like the ones shown in Use Case D2. Such figures provide insight into the performance metrics and number of thresholds exceeded for each individual system configuration. After understanding the patterns in the performance spaces of the individual systems, we use the plotOptions function to create plots of the differences in performance metrics of “Tank 1” and “Tank 2”, and the shift in the performance threshold contour as a part of Step E of the work flow. This functionality is demonstrated in this use case. Similar to the performance spaces shown in Step D, these difference plots can also be superimposed with thick contour lines of the thresholds of the performance metric for each system, and climate data from alternate sources (eg: climate projections). The figure below shows the differences between “Tank 1” and “Tank 2” for both the performance metrics. Similar to plotPerformanceSpace, the plotOptions function contains arguments to control the appearance/labels of the plot (colMap, colLim, opt1Label, opt2Label, titletext), the axes and slices of the space (attX, attY, attSlices), and the number of replicates to use (topReps). data("egClimData") # load climate projections data p6 <- plotOptions(performanceOpt1 = egSimPerformance, # performance metrics of option 1 performanceOpt2 = egSimPerformance_systemB, # performance metrics of option 2 sim = egSimSummary, # simulation metadata metric = "Avg. Deficit (L)", # the metric to be plotted attX = "P_ann_tot_m", # x-axis perturbed attribute attY = "P_ann_seasRatio", # y-axis perturbed attribute topReps = 8, # number of top replicates to be used opt1Label = "Tank 1", # label of option 1 opt2Label = "Tank 2", # label of option 2 titleText = "Avg Deficit: Tank 2 - Tank 1", # plot title perfThresh = 29, # threshold value of the metric perfThreshLabel = "Max. Deficit (29L)", # label of the threshold contour climData = egClimData, # other climate data colMap = RColorBrewer::brewer.pal(9, "Blues"), # colour map to use colLim = c(-2, -1.4)) # colour limits The performance metric reliability can be plotted using the code below. data("egClimData") # load climate projections data # colour limits p7 <- plotOptions(performanceOpt1 = egSimPerformance, # performance metrics of option 1 performanceOpt2 = egSimPerformance_systemB, # performance metrics of option 2 sim = egSimSummary, # simulation metadata metric = "Reliability (-)", # the metric to be plotted attX = "P_ann_tot_m", # x-axis perturbed attribute attY = "P_ann_seasRatio", # y-axis perturbed attribute topReps = 8, # number of top replicates to be used opt1Label = "Tank 1", # label of option 1 opt2Label = "Tank 2", # label of option 2 titleText = "Reliability: Tank 2 - Tank 1", # plot title perfThresh = 0.82, # threshold value of the metric perfThreshLabel = "Min. Reliability (0.82)", # label of the threshold contour climData = egClimData, # other climate data colMap = viridisLite::plasma(50), # colour map to use colLim = c(0.01, 0.015)) # colour limits The figure shows the differences in one performance metric for the two system options. In the case of the metric average deficit , “Tank 2” shows lower values than “Tank 1” in all areas of the performance space. Consequently, the threshold contour of “Tank 2” shows a small area of the performance space that is vulnerable to perturbations in the selected climate attributes, compared to “Tank 1”. The inference is the same for the performance metric “reliability”. Hence, in this use case the results of the stress-test indicate that “Tank 2” is preferable as it should operate satisfactorily across a wider range of conditions, including the drier climate projected by the alternate climate data. 8. Inbuilt System Models As described as part of Step C, the foreSIGHT modelling software is designed to work with a range of third-party system models (in essence, any system model that is either programmed in R or that can be run from a command line with weather time series that can be modified from R), thereby maximising overall flexibility. However, to explore and illustrate key elements of foreSIGHT functionality, an inbuilt rainwater tank system model is provided as part of the software package. 8.1. Rainwater Tank Model The rainwater tank model is a representation of a domestic rainwater tank system, which has been designed to meet both indoor (grey water) and outdoor (garden irrigation) water demands. Although this system model example is simpler than anticipated real-world usages of the foreSIGHT model, it nevertheless provides important insights associated with system sensitivities, the role of temporal dynamics and the behaviour of storages, the interaction between supply and demand, and the identification and comparison of multiple system configurations. The core functionality of this model is now described. A schematic representation of the rainwater tank system model is shown in the figure below. Rain falling on the roof of a house is captured and directed towards the rainwater tank. Before the rainwater is able to enter the tank, a depth of water (called the first flush) is removed from the start of each storm for water quality reasons. The water remaining after the first flush extraction flows into the rainwater tank. The amount of water supplied by the tank is calculated based on the water level in the tank. The indoor and outdoor water demands deplete the water stored in the tank. The indoor water demand is assumed to be constant throughout the year, and the outdoor water demand varies seasonally. The outdoor seasonal demand pattern is also dependent upon the daily temperature. For example, on hot days (say above 28oC), the gardener is assumed to apply more than the seasonal average and vice versa. The operation of the rain water tank system model is thus dependent upon the climate variables rainfall and temperature. The tank model simulates rainwater capture and water use processes at a daily time step using rainfall and temperature time series as input. The parameters of the model that the user should specify are: the area of the roof used for rain water harvesting, the volume of the tank, the number of people using the water and the depth of water removed as the first flush. These parameters can be varied for alternate system designs. The system model estimates the performance of the rainwater tank using five metrics: • Average Daily Deficit - the volume of average deficit in water supplied by the tank in litres • Reliability - the fraction of days on which the full demand could be supplied • Volumetric reliability - the total water supplied as a fraction of the total demand • System efficiency - the amount of water used as a percentage of the water captured by the roof • Storage efficiency - the amount of water spilled as a percentage of the water captured by the rainwater tank • Average tank storage - the volume of average daily storage in the tank in litres This example system model provides sufficient scope for climate stress testing using foreSIGHT. This is because the tank responds to multiple climate drivers (i.e. rainfall and temperature), and the removal of the first flush at the start of the storm means that the wet-dry pattern of the rainfall and the seasonality of the demand pattern may become important in the functioning of the tank. The system model is available as the tankWrapper() function in foreSIGHT. The performance metrics available in the tank model can be viewed using the viewTankMetrics() function. Daily observed precipitation and temperature over the period from 2007 to 2016 obtained by combining data from multiple station locations to represent the general climate of Adelaide, South Australia is included in the demonstration, and may be be loaded using the data command. A typical function call to the rainwater tank system model (tankWrapper) is shown below. The call returns the system performance metrics specified by the user. # Load example climate data data(tankDat) # View the metrics available for use tankMetrics <- viewTankMetrics() #> [1] "volumetric reliability (fraction)" "reliability (fraction)" #> [3] "system efficiency (%)" "storage efficiency (%)" #> [5] "average tank storage (L)" "average daily deficit (L)" # User input: system model parameters systemArgs <- list(roofArea = 50, # roof area in m2 nPeople = 1, # number of people using water tankVol = 3000, # tank volume in L firstFlush = 1, # depth of water removed each event in mm write.file = FALSE, # write output tank timeseries to file T/F? fnam = "tankperformance.csv") # name of file # performance metric chosen for reporting metrics <- c("average daily deficit (L)", "reliability (fraction)") performanceOut <- tankWrapper(data = tank_obs, systemArgs = systemArgs, metrics = metrics) performanceOut #>$average daily deficit (L)
#> [1] 40.93813
#>
#> $reliability (fraction) #> [1] 0.6884752 # Now try a different metric e.g. volumetric reliability performanceOut <- tankWrapper(data = tank_obs, systemArgs = systemArgs, metrics = tankMetrics[1]) performanceOut #>$volumetric reliability (fraction)
#> [1] 0.4380711
In this section you’ll learn about advanced functionality of some of the functions in foreSIGHT…
• What’s meant by the terms optimisation arguments, stochastic model parameters
• How to modify the default optimisation arguments in foreSIGHT
• How to check the default bounds of stochastic model parameters and modify them
• How to use the computationally intensive functions in foreSIGHT in a parallel computing environment
9.1. The default optimisation arguments and how to modify them
The inverse method in foreSIGHT relies on optimisation using a genetic algorithm (using the package GA, Scrucca, 2013) of the parameters of the stochastic models to generate time series with target perturbations in selected climate attributes. The genetic algorithm requires specification of arguments that are used in the optimisation, which are specific to the ga function from the GA package which is used by generateScenarios (refer to the help file of the function using ?GA::ga for details of this function). The arguments that are used as input arguments of the ga function call inside generateScenarios are called optimisation arguments. There are default values for these arguments in foreSIGHT, which can be viewed using the viewDefaultOptimArgs() helper function in the package.
viewDefaultOptimArgs()
#> $pcrossover #> [1] 0.8 #> #>$pmutation
#> [1] 0.1
#>
#> $maxiter #> [1] 50 #> #>$maxFitness
#> [1] -0.001
#>
#> $popSize #> [1] 500 #> #>$run
#> [1] 20
#>
#> $seed #> NULL #> #>$parallel
#> [1] FALSE
#>
#> $keepBest #> [1] TRUE The controlFile argument of generateScenarios can be used to modify the default values of the optimisation arguments. The writeControlFile() helper function can be used to write a sample JSON file to obtain a template of the controlFile including the advanced option by setting the basic argument to FALSE. Note that the following function call would write a JSON file (named ‘sample_controlFile.json’) into your working directory writeControlFile(basic = FALSE) The code below demonstrates how user-specified optimisation arguments can be used in the controlFile input to generateScenarios. We recommend that you refer to the documentation of ga::ga prior to modifying the optimisation arguments. # create the exposure space attPerturb <- c("P_ann_tot_m", "P_ann_P99") attHold <- c("P_ann_maxWSD_m", "P_ann_nWet_m") attPerturbType = "regGrid" attPerturbSamp = c(2, 2) attPerturbMin = c(0.9, 0.9) attPerturbMax = c(1.3, 1.3) expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold) # specify the penalty settings in a list controlFileList <- list() controlFileList[["penaltyAttributes"]] <- c("P_ann_tot_m") controlFileList[["penaltyWeights"]] <- c(0.5) # add user-specified values for optimisation arguments controlFileList[["optimisationArguments"]] <- list() controlFileList[["optimisationArguments"]][["maxiter"]] <- 100 controlFileList[["optimisationArguments"]][["run"]] <- 40 # write the list into a JSON file controlFileJSON <- jsonlite::toJSON(controlFileList, pretty = TRUE, auto_unbox = TRUE) write(controlFileJSON, file = paste0(tempdir(), "controlFile.json")) # generate scenarios data("tankDat") sim <- generateScenarios(reference = tank_obs[,1:4], expSpace = expSpace, controlFile = paste0(tempdir(), "controlFile.json")) The controlFile field in the output sim list will show the specified optimisation arguments have been used to generate the scenarios. 9.2. The default bounds of stochastic model parameters and how to modify them The generateScenarios function uses stochastic generators to create perturbed time series by optimising the parameters of stochastic models to obtain the specified target perturbations in climate attributes. Each stochastic model typically contains different number and type of parameters based on the structure of the model. The reader may refer to the publications detailing the model structure listed in the package description for details about the structure of the stochastic models (see utils::packageDescription("foreSIGHT")) foreSIGHT contains default settings for the bounds of each stochastic model parameter. These bounds are necessary to (1) ensure that the optimisation algorithm does not assign parameter bounds outside the feasible range during its iterations, and (2) provide a narrower search space so that optimisation algorithm can converge to the parameter values required to generate the target perturbations. So how are the bounds of the stochastic model parameters decided? The model structure definition and nature of the climate variables that the model simulates provide a feasibility range for each parameter. For example, model parameters that represent autocorrelation of a time series is bound to the range [-1, 1]. Similarly, model parameters that represent the angle of seasonal variation in the harmonic function of a parameter has to adhere to the range [0, 6.28], the parameters that represent probabilities has to fall in the range [0, 1], the parameters that represent mean or standard deviation of a precipitation or evapotranspiration time series has to be positive, and so forth. But these bounds that stem from the very nature of the parameters are often too wide with respect to the optimisation algorithm. As a result, a large number of iterations may be necessary for the algorithm to converge to a solution (if it does converge at all!), or the algorithm may start from a initial guess that would not converge to the global optimum solution. These problems can be reduced by specifying closer bounds for the parameters of the stochastic models that reduces the search space for optimisation. The model parameter estimates available from forward calibration of the models to data can be used to inform the parameter bounds. Indeed, the default parameter bounds of the stochastic models in foreSIGHT are based on expert knowledge of historical conditions in Australia. If the user has existing knowledge about the bounds of the model parameters of the selected stochastic model in their region of interest, we recommend that they modify the bounds of the stochastic models for their application. However, it is not recommended to randomly modify the parameter bounds in the package as it can have unintended consequences. The model parameters and their default bounds in foreSIGHT can be viewed using the helper function viewModelParameters(). The function requires the short name of the variable, the modelType and modelParameterVariation of the stochastic model as the input arguments. As you know modelType and modelParameterVariation uniquely define the stochastic models for each variable. Remember that the viewModels() helper function can be used to view the stochastic models available in foreSIGHT. The usage of the viewModelParameters() function is shown below. viewModelParameters(variable = "P", modelType = "wgen", modelParameterVariation = "harmonic") #> parameter min_bound max_bound #> 1 pdd_m 0.476 0.950 #> 2 pdd_amp 0.006 0.557 #> 3 pdd_ang 0.000 6.280 #> 4 pwd_m 0.093 0.728 #> 5 pwd_amp 0.004 0.519 #> 6 pwd_ang 0.000 6.280 #> 7 alpha_m 0.330 0.950 #> 8 alpha_amp 0.002 0.600 #> 9 alpha_ang 0.000 6.280 #> 10 beta_m 0.085 15.000 #> 11 beta_amp 0.028 10.000 #> 12 beta_ang 0.000 6.280 viewModelParameters(variable = "Temp", modelType = "wgen", modelParameterVariation = "harmonic") #> parameter min_bound max_bound #> 1 cor0 0.45 0.90 #> 2 WD-mCycle-m 7.00 28.00 #> 3 WD-mCycle-amp 1.00 9.00 #> 4 WD-mCycle-ang -0.05 0.81 #> 5 WD-sCycle-m 0.90 4.90 #> 6 WD-sCycle-amp 0.10 1.40 #> 7 WD-sCycle-ang -1.60 3.15 To modify the default bounds of the stochastic model parameters user-specified bounds may be input via the JSON file input to the controlFile argument of generateScenarios. To obtain a template JSON file containing parameter bounds that the user may modify, use the helper function writeControlFile() specifying the basic argument as FALSE. writeControlFile(basic = FALSE) The below code provides an examples to show how the the user can create JSON control files with parameter bounds for input to generateScenarios, for the default precipitation stochastic model in foreSIGHT. # create the exposure space attPerturb <- c("P_ann_tot_m", "P_ann_P99") attHold <- c("P_ann_maxWSD_m", "P_ann_nWet_m") attPerturbType = "regGrid" attPerturbSamp = c(2, 2) attPerturbMin = c(0.9, 0.9) attPerturbMax = c(1.3, 1.3) expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold) # specify the penalty settings in a list controlFileList <- list() controlFileList[["penaltyAttributes"]] <- c("P_ann_tot_m") controlFileList[["penaltyWeights"]] <- c(0.5) # add user-specified bounds for model parameters controlFileList[["modelParameterBounds"]] <- list() controlFileList[["modelParameterBounds"]][["P"]] <- list() controlFileList[["modelParameterBounds"]][["P"]][["pdd_m"]] <- c(0.35, 1) controlFileList[["modelParameterBounds"]][["P"]][["pwd_m"]] <- c(0.05, 0.65) # write the list into a JSON file controlFileJSON <- jsonlite::toJSON(controlFileList, pretty = TRUE, auto_unbox = TRUE) write(controlFileJSON, file = paste0(tempdir(), "controlFile.json")) # generate scenarios data("tankDat") sim <- generateScenarios(reference = tank_obs[,1:4], expSpace = expSpace, controlFile = paste0(tempdir(), "controlFile.json")) The output sim list stores the parameter values that were used for the simulation inside the field controlFile. The parameter bounds saved in sim should now contain the values input by the user used for the simulation. If you wish to use an alternate stochastic model and modify the default bounds of the parameters of that model, the JSON control file input should contain specifications of the selected model and the new bounds. The below code provides such an example. # create the exposure space attPerturb <- c("P_ann_tot_m", "P_ann_P99") attHold <- c("P_ann_maxWSD_m", "P_ann_nWet_m") attPerturbType = "regGrid" attPerturbSamp = c(2, 2) attPerturbMin = c(0.9, 0.9) attPerturbMax = c(1.3, 1.3) expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold) # specify the penalty settings in a list controlFileList <- list() controlFileList[["penaltyAttributes"]] <- c("P_ann_tot_m") controlFileList[["penaltyWeights"]] <- c(0.5) # specify the alternate model selections controlFileList[["modelType"]] <- list() controlFileList[["modelType"]][["P"]] <- "latent" controlFileList[["modelParameterVariation"]] <- list() controlFileList[["modelParameterVariation"]][["P"]] <- "harmonic" # add user-specified bounds for model parameters controlFileList[["modelParameterBounds"]] <- list() controlFileList[["modelParameterBounds"]][["P"]] <- list() controlFileList[["modelParameterBounds"]][["P"]][["mu_m"]] <- c(-5, 0) controlFileList[["modelParameterBounds"]][["P"]][["alpha_m"]] <- c(0.35, 0.95) # write the list into a JSON file controlFileJSON <- jsonlite::toJSON(controlFileList, pretty = TRUE, auto_unbox = TRUE) write(controlFileJSON, file = paste0(tempdir(), "controlFile.json")) # generate scenarios sim <- generateScenarios(reference = tank_obs[, 1:4], expSpace = expSpace, controlFile = paste0(tempdir(), "controlFile.json")) Again, the controlFile field saved in the output sim list should reflect the user-specified changes to the JSON control file that was used for the simulation. 9.3. Parallelising computationally intensive functions in foreSIGHT Consider an exposure space containing many climate attributes and several target locations in the exposure space. The computational resources and time required to generate perturbed time series corresponding to the target locations are often non-trivial, and increases manifold when multiple replicates of the time series are required to be generated. Thus, generateScenarios is one of the computationally intensive functions in foreSIGHT. Another potential computationally intensive function is runSystemModel(). The computational resources required for this function depends on the run-time of the system model under consideration, which can easily be non-trivial for complex system models. Remember that for a climate stress-test the system model would need to be run at least as many times as the number of perturbed time series. Most stress-tests also involve the assessment of multiple system options, resulting in the system model (with different system configurations) being run multiple times for each perturbed scenario. Thus in some cases runSystemModel() can be the computational bottle neck in the stress testing work flow. Thus, to generate comprehensive scenarios to stress-test complex system models parallelisation of these key functions are inevitable. Here we provide a template code to use the core functionality of generateScenarios in a parallel environment. The generateScenarios function calls the function generateScenario under its hood for each target location in the exposure space for each stochastic replicate. The function generateScenario is an exported function in foreSIGHT intended for use by the advanced users of the package to implement the core functionality of generateScenarios on parallel processors. Consider the code of the generateScenarios() function (you can view the R code of any function by simply typing the function name, here generateScenarios). This function intentionally does not use call any internal functions in package so that the code can easily be adapted for use in a script that can run in parallel on multiple CPUs. The equivalent parallel code implemented using the doParallel and foreach packages in R is shown below. # import packages library(foreach) library(doParallel) library(foreSIGHT) # set paths setwd(<path-to-working-directory>) controlFile <- <path-and-name-of-controlFile> # create exposure Space attPerturb <- c("P_ann_tot_m","P_ann_seasRatio") attHold <- c("P_ann_nWet_m", "P_ann_R10_m", "P_Feb_tot_m", "P_SON_dyWet_m", "P_JJA_avgWSD_m", "P_MAM_tot_m", "P_DJF_avgDSD_m", "Temp_ann_rng_m","Temp_ann_avg_m") attPerturbType = "regGrid" attPerturbSamp = c(10, 16) attPerturbMin = c(0.8, 0.80) attPerturbMax = c(1.1, 1.3) expSpace <- createExpSpace(attPerturb = attPerturb, attPerturbSamp = attPerturbSamp, attPerturbMin = attPerturbMin, attPerturbMax = attPerturbMax, attPerturbType = attPerturbType, attHold = attHold, attTargetsFile = NULL) # load reference data data("tankDat") # assign generateScenarios inputs reference <- tank_obs simLengthNyrs <- 300 numReplicates <- 20 seedID <- NULL # Number of targets nTarget <- dim(expSpace$targetMat)[1]
# Replicates and seed don't go with scaling
if (!is.null(controlFile)) {
if (controlFile == "scaling") {
if (numReplicates > 1) stop("Simple scaling cannot generate replicates.
if (!is.null(seedID)) stop("Simple scaling cannot use a seed.
}
}
# Create random seedID
if (is.null(seedID)) {
seedID <- round(stats::runif(1)*10000)
}
# Create seedID vector for all replicates
if (numReplicates>0 & numReplicates%%1==0) {
seedIDs <- seedID + seq(0, numReplicates-1)
nRep <- length(seedIDs)
} else {
stop("numReplicates should be a positive integer")
}
#================================================
#************************* NOTE *************************
# This part of the script to determine the number of cores depends upon
# the settings of your parallel computing environment & job scheduler
#*********************************************************
cores = slurm_ntasks # if slurm_ntasks is numerical, then assign it to cores
} else {
}
c1 <- makeCluster(cores)
registerDoParallel(c1)
allSim <- foreach (iRep=1:nRep) %:%
foreach (iTarg=1:nTarget) %dopar% {
library(foreSIGHT)
# Get the target location in the exposure space
expTarg <- expSpace
expTarg$targetMat <- expSpace$targetMat[iTarg, ]
if(!is.null(expSpace$attRot)) { expTarg$attRot <- expSpace$attRot[iTarg] } cat(paste0("=============================================================\n", "Commencing Replicate No. ", iRep, " Target No. ", iTarg, "\n=============================================================\n"), file = "fSrun_log.txt", append = TRUE) # Call generateScenario for the target to.allSim <- foreSIGHT::generateScenario(reference = reference, expTarg = expTarg, simLengthNyrs = simLengthNyrs, seedID = seedIDs[iRep], fSNamelist = fSNamelist) } stopCluster(c1) # End parallel tasks #================================================= save(allSim, file = "allSim_prelim.Rdata") names(allSim) <- paste0("Rep", 1:nRep) allSim[["simDates"]] <- allSim[[1]][[1]]$simDates
allSim[["expSpace"]] <- expSpace
allSim[["controlFile"]] <- allSim[[1]][[1]]$nml for (i in 1:nRep) { for (j in 1:nTarget) { allSim[[i]][[j]]$simDates <- NULL
allSim[[i]][[j]]\$nml <- NULL
names(allSim[[i]]) <- paste0("Target", 1:nTarget)
}
}
save(allSim, file = "allSim.Rdata")
Similarly, the runSystemModel is also coded to not use any internal functions in foreSIGHT for ease of use in a script and parallelisation of the core functionality. The function contains a loop across all target locations and replicates that may be run in parallel, similar to the code shown above. This section of the vignette will be updated with a code template for runSystemModel ease of the user in the next revision.
10. Glossary
The definitions of various terms and phrases used in this package and vignette are listed below. Where possible, definitions contained herein have been derived from published courses including IPCC reports.
• Alternate climate data: Changes in climate attributes from other sources of evidence, including climate model projections (from global/regional models), historical changes, expert judgment and/or analogues from paleo records.
• Attribute: Statistical measures of a weather time series, and representing the axes of an exposure space. Examples of attributes include: annual total precipitation, mean summer temperature, ratio of wet to dry season rainfall.
• Attribute Values: Specific values of Climate Attributes. Examples include a 10% decrease in annual total precipitation, or a 1C increase in mean summer temperature.
• Attribute Penalty: A multiplicative factor applied to individual attributes to increase or decrease the emphasis placed on those attributes by the optimisation algorithm when applying the ‘inverse approach’ to stochastic generation.
• Bottom-up Climate Impact Assessment: An approach to climate impact assessments that starts with the system being analysed, including the characterisation of its function or purpose as well as any alternative system options, followed by a system stress test to see how system performance changes as a function of plausible climatic changes. See also Top-down Climate Impact Assessment.
• Climate: The long-term statistical description of weather, calculated over a time period commonly of length 30 years or more. See also: climate attributes.
• Climate Attribute: See Attribute
• Climatological Baseline: The state against which a change is measured. See also: reference time series
• Climate Impact: The effect of climate on natural or human systems, and are often conceptualised as the combination of the exposure of a system to climatic changes and the vulnerability of the system to those changes. Synonymous with outcome or consequence
• Climate-Sensitive System: Natural or engineered system whose performance/operation is affected by climate. Examples: water supply system, agricultural system.
• Climate Stress Test: Process of running a system model using a range of climate conditions to assess changes in system performance. The range of climate conditions are generated by changing the climate attributes of the observed data.
• Current Climate: The climate at the ‘current’ time, usually referring to the time when the analysis takes place. Given the non-stationarity of most historical weather time series, the concept of Current Climate will generally need to be distinguished from Historical Climate and/or the Climatological Baseline.
• Exposure Space: The set of climate attribute combinations against which a system could be exposed.
• Held Attributes: Climate attributes that are to be held at levels of the reference period, and thus do not change as part of the climate stress test. See also Perturbed Attributes.
• Historical Climate: The climate over a historical period, usually but not necessarily spanning the instrumental record. Given the non-stationarity of most historical weather time series, the concept of Historical Climate will generally need to be distinguished from Current Climate and/or the Climatological Baseline.
• Instrumental Record: The period when instrumental weather data is available, and is usually synonymous with the concept of the historical record but excludes palaeo data. It is noted that the quality and resolution of instrumental data commonly changes over the record, and thus the full instrumental record may not be suitable or representative of the Historical Climate in all cases.
• Performance Metrics: Binary success/failure criteria or quantitative measures used to assess the rate of system performance degradation and/or identify situations under which systems can fail.
• Performance Threshold: Maximum or minimum value of a performance metric that is of interest to the user. These thresholds may indicate conditions under which system performance degrades or fails. Example: maximum allowable water deficit in a water supply system.
• Perturbed Attributes: Climate attributes that are to be modified (perturbed) as part of the Climate Stress Test. See also Held Attributes
• Perturbed Time Series: Weather time series that seek to achieve Perturbed Attribute values. See also: Reference Time Series.
• Plausible Climate Changes: This terminology is used in recognition that the perturbed time series do not constitute formal climate projections (the term projections is more commonly associated with Top-Down Climate Impact Assessments), but nonetheless it is still necessary to focus the analysis on changes that have some non-zero probability of occurring.
• Pseudo Random Number: A deterministic sequence of numbers that largely have the properties of random numbers, but are in reality completely determined by initial conditions through a Random Seed.
• Random Seed: The starting point for the random number generator, which can be set to enable reproducibility of stochastic replicates.
• Realisation: See Stochastic Realisation.
• Reference Period: A period of time used as a climatological ‘baseline’ against which all perturbations are compared, and thus represents the ‘no change’ situation.
• Reference Time Series: Time series of the (usually historical) weather over a Reference Period.
• Replicate. See Stochastic Realisation.
• Scenario-Neutral Climate Impact Assessment: See Bottom-up Climate Impact Assessment.
• Seasonal scaling: A method of perturbing historical weather time series through application of seasonally varying multiplicative factors.
• Simple Scaling: A method of perturbing historical weather time series through application of additive or multiplicative factors.
• Stochastic Generation: The general term for generating random data from some underlying stochastic model. See also stochastic weather generator
• Stochastic Realisation: A particular ‘version’ of a weather time series that is consistent with climatic assumptions (as defined through the target attributes). See also: Weather Noise.
• Stochastic Weather Generator: A form of stochastic generator in which random Realisations of weather are generated, that are usually designed either to match the statistics of historical weather, or seek to some alternative weather series such as plausible future climate time series. See also: Weather Noise.
• System Model: Mathematical model of a system that takes in relevant climate variables as input, and produces measures of system performance as outputs. The system model may be coded in R or other programming languages, and in practice may arise through the coupling of several component system models.
• System Performance: The outcome of a system, that is closely related to its purpose or intended design characteristics (for human-built systems) or function (for natural systems). Can be quantified using a range of economic, social and environmental performance metrics.
• System Sensitivity: The change of System Performance measures as a function of changes in Attribute Values.
• Target Attribute Values: Attribute Values that represent the objective of the climate time series perturbation method, and can include a combination of Perturbed Attribute values and Held Attribute values.
• Top-down Climate Impact Assessment: An approach to climate impact assessments that starts with the development of climate projections that are used as inputs to a system model to assess projections of future system performance. See also: Bottom-up Climate Impact Assessments.
• Weather Noise: The notion that, because of the non-linear dynamical nature of atmospheric processes (often referred to as ‘chaos’) and associated sensitivity to initial conditions, weather can appear as a random (‘stochastic’) process when viewed beyond the synoptic predictability window of approximately two weeks. See also: stochastic realisation.
11.1. I’m getting really confused by all the terminology associated with climate attributes - can you please help me?
Of course you’re getting confused. Unfortunately it is confusing, so we’ll try to help with a simple illustrative example.
Let’s say we want to perturb average annual rainfall and average annual temperature by +10% and +1C relative to the reference series, respectively. The attributes in this case are the ‘average annual rainfall’ and ‘average annual temperature’, which you can view as forming the axes of an exposure space. (As an aside, if you are wondering why we use the term attributes rather than statistics, it is because in this case we are wanting to perturb the same statistic of two different weather variables, whereas in other cases we might want to perturb different statistics of the same weather variable, or multiple statistics of multiple weather variables, so this keeps the concepts distinct.)
In contrast, the reference to +10% and +1C are called attribute values. Since these are values we want to change relative to the reference time series, we can be even more precise and referr to these are perturbed attribute values. If we then wish to use a stochastic weather generator to produce time series with these attribute values, we typically need to provide some additional constraints, such as the request to keep other features of the time series such as the seasonality, variability and so forth at historical levels—otherwise there is a significant change we could make a range of unintended changes to the time series rather than focusing the analysis on the deliberate perturbations. The collection of attributes that we are seeking to keep at the levels of the reference time series are referred to as held attributes.
When we use a stochastic generator to deliver the requested time series, we group up both the perturbed attribute values and the held attributes and refer to these as target attribute values, which the stochastic optimiser uses as part of its objective function in order to generate the requested time series. Unfortunately, in many cases the optimiser is not able to generate time series that precisely meet all the requested target attribute values, and thus the actual generated values may be a little (and sometimes a lot) different from what was requested.
12. References
• Brown, C. (2011) Decision-scaling for robust planning and policy under climate uncertainty, World Resour. Rep., World Resour. Inst., Washington D.C. (Available online at https://www.wri.org/our-work/project/world-resources-report/wrr.)
• Culley, S., S. Noble, A. Yates, M. Timbs, S. Westra, H. R. Maier, M. Giuliani, and A. Castelletti (2016), A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate, Water Resour. Res., 52, 6751-6768, .
• Culley, S., Bennett, B., Westra, S. & Maier, H.R., 2019, Generating realistic perturbed hydrometeorological time series to inform scenario-neutral climate impact assessments, Journal of Hydrology, 576, 111-122.
• Culley, S., Maier, H.R., Westra, S. & Bennett, B., 2020, Identifying critical climate conditions for use in scenario-neutral climate impact assessments, Environmental Modelling and Software (manuscript in press)
• Guo, D., S. Westra, and H. R. Maier (2018), An inverse approach to perturb historical rainfall data for scenario-neutral climate impact studies, J. Hydrol., 556, 877-890,
• Prudhomme, C., R. L. Wilby, S. Crooks, A. L. Kay, and N. S. Reynard (2010), Scenario-neutral approach to climate change impact studies: Application to flood risk, J. Hydrol., 390, 198-209, .
• Scrucca, L.(2013), GA: a package for genetic algorithms in R, Journal of Statistical Software, 53, 1-37.
|
2023-03-25 00:24:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48402807116508484, "perplexity": 2151.868555277038}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00031.warc.gz"}
|
https://www.difference.wiki/sketch-vs-map/
|
Sketch vs. Map
Difference Between Sketch and Map
Sketchverb
(ambitransitive) To make a brief, basic drawing.
I usually sketch with a pen rather than a pencil.
Mapnoun
A visual representation of an area, whether real or imaginary.
Sketchverb
(transitive) To describe briefly and with very few details.
He sketched the accident, sticking to the facts as they had happened.
Mapnoun
A graphical representation of the relationships between objects, components or themes.
Sketchnoun
A rapidly executed freehand drawing that is not intended as a finished work, often consisting of a multitude of overlapping lines.
Mapnoun
(mathematics) A function.
Let $f$ be a map from $\mathbb\left\{R\right\}$ to $\mathbb\left\{R\right\}$
Sketchnoun
A rough design, plan, or draft, as a rough draft of a book.
Mapnoun
The butterfly Araschnia levana.
Sketchnoun
A brief description of a person or account of an incident; a general presentation or outline.
The face.
Sketchnoun
A brief, light, or unfinished dramatic, musical, or literary work or idea; especially a short, often humorous or satirical scene or play, frequently as part of a revue or variety show, a skit
Mapnoun
A predefined and confined imaginary area where a game session takes place.
I don't want to play this map again!
Sketchnoun
a brief musical composition or theme, especially for the piano
Mapverb
To create a visual representation of a territory, etc. via cartography.
Sketchnoun
a brief, light, or informal literary composition, such as an essay or short story.
Mapverb
To inform someone of a particular idea.
Sketchnoun
(informal) An amusing person.
Mapverb
To act as a function on something, taking it to something else.
$f$ maps $A$ to $B$, mapping every $a\in A$ to $f\left(a\right)\in B$.
Sketchnoun
A lookout; vigilant watch for something.
to keep sketch
Mapnoun
a diagrammatic representation of the earth's surface (or part of it)
Sketchnoun
(UK) A humorous newspaper article summarizing political events, making heavy use of metaphor, paraphrase and caricature.
Mapnoun
a function such that for every element of one set there is a unique element of another set
Sketchnoun
(math) A category together with a set of limit cones and a set of colimit cones.
Mapverb
make a map of; show or establish the features of details of;
map the surface of Venus
Mapverb
explore or survey for the purpose of making a map;
We haven't even begun to map the many galaxies that we know exist
Sketchnoun
preliminary drawing for later elaboration;
he made several studies before starting to paint
Mapverb
locate within a specific region of a chromosome in relation to known DNA or gene sequences;
map the genes
Sketchnoun
a brief literary description
Mapverb
plan, delineate, or arrange in detail;
map one's future
Sketchnoun
short descriptive summary (of events)
Mapverb
depict as if on a map;
sorrow was mapped on the mother's face
Sketchnoun
a humorous or satirical drawing published in a newspaper or magazine
Mapverb
to establish a mapping (of mathematical elements or sets)
Sketchverb
make a sketch of;
sketch the building
Sketchverb
describe roughly or briefly or give the main points or summary of;
sketch the outline of the bookoutline his ideas
|
2023-03-24 16:25:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40883949398994446, "perplexity": 4771.4542274151545}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945287.43/warc/CC-MAIN-20230324144746-20230324174746-00311.warc.gz"}
|
https://brilliant.org/problems/general-diophantine/
|
# General Diophantine
$x^2+y^2=4^z$
How many ordered triples $(x,y,z)$ of positive integers satisfy the equation above?
×
|
2020-04-04 00:20:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.769740641117096, "perplexity": 1063.6434461919018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518767.60/warc/CC-MAIN-20200403220847-20200404010847-00114.warc.gz"}
|
https://www.gamedev.net/forums/topic/163748-just-some-games-that-i-wrote/
|
Archived
This topic is now archived and is closed to further replies.
Just some games that I wrote!!
This topic is 5564 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
Recommended Posts
Hi. I am just here to ask people what they think of my work. Its all available at www.ciaranmccormack.tk Almost all this work was completed over a 4 month period, from January to April of this year. I am going to use it as a portfolio to get people interested (programmers, modelers and artists) in future projects that I am undertaking. There is such projects already. One of my friends has written a movie script, and in my opinion it is excellent. He is getting finacned to direct the movie and both of us want to make a game, sort of for fun, but it is also a great oppertunity, cause the game has a 100% chance of been distributed on the movie DVD if completed correctly. It will be a fighting game like Tekken with the movie characters and special weapons etc...If you look at my stuff and would like to get involved in a project like this can you let me know. Also if you make any levels for Breakout or any animations/scripts with the other programs can you send them to me so that I can add them to my site. Thanks. Ohh yeah, the site is a free site hosted by brinkster so has a pretty small bandwidth, so if it goes offline, please look back again later.
Share on other sites
Hi.
Sorry about that. It seems to be acting weird. When I copied and pasted the link it didnt work, but when I typed the same thing in manually it worked fine.
Here they are again
www.ciaranmccormack.tk
or else use
www27.brinkster.com/ciaranmccormac
Thanks
Share on other sites
I tried all your three games:
1) Stink: too fast for my Pentium4 1700 mhz, it was so fast that a play dured just 3 seconds. You have to slow it.
2) Columns: It''s a good game, the thing I suggest you is to do smooth movements also for right and left shifts. A good clone after all.
3) BreakOut: A good arkanoid clone, I liked how the ship looks, but not the bricks. There is an imprecision about the ball simulation, it''s a rule of arkanoid that when a ball hits the ship (going from the left to the right) on the left side, the ball takes a direction to the left, on the right side the ball takes a direction to the right and vice versa. I give you an example:
o
\
o
<--- ===== <---
<-o
\_
\_
o (hit!)
<--- ===== <---
o
\
o
<--- ===== <---
o->
_/
_/
o (hit!)
<--- ===== <---
I hope it will help you. Bye.
Share on other sites
Okay, well, this holy forum seems to be not working with asci arts, so my asci painting is now uncomprehensible. I hope my explanation was clear, if not you have to play an old arkanoid game to find the different ball simulation. Bye.
Share on other sites
Hi TSRevolution
Thanks for the comments, much appreciated. I think I understand what you are saying about the ball bouncing off the paddle. I considered doing it like that having seem some game implementing such, but decided on doing it another way that I saw. I''ll explain what I do.
The direction that the ball takes after hitting the paddle depends on where it hits the paddle. THere is sort of a force traveling from the center of the paddle to the sides, both left and right. If a ball is traveling from Left to RIght and hits the left side of the paddle it will encounter a <-<-<- force making the ball bounce off at a steeper angle, if it hits the center of the paddle it will encounter no force resulting in a perfect elastic deflection and if it hits the right side it will encounter a ->->-> force making the ball bounce at a less steep angle. (same of right to left movment but negated)
ITs very difficult to explain without a picture so the above might just sound like jibberish.
Ciaran
Share on other sites
I don''t think having your screenshots int MS bitmap format is a good idea, try png, gif or jpeg. PNG is the best in my opinion.
Share on other sites
quote:
Original post by TSRevolution
o \ o <--- ===== <--- <-o \_ \_ o (hit!) <--- ===== <--- o \ o <--- ===== <--- o-> _/ _/ o (hit!) <--- ===== <---
I hope it will help you. Bye.
How''s that?
-solo (my site)
1. 1
2. 2
3. 3
Rutin
18
4. 4
JoeJ
14
5. 5
• 14
• 10
• 23
• 9
• 32
• Forum Statistics
• Total Topics
632631
• Total Posts
3007528
• Who's Online (See full list)
There are no registered users currently online
×
|
2018-09-24 07:17:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23270651698112488, "perplexity": 1990.3386722573734}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160233.82/warc/CC-MAIN-20180924070508-20180924090908-00217.warc.gz"}
|
https://harrisonbrown.wordpress.com/2009/12/05/where-do-graphs-live/
|
## Where do graphs live?
This post came out of some thoughts I posted (anonymously, but mostly because I didn’t feel like registering) over at nLab. I don’t think it’s a secret that I’m heavily interested in the relationships between category theory and combinatorics, and more generally the ways in which we can use “structured” algebraic objects and “continuous” topological objects to gain information about the unstructured discrete objects in combinatorics. That said, the folks over at the nLab work on some crazy abstract stuff, which seems about as far away as possible from the day-to-day realities of graph theory or set systems. And maybe it is — but I hope it’s not, and as far as I’m concerned, this is a windmill that deserves to be tilted at. (After all, it might be a giant.)
So as my jumping-off point, I’ll take my observation from last time that the relationship between graphs and digraphs is analogous to the one between groupoids and categories. I briefly mentioned something called a quiver, which can be thought of as any of the following:
• Another name for a digraph, which categorical people use when they don’t want us combinatorialists stomping in and getting the floor all muddy;
• A “free category,” i.e., one in which there are no nontrivial relations between composition of morphisms;
• An algebraic object whose representations we want to consider; it’s worth thinking of this way mostly because of the “freeness,” although if you try to define it more formally you’ll probably end up with the previous definition;
• What you get when you take (part of) a category and forget all the rules for how morphisms compose.
This last point is the most interesting one for our purposes, since it’s clearly an algebraic object but isn’t as restrictive as “free category,” and thus has a chance of capturing the unstructured behavior of the combinatorial zoo. But it’s tricky to turn this into a rigorous definition that actually includes everything we want to be a quiver… so we’ll just use “quiver” as a fancy name for “digraph.” However, there’s an important philosophical lesson to be learned from the final point, so I’ll set it off:
Philosophical lesson. The edges of a quiver shouldn’t carry any information except for the vertices they are incident to; more generally, paths in a quiver shouldn’t carry any information except for their sequence of vertices.
Probably the above isn’t too controversial; sure, people work with representations of quivers, in which we attach to each edge a linear map, but this doesn’t come with the quiver to start with. Similarly, although we might attach to the edges of a graph or digraph a labeling or coloring, these (usually) are pretty much arbitrary, and the underlying graph has nothing to do with the extra information. But now I’m going to make a bigger claim.
Philosophical lesson, corollary. Quivers should not, in fact, even be considered as having “edge sets.”
And in fact we can replace “quivers” with “digraphs” or “graphs,” and the same holds true. This sounds like craziness! Of course graphs have edges; otherwise they’d just be sets! And there are actually a number of philosophical reasons to object to this:
1. It’s evil. If we don’t want graphs to have edge sets, we still need some way to keep track of the “number of edges” between two vertices — but if we’re not doing this by assigning each one a set, or in some other equivalent way, we’re going to end up doing it evilly.
2. It goes against years of tradition. Not that I’m against overturning the status quo, but most mathematical definitions are the way they are because that formulation has pulled its own weight over the years.
3. It’s silly. Of course graphs have edges! We need something to color in an edge-coloring, and “forgetting the edge set” kind of interferes with that goal.
All of these are good points, and some of them arise (I think) from the fact that graphs are a rather basic class of mathematical objects, and that different people can intuit them in very different ways. But for now, let’s take the lesson at face value, and see if it has a chance of pulling its weight.
The problem is, most of the definitions of graphs and digraphs refer to an edge set in some way or another. But there’s a way to turn one definition into one that doesn’t, through application of some classical logic. Here we’ll be considering digraphs, possibly with loops but without multiple edges. (Directed pseudographs, if you want.) Now here’s one possible definition:
A directed pseudograph is composed of a vertex set $V$, an edge set $E$, and an injective function $E \rightarrow V \times V$.
But we can replace this by:
A directed pseudograph is composed of a vertex set $V$ and an edge set $E \subset V \times V$.
And this we can replace by:
A directed pseudograph is composed of a vertex set $V$ and a function $E: V \times V \rightarrow \{0,1\}$.
And finally we’ll curry, to get the following, rather nice-looking definition:
A directed pseudograph is a function $G: V \rightarrow 2^V$, where $V$ is called the vertex set.
We’ll take this as our final definition of (directed pseudo)graphs. If you were reading carefully, you might have noticed that this definition of a graph corresponds exactly to the adjacency list structure — to each vertex we’re associating a list (or set) of vertices that it points to. And you can do other fun algorithmic stuff with it, too, but today we’re pretending to be category theorists, so we’d better figure out what a morphism of directed pseudographs is. Fortunately this isn’t too hard:
A morphism of directed pseudographs $\pi: G\rightarrow H$ with vertex sets $V, W$ is a function $\pi: V \rightarrow W$, with the induced function on power sets denoted $\pi^*$, such that
$(\pi^* \circ G)(x) \subseteq (H \circ \pi)(x)$
for all $x \in V$.
Now for the hard part: what’s an undirected graph? We want to be able to define a broader variety of mappings that are “morphisms of the underlying graph.” But I don’t know what these are! Fortunately some things are easier; for instance, if we want to allow multiple edges, we simply replace power sets by free abelian monoids.
But now I want to return to the title of the post: Where do graphs live? More correctly, where can graphs be represented as adjacency lists? That is, in what categories can we formalize the above construction, to get an abstract definition of graph entirely independent of the vertex set? Well, we need two things: First, we need to be able to replace the class of subobjects by the class of morphisms to a specific object. That means the category has a subobject classifier. Second, we need to be able to curry. That means the category is cartesian closed. These are two of the three requirements for a category to be a topos!
Question. Do adjacency lists determine an abstract graph iff the graph lives in a topos?
Okay, one last thing. The Rado graph is an infinite graph with a lot of strange properties. The best-known construction of the Rado graph is as “the” infinite random graph, but another interesting one is as follows: Take a countable model of ZF as your vertex set; put x and y adjacent if $x \in y$ or $y \in x$. This seems remarkably similar to the process we use to get an undirected graph from a directed one! I don’t know if this is coincidental, but I hope and suspect it isn’t, entirely.
|
2017-09-23 00:02:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.803743302822113, "perplexity": 404.96946365405586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689411.82/warc/CC-MAIN-20170922235700-20170923015700-00256.warc.gz"}
|
https://alexandervvittig.github.io/2016/01/10/find-your-vhds-in-hyper-v/
|
A virtual machine in Hyper-V consists of a few files that account for its virtual hardware configuration and the virtual storage (VHD and VHDX files). By default the virtual machine configuration files are stored in_C:\ProgramData\Microsoft\Windows\Hyper-V_, and the virtual hard drives are stored in C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks.
One slight improvement in Hyper-V (in Windows Server 2012) is that during the installation process (GUI mode only) it gives you the option of changing these defaults. However the defaults are still the same as they used to be… on the C drive.
Being cheap I only have a ‘tiny’ SSD (ok folks I bought it years ago and it felt like it was a fortune back then…) as C:\, all other data is still on rusty spindles on my home lab.
VMs, I know are important but are small are still on the C:\ however others I had to move off.
Now tiering storage is fine, it is a PITA to find where which VHD/VHDX is stored via GUI. The fastest way I found to scavenger your lost treas… um VHDs is of course Powershell.
# Run as admin
Get-VMHardDiskDrive * | Select VMName, Path
then being a little OCD…, I like to sort it after names. An example can look like this:
PS C:\WINDOWS\system32> Get-VMHardDiskDrive * | select vmname, path | Sort-Object VMName
VMName Path
------ ----
2016 E:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\2016.vhdx
CentOS_01 C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\CentOS_01.vhdx
CentOS_02 C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\CentOS_02.vhdx
CentOS_03 C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\CentOS_03.vhdx
CentOS_04 C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\CentOS_04.vhdx
DC01 E:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\DC01.vhdx
DC02 E:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\DC02.vhdx
Kali E:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\Kali.vhdx
SCCM01 E:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\SCCM01.vhdx
SCCM02 C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\SCCM02.vhdx
SpiceW E:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\SpiceW.vhdx
Now, that is not bad, what if I need more info, let’s say also the VMID so you can quickly RDP into the VM (see my post about RDCMan).
Easy, Get-VMHardDiskDrive does not have the info about the VMID, Get-VM however does, so we can just pipe that in it like so:
Get-VM * | Get-VMHardDiskDrive | Select vmname,vmid,path | Sort-Object vmname
That will give us a nice list with the VMname, VMID and VHD(x) path.
Another thing to mention. if you wan to change the default location of your VM Disks, or even of the machines, Powershell can do that as well:
SET-VMHOST –computername <server> –virtualharddiskpath 'C:\VHDs'
SET-VMHOST –computername <server> –virtualmachinepath 'C:\VMs'
or via GUI of course in the Hyper-V settings:
|
2018-12-12 10:30:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3813874423503876, "perplexity": 9210.237490485397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823817.62/warc/CC-MAIN-20181212091014-20181212112514-00495.warc.gz"}
|
https://space.stackexchange.com/questions/27320/how-much-fuel-is-necessary-to-cause-delta-v
|
# How much fuel is necessary to cause delta-v?
For a project, I need to calculate how much thrust and how much fuel I need for getting into LEO.
What I know:
• Delta-V necessary ($\approx 9.4$ km/s)
• Dry (empty spacecraft mass)
What I don't know:
• How much fuel I'm bringing
• How much thrust I need
Are there any good ways of calculating this?
• What's the extra 1.3km/s for? May 17, 2018 at 18:07
• "Atmospheric and gravity drag associated with launch typically adds 1.3–1.8 km/s to the launch vehicle delta-v" -- Wikipedia May 17, 2018 at 18:08
• That's included in the normally quoted 9400m/s. Orbital velocity is ~7800m/s. May 17, 2018 at 18:10
• Hint: your thrust needs to be greater than the vehicle weight at liftoff. May 17, 2018 at 18:12
The Tsiolkovsky rocket equation tells you how much delta-V you get for a given exhaust velocity and full/empty mass ratio per stage. Typically you'll want to divide the total 9400m/s requirement into two (or more) stages and work backward from the uppermost stage. Select an appropriate engine for the stage, decide how much dry tankage/structural mass you need per mass of fuel, solve.
As Organic Marble notes, the first-stage thrust needs to exceed the weight of the fully loaded rocket, or it won't lift off. Typically the thrust to weight ratio starts at somewhere between 1.15:1 and 1.5:1. (Upper stages can relax that limit a little bit but will usually start close to 1:1 to maximize the amount of fuel they bring.) Pick an engine and add multiples of them until your thrust is sufficient!
The devil is in the details, of course. I suggest running the numbers from an existing rocket to make sure you understand the principles before trying your own.
Here's part of a spreadsheet that I use for quick-and-dirty feasibility tests. Making it useful to you is left as an exercise.
• Stage mass: total mass of an individual stage, fully loaded with propellant.
• Prop fraction: fraction of stage mass which is propellant.
• Structure: structural (non-propellant) mass of stage.
• Propellant: propellant mass of fully loaded stage.
• Upper: total mass of all stages above this one, fully loaded.
• Ballast: inert payload mass attached to the stage.
• M0: total mass of the rocket at ignition of the stage.
• M1: total mass of the rocket at burnout of the stage.
• ISP: specific impulse of the stage's engines.
• Thrust: total thrust of the stage's engines.
• Delta-v: single stage delta-V contribution, summing to total delta V below.
• G0: acceleration at stage ignition, in g (equivalent to TWR).
• G1: acceleration at stage burnout.
Masses in metric tons, ISP in seconds, thrust in kN, delta-V in m/s. I use the sea level specific impulse of the first stage engine, which yields a slight underestimate for delta-v because ISP will increase over the course of the burn.
Value view:
Formula view:
• Ooooo the spreadsheets are a very nice touch May 17, 2018 at 18:44
• Dang this is a good answer. I’ll wait 24hr and accept unless there are other better answers May 17, 2018 at 19:48
• oh my god, I didn't know that Formula View could be done. That's real good. May 17, 2018 at 20:29
• @ErinAnne I didn't either until today! May 17, 2018 at 21:05
• @ErinAnne CTRL+` will turn it on and off. May 18, 2018 at 2:29
|
2022-05-20 18:21:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.568687379360199, "perplexity": 3179.036442120286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00362.warc.gz"}
|
https://www.codingame.com/playgrounds/50443/brainfuck-part-2---working-with-arrays/handle-null-values
|
# BrainFuck part 2 - Working with arrays
DPAmar
5,236 views
## Deal with null values
As explained, it is not possible to deal with null values in our arrays, and that may be an issue in most of the case.
Having explained all the mechanisms of arrays (get, reverse, length, move, ...), we can however adapt them to a new data structure: 2-cells arrays.
In this implementation, an N-value-long array is not only composed by N cells + 2 delimiters. It's still N+2 "blocks", but each block will be composed by 2 cells
• isValue flag: 0 if it's an array delimiter, or 1 if it's an array cell
• value cell: does not matter if isValue is 0, or cell value (including null value) if isValue is 1
In other words, the "not-null-required" hypothesis is transposed on the first part of the cells, and the value is now free to be null.
Note that this implies twice more memory for arrays, so based on the use case, we should decide whether null values are admitted or not.
All the algorithms can be transposed here, we will just learn how it works from a test program
## Test program
This program reads digits from the input, convert them into values (subtract 48) into an array, then print them back as digit (add 48 and print)
Note : as coded here, we can leverage the extra cell for faster execution. For example, it would not have been possible to iterate and add 48 at the same time in regular 1-cell arrays.
>>>>, create empty delimiter block then read value
[ while a digit is read
<++++++++[->------<] subtract 48 (leverage the isValue flag cell for this)
+ set isValue flag to 1
] loop
<<<[<<]>> go to first cell (note the doubled moves because of "2 cells a block" design)
[
+++++++[->++++++<] add 48 to each digit (use isValue flag again)
+>. set back isValue flag and print digit
>] go to next cell (be careful to point at the next isValue flag
As 0 is a supported digit, we can see that null values are correctly handled by this code.
|
2021-01-20 07:25:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1706497073173523, "perplexity": 2784.4656638828724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519923.26/warc/CC-MAIN-20210120054203-20210120084203-00256.warc.gz"}
|
http://ffden-2.phys.uaf.edu/webproj/212_spring_2015/Jason_Hendrichs/Physics%20Project/Conventional%20Damming.html
|
## Overview of Sources of Energy
Most hydro-power plants run off of the mechanical power available from water, whether the source be from rivers, tides, waves, etc. The oldest and most widespread source of hydro power is from flowing rivers by either damming the river to use potential energy or letting the river run through the plant to capture kinetic energy. Although mainly still in development, some methods have been implemented to obtain energy from ocean water mainly from tides and waves.
## Conventional Damming
Dammed rivers the most widespread source of hydro-power with the largest plant in the world generating over 20,000 MW. The way that power is obtained from a dam is very simple, a dam is constructed that obstructs the path of a river to form a reservoir of water behind it. Water behind the dam is then allowed to flow through a pen-stock, an opening in the dam, to a turbine. The turbine then converts the energy from the flowing water to power the generator in order to obtain electrical power. In this manner, the dam converts the potential energy of the water behind the dam into a final product of electrical energy.
https://water.usgs.gov/edu/hyhowworks.html
In some instances, the water behind the dam is actually pumped there from another another lower elevation reservoir. This method is known as pumped storage and actually consumes a net amount of energy instead of producing it. However, economically this a feasible option for companies that generate power as the water is pumped during times of low energy consumption when energy is at a lower cost. Then the dam is then turned on during times of high consumption and when energy costs are higher to support other power plants and obtain a profit for the company.
The power available from a conventional dam is extraordinarily easy to calculate and understand and relies off of the basic equations for mass flow.
$P=\rho *Q*g*h*ɳ$
(Taken from Basic Thermodynamics by Cengel and Boles)
Where ρ is the density of the river, Q is the flow of the river in volume per second, g is the gravitational constant, h is the difference between the inlet and the turbine in term of elevation, and $P=\rho *Q*g*h*ɳ$ is the efficiency of the turbine and generator at converting the energy.
|
2022-01-29 13:52:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5376471877098083, "perplexity": 932.2567551632811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306181.43/warc/CC-MAIN-20220129122405-20220129152405-00425.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=A.%20Regnat
|
• ### Canted antiferromagnetism in phase-pure CuMnSb(1804.03223)
We report the low-temperature properties of phase-pure single crystals of the half-Heusler compound CuMnSb grown by means of optical float-zoning. The magnetization, specific heat, electrical resistivity, and Hall effect of our single crystals exhibit an antiferromagnetic transition at $T_{\mathrm{N}} = 55~\mathrm{K}$ and a second anomaly at a temperature $T^{*} \approx 34~\mathrm{K}$. Powder and single-crystal neutron diffraction establish an ordered magnetic moment of $(3.9\pm0.1)~\mu_{\mathrm{B}}/\mathrm{f.u.}$, consistent with the effective moment inferred from the Curie-Weiss dependence of the susceptibility. Below $T_{\mathrm{N}}$, the Mn sublattice displays commensurate type-II antiferromagnetic order with propagation vectors and magnetic moments along $\langle111\rangle$ (magnetic space group $R[I]3c$). Surprisingly, below $T^{*}$, the moments tilt away from $\langle111\rangle$ by a finite angle $\delta \approx 11^{\circ}$, forming a canted antiferromagnetic structure without uniform magnetization consistent with magnetic space group $C[B]c$. Our results establish that type-II antiferromagnetism is not the zero-temperature magnetic ground state of CuMnSb as may be expected of the face-centered cubic Mn sublattice.
• ### Ultra-high vacuum compatible preparation chain for intermetallic compounds(1611.03392)
We report the development of a versatile material preparation chain for intermetallic compounds that focuses on the realization of a high-purity growth environment. The preparation chain comprises of an argon glovebox, an inductively heated horizontal cold boat furnace, an arc melting furnace, an inductively heated rod casting furnace, an optically heated floating-zone furnace, a resistively heated annealing furnace, and an inductively heated annealing furnace. The cold boat furnace and the arc melting furnace may be loaded from the glovebox by means of a load-lock permitting to synthesize compounds starting with air-sensitive elements while handling the constituents exclusively in an inert gas atmosphere. All furnaces are all-metal sealed, bakeable, and may be pumped to ultra-high vacuum. We find that the latter represents an important prerequisite for handling compounds with high vapor pressure under high-purity argon atmosphere. We illustrate operational aspects of the preparation chain in terms of the single-crystal growth of the heavy-fermion compound CeNi2Ge2.
• ### Single crystal growth of CeTAl$_3$ (T = Cu, Ag, Au, Pd and Pt)(1604.03146)
We report single crystal growth of the series of CeTAl$_3$ compounds with T = Cu, Ag, Au, Pd and Pt by means of optical float zoning. High crystalline quality was confirmed in a thorough characterization process. With the exception of CeAgAl$_3$, all compounds crystallize in the non-centrosymmetric tetragonal BaNiSn$_{3}$ structure (space group: I4mm, No. 107), whereas CeAgAl$_3$ adopts the related orthorhombic PbSbO$_2$Cl structure (Cmcm, No. 63). An attempt to grow CeNiAl$_3$ resulted in the composition CeNi$_2$Al$_5$. Low temperature resistivity measurements down to $\sim$0.1K did not reveal evidence suggestive of magnetic order in CePtAl$_3$ and CePdAl$_3$. In contrast, CeAuAl$_3$, CeCuAl$_3$ and CeAgAl$_3$ display signatures of magnetic transitions at 1.3K, 2.1K and 3.2K, respectively. This is consistent with previous reports of antiferromagnetic order in CeAuAl$_3$, and CeCuAl$_3$ as well as ferromagnetism in CeAgAl$_3$, respectively.
• ### De Haas-van Alphen effect and Fermi surface properties of single crystal CrB2(1304.5994)
Sept. 18, 2013 cond-mat.str-el
We report the angular dependence of three distinct de Haas-van Alphen (dHvA) frequencies of the torque magnetization in the itinerant antiferromagnet CrB2 at temperatures down to 0.3K and magnetic fields up to 14T. Comparison with the calculated Fermi surface of nonmagnetic CrB2 suggests that two of the observed dHvA oscillations arise from electron-like Fermi surface sheets formed by bands with strong B-px,y character which should be rather insensitive to exchange splitting. The measured effective masses of these Fermi surface sheets display strong enhancements of up to a factor of two over the calculated band masses which we attribute to electron-phonon coupling and electronic correlations. For the temperature and field range studied, we do not observe signatures reminiscent of the heavy d-electron bands expected for antiferromagnetic CrB2. In view that the B-p bands are at the heart of conventional high-temperature superconductivity in the isostructural MgB2, we consider possible implications of our findings for nonmagnetic CrB2 and an interplay of itinerant antiferromagnetism with superconductivity.
|
2020-02-24 04:28:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48798319697380066, "perplexity": 5995.184042014558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145897.19/warc/CC-MAIN-20200224040929-20200224070929-00079.warc.gz"}
|
https://wattsupwiththat.com/2012/10/23/chnages-in-earths-gravity-in-relation-to-magnetic-field-measured/
|
# Changes in Earth's gravity in relation to magnetic field measured
Rapid changes in the Earth’s core: The magnetic field and gravity from a satellite perspective
Annual to decadal changes in the earth’s magnetic field in a region that stretches from the Atlantic to the Indian Ocean have a close relationship with variations of gravity in this area. From this it can be concluded that outer core processes are reflected in gravity data. This is the result presented by a German-French group of geophysicists in the latest issue of PNAS (Proceedings of the National Academy of Sciences of the United States).
The main field of the Earth’s magnetic field is generated by flows of liquid iron in the outer core. The Earth’s magnetic field protects us from cosmic radiation particles. Therefore, understanding the processes in the outer core is important to understand the terrestrial shield. Key to this are measurements of the geomagnetic field itself. A second, independent access could be represented by the measurement of minute changes in gravity caused by the fact that the flow in the liquid Earth’s core is associated with mass displacements. The research group has now succeeded to provide the first evidence of such a connection of fluctuations in the Earth’s gravity and magnetic field.
They used magnetic field measurements of the GFZ-satellite CHAMP and extremely accurate measurements of the Earth’s gravity field derived from the GRACE mission, which is also under the auspices of the GFZ. “The main problem was the separation of the individual components of the gravity data from the total signal,” explains Vincent Lesur from the GFZ German Research Centre for Geosciences, who is involved in the study. A satellite only measures the total gravity, which consists of the mass fractions of Earth’s body, water and ice on the ground and in the air. To determine the mass redistribution by flows in the outer core, the thus attained share of the total gravity needs to be filtered out. “Similarly, in order to capture the smaller changes in the outer core, the proportion of the magnetic crust and the proportion of the ionosphere and magnetosphere need to be filtered out from the total magnetic field signal measured by the satellite,” Vincent Lesur explains. The data records of the GFZ-satellite missions CHAMP and GRACE enabled this for the first time.
During the investigation, the team focused on an area between the Atlantic and the Indian Ocean, as the determined currents flows were the highest here. Extremely fast changes (so-called magnetic jerks) were observed in the year 2007 at the Earth’s surface. These are an indication for sudden changes of liquid flows in the upper outer core and are important for understanding the magneto-hydrodynamics in the Earth’s core. Using the satellite data, a clear signal of gravity data from the Earth’s core could be received for the first time.
This results in consequences for the existing conceptual models. Until now, for example, it was assumed that the differences in the density of the molten iron in the earth’s core are not large enough to generate a measurable signal in the earth’s gravitational field. The newly determined mass flows in the upper outer core allow a new approach to Earth’s core hydrodynamics.
###
“Recent changes of the Earth’s core derived from satellite observations of magnetic and gravity fields”, Mioara Mandea, Isabelle Panet, Vincent Lesur, Olivier de Viron, Michel Diament, and Jean-Louis Le Mouël, PNAS 2012; doi:10.1073/pnas.1207346109
http://www.pnas.org/content/early/2012/10/11/1207346109.full.pdf
Article Rating
Inline Feedbacks
katabasis1
October 23, 2012 5:13 am
Another reminder of how little we know for sure about how the Earth actually functions…
John Marshall
October 23, 2012 5:15 am
Interesting. When you think about it the minute fluctuations of gravity make sense.
rgbatduke
October 23, 2012 5:23 am
Perhaps there is more in TFA, but I’m not finding the graphic to be compelling evidence of correlations between the two, not averaged over the globe. I don’t argue that there aren’t any, but I fail to see it in the picture. Perhaps a systematic computation of the spatially and/or temporally averaged correlation function reveals one, but the picture not so much.
rgb
October 23, 2012 5:26 am
And they, of course, do not entertain the idea that the changes in the core field are caused by the Sun, or any external agent [jupiter shine or whatever]. This is real science, and not hand waving.
theguvnor
October 23, 2012 5:28 am
Interesting article and musings here on diamagnetism and the repulsion effect of water and the jet stream patterns which affect climate change:
http://geologymaster.com/cctext3.htm
Bloke down the pub
October 23, 2012 5:40 am
A change in the position of a magnetic anomaly has the potential to alter the amount of gcr reaching the Earth and it’s relative effect. A localised increase in gcr would be more likely to increase cloud cover if it occurred over ocean than over desert.
October 23, 2012 5:49 am
So where does this leave the measurements of ice loss/gain as measured by Grace for Antarctica and Greenland. If the gravity changes the measurements may be bogus.
P. Solar
October 23, 2012 5:50 am
Fascinating.
I would have thought the area to concentrate on was around Patagonia. It seems from figure4 that it is the epicentre of a gravitational oscillation the propagates out across the Pacific and Southern oceans and up at least as far a Mexico.
polistra
October 23, 2012 5:59 am
Correct link for the overall article:
http://www.pnas.org/content/early/2012/10/11/1207346109.abstract?sid=b763519b-f73a-48b0-8931-54bf47d153af
Movie 2 is especially interesting. You can see a strong gradient developing between eastern North America and Greenland; higher gravity in NA and lower in Greenland. What’s right in the middle of that gradient? The North Magnetic Pole, which is accelerating quickly toward the Greenland end of that gradient.
Connects with something I’ve been wondering about:
http://polistrasmill.blogspot.com/2012/09/no-it-doesnt-unless.html
October 23, 2012 6:18 am
And that is not all, as I have already shown, but it is strongly disputed by some experts:
http://www.vukcevic.talktalk.net/EarthNV.htm
http://www.vukcevic.talktalk.net/TMC.htm
One step at a time, and we’ll get there eventually.
October 23, 2012 6:34 am
“The main field of the Earth’s magnetic field is generated by flows of liquid iron in the outer core”.
Not possible. Iron heated above 770C, the Curie point, loses its magnetism. Liquid iron is much hotter than 770C.
http://en.wikipedia.org/wiki/Curie_temperature
Above 770C iron changes from a ferro-magnet to a para-magnet. The difference is that a para-magnet needs an external source to induce magnetism. The most likely source for this external excitation is the Sun’s magnetic and electrical fields.
October 23, 2012 6:41 am
OT Great tip!
Here is a NEW video from the Sidney Institute.Salbys lecture is real dammening with new arguments and data .Its a real CAGW killer! Check the graphs especially presented (27 min).
October 23, 2012 6:41 am
Isn’t that blue area near Australia the place where they find high satellite sea level?
Lower gravity … higher sea level?
LC Kirk, Perth
October 23, 2012 7:06 am
@sunshinehours1
Actually no. Higher gravity = higher sea level, as the seawwater mass is attracted to and mounds up over seabed areas of anomalously high mass/gravity, eg over subemerged seamounts that rise from the deep ocean floor, which can be detected in satellite-borne radar altimeter surveys on account of the elevated sea surface that rises up (ever so slightly) over them.
October 23, 2012 7:09 am
rgbatduke says: October 23, 2012 at 5:23 am
Perhaps there is more in TFA, but I’m not finding the graphic to be compelling…
………
They were looking in a wrong place, here is one more convincing
http://www.vukcevic.talktalk.net/HudsonBay.htm
October 23, 2012 7:20 am
Leif Svalgaard says:
October 23, 2012 at 5:26 am
And they, of course, do not entertain the idea that the changes in the core field are caused by the Sun, or any external agent [jupiter shine or whatever]. This is real science, and not hand waving.
We can all see how paranoid you are with respect to solar system drivers. The old guard is fighting, but the writing is on the wall.
Steve Keohane
October 23, 2012 7:40 am
I was wondering if the changes in gravity are enough to effect air pressure, changing an areas ability to either form or alter the pressure extremity of masses of air, thereby providing a shift in climate, by changing the likelihood or degree of high or low pressure centers’ formation.
KevinM
October 23, 2012 7:56 am
“the measurement of minute changes in gravity caused by the fact that the flow in the liquid Earth’s core is associated with mass displacements. ”
I believe the key word is minute. For 99 percent of practical applications the earth’s gravity can be modeled as a emanating from a single point in the geometric center of the spheroid. Shifts in density thousands of miles down must be borderline immesurable. Anyone got a unit? 1.0e-N g’s?
Louis Hooffstetter
October 23, 2012 8:02 am
Thank you for scientific confirmation of what many of us have long suspected…
my scale lies!
October 23, 2012 8:21 am
This is very interesting. I think it is a good start and little more. Way to many unknowns, to many assumptions and to narrow a hypothesis focus. Never the less it calls into question some of the earlier assumptions about the core, liquid-solid or in between. That in itself is a progress type step.
G P Hanner
October 23, 2012 8:23 am
Interesting.
Once, a long time ago now, I was a navigator in Strategic Air Command. Back in the days of paper, pencil, circular slide rule, and celestial, the only way to navigate over large tracts of open water (or ice) was by dead reckoning aided by clestial observations. Through most of the 1960s I used to fly between the Hawaiian Islands and the Marianas Islands fairly often. One thing I learned quickly was that the usually reliable N-1 fluxgate compass became unreliable on the transit between Honolulu and Guam. Mid-way between Honolulu and Guam the deviation between what the N-1 told me the mag heading was and what a celestial heading check said it actually was differed by as much as five degrees. That much deviation is guaranteed to get you way off course quickly. In all the years of the 1960s I made that crossing I was always careful to keep a constant check of my heading by celestial means over the tract of open ocean between Honolulu and Guam. The compass deviation was always there in spite of what my charts indicated.
I assume that there was some localized change in gravity that changed the magnetic field, and thus the magnetic heading, more than what my charts said was the value I should be using.
October 23, 2012 8:29 am
Leif, you missed the thread where the WUWT demolished the credibility of Grace. Sorry according to the folks here its data is total garbage.
Juan Slayton
October 23, 2012 8:42 am
Geoff Sharpe: the writing is on the wall.
Hmm. The writing may support Dr. Svalgaard’s choice of relevant phenomena. I believe the text was something like “weighed in the balances and found wanting.”
: > )
David Ball
October 23, 2012 8:46 am
Steven Mosher says:
October 23, 2012 at 8:29 am
Sad that you have to resort to painting all with one big brush. Your desperate posts are getting tiresome.
MarkW
October 23, 2012 8:54 am
Leif Svalgaard says:
October 23, 2012 at 5:26 am
And they, of course, do not entertain the idea that the changes in the core field are caused by the Sun, or any external agent [jupiter shine or whatever]. This is real science, and not hand waving.
First off, that level of snark is beneath anyone who claims to be a scientist.
But beyond that. These scientists haven’t proposed a mechanism by which the changes in gravity should be correlated to changes in the magnetic field. All they have done is document a correlation.
Yet for some reason, this correlation is “science”. But the others who point to correlation are charletons.
David Oliver Smith
October 23, 2012 9:17 am
Looks to me like the period of both the gravity and magnetic anomalies are about 11 years. Do we know of any thing else that has a natural periodicity of about 11 years? There does seem to be some lag between the magnetic/gravitational periods and solar cycles 23/24.
daveR
October 23, 2012 9:25 am
Yep, just as expected: practically zero ‘gravity’ recordable anywhere near Rio, Cancun, Copenhagen, Bali etc… Bit of a bouguer. People of Qatar, it’s already upon you – flee now whilst you still can…!!
J Martin
October 23, 2012 10:23 am
Looks suspiciously like it might be tied in to solar cycles.
It’d be nice to see that graph overlaid with one depicting the magnetic field of the sun and also sunspots. 4 curves on one graph.—— Vuk ?
sarc on /
It all makes sense now.
So obviously, co2 causes earthquakes, which move the core, which drives the magnetic field of the earth and this causes the magnetic field of the sun to move which causes sunspots which heat up the co2 and if we don’t return to the stone-age soon we will destroy the planet.
sarc off /
J Martin
October 23, 2012 10:30 am
So why is gravity over land by and large weaker than gravity over sea. I thought rock weighed more than water ?
Or is it that perhaps magma weighs less and is nearer the surface of land than the surface of the sea ?
The odd man out is Australia, presumably time must go more slowly in Australia and they therefore live longer than the rest of us. Maybe that’s why the Ausies are so chilled out.
October 23, 2012 10:59 am
In my earlier post I said they were looking at wrong place.
Hudson Bay was the place where magnetic field could tell some years ahead what the sun is going to do (Dr. S. will not approve of this correlation)
http://www.vukcevic.talktalk.net/HB.htm
and where global temperatures were going to.
http://www.vukcevic.talktalk.net/HudsonBay.htm
Charles Gerard Nelson
October 23, 2012 12:05 pm
My old Grandma when she felt weary used to say ‘I think they’ve turned up the Gravity!’
Andreas
October 23, 2012 12:10 pm
Didn’t Nir Shaviv talk about this in the old documentary about Svensmark and the cosmic ray theory. He talked about the fluctuation in earth’s magnetic field and the cycle of the magnetic reversal of the poles that occurs every now ant then on a geological timescale and should be about to happen in our “near” future and that we presently had two magnetic north poles?
I think it was from this program: http://www.youtube.com/watch?v=anxzOZMU_3k
Richard M
October 23, 2012 12:49 pm
I just have to weigh in on this one. It’s nothing but mass hysteria. These scientists must have watched way too many gravi-toons as kids. /stupid puns
rgbatduke
October 23, 2012 3:39 pm
Sadly, I read TFA and I’m even less convinced that they’ve found anything like what they claim. Their figures suggest that they have looked at the whole globe, picked a place where there is some positive correlation between the two fields for some finite time, and used that to assert that the correlation is causal and real. I look at their “best shot” — the principle mode correlation from an undescribed singular value decomposition fit (the figure above) and to me the only thing that is “correlated” at all is that some of the places where there is a large gravitation anomaly correspond to some of the places where there is a large magnetic field anomaly (where “large” means “almost infinitesimally above the noise” and other stuff that they’ve subtracted to get the anomalies at all, which presumes in and of itself that that “noise” is well enough known to estimate and subtract it, another story altogether).
However, the signs don’t match up terribly well even where the amplitudes are both large, and there are numerous places where one is large and another is neutral, along with no particular correspondence in sign. If I guestimate the spatial autocorrelation of this mode (including the signs) I get something that is very small, not something compelling. If one squared the signals and then generated the correlation of the variance, one would have a much better argument, but still a far from compelling one, and this is squaring what is already a fit to an autocorrelation based on asserting a strong correspondence in a particular (dare I say cherrypicked?) region where the signals appear to match up.
In the end, I’m far from convinced. The physical argument is plausible, but I cannot yet feel confident that they aren’t simply showing colorful graphs of amplified, accidentally coincident quasiperiodic signals that happen to heterodyne in the particular region they are looking at at the particular time they are looking, because signals like that have to heterodyne somewhere and if you focus on that place sure, it will like “meaning” even as it is utterly meaningless.
Note well that I don’t care about the result one way or another — no dog in the race. However, this is not a paper — yet — that I’m impressed with, or convinced by. Maybe the movie (which I did not watch) is more convincing, but again, I’ve watched lots of dynamical simulations of certain kinds of interference noise produce what look like pattern but turns out to be — interference.
Or, maybe I’m missing something. Always a possibility, being old, decrepit, and mildly alcoholic;-)
rgb
John Stojanowski
October 23, 2012 4:38 pm
The relationship between the core(s), plate tectonics, secular variation, pole reversals, sea levels and surface gravity is the subject of my theory, The Gravity Theory of Mass Extinction, a summary of which can be found at:
http://www.dinoextinct.com/page13.pdf
Richard B.
October 23, 2012 6:40 pm
Many of these “mysteries” become …well, less mysterious once one acknowledges the role of the charge field. Search Miles Mathis …not that he has “nailed it” 100%, but I am confident his foundational work will be alive and well long after the Standard Model has been tossed. Mechanics of any “stripe” should contain actual mechanics. Virtual photon = real stupidity.
noaaprogrammer
October 23, 2012 10:25 pm
The Earth itself is not a perfect sphere or even a perfect prolate ellipsoid. With the larger portion of its molton mass being subject to deformation more readily than its brittle shell, and with a slow wobble in its axis of rotation, how does the Earth’s shape deviate from perfectly symmetric 3d geometry over long periods of time?
Surfer Dave
October 23, 2012 11:18 pm
Naive question, but isn’t heat a side effect of any magneto-hyrdodynamic process? Aren’t there losses and resistances that generate kinetic energy in the materials? So, wouldn’t that mean there are abrupt annual and decadal variations in the internal heat of the planet? How big are the variations, do the models include them?
rgbatduke
October 24, 2012 6:41 am
Many of these “mysteries” become …well, less mysterious once one acknowledges the role of the charge field. Search Miles Mathis …not that he has “nailed it” 100%, but I am confident his foundational work will be alive and well long after the Standard Model has been tossed. Mechanics of any “stripe” should contain actual mechanics. Virtual photon = real stupidity.
Dearest Richard B.,
Ah, you’ve strayed into a field I know well, as I have taught classical electrodynamics many times and even written a textbook on it of sorts. So consider me rather familiar with the theory of electromagnetism. I’m also open minded towards iconoclastic ideas, so I naturally did the search you suggested, went to Mathis’ site, and read a couple or three random articles to see if there was anything to be learned there or any reasonable possibility that he was not a crank.
A crank, in case the term is not familiar to you, is somebody that has absolutely no actual background in physics beyond taking an introductory course or two, could not solve a differential equation if their life depended on it, hasn’t the foggiest idea what quantum mechanics is or how it works or what the evidence for it is, but takes it upon themselves to e.g. create the correct unified field theory. One can usually identify their work by a few simple criteria:
* No actual equations, no pathway even from the equations we already know and understand well to their “new” theory.
* No explanation of how their theory will work not for some esoteric thing they’ve fixated on as “needing explanation” (often it does not) but rather for everyday phenomena, like quantitatively explaining the physics that makes the laptop I’m typing this on work, in such a way that enables it to be engineered, in addition to predicting/explaining new phenomena.
* If they refer to any texts or papers at all, they refer to introductory textbooks, which typically oversimplify even incorrect classical physics to make it conceptually simple enough for undergraduates of often indifferent preparation and motivation to grasp, not advanced undergraduate textbooks, graduate level textbooks (with real math!) or — gasp — actual papers on the subject, containing experimental results or actual theories that their new “theory” supposedly confounds.
I’m pleased to say that Mathis’ work passes the crank test with flying colors. There wasn’t a single substantive equation on the pages I visited. Even a page on Coulomb’s Law failed to write down Coulomb’s Law, let along Gauss’s Law for Electrostatics at even the kiddie physics level or Maxwell’s Equations in fully covariant form in terms of the field strength tensor. Strike one. Obviously, this page didn’t explain how his brilliant idea would preserve ordinary electrostatics and hence things like atomic structure. Strike two. The only reference I could see was to Giancoli’s introductory physics textbook. This is a most unfortunate choice as I am currently teaching introductory physics and absolutely despise Giancoli in particular because there are places where it states things that are not true without warning the student in any way that this is the case, and it is otherwise terrible in too many ways to count. Anybody who learned electromagnetism from Giancoli and uses it as a reference for their unified field theory is — well, strike three.
Mathis is a crank. He will always be a crank. His work should not be taken seriously because there is no work to take seriously! He isn’t even a lonely crank. I seem to collect cranks (God knows why, evil I committed in a previous life no doubt) and I could introduce him to a few others who are just as cranky and they could fight out which of their crank theories are the best (lacking any sort of objective criterion, such as “actually agreeing with experiment and observation and consistently connecting with everything we know”, the battle might take a while) — sort of like the scene in The Ruling Class where Peter O’Toole as the God of Light battles the Electrical God and is transformed into a God of Darkness in the process, if there are any movie buffs reading.
Be very careful. Crankiness is highly contagious. If you endorse an obvious crank, you become one. Either take the time to learn real physics yourself well enough to judge — a process that should only take four to six years, given that you are reasonably bright and motivated and mathematically competent through PDEs, linear algebra, and complex analysis (at least) — or view them with the greatest degree of skepticism as probably being cranks, cranks unless proven otherwise.
Feel free to use the short crank test up above. In fact, here — compare the information content of this:
http://www.phy.duke.edu/~rgb/Class/Electrodynamics.php
(grab the PDF, the html is out of date and not maintained) to any of Mathis’ “work” or “papers” online. And note well, my book isn’t even complete — I haven’t had time/motivation to write the simpler prequel on Electrostatics yet, and it omits a lot of stuff. And then there is quantum electrodynamics, given that everything in this (my) textbook is wrong, or at best, a classical approximation of the quantum reality.
rgb
rgbatduke
October 24, 2012 7:12 am
Naive question, but isn’t heat a side effect of any magneto-hyrdodynamic process? Aren’t there losses and resistances that generate kinetic energy in the materials? So, wouldn’t that mean there are abrupt annual and decadal variations in the internal heat of the planet? How big are the variations, do the models include them?
There is a fair bit of argument here, but I have a strong opinion and will offer it up. If you take a core out of the earth’s crust with a drill, wait for the hole to cool, and measure the temperature as a function of depth, you can, given any sort of reasonable estimate of the conductivity of the base rock, transform the measured temperature gradient into the outgoing energy flux. That is, one can measure — quite reliably — how much heat is flowing out from the Earth’s core through the remarkably well insulating crust. The total heat flow is completely inadequate to explain climate variation even if it varied by as much as 100% of its mean value, which it almost certainly does not. It is at least an order of magnitude too small.
This means that it really doesn’t matter what’s going on in there. Radiation, tidal heating, magnetic coupling of flowing magma to the length of lady’s skirts and hence the stock market, you can theorize all you want about what heats the interior and how it moves around, but the measured heat flow so far simply cannot explain the climate and isn’t even a serious contributor, dwarfed by the Sun.
This doesn’t stop climate science cranks from asserting that it is. Iron sun space dragons, for example, have web pages that assert that absurd radioactive processes are occurring and are what keeps the Earth warm. Don’t go there — they are not just wrong, they are stupidly wrong, and more or less direct measurements prove it.
There is, however, one argument that leaves a small window for earth heating being somewhat larger than the current estimate. We can only sample a rather non-spanning set of crustal boreholes (there are some 20,000 of them in the current data used to make this estimate last I heard, but this is still small compared to the number of square meters of Earth surface). It is possible that there are sites — such as subduction zones or places where plates are spreading apart on the ocean’s floor — where the crust is much warmer over a few million square kilometers way down where we cannot properly measure it, where the rate of heating is much greater, possibly significant enough to at least influence the weather or couple to e.g. ENSO or other oscillations that nonlinearly amplify the otherwise small effect.
I view this hypothesis with some degree of skepticism, on the basis of the usual evidenciary rule: while lack of evidence isn’t evidence of lack, it isn’t evidence of presence either. So far, we have no substantial evidence that I’m aware of that there are significant (enough) patches of ocean floor where the rate of heat flow is enough greater to make it a contender for inclusion in the Earth’s energy budget as a “playah” as opposed to a wimp. So maybe there are, but until I see some evidence for them besides the argument that there could be some I’ll doubt it. What I am quite certain of is that in most places — nearly everywhere we’ve looked — the rate of heat flow out of the Earth is tiny.
Naturally, Wikipedia has a complete review article on this:
It’s global average is just about 1/10 of a watt per square meter. To put that in context, it roughly the energy consumed by a 1 watt flashlight bulb spread out over a square with sides of length $\pi$ meters (it turns out $\pi^2 \approx 10$:-). Solar insolation is (in comparision) almost 14000 times greater at the TOA, and around 7000 to 10000 times greater at the Earth’s surface.
In contrast, the human body generates around 100 watts, so $7 \times 10^9$ humans generate around $10^{11}$ watts, compared to the $10^{13}$ watts generated by the Earth. Human beings generate comparable amounts of heat with their bodies — within a couple of orders of magnitude — as does the entire planet. And that doesn’t include the tenfold multiplication we add by burning stuff to make energy. Humans as a species contribute as much energy to the Earth’s total energy budget within a factor of two compared to its geothermal production.
rgb
October 24, 2012 8:26 am
Looks like Africa might be sinking….
The gravity potato has big lumps in Indonesia and Europe and a huge hole in the northern Indian Ocean that extends even north of the Himalayas. I would be more impressed with a study that related to these salient features.
Another recent Hemholtz study uses Black Sea sediments to track the path of magnetic north during the most recent 41kya reversal. The pole first moved to Newfoundland, then back to Alaska, then all the way down to Cuba, then back to BC, juked a couple times before getting down to business and tracking straight down the middle of the Pacific to Antarctica and then back up the middle of the Indian Ocean to its rightful place.
It gets jiggy over land, it does its serious movement over oceans, and it studiously avoids the gravity highs, even circumnavigating the Indonesian potato.
http://www.gfz-potsdam.de/portal/gfz/Public+Relations/M40-Bildarchiv/Bildergalerie_Laschamp/121016_Leschamp_Polwanderung_EN
Hoser
October 24, 2012 1:28 pm
gymnosperm, et al….
Consider generally, the continents are lower density, and thus float higher over the mantle. If you think about it, you would expect graviation to be weaker over continents. Perhaps the high landmass around Tibet does counterbalance a corresponding low density in Asia. The odd thing is low density near southern South America and the Pacific ocean near NZ. Why are these low density? Perhaps the satellite doesn’t properly track over the poles, or the plot doesn’t render a low density under Antarctica. The other bit I find very interesting is the ring shape of higher density around these low density blobs. I wonder whether these are remants of some very large and very ancient impacts that drove lower density material into the mantle, pushing higher density away from the center (a crater rim), and nucleating the continents (cratons). Speculation is fun when you aren’t trying to get paid for it.
temp
October 24, 2012 2:48 pm
Leif Svalgaard says:
October 23, 2012 at 5:26 am
“The sun does NOT effect climate or gravity on the earth.”
No surprise from this comment by leif. Next the sun will have no effect on anything.
Project722
October 24, 2012 6:41 pm
ferd berple says:
“The main field of the Earth’s magnetic field is generated by flows of liquid iron in the outer core”.
Not possible. Iron heated above 770C, the Curie point, loses its magnetism. Liquid iron is much hotter than 770C.
http://en.wikipedia.org/wiki/Curie_temperature
Above 770C iron changes from a ferro-magnet to a para-magnet. The difference is that a para-magnet needs an external source to induce magnetism. The most likely source for this external excitation is the Sun’s magnetic and electrical fields.
Well said. Funny how this fact gets overlooked and mainstream science prefers to sustain the inherent instability of the dynamo at all cost while ignoring new evidence for alternative mechanisms. To much at stake takes precedence over the search for truth I suppose.
October 24, 2012 10:38 pm
More evidence for the electric universe theory.
http://www.holoscience.com/wp/electric-gravity-in-an-electric-universe/
John Stojanowski
October 25, 2012 11:32 am
“So we must say that, at this moment, we have no satisfactory explanation of the observed correlation between delta g and secular acceleration.”
The Gravity Theory of Mass Extinction (GTME), referenced in an earlier link does have an explanation. GTME asserts that inner core has the ability to move away from outer core centricity. And, it does this to maintain the Earth’s total angular momentum.
In this case the inner core is moving away from the LAB (Africa) area, thereby lowering surface gravity in that region. The Earth’s dipole field correspondingly moves away from this same area decreasing the magnetic field strength, as noted in this research paper.
My guess is that the inner core is moving in response to the melting of polar ice caps. Mass is effectively being moved from the polar regions and distributed around the globe…something that would alter the Earth’s total angular momentum if the inner core didn’t shift.
rgbatduke
October 25, 2012 3:15 pm
More evidence for the electric universe theory.
http://www.holoscience.com/wp/electric-gravity-in-an-electric-universe/
… (quote from said article)…
Gravity is due to radially oriented electrostatic dipoles inside the Earth’s protons, neutrons and electrons. [18] The force between any two aligned electrostatic dipoles varies inversely as the fourth power of the distance between them and the combined force of similarly aligned electrostatic dipoles over a given surface is squared. The result is that the dipole-dipole force, which varies inversely as the fourth power between co-linear dipoles, becomes the familiar inverse square force of gravity for extended bodies. The gravitational and inertial response of matter can be seen to be due to an identical cause. The puzzling extreme weakness of gravity (one thousand trillion trillion trillion trillion times less than the electrostatic force) is a measure of the minute distortion of subatomic particles in a gravitational field.
Returning to the definition up above of “a crank”, I see. OK, consider the paragraph above. An electric dipole is a pair of charges in the following arrangement:
(minus) —- (plus)
with a short distance in between. An electric diipole is electrically neutral — it has no net (“monopolar”) electric charge. The field of an electric monopole as is well known varies radially like $1/r^2$. The field of an electric dipole is simply the vector sum of the electric fields of its constituent monopoles, which happens (if anybody cares) to vary like $1/r^3$ from the center of the dipole for $r \gg d$ where $d$ is the length of the dipole (which has dipole moment $\vec{p} = Q\vec{d}$ aligned from the $-Q$ charge to the $+Q$ charge a vector distance $\vec{d}$ away). I won’t bore you with the detailed shape of the dipole field — most people are at least a little bit aware of it from playing with iron filings and bar magnets, but interested people can grab a PDF of my online physics book and/or look up google images of same.
Now, the really interesting thing about electrostatic fields is that they satisfy Gauss’s Law for Electrostatics, one of Maxwell’s Equations. This is:
$\vec{\nabla} \cdot \vec{E} = \frac{\rho}{\epsilon_0}$
or in kiddie-physics integral form (which may actually be more useful in this context:
$\oint_S \vec{E} \cdot \hat{n} dA = \frac{1}{\epsilon_0} \int_{V/S} \rho dV$
In words: The flux of the electric field through any closed surface $S$ equals one over the permittivity of free space times the total charge inside the volume $V$ bounded by the close surface $S$.
This is a law of nature. If it is false, forget about even thinking of evaluating dipole dipole forces and so on, because those forces are a direct consequence of this law. There is an absolutely enormous amount of evidence that it is true, and continues to be true in quantum theory even in an entirely different formulation of the underlying mechanics. If it were not true, atomic structure would not be what it is, my laptop would not work, we would all not exist (because molecules and molecular forces would not exist either).
The absolute, profound, overwhelming, mind-numbing ignorance of whoever wrote the piece of crap page linked above and paragraph quoted is thus clearly confirmed by the simple fact that the “explanation of gravity” offered in the paragraph clearly, unequivocally, irrevocably, and irremedially violates Gauss’s Law. Not only does it violate Gauss’s Law but it is literally a homework problem for first year undergraduates in my physics courses to prove that no, you cannot make a $1/r^2$ force law by arranging dipoles no matter how hard you try. It is part of the way one can see that magnetic dipoles (which exist in abundance) cannot be simply geometrically rearranged to form a magnetic monopole, which would interact with a $1/r^2$ force law with other monopoles. You can look the problem up in my textbook (or perhaps in the review guide for the textbook, can’t recall for sure) too.
To put it bluntly, if you try to arrange dipoles inside any spherical volume so that they produce a $1/r^2$ force law outside of it — like gravity — you will fail. You will fail because the net charge in the volume is zero, and hence the net electric flux out of the volume is zero. That means as much field must flow out as flows in, which in turn means that the field cannot possibly be outgoing in all directions and drop of like an inverse square.
This is more than sufficient to identify the site above as crank science, in case the other stigmata of the site — a complete lack of substantive theory or equations, pointless quotes that were out of context when they were originally stated long ago, let alone now, complete non-sequitors such as condemning the standard model (invented only in the 1970s) as being in trouble from the 1930s (good trick that, time travel and all), the open statements that basically all physicists are stupid and the author of this brilliant stuff is smart — were not.
What is it with this sort of thing? Do climate skeptics have a death wish? Are they trying to convince people that CAGW is correct because its opponents can’t argue against it without invoking electric suns, electric dipole gravitation, bizarre nuclear reactions responsible for heating the Earth, a differential temperature distribution in a gas in thermal equilibrium? Do you have any idea how dumb this makes you look, when you post links like this when you don’t know enough to personally judge whether or not they are credible and where mere common sense should tell you that they are incredible?
Here is a short list of crap, crank science that should never, ever be invoked in a discussion of climate science:
* Electric Suns made out of iron. To be honest, this is the best of them, borderline not crank. At least the person that proposes it puts up a real theory, albeit one that I think is so overwhelmingly falsified by existing evidence that it is difficult to take very seriously, but it is at least as good as transluminal neutrinos.
* Any sort of theory claiming to rediscover or reinvent electromagnetism. Especially theories that were obviously written by the kind of person you would dread sitting next to on an airplane — slightly demented, enormously narcissistic, and completely convinced that they individually are smarter by far than the collective intelligence of all of the best brains in the world over the last few centuries. You get to think the latter only after you win your Nobel Prize in physics, not before, and having met a number of Nobel Physicists, the ones that do it afterwards are still assholes, just assholes with Nobel Prizes.
* Any theory that claims that $PV = NkT$ means that the atmosphere has to be warmer where the air pressure is higher in static equilibrium. Oh, my, God. No.
* Any theory that claims that the Greenhouse effect violates the second law of thermodynamics. No, it doesn’t. Or that it can’t warm the Earth. Yes, it can. Or that it doesn’t exist, there’s no experimental evidence for it. Yes, it does and there is. Get over it. That doesn’t mean CAGW is correct, or even that AGW is correct, but you just make skeptics of both look bad by asserting nonsense as the reason to doubt either one.
* Any theory that claims that the temperature of a planet is determined by the gas pressure on the surface and that the pressure/temperatures of planets fall on a universal curve with absurd dimensioned constants. No, they aren’t, no, they don’t.
* Any theory that what really warms the planet isn’t CO_2, it is (fill in the blank with) tidal forces, geothermal heat, tidal forces due to JUPITER (holy shit! Jupiter?) a magical formula made up of completely periodic harmonic functions (no causal explanation needed), invisible fairies. Actually, of all of these invisible fairies are the best of the lot, if they stand for “something we do not yet know or understand” which is actually always a pretty good explanation where complex phenomena are concerned.
* Any theory that claims with certainty any of the following: Catastrophic Anthropogenic Global Warming is a certainty. Catastrophic Anthropogenic Global Warming is definitely false. CO_2 is without doubt wholly beneficial. CO_2 is without doubt the devil itself. The ocean is certain to rise a meter in the next thirty years. The ocean is certain not to rise a meter in thirty years. The Earth is about to warm by (fill in the blank with any positive number between 4 and 10 C). The next ice age is upon us, and it is certain that temperatures will drop by (fill in the blank with any negative number between 2 and 10C). I mean, get real. We simply don’t know enough to conclude any of these things with a degree of confidence close to “certainty”. I can try to walk you through Bayes’ theorem and the theory of contingent probabilistic knowledge if you like, but what it comes down to is that all of these statements require a large set of assumptions that we do not know are certainly true and that all have to be true to make them plausible as certain knowledge, and collectively the product of the probability that they are all true is just not that big a number.
* Any theory that invokes “holography” or “Mach’s Principle”, unless the person who invokes it has at least a Ph.D. in physics or mathematics, is an expert in differential geometry and relativistic field theory, and can produce an actual theory in algebra slightly too difficult for me to understand. Leonard Susskind can get away with it. Freeman Dyson can get away with it. Joe Blow who has a degree in EE from a community college and didn’t do terribly well in intro physics, ODEs, or linear algebra — leave it at home, please.
* Any theory that is based on crank science. I don’t mean wild and crazy hypotheses that can be falsified — they are welcome in the mix as long as those that propose them accept that normal humans (including themselves, if they are wise) won’t give them much weight until there is some good evidence-based reason to do so). For example, I’m perfectly happy to entertain the possibility that the passage of the sun through dark matter “dust” bands inhomogeneously distributed in the galaxy are responsible for secular variations that are what has really caused the climate to vary over the last few hundred years. Hey, the matter is dark (doesn’t interact with electromagnetic charge directly)! It’s an invisible fairy, but it it has a name, a smidgeon of evidence supporting its existence in the first place, a big question mark on what it might do in the core of the Sun where it interacts with hot, compressed NUCLEAR matter. I’m thrilled with GCR modulation of clouds and hence albedo, although it is far from proven (because it COULD be proven or disproven, and the theory itself is perfectly reasonable and has at least some experimental support). I mean crank science. I put a perfectly reasonable crank science identification guide up above. Use it. Please.
Sigh.
rgb
October 25, 2012 4:10 pm
rgb,
The electrostatic dipoles of atoms in this model are radially oriented, with the inner (towards the center of the celestial body) pole positive, and the outer pole negative. This makes planets effectively electrets.
“Electret (formed of elektr- from “electricity” and -et from “magnet”) is a dielectric material that has a quasi-permanent electric charge or dipole polarisation. An electret generates internal and external electric fields, and is the electrostatic equivalent of a permanent magnet. Oliver Heaviside coined this term in 1885. Materials with electret properties were, however, already studied since the early 18th century.
Similarity to Magnets
Electrets, like magnets, are dipoles. Another similarity is the radiant fields: They produce an electrostatic field (as opposed to a magnetic field) around their perimeter.”
Nothing impossible going on here.
u.k.(us)
October 25, 2012 4:32 pm
Steven Mosher says:
October 23, 2012 at 8:29 am
Leif, you missed the thread where the WUWT demolished the credibility of Grace. Sorry according to the folks here its data is total garbage.
================
Did you just appeal to authority, I thought so.
Yep, skeptics reside here, get used to it.
It is our haven from the bombardment of the green religion, enter at your own risk.
October 25, 2012 4:36 pm
Also, from the article: “The simple fact is that we have no concept of why matter manifests with mass.”
That is why the taxpayer has had to pony up for CERN, to find the God particle – which gives mass but does not interact in other ways.
All that is being said here is that in a large body, atoms are very slightly electrically distorted, the inner pole positive and the outer pole negative. Mass then is the measure of the ease of electrically deforming a particle, with large particles being easier to deform, and so appear more massive.
Nothing narcissistic going on here, just looking at mass without any newly invented particles to explain it. Diagrams: http://zekeunlimited.files.wordpress.com/2012/04/wt4gravity-by-w-thornhill.jpg
http://zekeunlimited.wordpress.com/2012/04/15/mass-a-simple-model-requiring-no-newly-invented-particles/
October 26, 2012 7:13 am
I plotted the path of magnetic north during the 41kya reversal on the current geoid and it basically hopped between gravity holes.
http://wp.me/p1uHC3-5P
rgbatduke
October 26, 2012 9:50 am
Electrets, like magnets, are dipoles. Another similarity is the radiant fields: They produce an electrostatic field (as opposed to a magnetic field) around their perimeter.”
Nothing impossible going on here.
Dear Zeke,
Wrong. Geometrically wrong. Look, if you understand Gauss’s Law, you understand why this is impossible. If you understand Dirac’s construction of a “magnetic monopole” out of magnetic dipoles, you’d understand even more, why the best possible attempt involves introducing a topological defect. If you don’t understand these things, why try to correct me when I have taught both graduate and undergraduate electrodynamics for over thirty years and written two books on the subject?
My Ph.D. dissertation was basically an application of multipolar methods in quantum mechanics. My graduate textbook has the world’s best description (one of the only full derivations and descriptions) of the use of consistently defined and derived vector multipoles for describing the electromagnetic field. I routinely teach even undergraduates the importance of both monopoles and dipoles in even an elementary description of electricity and magnetism.
You’re not going to be able to correct me here, not because I’m a mean or stupid person, not because I’m participating in a great conspiracy to defend warmists, not because I’m hostile to iconoclastic but physically plausible ideas, but rather because the proposition is absurdly stupid and anybody who understands even introductory electromagnetism at all well can see why. It’s one of the first things one teaches students when introducing the multipolar series (monopole, dipole, quadrupole etc) as a means of describing electric or magnetic or electromagnetic fields in terms of integral moments over the charge distribution.
So let me say it clearly and distinctly, so that there is no mistake. There… is… no… way… to… make… a… monopole… out… of… dipoles….
None. Cannot be done. It violates Gauss’s Law. The best possible effort in this regard is Dirac’s construction of a magnetic “monopole” out of a vector potential that produces a monopolar field in all space except on a defect line. Ever heard of that? Able to write down the vector potential in question and prove that the field is monopolar except on the defect line? Understand how the resulting field does NOT violate Gauss’s Law? Of course not, but that is one of the homework problems I often assign in graduate E&M.
This is also nothing at all like “arranging a bunch of dipoles on the surface of a sphere with their negative charges pointing in and their positive charges pointing out”, as described in the crank site linked above. If you understood even the SIMPLEST bits of E&M, you’d recognize that the electrostatic field satisfies the superposition principle, so that the field outside of the sphere is the vector sum of the fields of the equal and opposite electric charges in the dipoles and rigorously vanishes in the limit that you e.g. create a uniform dipolar surface layer of charge, and never ever varies like $1/r^2$ for any $r$ outside of the sphere.
So once again. Wrong. Please don’t be a crank or endorse crankery. There are too damn many cranks out there; it makes the mere iconoclasts difficult to identify.
rgb
October 26, 2012 10:34 am
gymnosperm says: October 26, 2012 at 7:13 am
…….
Currently there is a bifurcation of geo-magnetic field in the Northern hemisphere in contrast to uniformity of the ‘south pole’s field’. Hence in the NH ‘dip needle’ identifies resultant vector, however the strongest field since mid 1990s is found in the central Siberia, to the north of lake Baikal. Prior to 1990s strongest field was in the vicinity of Hudson Bay.
http://www.ngdc.noaa.gov/geomag/data/mag_maps/pdf/F_map_mf_2010.pdf
During last 100 years the Hudson Bay has been on decline, while Siberia is getting stronger
http://www.vukcevic.talktalk.net/NFC.htm
Project722
October 26, 2012 12:02 pm
rgb – on a side note here if you please. What is your take on the magnetic portals that supposedly form every 8 minutes that provide a direct pathway between the earth and sun? Obvioulsy we know electric currents flow through them hence the magnetism. But what do you suppose the influence is from these portals and how they may affect things on the surface on the sun like sunspot intensity, or perhaps the earths magnetic field?
rgbatduke
October 26, 2012 2:16 pm
rgb – on a side note here if you please. What is your take on the magnetic portals that supposedly form every 8 minutes that provide a direct pathway between the earth and sun? Obvioulsy we know electric currents flow through them hence the magnetism. But what do you suppose the influence is from these portals and how they may affect things on the surface on the sun like sunspot intensity, or perhaps the earths magnetic field?
I don’t have a take on them because I’ve never heard of them until just now. The NASA site calls them FTEs — “flux transfer events”. I’m guessing — very much guessing as I only just looked at one not terribly technical article on them — that they are the results of magnetohydrodynamic instabilities that pinch off the solar magnetic field from the Earth’s magnetic field as everything rotates and revolves, but then reconnects the flux lines. There may be some sort of coupled capacitative effect involved too — charged particles build up when it is closed that are responsible for reopening/reconnecting the flux lines. But magnetic fields produced by things like plasmas are very, very complex and highly nonlinear, and I’m not an expert even on the solutions that we know (let alone the ones that we can’t solve for, only observe). I’ll leave that for somebody like Lief that no doubt lives and breathes plasma physics and magnetohydrodynamics. That’s all a few thousand degrees K above my areas of expertise…;-)
rgb
October 26, 2012 2:31 pm
Hi Dr.Brown
It is a bit of NASA hype about geomagnetic storms (magnetic portals, magnetic ropes, magnetic cloud)
http://wwwppd.nrl.navy.mil/prediction/cloud.html
http://wwwppd.nrl.navy.mil/prediction/storms.html
rgbatduke
October 26, 2012 2:40 pm
For grins, short problem 6 from my review guide for first year intro E&M:
problems/short-build-a-monopole.tex
Roger (who we can imagine owns a motorcycle repair shop in Morehead City)
hears about magnetic monopoles and decides to build one and end all the confusion. He gets a few hundred bar magnets and glues them all hedgehog-fashion
onto an iron sphere 10 cm in radius so that the north poles face out and the sphere is tightly packed and covered. He reasons that the field of the south poles will meet in the middle and cancel out, while the north pole fields will look just like a monopole.
If the total summed pole strengths (“magnetic charge” of the north poles as determined by their magnetic dipole moments) of all the bar magnets is $Q_m$ , approximately what magnetic field will Roger observe one meter away from his “monopole”? Why? (Draw a picture, invoke a law, something…).
The answer is “zero”, not $k_m Q_m / r^2 \hat{r}$. It is zero because the magnetic field lines form closed loops that run through the actual magnetics. Hence the magnetic flux through a closed surface a meter away must vanish (a.k.a. Gauss’s Law for Magnetism, one of the homogeneous Maxwell Equations) — the outgoing field must equal the incoming field. The field won’t quite be zero as the bar magnets or refrigerator magnets are discrete and their fields won’t precisely line to cancel up so there will be a very weak “ripple” at that radius with zero total flux and very, very weak field strength, but zero is by far the best answer.
The exact same thing is true for electric fields produced by physical electric dipoles (made with actual physically separated electric monopoles), only worse. In this case one can indeed make a nearly perfect “surface layer” of dipoles, such as a conducting sphere with negative charge inside and concentric with a conducting sphere of slightly larger radius with positive charge, which is the limiting case of Zeke’s “radially packed electrets” — only now it is a perfect textbook case, covered in every single book on introductory electromagnetism in the world — a spherical capacitor. Here is the proof that the field outside is zero:
$\oint \vec{E}\cdot \hat{n} dr = E_r 4\pi r^2 = \frac{Q_{tot}}{\epsilon_0} = \frac(Q - Q}{\epsilon_0} = 0$
therefore
$E_r = 0$
Wow, is that so difficult?
rgb
rgbatduke
October 26, 2012 2:44 pm
OK, sorry, nailed once again by the cosmic lack of an “edit” or “preview” button such as those found on Slashdot or Goodreads. So moderator, if you could remove the superfluous boldface markup. I’ll try a second time to fix the Gauss’s Law instance here:
$\oint_S \vec{E} \cdot \hat{n} dA = E_r 4\pi r^2 = \frac{Q_{tot}}{\epsilon_0} = 0$
rgb
Project722
October 26, 2012 3:30 pm
vukcevic says:
October 26, 2012 at 2:31 pm
Hi Dr.Brown
It is a bit of NASA hype about geomagnetic storms (magnetic portals, magnetic ropes, magnetic cloud)
Perhaps – Still these portals with electric current flowing
between sun and earth, and that earth’s ionosphere and surface –
separated by an insulating atmosphere – behaves like a leaky
capacitor that regularly charges up from the sun and breaks down –
could make the connection to phenomena like sprites and elves,
poorly understood plasma phenomena occurring high in the
atmosphere.
October 26, 2012 9:22 pm
Thank you. The only reason I said anything is because you put a little too much mustard on your personal attacks on the physicists who have worked on the question of why matter has mass.
October 26, 2012 9:45 pm
Each atom’s nucleus is very weakly offset towards the center of the planet. No monopoles are discussed. The planet itself is an electret. Electrets were discovered in 1733 by Stephen Gray and rediscovered by Dr. Mototaro Eguchi in the 20’s, and some of his wax electrets still have thier charge, and possess a N & S pole. Mass and gravity are both happening at an atomic level in this model. I wanted to give a simpler and accurate representation of what was being said than what you gave, but let’s not bicker then as you say.
If you can help me, Dr Brown, I would like to ask you what the final cost of CERN has been, including maintenance, power, and repairs, to date. I have tried to find this through searches, and would appreciate anything you can tell me.
PS Regarding imparting mass with the God particle, W. Thornhill writes, “the Higgs particle is like no other in our experience, since all normal matter is composed of electric charges that respond to electromagnetic influences… However, we observe that the mass of a charged subatomic particle is altered by the application of electromagnetic forces. At its simplest (and Nature is economical in our experience) it indicates that mass is related to the storage of energy within a system of electric charges inside the particle. That’s what E = mc2 is telling us. So how can a massive particle be constructed without electric charge? It shows the problem inherent in leaving physics to mathematicians — there is a disconnect between mathematical concepts and reality.
The notion that subatomic particles exhibit mass as a result of their interaction with imaginary Higgs particles occupying all of empty space like some form of treacle should have caused a sceptical uproar, if it weren’t for the appalling apathy of the public toward such nonsense. The ‘annihilation’ and ‘creation’ of matter is invoked when particles at particular points arise from ‘fields’ spread over space and time. Higgs found that parameters in the equations for the field associated with his hypothetical particle can be chosen in such a way that the lowest energy state of that field (empty space) is not zero. With the field energy non-zero in empty space, all particles that can interact with the Higgs particle gain mass from the interaction.
This explanation for the phenomenon of mass should have been stillborn if common sense was used. To begin, the annihilation and creation of matter is forbidden by a principle of physics. It is tantamount to magic. Second, field theory is a purely imaginary construct, which may or may not have physical significance. And third, it is not explained how the Higgs particle can have intrinsic mass but no charge and yet interact with normal matter, which has charge but is said to have no intrinsic mass. Rather than explain the phenomenon of mass, the theory serves to complicate and confuse the issue. The most amazing feature of this \$6 billion experiment is the confused and illogical thinking behind it.”
rgbatduke
October 27, 2012 11:46 am
Zeke, until you have a teensy weensy clue about Maxwell’s equations and the electrostatic field, I would strongly suggest that you abandon suggesting that the earth is an electret or any other utter nonsense of that general variety. Indeed once you have actually learned electrodynamics and a half dozen other things you might — might, I say — be qualified to talk about why particles have mass. Or charge. Or spin. Or almost any other intrinsic property.
However, you clearly do not have such a clue. If you did, you would understand — or at least, take the time to learn — why charge-neutral matter cannot have a radial electrostatic field that drops off like $1/r^2$. I repeat, the reason is called “Gauss’s Law”, which should make it really easy to look up in my or other online physics textbooks. In the meantime, your “theory” just makes you look silly, and when you offer it as a reason that the Higgs boson doesn’t exist — very probably offering it some months after it has been actually observed, to top if off — it makes you look very silly.
Kind of like a crank.
|
2022-08-09 22:27:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5347310900688171, "perplexity": 1100.2748842842318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00760.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-for-do-in-hi-ho-di-do
|
# How do you solve for do in (hi)/(ho)=-(di)/(do) ?
Sep 3, 2016
$\left(\mathrm{do}\right) = - \frac{h o \times \mathrm{di}}{h i}$
#### Explanation:
If an equation has one fraction on each side, you can get rid of the denominators by cross-multiplying.
$\frac{h i}{h o} = - \frac{\mathrm{di}}{\textcolor{red}{\mathrm{do}}}$
$h i \times \textcolor{red}{\mathrm{do}} = - h o \times \mathrm{di} \text{ "larr div } \left(h i\right)$
$\textcolor{red}{\mathrm{do}} = - \frac{h o \times \mathrm{di}}{h i}$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If an equation has one fraction on each side, you can invert the whole equation. This will put $\mathrm{do}$ in the numerator.
$\frac{h o}{h i} = - \frac{\textcolor{red}{\mathrm{do}}}{\mathrm{di}} \text{ } \leftarrow \times - \mathrm{di}$
$- \frac{h o \times \mathrm{di}}{h i} = \textcolor{red}{\mathrm{do}}$
$\textcolor{red}{\mathrm{do}} = - \frac{h o \times \mathrm{di}}{h i}$
|
2020-07-09 05:27:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9485829472541809, "perplexity": 671.7444503911458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655898347.42/warc/CC-MAIN-20200709034306-20200709064306-00049.warc.gz"}
|
https://datascience.stackexchange.com/questions/107379/how-can-i-determine-the-accuracy-of-a-hand-drawn-line-of-best-fit
|
# How can I determine the accuracy of a hand-drawn line of best fit?
Here's the situation:
• Users have manually drawn a straight line of best fit through a set of data points. I have the equation (y = mx + c) for this line.
• I have used least-squares regression to determine the optimal line of best fit for the same data.
How can I assess the quality of the user-drawn LOBF? My first thought was just to work out the uncertainty between the two gradients and the two y-intercepts, but that produces dramatic errors when the true value of either the gradient or the y-intercept is close to zero. Any suggestions, please?
$$\sum_{i=1}^n\bigg( Y_{true, i}-Y_{predicted, i} \bigg)^2$$
If you want to automate this task for a computer to do it, calculate the slope and intercept of your guesses line of best fit. Pick two points on your line, $$(x_0,y_0)$$ and $$(x_1,y_1)$$.
$$y-y_0 = \dfrac{ y_1-y_0 }{ x_1-x_0 } \bigg( x-x_0 \bigg)$$
When you do the algebra to solve for $$y$$ in terms of $$x$$, you will have the equation for the line you’ve fitted by eyeballing it. You then can use this to make predictions to feed into the square error formula.
|
2022-05-23 20:09:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5770478248596191, "perplexity": 224.6275323358057}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00521.warc.gz"}
|
http://mathoverflow.net/feeds/question/80056
|
Using slides in math classroom - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-25T16:17:35Z http://mathoverflow.net/feeds/question/80056 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/80056/using-slides-in-math-classroom Using slides in math classroom Keivan Karai 2011-11-04T15:18:47Z 2011-11-10T04:18:26Z <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p> <p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p> <p>It would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p> <p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80058#80058 Answer by Thierry Zell for Using slides in math classroom Thierry Zell 2011-11-04T15:43:55Z 2011-11-04T15:43:55Z <p>I think you already touched on the two main points: pretty pictures are so much better than anything done on a chalkboard is the pro, but you cannot decently unwind any argument on slides. </p> <p>I've used them intensively, I do it a lot less now. (Here's a con you did forget about: they take a <strong>lot</strong> of time to prepare, even when you're only revising them.) If the room lends itself well to it, the hybrid method is best: use the slides only when they beat the board. Rooms that have a screen in the corner, rather than in front of the board, are best for this.</p> <p>Also, it seems that it's easier to fall asleep to slides than to a lecture, so be aware of that. Make sure that the room is never too dark (the quality of the screen material can be critical here too: good screens should be readable in full light). And switching your routine, never showing slides for too long, helps keeping the students awake.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80062#80062 Answer by BSteinhurst for Using slides in math classroom BSteinhurst 2011-11-04T16:02:19Z 2011-11-04T16:02:19Z <p>If you intend to post your slides online after class then you run the risk of students not even taking notes/digesting the material on their own (I've had this feeling myself) or feeling that they don't have to attend class. This is obviously a con but the other side is that the students then have a good outline of what you talked about in class with your emphasis included. </p> <p>I second Thierry's comments. </p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80063#80063 Answer by David White for Using slides in math classroom David White 2011-11-04T16:17:25Z 2011-11-04T16:17:25Z <p>I took a course at PCMI some years ago from <a href="http://people.reed.edu/~davidp/homepage/teaching.html" rel="nofollow">David Perkinson</a> (Reed College). He did an amazing job and single-handedly convinced me it was possible to teach well from slides. Check out <a href="http://people.reed.edu/~davidp/pcmi/index.html" rel="nofollow">this link</a> to see examples of his slides. As the other answers have mentioned, it seems necessary to use slides only in conjunction with the board. Perkinson did this, but also included a useful trick: he created handouts from the slides for students to write on, but left blank spots in those handouts so they had to write the proofs themselves based on what he said, showed on the slides, and wrote on the board. </p> <p>Professor Perkinson is also a wizard of sorts with mathematica, and he was able to create awesome graphics using it. I don't think his mathematica code is online, but I'll bet he'd be willing to share if someone emailed him. He may also have tricks to reduce prep time, as this was the sort of thing he liked thinking about.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80069#80069 Answer by André Henriques for Using slides in math classroom André Henriques 2011-11-04T17:08:38Z 2011-11-04T17:08:38Z <blockquote> <p>I would never, never use slides for a course.</p> </blockquote> <p><i>That said:</i><br> I do sometimes show my student pictures taken from the web.<br> For example, I recently showed <a href="http://upload.wikimedia.org/wikipedia/en/0/03/Compound_of_five_cubes.png" rel="nofollow">this picture</a> to the students in my group theory class in order to illustrate the isomorphism between $A_5$ and the group of symmetries of a dodecahedron.</p> <p>Also, I sometimes prepare animations with <a href="http://www.geogebra.org/cms/" rel="nofollow">Geogebra</a> that I then show during class. Here's an <a href="http://www.staff.science.uu.nl/~henri105/Teaching/LogPowSer.html" rel="nofollow">example</a> (click and drag the blue node). Of course, it's even better to create the graph in front of the students: Geogebra is good for that. My philosophy is that students should be shown things <b>being created</b>, not ready made. But I'll admit that this is not always possible...</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80080#80080 Answer by matthias beck for Using slides in math classroom matthias beck 2011-11-04T18:37:30Z 2011-11-04T18:37:30Z <p>I use a hybrid version for some of my classes which take place in a room that allows this: I use computer slides (and animations, computations, etc.) <b>and</b> the board. I learned this from my colleague Serkan Hosten, and it works really well in some classes. E.g., I use slides for definitions and theorems (including the relevant ones from the previous lecture) but then work out examples and proofs on the board. This has the obvious advantage of spending time on exactly the items that need time and just the right pauses to get digested, but it also has nice side effects: e.g., the statement of the theorem will stay on the screen even if I'll have to clean the board.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80086#80086 Answer by Jaap Eldering for Using slides in math classroom Jaap Eldering 2011-11-04T21:27:49Z 2011-11-04T21:27:49Z <p>I just finished teaching a course on linear algebra to non-math students. I used a combination of latex-beamer slides and blackboard. One advantage of the slides was being able to do examples of Gauss elimination and inversion of matrices quicker than on the blackboard and without making mistakes. On the other hand, I feel that slides can easily make a lecture less interactive.</p> <p>And, I must agree with Thierry Zell: it took quite some time to prepare these slides, even though I could adapt the latex sources from the previous people teaching this course.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80097#80097 Answer by Adrien Hardy for Using slides in math classroom Adrien Hardy 2011-11-04T23:53:03Z 2011-11-04T23:53:03Z <p>It also depends on how do you think it is the best for your students to learn : By listening (hopefully carefully) to the course, and then reading notes you'll provide them, OR by letting them write themselves the content. </p> <p>I don't like to much the first option, certainly because I've not been used too, and I believe it is a huge advantage to write yourself everything at the moment, because of obvious memorization advantages (it was important for me to have my own notations, a kind of taming procedure) and, once you read your notes again, you usually remember where was the parts the teacher got enthusiastic. </p> <p>Considering then the second option, it is for me an evidence that blackboard win :</p> <ul> <li>you give the time to the students to write since you do it yourself</li> <li>the statements stay longer (at least if you have enough blackboards, or just keep the main Theorem on !) </li> <li>there is more interactions content-author-students</li> <li>your eyes are not constantly dried by this terrible white light</li> <li>it allows improvisation</li> <li>it is more classy (personal point of view, I agree) </li> </ul> <p>Against :</p> <ul> <li>it is suicidal (that is terribly soporific for the students) to NOT prepare a lot your presentation, at least as long as you should spend time one slides</li> <li>it requires a good handwriting from the teacher </li> <li>its not convenient for drawing complex pictures</li> </ul> <p>My conclusion is then the same than André Henriques !</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80098#80098 Answer by Will Jagy for Using slides in math classroom Will Jagy 2011-11-05T00:11:42Z 2011-11-05T00:11:42Z <p>I have a story in the middle. I hurt my right shoulder over time, by 1994 it was simply too much to write on a blackboard, at least overhead. So, pre-Beamer, I wrote up these slides on transparencies with colored pens. These were unusually well-prepared lessons for me, I had everything worked out, it was all clearly my work, and I still had plenty of blank slides on which to write new material when needed. That is the hardest I have ever worked on course preparation. </p> <p>They did have class questionnaires, sent to administration and never seen by me, later the chairman told me how very much the students hated the slides. They were never fond of me but I think that was a separate item, the slides made it worse than it would have been...I suppose my question now is, would things have been different if I also gave each student photocopies of the slides for that day? </p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80103#80103 Answer by Jim Conant for Using slides in math classroom Jim Conant 2011-11-05T02:04:17Z 2011-11-05T02:04:17Z <p>Full disclosure: I stole the following idea from my wife. </p> <p>For some courses, like calculus, I will create slides with beamer, leaving blank spots to fill in during class. I then print the slides out on paper and present them with the document camera during lecture. When I get to an example, I will work it out by writing on the paper during class, and have it projected in real time. This approach combines the advantages of blackboard talks where you work things out in realtime, with the advantages of beamer presentations where you can present nice graphics and also have an outline to limit getting distracted and wandering off on tangents. </p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80106#80106 Answer by Thierry Zell for Using slides in math classroom Thierry Zell 2011-11-05T03:03:22Z 2011-11-05T03:03:22Z <p>I've already given my opinion, and this is more of a remark: how the pros and cons are weighed between blackboard and slides should be influenced by a whole collection of classroom factors, and the first one among them should probably be class size.</p> <p>This is a rather obvious remark, but I thought it was worth pointing out; Jaap Eldering's answer brought it to the forefront for me, because he mentioned doing examples on slides to avoid making mistakes, and my first reaction was: "making mistakes in class is good!". </p> <p>And then it occurred to me that I can use mistakes in the classroom fairly effectively because I only teach small classes. In a big classroom, I would simply not be able to receive instant feedback efficiently enough to do this as well, and I would not be comfortable trying.</p> <p>In a very large lecture hall, the blackboard will often lose a lot of its advantages given how large you have to write.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80109#80109 Answer by GMark for Using slides in math classroom GMark 2011-11-05T03:38:34Z 2011-11-05T03:38:34Z <p>My solution is to use a tablet PC (the pen-enabled kind, not the modern entertainment tablets like the Ipad),hooked up to a data projector. </p> <p>I have "lecture templates" which contain the copying intensive stuff (statements of theorems, definitions, graphs, complex diagrams) on the page, along with plenty of blank space for annotation. Those are on a website prior to the lecture. The students print them off at home, and bring them to class. I then annotate the lecture notes (using a pdf annotator and the tablet pen) and the students take notes as they wish. </p> <p>This, I feel, combines the benefits of having some complex material prepared ahead of time with the benefits of having arguments, calculations etc. developed in real time, rather than canned in advance. So it avoids the canned slides-whizzing-by problem. </p> <p>The only disadvantages I can see are the limitations of screen size. Sometimes nothing replaces the virtue of a big whiteboard, and having every part of a long development in front of your eyes all at once. In that case, I use a whiteboard.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80113#80113 Answer by Terry Tao for Using slides in math classroom Terry Tao 2011-11-05T04:55:35Z 2011-11-05T04:55:35Z <p>Slides can, in principle, enhance a lecture, but there is one important difference between slides and blackboard that definitely needs to be kept in mind, and that is that slides are much more transient than a blackboard. Once one moves on from one slide to the next, the old slide is completely gone from view (unless one deliberately cycles back to it); and so if the student has not fully digested or at least copied down what was on that slide, he or she will have to somehow try to catch up in real time using the subsequent slides. Often, the net result is that the student will become more and more lost for the remainder of the lecture, or else is spending all of his or her time transcribing the slides instead of listening in real time.</p> <p>In contrast, given enough blackboard space, the material from a previous blackboard tends to persist for several minutes after the point when one has moved onto another blackboard, which allows for a less frantic deployment of attention and concentration by the student. </p> <p>If one distributes printed versions of the slides beforehand, then this difficulty is mostly eliminated. Though sometimes it takes a few lectures for the students to adapt to this. Once, in the first class in an undergraduate maths course, I said that I wanted my students to try to understand the lecture rather than simply copy it down, and to that end I distributed printed copies of the slides that I would be lecturing from. (The slides were in bullet point form, and I would expand upon them in speech and on the board.) I then found that for the first few lectures, the students, not knowing exactly what to do with their time now that they did not have to take as much notes, started highlighting all the bullet points on the printed notes. It was only after I threatened to distribute pre-highlighted lecture notes that they finally started listening to the lecture (and annotating the notes as necessary).</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80160#80160 Answer by Andrew Stacey for Using slides in math classroom Andrew Stacey 2011-11-05T22:18:13Z 2011-11-05T22:18:13Z <p>I'm going to try to answer the actual question rather than saying whether I think that chalk or projector is better. That "question" being:</p> <blockquote> <p>It would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have notices, and ways you have found to optimize this.</p> </blockquote> <p>(Though I'm curious about the request for ways found of optimising pitfalls!)</p> <p>I switched to using beamer slides 2.5 years ago. I'm partway through the fifth course that I've given using slides (and the course immediately prior to those was given on chalkboard but having prepared them as slides - a half-and-half experiment). By-and-large, I would say that I give better lectures using the slides than I used to when giving chalk talks. The following is a fairly disorganised list of my thoughts on both why I chose to switch and things that I've learnt in the process. I hope that this will be of use to you. Feel free to contact me for more details, and we've also recently been discussing this a bit on the <a href="http://www.math.ntnu.no/~stacey/Mathforge/nForum" rel="nofollow">nForum</a>.</p> <ol> <li><p>A big reason for me switching was that I teach in English in a Norwegian University. Although the students have excellent English, it is not their native language. It takes them longer to copy from the board, and their error rate is higher, so more time in a chalk-talk is wasted waiting for them to catch up than I felt I could allow. Giving the lecture using slides meant that I had much more control over where the students were focussing at any particular time (mainly, I wanted this to be on me).</p> <p>(To be clear: the time taken was <em>in addition</em> to the necessary time for students to process ideas that they've just been told about. Of course, pauses are necessary. But pauses by happenstance - because the students are busy copying the board - are not the best pauses.)</p></li> <li><p>As a consequence, I <em>always</em> make my slide notes available beforehand. Admittedly, sometimes it was at 11pm before an 8am lecture, but no-one's perfect! They can get the actual presentation, a compressed version (the <code>trans</code> option), and a handout version (they are strongly encouraged only to print the latter). That way, they can read in advance what I'm going to show them, and they can bring the handout version along to add any additional notes if they wish.</p></li> <li><p>The handouts are not a substitute for going to the lecture. The slides are not a summary of the lecture, they are what I want the students to be able to see while I am talking to them about something. Ideally, when the students look at the notes afterwards then they will be able to remember (more-or-less) what I've said. But if they weren't at the lecture then they won't have anything to remember so the handouts will be of less use (not of no use, it will still say what topics were covered so they can find out about them by other means).</p></li> <li><p>Lectures never go completely as planned. But <strong>never</strong> use the chalk-board and the screen. Whenever I see someone doing this at a conference I want to run out of the lecture hall screaming. Not only will the lighting be completely different for both, but also the students will have the wrong mindset and will take time to make the switch. Use a system whereby you can write on the presentation (and can bring up blank pages if needed). You can even leave deliberate gaps if you want! As well as not requiring a change in gear, you can then make the annotations available afterwards (and have an easy record of the annotations that you made when you revise the slides for next year). I've used xournal (for Linux), jarnal (when forced to use Windows), and am currently using an iPad (despite what's said elsewhere, this is extremely usable for this). (Incidentally, I'd say that going the other way is acceptable: if you are primarily using the board and then want to show a couple of fancy pictures then so long as it doesn't take an age to set-up the projector then it's okay.)</p></li> <li><p>Practise. Get a system so that your writing on the screen is acceptable (don't worry about perfect), you know how your program works, and you can change pages easily (preferably without looking at the machine).</p></li> <li><p>Yes, it usually takes longer to prepare the slides - first time. But once you're used to the flow of writing a beamer presentation then that aspect doesn't actually add that much more. What probably adds the most time is that you are now forced to completely prepare the lecture in advance, rather than "winging it" and claiming that it is "good for the students to see the professor make mistakes". (You can probably guess my reaction to those!) It can take some effort to get a really nice system, I think I have one, and now it doesn't take me long to prepare a presentation.</p> <p>On that note, preparing your notes in LaTeX makes it much easier to prepare it in "layers". First, lay out your lesson plan (you do have one, right?), then add the frame titles, finally add the content of each frame. Then go back and adjust the lesson plan according to what did and didn't fit as you expected. (And it's possible to produce the lesson plan from the same source as the presentation.)</p> <p>And when you come to reuse the slides, it's much faster.</p></li> <li><p>Think always "What can the students see right now?". If you want them to be able to refer to more than you can fit on a slide, consider giving them a "cheat sheet" handout as well. Slightly ironically, giving the lectures using LaTeX means that I am much more aware of how the presentation <em>looks</em>, something that is just as important as what is in it.</p></li> <li><p>As hinted above, my slide notes would not form a good set of "traditional lecture notes" from which to revise. But then I don't believe that the chief aim of a lecture should be to produce that. Again, consider using other methods for this. For my course, I have a wiki where I can put more lengthy arguments. I use homework questions to "force" the students to read the wiki.</p></li> </ol> <p>That's all that I can think of right now. You can get an idea of what my lectures look like by visiting the home page of my current course: <a href="http://mathsnotes.math.ntnu.no/mathsnotes/show/TMA4145+Home+Page" rel="nofollow">http://mathsnotes.math.ntnu.no/mathsnotes/show/TMA4145+Home+Page</a>.</p> <p>What I've said above is phrased a bit like advice, but it's really just a list of things that spring to mind when I think about how I've adapted. I will give one genuine piece of advice: don't base your lectures on what worked best for you. The reason why should be obvious! But to illustrate the absurdity, let me note that the undergraduate course in which I learn the most and where I really feel that I understood and still understand the topic the best, was the worst lecture course that I ever went to. Why? Simple: because I couldn't learn from the lecturer, I was forced to go and learn it by myself. So now I mumble, write illegibly, stop halfway through a proof, and get wildly sidetracked by irrelevant questions - because that's what worked best for me!</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80172#80172 Answer by Ryan Reich for Using slides in math classroom Ryan Reich 2011-11-06T00:02:25Z 2011-11-06T00:02:25Z <p>I just decided this quarter to use slides for my calculus class, a large-lecture course of the sort I'd never done before; I figured it would be easier to see the "board" if it were on the big screen. Here is the progression of my mistakes and corrections:</p> <ul> <li><p>My first lectures had too many words. Slides are great for presenting the wordy parts of math, because they take so long to write and then the students have to write them again. What is not great about them is how much they encourage this behavior.</p></li> <li><p>Since I was giving a "slide presentation" or a "lecture" rather than a "class", my mindset was different: sort of presentation-to-the-investors rather than gathering-the-children. My slides went by too quickly.</p></li> <li><p>I eventually slowed myself down by basing the lectures around computations rather than information. Beamer is pretty good (though not ideal) for this, because you can uncover each successive part of an equation. If you break down your slides like this, it is <em>almost</em> as natural as writing on the board.</p></li> <li><p>My students themselves actually brought up the point that Terry Tao mentioned in his answer: the slides were too transient. They also wanted printouts. Having to prepare the slides for being printed in "handout" mode changed how I organized them: for one, no computation should be longer than one frame (something I should have realized earlier). Also, there should be minimally complex animations, since you don't see them in the printout.</p></li> <li><p>Many of them expressed the following conservative principle: they had "always" had math taught on the board and preferred the old way. So I've started mixing the board with the slides: I write the statement of the problem on a slide, solve it on the board, and maybe summarize the solution on the slides. This works very well.</p></li> <li><p>Now I can reserve the slides for two things: blocks of text (problem statements, statements of the main topic of the lesson) and pictures. TikZ, of course, does better pictures than I do, especially when I lose my colored chalk.</p></li> </ul> <p>Preparing these lectures used to take me forever. Using beamer does require that you learn how it wants you to use it: don't recompile compulsively, because each run takes a full minute, and don't do really tricky animations. Every picture takes an extra hour to prepare. If you stick to writing a fairly natural summary of a lesson, broken by lots of \pause's and the occasional <code>\begin{overprint}</code>...<code>\end{overprint}</code> for long bulleted lists, an hour lecture will take about two hours to prepare.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80282#80282 Answer by Toby Bartels for Using slides in math classroom Toby Bartels 2011-11-07T09:09:20Z 2011-11-07T09:09:20Z <p>Like Terry Tao, I find the transience of slides to be a problem. This is one reason why I stopped using slides as such and began using a single continuous-scroll page for each topic. I lecture from the bottom of the page, so students who are behind can still see the top. (I'm also one of those people who mixes the projector and the board, with bullet points and formulas on the projector and worked-out examples on the board, so I don't scroll down the page very quickly. Fortunately I work in a facility where the lighting allows this.)</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80498#80498 Answer by Christopher Perez for Using slides in math classroom Christopher Perez 2011-11-09T17:22:55Z 2011-11-09T17:22:55Z <p>Most of the non-mathematics courses I've taken in college were done with lecture slides, and I have to say that there are a number of advantages and disadvantages to them that actually amount to more disadvantages if you were to do the same in math. The one obvious advantage is that the slides can be posted online, but the problem with this is that it encourages students to skip class. Even those who don't skip class won't take notes (and are sometimes even encouraged to not take notes by the professors), and this would not be good in a math class, because many people feel that copying down proofs from lecture is best way to get a better understanding of them. Also, when you have lecture notes, you can sometimes get nonsense like <a href="http://www.its.caltech.edu/~cperez/IST4.pdf" rel="nofollow">this</a>. Anyways, back to your point. If your main concern is displaying graphics, you could possibly just use slides for graphics. If you can lecture in a room with a projector screen that doesn't obscure the blackboards, that would be ideal for this. Alternatively you can distribute handouts at the beginning with graphics that you will be referencing.</p> http://mathoverflow.net/questions/80056/using-slides-in-math-classroom/80551#80551 Answer by JP McCarthy for Using slides in math classroom JP McCarthy 2011-11-10T04:18:26Z 2011-11-10T04:18:26Z <p>Finally, I question in MO I feel qualified to answer!</p> <p>I am a PhD student in Ireland doing an amount of lecturing. As a first remark, I am lucky in the sense that undergraduate maths was never especially easy for me and therefore I empathise with the average student. My second remark is that I hope for a career lecturing in the Irish Institute of Technology sector where the role in primarily teaching as opposed to the university sector where research is the primary role. Hence I am acutely interested in the skills as a mathematics teacher.</p> <p>The second half of the answers here are closer to my philosophy than the first. A particular distinction must be put on the classroom environment and facilities. Regardless, my first instinct is that slides alone is sub-optimal. </p> <p>The alternative to this is to produce everything on the blackboard. I did this last year in a differential calculus module (the students were maths studies --- by and large headed towards a career as "high school" mathematics teachers). The emphasis in this course is to convey to the students that although differential calculus is a relatively intuitive subject with the motivation coming from geometric concerns, as mathematicians we must also be rigorous, logical and precise in our thinking. Hence, we are not merely making a series of calculations and passing exams --- we must understand the content. When I wrote blackboard after blackboard of notes, the students did not have any chance of understanding the material. While I am a fervent believer that exercises and reflection are the best way for a student to achieve this aim, I am reminded of my undergraduate experience where certain obstacles lay in the path of me putting in this work and luckily my presence at lecture-time was sufficient for me to grasp the general theory and progress (eventually with first class grades) despite less than exemplary exam results in previous years. Put simply, ordinary students do not have the faculties to take down written notes and consider the important comments of the lecturer in real time.</p> <p>However, slides do not work because mathematics is not a spectator sport (not a cliche when the average student is first interested in passing exams --- its is the goal of the educator to transcend this). It takes a superlative lecturer and a cohort of motivated and enthusiastic students to assimilate a lecture purely by ear. At least once I had a lecturer of this standard but I would vouch that were engineering, scientific or humanities students subjected to his fantastic delivery and questioning, they would simply fall asleep. It is a curse but a fact (among my students at least --- none of which are Math majors), that the average student does not have that aptitude to bask in such splendour.</p> <p>My compromise, therefore is the very similar to what has been suggested above. I produce a set of notes (available soft-bound in a local printing house), with gaps which we fill in during the class (I print the notes onto an acetate sheet which I project onto a screen and can write on with a marker). All the theorems are writ-large, and everything else is teased out per a blackboard with suitable prior fillings in to both give the students a sneak preview and for the practical reasons of properly spacing out my scribblings. Does the need arise, I can put more complicated graphics in this set of notes. Today we introduced implicit differentiation and I projected this Wikipedia page <a href="http://en.wikipedia.org/wiki/List_of_curves" rel="nofollow">list of curves</a> onto the screen and this was but a two minute interlude.</p> <p>The issue of students looking ahead was served by a motivation at the start of term (we are studying continuous and differentiable (smooth) functions. We draw a picture. We translate these geometric pictures into an algebebraic ones and never lose sight of this fact).</p> <p>I have covered more content this year than last using this method, the first continuous assessment results showed a marked improvement and I am ahead of schedule despite being able to allocate a lot more time to comments and explanation of subtleties.</p>
|
2013-05-25 16:17:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6517115831375122, "perplexity": 1365.6285881581796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705958528/warc/CC-MAIN-20130516120558-00083-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://map4gems.centralesupelec.fr/physics/maxwell-s-equations/additional-exercises/exercise-to-test-the-understanding-of-the-lesson
|
Chapter 2
# To test the understanding of the lesson
## Question
Write the electric current density vector in an environment that contains carriers of charge by volume unit, which move at the velocity .
### Solution
The electric current density vector is :
## Question
How can we define the intensity in the presence of a volume current ? And of a surface current ?
### Solution
• Volume current : the intensity of current is the flux of the vector through a surface S :
If the vector is uniform and perpendicular to the surface, then :
• Surface current : the integral becomes linear :
If the vector is uniform and perpendicular to the segment of length :
## Question
Give the local expression of the charge conservation principle.
### Solution
It is a classical conservation equation :
is the total density of charges, and not only the density of the mobile charges which appears in the definition of .
## Question
### Solution
In steady state, . The flux of is thus conserved :
Or, equivalently, on a field tube (or current tube) at a nodal point :
Hence we have demonstrated Kirchhof's law :
## Question
• Write all four equations of Maxwell in the general case.
• How can they be simplified in a conductor ?
### Solution
• Maxwell's equations are :
(Gauss' law for electricity, MG)
(Gauss' law for magnetism)
(Ampère's law, MA)
• In a metallic conductor :
## Question
Prove the integral form of Gauss' law of electricity
### Solution
Let us apply the divergence theorem :
So :
## Question
Prove the integral form of Ampere's circuital law.
### Solution
Let us apply Stoke's theorem :
So :
## Question
Why is it said that the flux of the magnetic field is conserved ?
### Solution
The Gauss' law of magnetism, , leads to :
Which means the flux of the magnetic field is conserved.
## Question
• Give the density of electric energy of an electromagnetic field.
• Give the density of magnetic energy of an electromagnetic field.
### Solution
• Density of electric energy of an electromagnetic field :
• Density of magnetic energy of an electromagnetic field :
• is the total density of energy of the electromagnetic field.
## Question
• Define Poynting's vector in electromagnetism, writen .
• Give the density of power received by the matter from an electromagnetic field. What is it in the case of a metal conductor ?
• In the presence of volume current , write the local then global electromagnetic energy of the electromagnetic field .
### Solution
• Poynting's vector in electromagnetism :
• Density of power received by the matter from an electromagnetic field :
For a metal conductor (for which Ohm's local law is verified, ) :
• Local conservation of electromagnetic energy :
The integral form of electromagnetic energy conservation is :
## Question
• Give the magnetic energy instantly stored in a coil of inductance , travelled by the intensity .
• A system is made of two circuits which have respectively the self-inductances and and a mutual inductance .
Both circuits are crossed respectively by two currents and .
What is the magnetic energy of the system ?
### Solution
• The magnetic energy instantly stored in a coil of inductance , crossed by the intensity is :
• The magnetic energy of the system is :
## Question
• What is the energy stored by a capacitor ?
• Define the capacitance of a capacitor. Define it in the case of a plane capacitor.
### Solution
• Electric energy instantly stored in a capacitor of capacitance under the voltage :
• The capacitance of a capacitor is defined by :
For a plane capacitor :
Where is the surface of the frames and the distance between them.
If a dielectric of relative permittivity is put between the two frames, the capacitance becomes :
Previous
Charge of a capacitor and energy review
|
2020-09-22 17:32:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055536985397339, "perplexity": 992.3921980750841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206329.28/warc/CC-MAIN-20200922161302-20200922191302-00197.warc.gz"}
|
http://www.numdam.org/item/CM_1974__28_1_9_0/
|
Discrete series and the unipotent subgroup
Compositio Mathematica, Tome 28 (1974) no. 1, pp. 9-19.
@article{CM_1974__28_1_9_0,
author = {Lehrer, G. I.},
title = {Discrete series and the unipotent subgroup},
journal = {Compositio Mathematica},
pages = {9--19},
publisher = {Noordhoff International Publishing},
volume = {28},
number = {1},
year = {1974},
zbl = {0306.20007},
mrnumber = {340438},
language = {en},
url = {http://www.numdam.org/item/CM_1974__28_1_9_0/}
}
Lehrer, G. I. Discrete series and the unipotent subgroup. Compositio Mathematica, Tome 28 (1974) no. 1, pp. 9-19. http://www.numdam.org/item/CM_1974__28_1_9_0/
[1] V. Ennola: On the characters of the finite unitary groups. Ann. Acad. Sci. Fenn, 323 (1963) 1-23. | MR 156900 | Zbl 0109.26001
[2] J.A. Green: The characters of the finite general linear groups. Trans. Amer. Math. Soc., 80 (1955) 402-407. | MR 72878 | Zbl 0068.25605
[3] G.I. Lehrer: Discrete series and regular unipotent elements. J. Lond. Math. Soc. (2) 6 (1973) 732-736. | MR 318288 | Zbl 0261.20034
[4] G.I. Lehrer: The characters of the finite special linear groups. J. Alg. 26 (1973) 564-583. | MR 354889 | Zbl 0265.20037
[5] G.I. Lehrer: On the discrete series characters of linear groups, Thesis, University of Warwick (1971).
[6] G.W. Mackey: Infinite dimensional group representations. Bull. Amer. Math. Soc., 69 (1963) 628-686. | MR 153784 | Zbl 0136.11502
[7] J -P. SERRE: Représentations linéaires des groupes finis, Hermann, Paris (1967). | MR 232867 | Zbl 0189.02603
[8] T.A. Springer: Cusp Forms for finite groups. In : A. Borel et al, Seminar on algebraic groups and related finite forms, Springer Lecture Notes in Mathematics. 131, (1970) C1-C24. | MR 263942 | Zbl 0263.20024
[9] G. Van Dijk and P.V. Lambert: The irreducible unitary representations of the group of triangular matrices; Preprint (1973). | MR 352332 | Zbl 0285.22016
|
2021-10-19 09:30:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29893848299980164, "perplexity": 3065.5748286475537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00439.warc.gz"}
|
https://beneath.is/examples-of-kkepa/molecules-to-grams-e43b08
|
312 grams 8) How many molecules are there in 230 grams of CoCl 2? There are 6.02 x 10^23 molecules in 1 mole. 2) How many molecules are there in 450 grams of Na5SO4? - [Instructor] In a previous video, we introduced ourselves to the idea of average atomic mass, Let's do a quick example to help explain how to convert from moles to grams, or grams to moles. 11.7 mol H2O x (6.02 x 10^23 molecules H2O/1 mol H2O) = 7.02 x 10^24 molecules H2O An Avogadro's number of methanol molecules would have a mass of about 32 grams. molecules of DNA and 1 mole of bowling balls is 6.023 x 1023 bowling balls. Get an answer to your question “Molecules in 24 grams in FeF3 ...” in Chemistry if there is no answer or all answers are wrong, use a search bar and try to find the answer among similar questions. This online unit converter will help you to convert the grams of a molecule to the number of moles based on the weight of given chemical equation / formula. 3) How many grams are there in 2.30 x 1024 atoms of silver? The molecular weight of water is: 2*(1.00794) + 15.9994 = 18.01528 grams per mole So in 1 teaspoon, there are this many moles: 4.92892161 grams ÷ 18.01528 grams … Lets plug these numbers into the above equation: mole = 10 / 36.5 = 0.27 moles = 1.626×10^23 molecules of HCl The molar mass of CO 2 is 44 gram/mol. A gram is a metric unit of mass (weight) which is abbreviated as g. It is the most used unit of measurement for non-liquid ingredients. 6 grams of SO$$_2$$. We know we have 10 g of HCl, and it has a molecular weight of 36.5 g / mol. It is often necessary to express amounts of DNA in terms of both weight and number of molecules. Learn vocabulary, terms, and more with flashcards, games, and other study tools. 1 mole of something is equal to 6.0221415x10 23 of it. As you know, a mole is simply a very large collection of molecules. The actual mass of sodium acetate at the end of the lab was 2. Molecules: The molar mass of a molecule is a conversion factor between the mass (in grams) and the amount of substance (in moles). 3) How many grams are there in 2.3 x 10° atoms of silver? there are 0.0085, or 8.5 * 10^-3, moles present. You can use Avogadro's number in conjunction with atomic mass to convert a number of atoms or molecules into the number of grams. —> Grams to Molecules Conversion Map . Or you can choose by one of the next two option-lists, which contains a series of common organic compounds (including their chemical formula) and all the elements. Moles are a standard unit of measurement in chemistry that take into account the different elements in a chemical compound. 1.07 x 10 24 molecules 9) How many molecules are there in 2.3 grams of NH 4 SO 2? You must be logged in to post a comment. This online calculator you can use for computing the average molecular weight (MW) of molecules by entering the chemical formulas (for example C3H4OH(COOH)3 ). Find out. 4) How many grams are there in 7.40 moles of AgNO 1.635 * 10^(24)" molecules" In order to figure out how many molecules of water are present in that "48.90-g" sample, you first need to determine how many moles of water you have there. 3) How many grams are there in 2.3 x 1024 atoms of silver? Converting Between Moles, Molecules, and GramsChemistryThe MoleWhat's a Mole?Molar MassConverting Between Moles, Molecules, and GramsPercent Composition Once you know how to find molar mass, you can start to convert between moles, grams, and molecules of a substance. How many grams of C 12 H 22 O 11 do you have? 1.69 x 10 22 molecules 10) How many grams are there in 3.3 x 1023 molecules of N 2 I 6? Number of molecules= 6.02 X 10 23 molecules. N = n x Na = (6.37 mol) (6.02 x 10^23 molecules/mol) How to calculate moles - moles to grams converter. 210 g H2O x (1 mol H2O/18 g H2O) = 11.7 mol H2O. 1.91 x 1024 molecules 3) How many grams are there in 2.3 x 1024 atoms of silver? Mole is a standard measurement of amount which is used to measure the number atoms (or) molecules. Solution. grams CO2 to atom ›› Details on molecular weight calculations. Say you have 2.107 × 10 24 molecules of C 12 H 22 O 11.How many moles is this? Leave a Reply. Mass of CO 2 = 1 mol X 44 = 44 grams. one … In chemistry, the formula weight is a quantity computed by multiplying the atomic weight (in atomic mass units) of each element in a chemical formula by the number of atoms of that element present in the formula, then adding all … For example, one microgram (µg, 10-6 grams) of DNA pieces 1000bp long is 1.52 picomoles (pmol, 10-12 moles) and 1pmole of DNA pieces 1000bp long will weigh 0.66µg. To calculate or find the grams to moles or moles to grams the molar mass of each element will be used to calculate. Thus, for example, one mole of water contains 6.022 140 76 × 10 23 molecules, whose total mass is about 18.015 grams – and the mean mass of one molecule of water is about 18.015 daltons. 4) How many grams are there in 7.4 x 10° molecules of AgNO? then to find the molecules, you use the formula: N = n x Na (Na is avagadros number) = (0.228 mol) (6.02 x 10^23 molecules/mol) = 1.37 x 10^23 molecules. 1) Moles, Molecules, and Grams Worksheet How many molecules are there in 24 grams of FeF+? Click here to cancel reply. 421 grams To do so, use the following figure: Figure 11. No of moles = Given mass/molar mass. how many molecules are in 10 grams of ATP. 5.117 * 10^21 chemical formula of glucose: C_6H_12O_6 relative atomic mass (A_m) of: C: 12 H: 1 O: 16 C_6: 72 H_12: 12 O_6: 96 relative formula mass (M_r) of C_6H_12O_6 = 72 + 12 + 96 = 180 molar mass of glucose: 180 g//mol the molar mass is the mass of one mole of a substance. Often, amounts of compounds are given in Moles, Molecules, and Grams Worksheet – Answer Key 1) How many molecules are there in 24 grams of FeF3? For molecules, you add together the atomic masses of all the atoms in the compound to get the number of grams per mole. 1-150 molea.des How many grams are there in 2.3 x 102 atoms of silver? Name Date Period Moles, Molecules, and Grams Worksheet How many molecules are there in 24 grams of FeF3? ; Check-in _____ When you have completed your work in this section check in with your teacher. The mole is widely used in chemistry as a convenient way to express amounts of reactants and products of chemical reactions. Gap the mass of every component by the molar mass and increase the outcome by 100%. Moles, Molecules, and Grams Worksheet and Key 1) How many moles are there in 24.0 grams of FeF 3? Posted in Chemistry. View Notes - Moles, Molecules, and Grams Worksheet from CHM 2045 at University of Florida. ADDITIONAL: if it was 6.37 MOL of CO to molecules, you could skip the step where you had to find moles. Moles, Molecules, and Grams Worksheet 1) How many molecules are there in 24 grams … in 1.53g of glucose, there are 1.53/180 moles present. For example, carbon is commonly found as a collection of three dimensional structures (carbon chemically bonded to carbon). 21.4 mo ew.l mole FeF3 How many molecules are there in 450 grams of Na2S04? 2) How many moles are there in 458 grams of Na 2SO 4? 1.28 x 1023 molecules 2) How many molecules are there in 450 grams of Na2SO4? Molecular formulas provide more information, however, sometimes a substance is actually a collection of molecules with different sizes but the same empirical formula. Grams to Moles Calculator. Molecular Weight Calculator. Moles, Molecules, and Grams Worksheet 1) How many molecules are there in 24 grams of FeF3? Mass of CO 2 = no of moles X molar mass. Substituting the values in the formula we get. 25 grams / (17 grams/mole) x 6.022x1023 molecules/mole = 8.9x1023 molecules Worksheets: —> Grams Moles Molecules Atoms Worksheet 1 —> Grams Moles Molecules Atoms Worksheet 1 WITH ANSWERS . Extended Keyboard; Upload; Examples; Random; Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. Grams to Moles Formula. The total number of molecules in 32 grams of oxygen gas will contain {eq}\begin{align*} 6.022x10^{23} \end{align*} {/eq} Become a member and unlock all Study Answers Try it risk-free for 30 days In order to have one mole of something, you need to have exactly 6.022 * 10^(23) molecules of that something - this is known as … 8: Moles, Molecules, And Grams Worksheet Answer (1,522 View) Moles, Molecules, And Grams Worksheet Answer Key (958 View) Musical Instruments - Music Fun Worksheets (2,566 View) We can use Avogadro's number, {eq}6.022~\times~10^{23} {/eq}. c 5) How many grams are there in 7.5 x 10° molecules of HSOz? So, as 9.89 x 10 24 is greater than Avogadro's number, the mass of this number of molecules would be greater than 32 g of methanol by a factor of 9.89 x 10 24 / 6.022 x 10 23 or about 16.4 times 32 g. Consequently, 16 grams of oxygen, 14 grams of nitrogen and 35.45 grams of chlorine all have 6.02214129(27) x 10^23 particles. 430 grams 11) How many molecules are there in 200 grams of CCl 4? Moles to Grams Converter. This is a required part of the lab and your teacher’s initials are required before you can move on to the next section. 2) How many molecules are there in 450 grams of Na2SO4? so now you have 6.37 mol. 5) How many grams are there in 7.5 x 1023 molecules of H The molecular weight of H2O is 18 g/mol. This online unit converter will help you to convert the number moles to the number of grams of the atom based on the weight of the given chemical equation / formula. So 2 let 's do a quick example to help explain How to calculate moles - moles, molecules and! Atomic masses of all the atoms in the compound to get the number of atoms molecules... Of C 12 H 22 O 11.How many moles are there in 7.5 x 10° molecules n. Moles - moles to grams converter of sodium acetate at the end the... 450 grams of NH 4 so 2 22 molecules 10 ) How many grams there. Of DNA in terms of both weight and number of grams 10 22 molecules 10 ) How many are! Is widely used in chemistry as a collection of molecules 11 ) How many molecules are there in x! Grams CO2 to atom ›› Details on molecular weight of H2O is 18 g/mol … View Notes - moles molecules. Vocabulary, terms, and grams Worksheet from CHM 2045 at University of Florida Avogadro 's,. Atom ›› Details on molecular weight of 36.5 g / mol 8.5 * 10^-3, present... Do a quick example to help explain How to calculate or find the grams to moles or moles grams! Mass to convert a number of grams Details on molecular weight calculations many moles is?!, terms, and other study tools / mol masses of all the atoms in the to! Molecules 3 molecules to grams How many molecules are there in 3.3 x 1023 molecules of C 12 H 22 O do... 1-150 molea.des How many grams are there in 24 grams of Na2SO4 weight calculations grams converter to grams the mass..., and grams Worksheet – Answer Key 1 ) How many molecules are there 450... And more with flashcards, games, and grams Worksheet – Answer Key ). Do a quick example to help explain How to convert a number of molecules 11... In 7.4 x 1023 molecules of n 2 I 6 Check-in _____ you! X 1024 atoms of silver standard measurement of amount which is used to measure the number grams. Your work in this section check in with your teacher of each element be... 0.0085, or 8.5 * 10^-3, moles present in 1 mole mole of something is equal to 23... 1.53/180 moles present number atoms ( or ) molecules 1024 atoms of silver x molar mass H2O x 1. Increase the outcome by 100 % are 1.53/180 moles present molecules 2 ) many! To do so, use the following figure: figure 11 2SO 4 to convert number! When you have 2.107 × 10 24 molecules 9 ) How many molecules are in 10 grams of Na2SO4 11.7! × 10 24 molecules of C 12 H 22 O 11 do you have 2.107 × 10 24 molecules )... Given in Start studying measurements/Moles, molecules, and grams Worksheet from 2045! There are 0.0085, or 8.5 * 10^-3, moles present 8.5 * 10^-3, moles present molecules atoms 1! Is commonly found as a convenient way to express amounts of DNA in terms of both weight and number atoms. Of H2O is 18 g/mol 2 is 44 gram/mol get the number atoms ( or ) molecules is to! Quick example to help explain How to convert from moles to grams molar. Know we have 10 g of HCl, and grams Worksheet – Answer Key 1 ) many! In 450 grams of NH 4 so 2 on molecular weight calculations express of! Have completed your work in this section check in with your teacher a quick example help... Conjunction with atomic mass to convert from moles to grams converter other study tools 1-150 molea.des How many grams there! At the end of the lab was 2 example to help explain to! 24 molecules 9 ) How many molecules are there in 24 grams of Na2SO4 ew.l mole How. In 2.3 x 10° molecules of C 12 H 22 O 11.How many moles is this is commonly as. Are in 10 grams of ATP in 458 grams of Na5SO4 1 mol H2O/18 g H2O x ( mol! For molecules, you could skip the step where you had to find moles 2 = no of moles molar... Amounts of reactants and products of chemical reactions 1.07 x 10 24 molecules of 2. Do you have completed your work in this section check in with your teacher vocabulary terms! 1 mole in 24 grams of C 12 H 22 O 11 do you have ×. Mass and increase the outcome by 100 % by the molar mass and increase the outcome by 100 % /... X 10 22 molecules 10 ) How many grams are there in 24 grams of Na2SO4 ( 6.02 10^23... Nh 4 so 2 molecules to grams lab was 2 are there in 7.5 x 10° atoms of?. Number in conjunction with atomic mass to convert a number of atoms or molecules into the number grams. Molecules atoms Worksheet 1 — > grams moles molecules atoms Worksheet 1 with ANSWERS studying measurements/Moles, molecules, grams... 10^23 molecules/mol ) the molecular weight of 36.5 g / mol found as a collection of molecules could the!, or 8.5 * 10^-3, moles present and more with flashcards games. Standard measurement of amount which is used to calculate or find the grams to moles in 24 of... Of CCl 4 each element will be used to calculate or find the grams to moles with. Of both weight and number molecules to grams grams per mole moles molecules atoms Worksheet 1 — > grams moles atoms... Additional: if it was 6.37 mol of CO 2 = no of x! Grams, or grams to moles or moles to grams the molar mass and increase the outcome 100. Get the number atoms ( or ) molecules can use Avogadro 's in! Amounts of DNA in terms of both weight and number of atoms or molecules into number. Amounts of compounds are given in Start studying measurements/Moles, molecules, you could skip the step you... ›› Details on molecular weight of H2O is 18 g/mol following figure: figure 11 so?! Large collection of three dimensional structures ( carbon chemically bonded to carbon ) of atoms molecules! 9 ) How many molecules are there in 24 grams of Na2SO4 something is equal to 6.0221415x10 23 of.... Mol H2O component by the molar mass of each element will be used to calculate or find grams. It has a molecular weight of 36.5 g / mol used to measure the number grams. At University of Florida grams moles molecules atoms Worksheet 1 with ANSWERS the atomic masses all... Gap the mass of each element will be used to measure the number of atoms or molecules into number. Molecular weight of 36.5 g / mol there in 7.5 x 10° atoms of silver with. Molecules, and grams Worksheet from CHM 2045 at University of Florida 5. 22 molecules 10 ) How many molecules are there in 458 grams Na2SO4! Find moles amount which is used to calculate moles - moles, molecules, add..., use the following figure: figure 11 CO 2 is 44 gram/mol glucose. In 3.3 x 1023 molecules of C 12 H 22 O 11.How many is. 5 ) How many grams are there in 200 grams of Na2SO4 1.28 x 1023 molecules of HSOz to... You know, a mole is simply a very large collection of three dimensional structures carbon. Of it 44 gram/mol H2O/18 g H2O ) = 11.7 mol H2O ) ( 6.02 x 10^23 molecules/mol ) molecular... Your work in this section check in with your teacher find moles molecules are there 2.3... H 22 O 11 do you have of C 12 H 22 O 11.How many moles is this grams. = n x Na = ( 6.37 mol of CO 2 = no of moles x molar mass and the. Grams CO2 to atom ›› Details on molecular weight calculations, you add the... Molecular weight calculations get the number atoms ( or ) molecules 36.5 g / mol of something is to... A molecular weight of H2O is 18 g/mol weight calculations of Na 2SO 4 your work in this check. N x Na = ( 6.37 mol of CO 2 = no of moles molar... Molecules 3 ) How many moles is this ; Check-in _____ When you completed! In terms of both weight and number of grams per mole } 6.022~\times~10^ { }! Of CCl 4 22 O 11 do you have learn vocabulary, terms, and grams n 2 6! Or ) molecules 1 — > grams moles molecules atoms Worksheet 1 — > grams moles molecules atoms 1... The number of grams so 2 one … View Notes - moles grams! Must be logged in to post a comment 1.53/180 moles present ) the weight! — > grams moles molecules atoms Worksheet 1 with ANSWERS, use the following:! Of atoms or molecules into the number of molecules mole of something equal... Which is used to calculate or find the grams to moles or moles grams... University of Florida carbon is commonly found as a collection of three structures! Be used to measure the number atoms ( or ) molecules _____ When you completed... Quick example to help explain How to calculate or find the grams to moles or moles to,. Or moles to grams the molar mass and increase the outcome by %! Of 36.5 g / mol of it 2 = 1 mol H2O/18 g H2O x ( 1 mol x =! Vocabulary, terms, and grams atoms Worksheet 1 — > grams moles molecules atoms Worksheet 1 with ANSWERS more! 210 g H2O ) = 11.7 mol H2O 6.022~\times~10^ { 23 } /eq..., or grams to moles to convert a number of atoms or molecules into the number of molecules g )! Say you have 2.107 × 10 24 molecules of AgNO 3 know, a mole is simply a large...
|
2022-06-28 05:34:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3926292359828949, "perplexity": 1979.2888846581754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00348.warc.gz"}
|
https://tex.stackexchange.com/questions/552119/change-header1-text-color-in-cambridgeus-beamer-style/552145
|
# Change header1 text color in CambridgeUS beamer style
I'm using CambridgeUS beamer style for my presentation. As shown in the image, in each slide, section name is shown in the top bar, but the text color is black, which makes it difficult to see. I want to change the color to white? How can I do it?
I've seen many questions asking on changing color of title, date, institute etc. The solution is to put
\setbeamercolor{date in head/foot}{fg=cyan!80!black}
in the CambridgeUS.sty file and it works. But I don't know what term to use for changing that header color? I've tried using section,subsection,title,frametitle,header,header1 but with no use.
• By default, CambrideUS theme has white text only in the top bar. It became black because I had configured hyperref as follows: \hypersetup { colorlinks = true, linkcolor = black}. Because the section name in the top bar is a hyperlink, it became black. And since every other link is supposed to be black, I didn't notice it. After removing the above config, I got white text. Jul 3 '20 at 17:58
You can use the package xcolor to define your own color and then set the color of the palette using \setbeamercolor{palette tertiary}. To change font color fiddle with fg and bg for the background. MWE follows.
\documentclass[12pt]{beamer}
\usetheme{CambridgeUS}
\usepackage{xcolor}
\author{Tester}
\title{Test}
% Define your own color here
\definecolor{green}{HTML}{b3f9c6}
\setbeamercolor{palette tertiary}{fg=black, bg=green}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\section{Test}
\begin{frame}{Test}
\end{frame}
\end{document}
Before
After
• Hi, thanks. But i don't want to change the color of the background. I want it to remain red. I want to change the color of text from black to white. Jul 3 '20 at 13:25
• @NagabhushanSN change property fg in the parameter Jul 3 '20 at 13:28
|
2021-12-05 07:34:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.743790864944458, "perplexity": 2071.836974397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363149.85/warc/CC-MAIN-20211205065810-20211205095810-00402.warc.gz"}
|
https://stats.libretexts.org/Bookshelves/Probability_Theory/Applied_Probability_(Pfeiffer)
|
Skip to main content
Applied Probability (Pfeiffer)
This is a "first course" in the sense that it presumes no previous course in probability. The mathematical prerequisites are ordinary calculus and the elements of matrix algebra. A few standard series and integrals are used, and double integrals are evaluated as iterated integrals. The reader who can evaluate simple integrals can learn quickly from the examples how to deal with the iterated integrals used in the theory of expectation and conditional expectation. Appendix B provides a convenient compendium of mathematical facts used frequently in this work.
This page titled Applied Probability (Pfeiffer) is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Paul Pfeiffer via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
• Was this article helpful?
|
2023-02-07 14:19:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9602154493331909, "perplexity": 528.5572397903757}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00170.warc.gz"}
|
https://ja.overleaf.com/latex/templates/template-for-publications-of-the-astronomical-society-of-japan-pasj/yxfyccfkdwcf
|
# Template for Publications of the Astronomical Society of Japan (PASJ)
Author
Oxford University Press (uploaded by LianTze Lim)
AbstractThe Journal of Logic and Computation aims to promote the growth of logic and computing, including, among others, the following areas of interest: Logical Systems, such as classical and non-classical logic, constructive logic, categorical logic, modal logic, type theory, feasible maths. The bulk of the content is technical scientific papers, although letters, reviews, and discussions, as well as relevant conference reviews, are included. For more information about the journal, see http://pasj.oxfordjournals.org/.
|
2021-06-22 21:34:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22031651437282562, "perplexity": 1844.6055744227726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00490.warc.gz"}
|
https://mathalino.com/forum/calculus/differential-equations-11
|
# Differential Equations
The topic is chemical reaction rate.
Two substances A and B are combined to form a product C. The formation of product is proportional to the time the reactants are combined. The final product is composed of two parts of B for every part of A. If initially A is 30 kg and B is 20 kg, and 5 kg of the product is formed after 30 mins., find the function of product formed at any given time.
• Mathematics inside the configured delimiters is rendered by MathJax. The default math delimiters are $$...$$ and $...$ for displayed mathematics, and $...$ and $...$ for in-line mathematics.
|
2022-08-20 02:55:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6524046063423157, "perplexity": 673.0119798749645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00022.warc.gz"}
|
http://iaministanbul.com/au0xijns/page.php?189eab=lines-and-angles-class-9-solutions
|
Here we have given Karnataka Board Class 9 Maths Chapter 3 Lines and Angles Exercise 3.1. Solution: Let the required angle be x. So. Again, PQ ⊥ PS ⇒ AP = 90° ∴ 53° + 35° + ∠DCE =180° The architecture uses lines and angles to design the structure of a building. Telangana SCERT Class 9 Math Chapter 4 Lines and Angles Exercise 4.3 Math Problems and Solution Here in this Post. (1) [YQ bisects ∠ZYP so, ∠QYP = ∠ZYQ] 6.30, if AB CD, EF ⊥ CD and GED = 126°, find AGE, GEF and FGE. In Fig. In Fig. Now, by putting the values of AOC+BOE = 70° and BOD = 40° we get. After that go through the solved examples of Lines and Angles that are given in the Class 9 NCERT Book. In Fig. These solutions help students prepare for their upcoming Board Exams by covering the whole syllabus, in accordance with the NCERT guidelines. ∠TRS = ∠TQR + ∠T …(2) 6.15, PQR = PRQ, then prove that PQS = PRT. Stay tuned for further updates on CBSE and other competitive exams. NCERT Solutions for Class 9 Maths Chapter 6 Lines and Angles Ex 6.1 are part of NCERT Solutions for Class 9 Maths. They give a detailed and stepwise explanation to the problems given in the exercises in the NCERT Solutions for Class 9. As they are pair of alternate interior angles. $$\frac { 3a }{ 2 }$$ + A = 90° In Fig. First, construct a line XY parallel to PQ. 6.16, if x+y = w+z, then prove that AOB is a line. ⇒ x = 180° – 50° = 130° …(2) But ∠XYZ = 54° and ∠ZXY = 62° In figure, if x + y = w + ⇒, then prove that AOB is a line. ∴ ∠ROS = $$\frac { 1 }{ 2 } (\angle QOS-\angle POS)$$. 6.13, lines AB and CD intersect at O. [∵ ∠XYZ = 64° (given)] In figure, the side QR of ∆PQR is produced to a point S. If the bisectors of ∠PQR and ∠PRS meet at point T, then prove that 2. lines which are parallel to a given lines are parallel to each other. [Given] If ∠AOC + ∠BOE = 70° and ∠BOD = 40°, find ∠BOE and reflex ∠COE. Toppers Bulletin Menu. If ∠ AOC + ∠ BOE = 70° and ∠ BOD = 40°, find ∠ BOE and reflex ∠ COE. Lines and Angles Class 7 NCERT Book: If you are looking for the best books of Class 7 Maths then NCERT Books can be a great choice to begin your preparation. ⇒ ∠ABC = ∠BCD ∴ ∠PTR = ∠QTS Solution: In this. Get clarity on concepts like linear pairs, vertically opposite angles, co-interior angles, alternate interior angles etc. Lines and Angles NCERT solution. NCERT Solutions for Class 9 Maths Chapter 6 are useful for students as it helps them to score well in the class exams. {Angle sum property of a triangle] To access interactive Maths and Science Videos download BYJU’S App and subscribe to YouTube Channel. ∴ ∠PQS = ∠RSQ = 37° Viz. But ∠GED = 126° [Given] It is given that ∠XYZ = 64° and XY is produced to point P. Draw a figure from the given information. Again, PQ is a straight line and EA stands on it. Thus, the required measure of c = 126°. Here on AglaSem Schools, you can access to NCERT Book Solutions in free pdf for Maths for Class 9 so that you can refer them as and when required. Now, in ∆PQS, 1. In this chapter 6″ lines and angles class 9 ncert solutions pdf” section you studied the following points: 1. Question 1. ⇒ ∠ROS = ∠QOS – 90° ……(2) Students can also refer to NCERT Solutions for Class 9 Maths Chapter 6 Lines and Angles for better exam preparation and score more marks. Ex 6.2 Class 9 Maths Question 1. 2. ∴ ∠APR = ∠PRD [Alternate interior angles] Now, in ∆OYZ, we have We have, ∠TQP + ∠PQR = 180° Also, ∠AOC + ∠BOE = 70° We hope the NCERT Solutions for Class 9 Maths Chapter 6 Lines and Angles Ex 6.1 help you. Ex 6.2 Class 9 Maths Question 5. ∴ ∠FGE + ∠GED = 180° [Co-interior angles] Here BAC and AED are alternate interior angles. 6.42, if lines PQ and RS intersect at point T, such that PRT = 40°, RPT = 95° and TSQ = 75°, find SQT. Ex 6.1 Class 9 Maths Question 4. Here we have given NCERT Solutions for Class 9 Maths Chapter 4 Lines and Angles. 6.29, if AB CD, CD EF and y : z = 3 : 7, find x. 5. 3. AB || CD and GE is a transversal. Solution: [∵ BL || PQ and CM || RS] So, ∠BAC = ∠AED It will help you to solve the questions in an easy way. i. e., a pair of alternate interior angles are equal. In Fig. It will make your concepts more clear. Mathematics NCERT Grade 9, Chapter 6: Lines and Angles: In this chapter students will study the properties of the angle formed when two lines intersect each other and properties of the angle formed when a line intersects two or more parallel lines at distinct points.The chapter starts from zero level, the first topic of the chapter being Basic Terms and Definitions. 6.17, POQ is a line. Thus, x = 50° and y = 77°. RD Sharma Solutions for Class 9 Mathematics CBSE, 10 Lines and Angles. ⇒ a = $$\frac { { 90 }^{ \circ } }{ 5 } \times 2\quad =\quad { 36 }^{ \circ }$$ = 36° We also know that vertically opposite angles are equal. The RS Aggarwal Solutions for Class 9 Chapter-7 Lines and Angles Solutions Maths have been provided here for the benefit of the CBSE Class 9 students. 6.14, lines XY and MN intersect at O. [Exterior angle property of a triangle] (Triangle property). From (ii), we get Ex 6.1 Class 9 Maths Question 1. ⇒ $$\frac { 1 }{ 2 }$$∠P = ∠T Since XOY is a straight line. NCERT Solutions for Class 9 Maths Chapter 6 Lines And Angles deals with the questions and answers related to the chapter Lines and Angles. Refer to NCERT Solutions for CBSE Class 9 Mathematics Chapter 6 Lines and Angles at TopperLearning for thorough Maths learning. x = 126°. By going through these solutions students will get to learn about the basic concepts of a ray, line segment, intersecting, collinear and non-collinear points, and more. Thus, ∠XYQ = 122° and reflex ∠QYP = 302°. Ex 6.2 Class 9 Maths Question 2. After solving the Line and Angles chapter of Class 9 Maths, you will get to know the following points: We hope this information on “NCERT Solution for Class 9 Maths Chapter 6 Lines and Angles” is useful for students. ∴ ∠XYZ + ∠ZYQ + ∠QYP = 180° In ∆PQR, side QR is produced to S, so by exterior angle property, Question 1. In ∆ QRS, the side SR is produced to T. Now, we know that the sum of the angles in a quadrilateral is 360°. This topic introduces you to the basic Geometry primarily focusing on the properties of the angles formed i) when two lines intersect each other and ii) when a line intersects two or more parallel lines at distinct points. Ex 6.3 Class 9 Maths Question 1. or ∠COE = 180° – 70° = 110° ∴ ∠AGE = ∠GED [Alternate interior angles] 4. An incident ray AB strikes the mirror PQ at B, the reflected ray moves along the path BC and strikes the mirror RS at C and again reflects back along CD. Parallel and Transversal Lines and theorems related to them. 6.44, the side QR of ΔPQR is produced to a point S. If the bisectors of PQR and PRS meet at point T, then prove that QTR = ½ QPR. We hope the given RBSE Solutions for Class 9 Maths Chapter 5 Plane Geometry and Line and Angle Ex 5.2 will help you. Telanagana SCERT Class 9 Math Solution Chapter 4 Lines and Angles Exercise 4.3 ∴ (x + y) + (x + y) = 360° or, If ray YQ bisects ZYP, find XYQ and reflex QYP. ∠GEF = 126° -90° = 36° Angle of incidence = Angle of reflection (By the law of reflection), We also know that alternate interior angles are equal. or, (x + y) = $$\frac { { 360 }^{ \circ } }{ 2 }$$ = 180° YO and ZO are the bisectors of ∠XYZ and ∠XZY respectively. NCERT Solutions Class 9 Maths Chapter 6 LINES AND ANGLES. ⇒ $$\frac { 5a }{ 2 }$$ = 90° ∠YOZ + ∠OYZ + ∠OZY = 180° Also, AB and CD intersect at O. Question 1. Similarly, ∠PRT + ∠PRQ = 180° …(2) [Linear Pair] [Angle sum property of a triangle] Since ∠XYQ = ∠XYZ + ∠ZYQ Again, AB || CD NCERT Solutions for Class 9th: Ch 6 Lines and Angles Maths. [Alternate interior angles] Answers to each question has been solved with Video. We computed that the value of XYQ = 122°. ⇒ ∠PTR = 180° – 95° – 40° = 45° ⇒ ∠ABL = ∠MCD …(2) [By (1)] [Angle sum property of a triangle] and EF || ST [Construction] ⇒ ∠XYQ = 64° + 58° = 122° [∠QYP = 58°] Now PTR will be equal to STQ as they are vertically opposite angles. If ∠POY = 90° and a : b = 2 : 3, find c. 3. Lines and Angles Class 9 Exercise 6.1 : Solutions of Questions on Page Number : 96 Q1 : In the given figure, lines AB and CD intersect at O. Ex 6.1 Class 9 Maths Question 3. ⇒ ∠ROS = 90° – ∠POS … (1) Thus, ∠BOE = 30° and reflex ∠COE = 250°. we have ∠TQR + $$\frac { 1 }{ 2 }$$∠P = ∠TQR + ∠T Q 1. In two parallel lines, the alternate interior angles are equal. In figure, if PQ ⊥ PS, PQ||SR, ∠SQR = 2S° and ∠QRT = 65°, then find the values of x and y. [Vertically opposite angles] ⇒ ∠SRF = 180° – 130° = 50° Ex 6.1 Class 9 Maths Question 1 ∴ ∠AOC + ∠COE + ∠EOB = 180° In figure, if PQ || ST, ∠ PQR = 110° and ∠ RST = 130°, find ∠QRS. MCQ Questions for Class 9 Maths Chapter 6 Lines and Angles with Answers MCQs from Class 9 Maths Chapter 6 – Lines and Angles are provided here to help students prepare for their upcoming Maths exam. Since, angle of incidence = Angle of reflection In figure, if AB || CD, EF ⊥ CD and ∠GED = 126°, find ∠AGE, ∠GEF and ∠FGE. Solution: ⇒ 64° + 2∠QYP = 180° By putting the value of XYZ = 64° and ZYQ = 58° we get. ⇒ ∠XYQ = 64° + ∠QYP [∵∠XYZ = 64°(Given) and ∠ZYQ = ∠QYP] Question 1: (i) Angle: Two rays having a common end point form an angle. ∠PRS = ∠P + ∠PQR Now, BL || CM and BC is a transversal. OS is another ray lying between rays OP and OR. ∴ AB || CD. ⇒ ∠YZX = 180° – 54° – 62° = 64° Adding (1) and (2), we have So, 28° + ∠RSQ = 65° CBSETuts.com provides you Free PDF download of NCERT Exemplar of Class 9 Maths Chapter 6 Lines And Angles solved by expert teachers as per NCERT (CBSE) Book guidelines. Since PQ || ST [Given] ⇒ 95° + 40° + ∠PTR =180° Out of which Geometry constitute a total of 22 marks which includes Introduction to Euclid’s Geometry, Lines and Angles, Triangles, Quadrilaterals, Areas, Circles, Constructions. In Fig. Now, in ∆CDE, we have ∠CDE + ∠DEC + ∠DCE = 180° In figure, sides QP and RQ of ∆PQR are produced to points S and T, respectively. Solution: ⇒ y = 180° – 90° – 37° = 53° 6. ∴ Its complement = 90° – x. ∴ 40° + ∠BOE = 70° or ∠BOE = 70° -40° = 30° In Fig. We know that the sum of the interior angles of a triangle is 180°. Ex 6.2 Class 9 Maths Question 6. We know that the angles on the same side of transversal is equal to 180°. [Exterior angle property of a triangle] UP board high school students also use these solutions as UP Board Solutions updated for academic session 2020-2021. In the figure, we have CD and PQ intersect at F. Solution: Here, the side QP is extended to S and so, SPR forms the exterior angle. ⇒ ∠PRQ = 135° – 70° ⇒ ∠PRQ = 65°, Ex 6.3 Class 9 Maths Question 2. ∴ 54° + ∠YZX + 62° = 180° 6.43, if PQ ⊥ PS, PQ SR, SQR = 28° and QRT = 65°, then find the values of x and y. x +SQR = QRT (As they are alternate angles since QR is transversal). As you can see that it constitutes approximately 27% of weightage. ∠AEP + ∠AEQ = 180° [Linear pair] But ∠RQS = 28° and ∠QRT = 65° For proving AOB is a straight line, we will have to prove x+y is a linear pair. ⇒ 50° = x [ ∵ ∠APQ = 50° (given)] [Alternate interior angles] 6.41, if AB DE, BAC = 35° and CDE = 53°, find DCE. Now, you must be wondering why we are studying Lines and Angles. If ∠AOC + ∠BOE = 70° and ∠BOD = 40°, find ∠BOE and reflex ∠COE. Since, the side QP of ∆PQR is produced to S. In ∆XYZ, we have ∠XYZ + ∠YZX + ∠ZXY = 180° ∴ ∠QRT = ∠RQS + ∠RSQ In Fig. Students can now freely access RD Sharma Class 9 Maths solutions for chapter 8 here. If ∠AOC + ∠BOE = 70° and ∠BOD = 40°, find ∠BOE and reflex ∠COE. Solution: Now, putting values of QPR = y and APR = 127° we get. Also, recall that a straight angle is equal to 180°. Ex 6.1 Class 9 Maths Question 6. Draw a line EF parallel to ST through R. If and find ∠BOE and reflex ∠COE. If you have any query regarding RBSE Rajasthan Board Solutions for Class 9 Maths Chapter 5 Plane Geometry and Line and Angle Ex 5.2, drop a … In figure, if lines PQ and RS intersect at point T, such that ∠ PRT = 40°, ∠ RPT = 95° and ∠TSQ = 75°, find ∠ SQT. ∴ AOB is a straight line. Solution: Thus, x = 37° and y = 53°, Ex 6.3 Class 9 Maths Question 6. Exercise 4A. If you have any query regarding Karnataka Board Class 9 Maths Chapter 3 Lines and Angles Exercise 3.2, drop a comment below and we will get back to you at the earliest. Solution: Solution: If you have any query regarding NCERT Solutions for Class 9 Maths Chapter 6 Lines and Angles Ex 6.1, drop a comment below and we will get back to you at the earliest. ⇒ ∠DCE = 180° – 53° – 35° = 92° Thus, ∠AGE = 126°, ∠GEF=36° and ∠FGE = 54°. Here, BE ⊥ CF and the transversal line BC cuts them at B and C, So, 2 = 3 (As they are alternate interior angles), So, AB CD alternate interior angles are equal). PRS is the exterior angle and QPR and PQR are interior angles. We hope the KSEEB Solutions for Class 9 Maths Chapter 3 Lines and Angles Ex 3.2 help you. Videos related to exercise 6.2 in Hindi and English are also given for better understanding. Solution: Here we have given NCERT Solutions for Class 9 Maths Chapter 6 Lines and Angles Ex 6.1. In figure, find the values of x and y and then show that AB || CD. 2. ∴ AB || EF Ex 6.1 Class 9 Maths Question 2. 6.31, if PQ ST, PQR = 110° and RST = 130°, find QRS. It is given the TQR is a straight line and so, the linear pairs (i.e. rara POQ is a straight line. But ∠POY = 90° [Given] But (x + y) = (⇒ + w) [Given] ⇒ ∠PQR + ∠PRQ = 135° ∴ ∠AED = 35° In Fig. The chapter deals with lines and angles, its different types and formulas etc. If ∠POY = and ... Read more . Now, according to given statement, we obtain. ⇒ ∠APR = 127° [ ∵ ∠PRD = 127° (given)] ⇒ 70° + ∠PRQ = 135° [∠PQR = 70°] Prove that AB CD. ⇒ ∠POS + ∠ROS = 90° Prove that ROS = ½ (QOS – POS). Our experts below students solving difficult questions is 360°, alternate interior Angles are everywhere around us and =. Of this Chapter non-collinear points, intersecting and non-intersecting Lines pair is equal to 90° ( the! From accurate Solutions, students should go through the solved examples of Lines and Ex! - Mathematics explained in detail by experts to help you, collinear points, intersecting and non-intersecting Lines ∠ZYP. A quadrilateral is 360° 9 Extra questions for the linear pairs, vertically opposite Angles equal! Vertically opposite Angles, its different types and formulas etc as they alternate! Students prepare for their upcoming Board exams by covering the whole syllabus, in accordance with the solution... ∠Prd = 127°, find AGE, GEF and FGE to design the structure a. And so, SPR forms the exterior angle When you have a thorough understanding of this.! 58° we get 9 - NCERT questions with answers get NCERT Solutions Class. Values as given in the given information XOY is a straight angle so 9, 10 Lines and Angles design. Non-Collinear points, intersecting and non-intersecting Lines interactive Maths and Science videos download BYJU ’ S and. Will have to find the height of a triangle is 180 degree of Class 9 Maths 3... Of Lines and Angles that are given in the exercises in the Class 9 Maths Chapter 6 7 ( ). Question has been solved with Video: here, ∠ PQR = 70° and ∠BOD 40°. 110° we get that are given in the question we get, to. Z = 3: 7, find XYQ and reflex ∠QYP questions for the different starting... To them sums of this topic you to solve the questions and examples of Chapter 6 Lines Angles. = lines and angles class 9 solutions, find ∠PRQ, vertically opposite Angles S App and to! Solutions with detailed explanations Class 9 MCQs lines and angles class 9 solutions with Solutions to the sum of interior! 8, 9, 10, 11 and 12 help students, have detailed! Is half of its complementary angle, then the sum of the triangle helping! +Coe ) and ( COE +BOD +BOE ) forms a straight line further updates on and. De and AE is a transversal Since AB DE, ∠BAC = 35° and CDE = 53°, the! Question we get to Lines and Angles exercise 4.3 Math problems and solution here in this Post why we studying... You have a thorough understanding of this topic this topic we know that ZYP = ZYQ +.. To solve the problems comfortably = ½ ( QOS – POS ) Mathematics in... Qpr+Pqr ( according to given statement, we know that the sum of the Angles... Maths question 1 in figure, Lines AB and CD intersect at 0 on... Helping students solving difficult questions by our experts below to solve the problems comfortably – POS ): 3. c.... R. ] = 40°, find PRQ we know that AE is a line parallel to PQ in Board! To point P. Draw a figure from the given RBSE Solutions for Class 9 Maths Chapter 6 Lines Angles... In Hindi and English are also given for better understanding videos download BYJU ’ App. Age, GEF and FGE questions and examples of Lines and Angles Ex.. So formed is 180 degree and vice versa are useful for students as it helps them to understand the easily. Asked questions on NCERT Solutions with detailed explanations Class 9 MCQs questions with Solutions to the problems comfortably:! The TQR is a straight angle is equal to 180° and non-intersecting Lines find OZY and YOZ a end... Pqr and PRS respectively terms and definitions related to them law of (. Tqp and PQR are interior Angles of a tower OR location of an aircraft, then prove solution! X+Y is a transversal Since XOY is a straight line students, devised... Pq lines and angles class 9 solutions CF ⊥ RS and formulas etc YouTube Channel a thorough understanding of this topic the of... And RQ of ΔPQR are produced to point P. Draw a figure from the,... Of incidence = angle of incidence = angle of reflection ( by the law of reflection ), will! ½ ( QOS – POS ) from accurate Solutions, students should go all. Find ∠QRS APQ = 50° and the steps of solving the exercise questions and examples of 6. Ef ⊥ CD and ∠GED = 126° ( as they are vertically opposite Angles Angles are. Solved with Video problems of Lines and Angles in both Hindi Medium and English are also for.
|
2021-06-21 21:58:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6658586859703064, "perplexity": 2991.425612191616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00423.warc.gz"}
|
http://embdev.net/topic/129699
|
# Forum: ARM programming with GCC/GNU tools coding optimization help !
Author: Jonathan Dumaresq (dumarjo) Posted on: 2008-04-16 15:51
Rate this post
0 ▲ useful ▼ not useful
Hi all,
I'm a teacher that supervise some student project at college level. For
near the first time, the student work with real CPU, an ARM7 :)
I told them to use a library for fingerprint, that called FVS
(http://fvs.sourceforge.net/)
This is something written in C / C++, that do fingerprint reconisation.
So far, we have tested this on PC base system and the lib seem to work
well.
So far the code compile on the ARM7 (SAM7S) parts.
The probleme is:
- The code is slow in execution, This is due to big loops that compute
floating point operation.
- all the analyze for the bitmap is done in byte operation.
I would like to know if it is possible to find some documentation that
can help them to optimize this source code. So far, we can analyze a
fingerprints iin 12-15 second. It not too bad, but if we can reduce the
time under 10 second i'll be happy.
Thanx for your help
Jonathan
Author: Simon Ellwood (fordp) Posted on: 2008-04-18 11:35
Rate this post
0 ▲ useful ▼ not useful
Jonathan Dumaresq wrote:
> Hi all,
>
> I'm a teacher that supervise some student project at college level. For
> near the first time, the student work with real CPU, an ARM7 :)
>
> I told them to use a library for fingerprint, that called FVS
> (http://fvs.sourceforge.net/)
>
> This is something written in C / C++, that do fingerprint reconisation.
> So far, we have tested this on PC base system and the lib seem to work
> well.
>
> So far the code compile on the ARM7 (SAM7S) parts.
>
> The probleme is:
>
> - The code is slow in execution, This is due to big loops that compute
> floating point operation.
>
> - all the analyze for the bitmap is done in byte operation.
>
> I would like to know if it is possible to find some documentation that
> can help them to optimize this source code. So far, we can analyze a
> fingerprints iin 12-15 second. It not too bad, but if we can reduce the
> time under 10 second i'll be happy.
>
> Thanx for your help
>
> Jonathan
Can the maths not be switched to fixed point calculations ???
http://en.wikipedia.org/wiki/Fixed_point_(mathematics)
The ARM7 will do fixed point 2 orders of magnitude quicker.
Consider Cortex M3 too, as that has an integer hardware divide so if
divide is a big issue the Cortex M3 chips will go MUCH faster than ARM7.
The Luminary Micro dev boards look great for Academic hacking too.
Author: Simon Ellwood (fordp) Posted on: 2008-04-18 11:36
Rate this post
0 ▲ useful ▼ not useful
Sorry that wikipedia link was wrong ! Sorry.
Try this http://members.aol.com/form1/fixed.htm.
Or just google "fixed point maths".
Cheers.
Author: Jonathan Dumaresq (dumarjo) Posted on: 2008-04-18 14:29
Rate this post
0 ▲ useful ▼ not useful
Simon Ellwood wrote:
> Sorry that wikipedia link was wrong ! Sorry.
>
> Try this http://members.aol.com/form1/fixed.htm.
>
> Or just google "fixed point maths".
>
> Cheers.
Hi,
thanx for the info.
The project is already in progress with an arm7, so the switch to the M3
is not really an option.
I will have a look at this fixed math thing.
regards
Jonathan
Author: Clifford Slocombe (clifford) Posted on: 2008-04-19 10:26
Rate this post
0 ▲ useful ▼ not useful
Jonathan Dumaresq wrote:
> - The code is slow in execution, This is due to big loops that compute
> floating point operation.
>
You need to consider whether what you are expecting is realistic. Most
ARM parts (certainly ARM7 parts) have no floating point hardware.
Consider that on an ARM9 with a VFP floating point unit, software
floating point is about 5 times slower than hardware floating point
operations (without using vectorisation optimisations, since there is
not yet a compiler that will do that for the VFP unit). Further consider
that your ARM7 is intrinsically slower that an ARM9 due to instruction
set differences and lack of cache, and the fact that it is probably
running from Flash. It is also likely slower because it will be running
at sub-100MHz as opposed to the 1 to 2Ghz or more of the PC
implementation. The consequence is that I would expect a 60Mhz ARM7
running floating point intensive code to be 100 to 200 times slower than
the PC implementation
> - all the analyze for the bitmap is done in byte operation.
>
> I would like to know if it is possible to find some documentation that
> can help them to optimize this source code. So far, we can analyze a
> fingerprints iin 12-15 second. It not too bad, but if we can reduce the
> time under 10 second i'll be happy.
Converting the floating point operations to fixed point should be
sufficient to achieve your goal. But it is not that simple, you have to
consider range, precision and data width. Greater range means less
precision or greater data width, greater data width means more memory to
be moved and possibly more instructions (if you use 64bit types on a 32
bit processor for example).
Most available fixed point libraries available are sub-optimal but may
be good enough for your application. Converting floating point C code to
fixed point is not trivial since simple arithmetic operations like * / +
- must be replaced with functions or macros, and make the consequent
code less easy to read. This is an ideal use for C++ since it supports
The current issue of Dr. Dobb's has an article on just this subject with
a case study on an almost identical problem (porting a PC implementation
to an ARM device). It used C++ but if you had the time I guess you could
implement it as a C library, but it would be far less elegant.
Of course another approach would be to come up with your own less
compute intensive algorithm. Often it is sufficient for such systems to
be less than 100% accurate (by allowing a proportion of false
positives). For example a building access system might require both a
matching fingerprint and a swipe card or PIN number. By using fewer
features in the source data and thereby being less accurate, the
matching can be speeded up.
> I'm a teacher that supervise some student project at college level. For
> near the first time, the student work with real CPU, an ARM7 :)
It somewhat concerns me that someone apparently teaching embedded
systems was not already aware of fixed-point arithmetic, or the issues
regarding floating point code on typical embedded micro-controllers
without an FPU.
Clifford
Author: Clifford Slocombe (clifford) Posted on: 2008-04-19 10:27
Rate this post
0 ▲ useful ▼ not useful
Clifford Slocombe wrote:
> The current issue of Dr. Dobb's has an article on just this subject with
> a case study on an almost identical problem (porting a PC implementation
> to an ARM device). It used C++ but if you had the time I guess you could
> implement it as a C library, but it would be far less elegant.
Sorry, omitted the link: http://www.ddj.com/cpp/207000448
Author: Jonathan Dumaresq (dumarjo) Posted on: 2008-04-21 15:17
Rate this post
0 ▲ useful ▼ not useful
Clifford Slocombe wrote:
> Clifford Slocombe wrote:
>> The current issue of Dr. Dobb's has an article on just this subject with
>> a case study on an almost identical problem (porting a PC implementation
>> to an ARM device). It used C++ but if you had the time I guess you could
>> implement it as a C library, but it would be far less elegant.
>
> Sorry, omitted the link: http://www.ddj.com/cpp/207000448
Hi Clifford,
I have checked some fixed algorythme, and with the source code we had,
it will be difficult to replace it. Many cos/tan/atan2/sin ar used.
The speed is not really our principal goal here, we try to get the code
working for the proof of concept.
Your explanation are exactly what i have talked with my student.
Regards
jonathan
### Rules — please read before posting
• Post long source code as attachment, not in the text
### Formatting options
• [c]C code[/c]
• [avrasm]AVR assembler code[/avrasm]
• [code]code in other languages, ASCII drawings[/code]
• $formula (LaTeX syntax)$
Name: E-mail address (not visible): Subject: Searching for similar topics... Attachment: Note: the original post is older than 6 months. Please don't ask any new questions in this thread, but start a new one. Text:
|
2017-01-20 22:07:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21891199052333832, "perplexity": 4735.857698300642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00372-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-10th-edition/chapter-12-sequences-induction-the-binomial-theorem-chapter-review-chapter-test-page-840/13
|
Chapter 12 - Sequences; Induction; the Binomial Theorem - Chapter Review - Chapter Test - Page 840: 13
$243m^5+810m^4+1080m^3+720m^2+240m+32$
Work Step by Step
We are given the expression: $(3m+2)^5$ Expand the expression using the Binomial Theorem: $(3m+2)^5=\binom{5}{0}(3m)^52^0+\binom{5}{1}(3m)^42^1+\binom{5}{2}(3m)^32^2+\binom{5}{3}(3m)^22^3+\binom{5}{4}(3m)^12^4+\binom{5}{5}(3m)^02^5$ $=243m^5+5(81)m^4(2)+10(27)m^3(4)+10(9)m^2(8)+5(3)m(16)+32$ $=243m^5+810m^4+1080m^3+720m^2+240m+32$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2021-12-09 07:22:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7492055296897888, "perplexity": 1265.0154537857475}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00362.warc.gz"}
|
https://www.maa.org/press/maa-reviews/mathematical-cartoons
|
# Mathematical Cartoons
###### Charles Ashbacher
Publisher:
Charles Ashbacher and associates
Publication Date:
2015
Number of Pages:
59
Format:
Paperback
ISBN:
9781514207130
Category:
General
[Reviewed by
David S. Mazel
, on
08/10/2015
]
Charles Ashbacher was the editor of the Journal of Recreational Mathematics for its last eight volumes. During his tenure, Ashbacher sought cartoons related to mathematics and not having found many to his liking, Ashbacher put together this book.
Let’s start with the obvious: Mathematical Cartoons is a silly, goofy, minimalist book of cartoons. If you flip through the pages, you will probably wonder why anyone would write, not to mention publish, such a collection of cartoons. And yet, after reviewing each cartoon, reading the explanations in the back (they are concise and well-done) and rereading the cartoons a few times, the idea grew on me. Charles Ashbacher has put together a fun book that meets his goal of expressing various mathematical ideas in a light-hearted and clever way.
The cartoons are at times funny, at other times simply silly, but they are all mathematical and worth some attention. A silly cartoon, for example, is the die on a scale weighing one gram. The caption is “Dieagram.” Likewise, the cartoon of three dice increasing in size one to the other has the caption “Dielation.”
The “Catenary Curve” showing a canary with the face of a cat perched on a catenary curve is pretty good. The “Trimonster” shows us the number of elements in a sporadic finite simple group, the monster group, written in a Y-like shape with each leg composed of: $2^{46} \cdot 3^{20} \cdot 5^{9}\cdot 7^{6} \cdot 11^{2} \cdot 13^{2}\cdot 17 \cdot 19\cdot 23\cdot 31\cdot 41\cdot 47\cdot 59\cdot 71$
“Mathematical Cows” is a cartoon of a hillside with three cows saying “moo” but with the Greek symbol mu ($\mu$) in the callout boxes. There is “Circulant,” a cartoon with the fraction $1/7$ surrounded by a radix point and a circular shape of digits: $142857142857142857$. The explanation is:
Circulant — the fraction $1/7$ has the repeating decimal value $0.142857$, in this cartoon a circle is formed that repeats this indefinitely. The term “circulant” refers to a matrix where the rows offset-repeat a sequence of numbers.
I could have done without “Serpentine Curve,” a drawing of a snake with the caption formed with snake-like letters.
My favorite cartoon is “A Doubly True Integration:” $\int \frac{d \; \text{ical}}{\text{ical}} = \log \; \text{ical}$
The book contains 50 cartoons in all. I suspect everyone will find at least one to his/her liking and maybe even derive a smile from the others.
David S. Mazel is a practicing engineer in Washington, DC. He welcomes your thoughts and comments and can be reached at mazeld at gmail dot com.
|
2022-05-22 21:00:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5720940828323364, "perplexity": 2819.0277871475273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00535.warc.gz"}
|
https://socratic.org/questions/which-element-has-the-smallest-radius
|
# Which element has the smallest radius?
Atomic radii decrease across a Period, but increase down a Group, so the element should be $H e$.
Nevertheless, given that we are physical scientists we should consider the data, and see if it confirms our prediction. This site lists the atomic radius of hydrogen as $0.37$ $\text{pm}$, and of helium as $0.31$ $\text{pm}$; $1$ $\text{pm}$ $=$ $1 \times {10}^{-} 12 m$; inorganic chemists tend to use Angstroms, $1 \times {10}^{-} 10 m$, but I can never find the Angstrom symbol, a capital "A" with a circle hat).
|
2020-04-06 10:10:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7544111609458923, "perplexity": 1047.5274155621264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371620338.63/warc/CC-MAIN-20200406070848-20200406101348-00541.warc.gz"}
|
https://discourse.julialang.org/t/pinn-with-two-boundary-conditions/53813
|
PINN with two boundary conditions
Hi,
I am working on a physically informed neural network solution of ODE with two boundary conditions. In a standard initial value problem, let’s say y(x) - y’(x) = 0 with y(0) = 1, I would use approximation in a form
y(x) = 1.0 + x*NN(x)
where NN(x) is a neural network. However, I need to solve a differential equation on the X \in [x_low,x_high] interval with one boundary condition on x_low and second x_high. Any idea, how to “hard-code” the boundary condition into the approximator, or do I had to enforce these boundary conditions through the loss function term?
Best,
Honza
y(x) = y(x_low)*(x-x_high) + y(x_high)*(x-x_low) + (x-x_low)*(x-x_high)*NN(x)
or just use NeuralPDE.jl and add two terms to the loss for y(x_low) - known value and y(x_high) - konwn value.
@ChrisRackauckas Thank you!
|
2022-07-06 10:48:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9362152814865112, "perplexity": 1212.7815473167273}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00466.warc.gz"}
|
https://math.stackexchange.com/questions/3207473/limit-points-of-sets-in-a-first-countable-space-have-a-sequence-converging-to-th
|
# Limit points of sets in a first-countable space have a sequence converging to them
I have written a rudimentary proof of the title, but I'm not sure just how correct -or incorrect- it is. I'm fairly new to topology, and frankly I always feel out of my element when it comes to sequences, especially constructing them from open sets. I'm hoping that understanding this example thoroughly will help my understanding of similar problems. On to the proof I have:
Let X be a first countable space. Then there is a nested countable local basis $$\mathcal U$$ for every $$x \in X$$ (there's a bit of a leap here, but I think I have that part down already). So let $$x \in X$$ be an arbitrary limit point of a set $$A \subseteq X$$. Then there are nested open sets $$U \in \mathcal U$$ such that $$(U-{x}) \cap A \neq \varnothing$$.
Let these sets $$U$$ form the collection $$\{U_n\}_{n \in \Bbb N}$$ ordered by $$\supseteq$$. Then for some $$N \in \Bbb N$$, all $$U_m \subseteq U_N$$ when $$m>N$$. Then the sequence $$\{x_n | x_n \in U_n$$ and $$x_n \in A\}$$ converges to $$x$$.
• What does $\{x_n | x_n \in U_n$ and $x_n \in A\}$ mean? You claim that it is a sequence. How did you define it? – José Carlos Santos Apr 29 '19 at 22:16
Your idea is correct but you have not expressed it correctly. You should say pick $$x_n \in U_n\cap A \setminus \{x\}$$ for each $$n$$. The $$x_n \to x$$.
|
2021-08-03 14:34:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8933219313621521, "perplexity": 164.87071434799782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154459.22/warc/CC-MAIN-20210803124251-20210803154251-00284.warc.gz"}
|
https://socratic.org/questions/59b67db57c014902d2f69716
|
# Why can we not form a :CH_2 molecule?
Sep 11, 2017
Well methylene, $: C {H}_{2}$ certainly can be formed.....
#### Explanation:
But it is not something that you could store in a bottle or cylinder. Olefins feature a $C = C$ double bond by definition, and the simplest such species is ${H}_{2} C = C {H}_{2}$; this is simply not available for a one carbon chain. And the same applies to acetylene, another TWO CARBON chain, i.e. $H C \equiv C H$, which is supplied as a gas.
Are you happy with this?
Sep 12, 2017
But the prefix "meth" denotes 1 carbon (e.g. methanol, $C {H}_{3} - O H$, methanoic acid $H - C O O H$) and as there is only 1 carbon, there is no other carbon atom with which to form a double or triple bond.
|
2019-08-18 07:24:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7520452737808228, "perplexity": 1910.6655873410452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313715.51/warc/CC-MAIN-20190818062817-20190818084817-00064.warc.gz"}
|
http://mathforum.org/kb/message.jspa?messageID=7358168
|
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Real Time TeX Interpreter for Notes
Replies: 4 Last Post: Jan 14, 2011 9:39 AM
Messages: [ Previous | Next ]
Ulrich D i e z Posts: 19 Registered: 1/2/10
Re: Real Time TeX Interpreter for Notes
Posted: Jan 14, 2011 4:29 AM
Jason Pawloski wrote:
> Many years ago, I had a friend who took notes on his laptop. He had
> some sort of real-time TeX renderer, so if he wrote something like "If
> A \subseteq B, then a \in B for all a \in A" and it would come out all
> nice and formatted. You didn't have to create the document; as soon as
> you typed \subseteq it would immediately substitute the subset symbol.
> Besides this feature, it looked pretty much like Microsoft Word.
>
> Anyone know if something like this is available for a MacBook? Thanks.
preview-latex / AUCTeX?
( http://www.gnu.org/software/auctex/ )
Ulrich
Date Subject Author
1/14/11 jpawloski@gmail.com
1/14/11 RGVickson@shaw.ca
1/14/11 jpawloski@gmail.com
1/14/11 Ulrich D i e z
1/14/11 Marnie Northington
|
2014-07-31 09:42:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9897480607032776, "perplexity": 8582.309295119068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272940.33/warc/CC-MAIN-20140728011752-00429-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://www.particlebites.com/?tag=lhc
|
## LHCb’s Xmas Letdown : The R(K) Anomaly Fades Away
Just before the 2022 holiday season LHCb announced it was giving the particle physics community a highly anticipated holiday present : an updated measurement of the lepton flavor universality ratio R(K). Unfortunately when the wrapping paper was removed and the measurement revealed, the entire particle physics community let out a collective groan. It was not shiny new-physics-toy we had all hoped for, but another pair of standard-model-socks.
The particle physics community is by now very used to standard-model-socks, receiving hundreds of pairs each year from various experiments all over the world. But this time there had be reasons to hope for more. Previous measurements of R(K) from LHCb had been showing evidence of a violation one of the standard model’s predictions (lepton flavor universality), making this triumph of the standard model sting much worse than most.
R(K) is the ratio of how often a B-meson (a bound state of a b-quark) decays into final states with a kaon (a bound state of an s-quark) plus two electrons vs final states with a kaon plus two muons. In the standard model there is a (somewhat mysterious) principle called lepton flavor universality which means that muons are just heavier versions of electrons. This principle implies B-mesons decays should produce electrons and muons equally and R(K) should be one.
But previous measurements from LHCb had found R(K) to be less than one, with around 3σ of statistical evidence. Other LHCb measurements of B-mesons decays had also been showing similar hints of lepton flavor universality violation. This consistent pattern of deviations had not yet reached the significance required to claim a discovery. But it had led a good amount of physicists to become #cautiouslyexcited that there may be a new particle around, possibly interacting preferentially with muons and b-quarks, that was causing the deviation. Several hundred papers were written outlining possibilities of what particles could cause these deviations, checking whether their existence was constrained by other measurements, and suggesting additional measurements and experiments that could rule out or discover the various possibilities.
This had all led to a considerable amount of anticipation for these updated results from LHCb. They were slated to be their final word on the anomaly using their full dataset collected during LHC’s 2nd running period of 2016-2018. Unfortunately what LHCb had discovered in this latest analysis was that they had made a mistake in their previous measurements.
There were additional backgrounds in their electron signal region which had not been previously accounted for. These backgrounds came from decays of B-mesons into pions or kaons which can be mistakenly identified as electrons. Backgrounds from mis-identification are always difficult to model with simulation, and because they are also coming from decays of B-mesons they produce similar peaks in their data as the sought after signal. Both these factors combined to make it hard to spot they were missing. Without accounting for these backgrounds it made it seem like there was more electron signal being produced than expected, leading to R(K) being below one. In this latest measurement LHCb found a way to estimate these backgrounds using other parts of their data. Once they were accounted for, the measurements of R(K) no longer showed any deviations, all agreed with one within uncertainties.
It is important to mention here that data analysis in particle physics is hard. As we attempt to test the limits of the standard model we are often stretching the limits of our experimental capabilities and mistakes do happen. It is commendable that the LHCb collaboration was able to find this issue and correct the record for the rest of the community. Still, some may be a tad frustrated that the checks which were used to find these missing backgrounds were not done earlier given the high profile nature of these measurements (their previous result claimed ‘evidence’ of new physics and was published in Nature).
Though the R(K) anomaly has faded away, the related set of anomalies that were thought to be part of a coherent picture (including another leptonic branching ratio R(D) and an angular analysis of the same B meson decay in to muons) still remain for now. Though most of these additional anomalies involve significantly larger uncertainties on the Standard Model predictions than R(K) did, and are therefore less ‘clean’ indications of new physics.
Besides these ‘flavor anomalies’ other hints of new physics remain, including measurements of the muon’s magnetic moment, the measured mass of the W boson and others. Though certainly none of these are slam dunk, as they each causes for skepticism.
So as we begin 2023, with a great deal of fresh LHC data expected to be delivered, particle physicists once again begin our seemingly Sisyphean task : to find evidence physics beyond the standard model. We know its out there, but nature is under no obligation to make it easy for us.
Paper: Test of lepton universality in b→sℓ+ℓ− decays (arXiv link)
Authors: LHCb Collaboration
A related, still discrepant, flavor anomaly from LHCb
## The LHC is on turning on again! What does that mean?
Deep underground, on the border between Switzerland and France, the Large Hadron Collider (LHC) is starting back up again after a 4 year hiatus. Today, July 5th, the LHC had its first full energy collisions since 2018. Whenever the LHC is running is exciting enough on its own, but this new run of data taking will also feature several upgrades to the LHC itself as well as the several different experiments that make use of its collisions. The physics world will be watching to see if the data from this new run confirms any of the interesting anomalies seen in previous datasets or reveals any other unexpected discoveries.
## New and Improved
During the multi-year shutdown the LHC itself has been upgraded. Noticably the energy of the colliding beams has been increased, from 13 TeV to 13.6 TeV. Besides breaking its own record for the highest energy collisions every produced, this 5% increase to the LHC’s energy will give a boost to searches looking for very rare high energy phenomena. The rate of collisions the LHC produces is also expected to be roughly 50% higher previous maximum achieved in previous runs. At the end of this three year run it is expected that the experiments will have collected twice as much data as the previous two runs combined.
The experiments have also been busy upgrading their detectors to take full advantage of this new round of collisions.
The ALICE experiment had the most substantial upgrade. It features a new silicon inner tracker, an upgraded time projection chamber, a new forward muon detector, a new triggering system and an improved data processing system. These upgrades will help in its study of exotic phase of matter called the quark gluon plasma, a hot dense soup of nuclear material present in the early universe.
ATLAS and CMS, the two ‘general purpose’ experiments at the LHC, had a few upgrades as well. ATLAS replaced their ‘small wheel’ detector used to measure the momentum of muons. CMS replaced the inner most part its inner tracker, and installed a new GEM detector to measure muons close to the beamline. Both experiments also upgraded their software and data collection systems (triggers) in order to be more sensitive to the signatures of potential exotic particles that may have been missed in previous runs.
The LHCb experiment, which specializes in studying the properties of the bottom quark, also had major upgrades during the shutdown. LHCb installed a new Vertex Locator closer to the beam line and upgraded their tracking and particle identification system. It also fully revamped its trigger system to run entirely on GPU’s. These upgrades should allow them to collect 5 times the amount of data over the next two runs as they did over the first two.
Run 3 will also feature a new smaller scale experiment, FASER, which will study neutrinos produced in the LHC and search for long-lived new particles
## What will we learn?
One of the main goals in particle physics now is direct experimental evidence of a phenomena unexplained by the Standard Model. While very successful in many respects, the Standard Model leaves several mysteries unexplained such as the nature of dark matter, the imbalance of matter over anti-matter, and the origin of neutrino’s mass. All of these are questions many hope that the LHC can help answer.
Much of the excitement for Run-3 of the LHC will be on whether the additional data can confirm some of the deviations from the Standard Model which have been seen in previous runs.
One very hot topic in particle physics right now are a series of ‘flavor anomalies‘ seen by the LHCb experiment in previous LHC runs. These anomalies are deviations from the Standard Model predictions of how often certain rare decays of the b quarks should occur. With their dataset so far, LHCb has not yet had enough data to pass the high statistical threshold required in particle physics to claim a discovery. But if these anomalies are real, Run-3 should provide enough data to claim a discovery.
There are also a decent number ‘excesses’, potential signals of new particles being produced in LHC collisions, that have been seen by the ATLAS and CMS collaborations. The statistical significance of these excesses are all still quite low, and many such excesses have gone away with more data. But if one or more of these excesses was confirmed in the Run-3 dataset it would be a massive discovery.
While all of these anomalies are gamble, this new dataset will also certainly be used to measure various known entities with better precision, improving our understanding of nature no matter what. Our understanding of the Higgs boson, the top quark, rare decays of the bottom quark, rare standard model processes, the dynamics of the quark gluon plasma and many other areas will no doubt improve from this additional data.
In addition to these ‘known’ anomalies and measurements, whenever an experiment starts up again there is also the possibility of something entirely unexpected showing up. Perhaps one of the upgrades performed will allow the detection of something entirely new, unseen in previous runs. Perhaps FASER will see signals of long-lived particles missed by the other experiments. Or perhaps the data from the main experiments will be analyzed in a new way, revealing evidence of a new particle which had been missed up until now.
No matter what happens, the world of particle physics is a more exciting place when the LHC is running. So lets all cheers to that!
CERN Run-3 Press Event / Livestream Recording “Join us for the first collisions for physics at 13.6 TeV!
Symmetry Magazine “What’s new for LHC Run 3?
CERN Courier “New data strengthens RK flavour anomaly
## How to find invisible particles in a collider
You might have heard that one of the big things we are looking for in collider experiments are ever elusive dark matter particles. But given that dark matter particles are expected to interact very rarely with regular matter, how would you know if you happened to make some in a collision? The so called ‘direct detection’ experiments have to operate giant multi-ton detectors in extremely low-background environments in order to be sensitive to an occasional dark matter interaction. In the noisy environment of a particle collider like the LHC, in which collisions producing sprays of particles happen every 25 nanoseconds, the extremely rare interaction of the dark matter with our detector is likely to be missed. But instead of finding dark matter by seeing it in our detector, we can instead find it by not seeing it. That may sound paradoxical, but its how most collider based searches for dark matter work.
The trick is based on every physicists favorite principle: the conservation of energy and momentum. We know that energy and momentum will be conserved in a collision, so if we know the initial momentum of the incoming particles, and measure everything that comes out, then any invisible particles produced will show up as an imbalance between the two. In a proton-proton collider like the LHC we don’t know the initial momentum of the particles along the beam axis, but we do that they were traveling along that axis. That means that the net momentum in the direction away from the beam axis (the ‘transverse’ direction) should be zero. So if we see a momentum imbalance going away from the beam axis, we know that there is some ‘invisible’ particle traveling in the opposite direction.
We normally refer to the amount of transverse momentum imbalance in an event as its ‘missing momentum’. Any collisions in which an invisible particle was produced will have missing momentum as tell-tale sign. But while it is a very interesting signature, missing momentum can actually be very difficult to measure. That’s because in order to tell if there is anything missing, you have to accurately measure the momentum of every particle in the collision. Our detectors aren’t perfect, any particles we miss, or mis-measure the momentum of, will show up as a ‘fake’ missing energy signature.
Even if you can measure the missing energy well, dark matter particles are not the only ones invisible to our detector. Neutrinos are notoriously difficult to detect and will not get picked up by our detectors, producing a ‘missing energy’ signature. This means that any search for new invisible particles, like dark matter, has to understand the background of neutrino production (often from the decay of a Z or W boson) very well. No one ever said finding the invisible would be easy!
However particle physicists have been studying these processes for a long time so we have gotten pretty good at measuring missing energy in our events and modeling the standard model backgrounds. Missing energy is a key tool that we use to search for dark matter, supersymmetry and other physics beyond the standard model.
What happens when energy goes missing?” ATLAS blog post by Julia Gonski
How to look for supersymmetry at the LHC“, blog post by Matt Strassler
“Performance of missing transverse momentum reconstruction with the ATLAS detector using proton-proton collisions at √s = 13 TeV” Technical Paper by the ATLAS Collaboration
“Search for new physics in final states with an energetic jet or a hadronically decaying W or Z boson and transverse momentum imbalance at √s= 13 TeV” Search for dark matter by the CMS Collaboration
## Measuring the Tau’s g-2 Too
Title : New physics and tau g2 using LHC heavy ion collisions
Authors: Lydia Beresford and Jesse Liu
Reference: https://arxiv.org/abs/1908.05180
Since April, particle physics has been going crazy with excitement over the recent announcement of the muon g-2 measurement which may be our first laboratory hint of physics beyond the Standard Model. The paper with the new measurement has racked up over 100 citations in the last month. Most of these papers are theorists proposing various models to try an explain the (controversial) discrepancy between the measured value of the muon’s magnetic moment and the Standard Model prediction. The sheer number of papers shows there are many many models that can explain the anomaly. So if the discrepancy is real, we are going to need new measurements to whittle down the possibilities.
Given that the current deviation is in the magnetic moment of the muon, one very natural place to look next would be the magnetic moment of the tau lepton. The tau, like the muon, is a heavier cousin of the electron. It is the heaviest lepton, coming in at 1.78 GeV, around 17 times heavier than the muon. In many models of new physics that explain the muon anomaly the shift in the magnetic moment of a lepton is proportional to the mass of the lepton squared. This would explain why we are a seeing a discrepancy in the muon’s magnetic moment and not the electron (though there is a actually currently a small hint of a deviation for the electron too). This means the tau should be 280 times more sensitive than the muon to the new particles in these models. The trouble is that the tau has a much shorter lifetime than the muon, decaying away in just 10-13 seconds. This means that the techniques used to measure the muons magnetic moment, based on magnetic storage rings, won’t work for taus.
Thats where this new paper comes in. It details a new technique to try and measure the tau’s magnetic moment using heavy ion collisions at the LHC. The technique is based on light-light collisions (previously covered on Particle Bites) where two nuclei emit photons that then interact to produce new particles. Though in classical electromagnetism light doesn’t interact with itself (the beam from two spotlights pass right through each other) at very high energies each photon can split into new particles, like a pair of tau leptons and then those particles can interact. Though the LHC normally collides protons, it also has runs colliding heavier nuclei like lead as well. Lead nuclei have more charge than protons so they emit high energy photons more often than protons and lead to more light-light collisions than protons.
Light-light collisions which produce tau leptons provide a nice environment to study the interaction of the tau with the photon. A particles magnetic properties are determined by its interaction with photons so by studying these collisions you can measure the tau’s magnetic moment.
However studying this process is be easier said than done. These light-light collisions are “Ultra Peripheral” because the lead nuclei are not colliding head on, and so the taus produced generally don’t have a large amount of momentum away from the beamline. This can make them hard to reconstruct in detectors which have been designed to measure particles from head on collisions which typically have much more momentum. Taus can decay in several different ways, but always produce at least 1 neutrino which will not be detected by the LHC experiments further reducing the amount of detectable momentum and meaning some information about the collision will lost.
However one nice thing about these events is that they should be quite clean in the detector. Because the lead nuclei remain intact after emitting the photon, the taus won’t come along with the bunch of additional particles you often get in head on collisions. The level of background processes that could mimic this signal also seems to be relatively minimal. So if the experimental collaborations spend some effort in trying to optimize their reconstruction of low momentum taus, it seems very possible to perform a measurement like this in the near future at the LHC.
The authors of this paper estimate that such a measurement with a the currently available amount of lead-lead collision data would already supersede the previous best measurement of the taus anomalous magnetic moment and further improvements could go much farther. Though the measurement of the tau’s magnetic moment would still be far less precise than that of the muon and electron, it could still reveal deviations from the Standard Model in realistic models of new physics. So given the recent discrepancy with the muon, the tau will be an exciting place to look next!
An Anomalous Anomaly: The New Fermilab Muon g-2 Results
When light and light collide
Another Intriguing Hint of New Physics Involving Leptons
## A symphony of data
Article title: “MUSiC: a model unspecific search for new physics in
proton-proton collisions at \sqrt{s} = 13 TeV”
Authors: The CMS Collaboration
Reference: https://arxiv.org/abs/2010.02984
First of all, let us take care of the spoilers: no new particles or phenomena have been found… Having taken this concern away, let us focus on the important concept behind MUSiC.
ATLAS and CMS, the two largest experiments using collisions at the LHC, are known as “general purpose experiments” for a good reason. They were built to look at a wide variety of physical processes and, up to now, each has checked dozens of proposed theoretical extensions of the Standard Model, in addition to checking the Model itself. However, in almost all cases their searches rely on definite theory predictions and focus on very specific combinations of particles and their kinematic properties. In this way, the experiments may still be far from utilizing their full potential. But now an algorithm named MUSiC is here to help.
MUSiC takes all events recorded by CMS that comprise of clean-cut particles and compares them against the expectations from the Standard Model, untethering itself from narrow definitions for the search conditions.
We should clarify here that an “event” is the result of an individual proton-proton collision (among the many happening each time the proton bunches cross), consisting of a bouquet of particles. First of all, MUSiC needs to work with events with particles that are well-recognized by the experiment’s detectors, to cut down on uncertainty. It must also use particles that are well-modeled, because it will rely on the comparison of data to simulation and, so, wants to be sure about the accuracy of the latter.
All this boils down to working with events with combinations of specific, but several, particles: electrons, muons, photons, hadronic jets from light-flavour (=up, down, strange) quarks or gluons and from bottom quarks, and deficits in the total transverse momentum (typically the signature of the uncatchable neutrinos or perhaps of unknown exotic particles). And to make things even more clean-cut, it keeps only events that include either an electron or a muon, both being well-understood characters.
These particles’ combinations result in hundreds of different “final states” caught by the detectors. However, they all correspond to only a dozen combos of particles created in the collisions according to the Standard Model, before some of them decay to lighter ones. For them, we know and simulate pretty well what we expect the experiment to measure.
MUSiC proceeded by comparing three kinematic quantities of these final states, as measured by CMS during the year 2016, to their simulated values. The three quantities of interest are the combined mass, combined transverse momentum and combined missing transverse momentum. It’s in their distributions that new particles would most probably show up, regardless of which theoretical model they follow. The range of values covered is pretty wide. All in all, the method extends the kinematic reach of usual searches, as it also does with the collection of final states.
So the kinematic distributions are checked against the simulated expectations in an automatized way, with MUSiC looking for every physicist’s dream: deviations. Any deviation from the simulation, meaning either fewer or more recorded events, is quantified by getting a probability value. This probability is calculated by also taking into account the much dreaded “look elsewhere effect”. (Which comes from the fact that, statistically, in a large number of distributions a random fluctuation that will mimic a genuine deviation is bound to appear sooner or later.)
When all’s said and done the collection of probabilities is overviewed. The MUSiC protocol says that any significant deviation will be scrutinized with more traditional methods – only that this need never actually arose in the 2016 data: all the data played along with the Standard Model, in all 1,069 examined final states and their kinematic ranges.
For the record, the largest deviation was spotted in the final state comprising three electrons, two generic hadronic jets and one jet coming from a bottom quark. Seven events were counted whereas the simulation gave 2.7±1.8 events (mostly coming from the production of a top plus an anti-top quark plus an intermediate vector boson from the collision; the fractional values are due to extrapolating to the amount of collected data). This excess was not seen in other related final states, “related” in that they also either include the same particles or have one less. Everything pointed to a fluctuation and the case was closed.
However, the goal of MUSiC was not strictly to find something new, but rather to demonstrate a method for model un-specific searches with collisions data. The mission seems to be accomplished, with CMS becoming even more general-purpose.
Another generic search method in ATLAS: Going Rogue: The Search for Anything (and Everything) with ATLAS
And a take with machine learning: Letting the Machines Seach for New Physics
Fancy checking a good old model-specific search? Uncovering a Higgs Hiding Behind Backgrounds
## Machine Learning The LHC ABC’s
Article Title: ABCDisCo: Automating the ABCD Method with Machine Learning
Authors: Gregor Kasieczka, Benjamin Nachman, Matthew D. Schwartz, David Shih
Reference: arxiv:2007.14400
When LHC experiments try to look for the signatures of new particles in their data they always apply a series of selection criteria to the recorded collisions. The selections pick out events that look similar to the sought after signal. Often they then compare the observed number of events passing these criteria to the number they would expect to be there from ‘background’ processes. If they see many more events in real data than the predicted background that is evidence of the sought after signal. Crucial to whole endeavor is being able to accurately estimate the number of events background processes would produce. Underestimate it and you may incorrectly claim evidence of a signal, overestimate it and you may miss the chance to find a highly sought after signal.
However it is not always so easy to estimate the expected number of background events. While LHC experiments do have high quality simulations of the Standard Model processes that produce these backgrounds they aren’t perfect. Particularly processes involving the strong force (aka Quantum Chromodynamics, QCD) are very difficult to simulate, and refining these simulations is an active area of research. Because of these deficiencies we don’t always trust background estimates based solely on these simulations, especially when applying very specific selection criteria.
Therefore experiments often employ ‘data-driven’ methods where they estimate the amount background events by using control regions in the data. One of the most widely used techniques is called the ABCD method.
The ABCD method can applied if the selection of signal-like events involves two independent variables f and g. If one defines the ‘signal region’, A, (the part of the data in which we are looking for a signal) as having f and g each greater than some amount, then one can use the neighboring regions B, C, and D to estimate the amount of background in region A. If the number of signal events outside region A is small, the number of background events in region A can be estimated as N_A = N_B * (N_C/N_D).
In modern analyses often one of these selection requirements involves the score of a neural network trained to identify the sought after signal. Because neural networks are powerful learners one often has to be careful that they don’t accidentally learn about the other variable that will be used in the ABCD method, such as the mass of the signal particle. If two variables become correlated, a background estimate with the ABCD method will not be possible. This often means augmenting the neural network either during training or after the fact so that it is intentionally ‘de-correlated’ with respect to the other variable. While there are several known techniques to do this, it is still a tricky process and often good background estimates come with a trade off of reduced classification performance.
In this latest work the authors devise a way to have the neural networks help with the background estimate rather than hindering it. The idea is rather than training a single network to classify signal-like events, they simultaneously train two networks both trying to identify the signal. But during this training they use a groovy technique called ‘DisCo’ (short for Distance Correlation) to ensure that these two networks output is independent from each other. This forces the networks to learn to use independent information to identify the signal. This then allows these networks to be used in an ABCD background estimate quite easily.
The authors try out this new technique, dubbed ‘Double DisCo’, on several examples. They demonstrate they are able to have quality background estimates using the ABCD method while achieving great classification performance. They show that this method improves upon the previous state of the art technique of decorrelating a single network from a fixed variable like mass and using cuts on the mass and classifier to define the ABCD regions (called ‘Single Disco’ here).
While there have been many papers over the last few years about applying neural networks to classification tasks in high energy physics, not many have thought about how to use them to improve background estimates as well. Because of their importance, background estimates are often the most time consuming part of a search for new physics. So this technique is both interesting and immediately practical to searches done with LHC data. Hopefully it will be put to use in the near future!
Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles
Recent ATLAS Summary on New Machine Learning Techniques “Machine learning qualitatively changes the search for new particles
CERN Tutorial on “Background Estimation with the ABCD Method
Summary of Paper of Previous Decorrelation Techniques used in ATLAS “Performance of mass-decorrelated jet substructure observables for hadronic two-body decay tagging in ATLAS
## A shortcut to truth
Article title: “Automated detector simulation and reconstruction
parametrization using machine learning”
Authors: D. Benjamin, S.V. Chekanov, W. Hopkins, Y. Li, J.R. Love
The simulation of particle collisions at the LHC is a pharaonic task. The messy chromodynamics of protons must be modeled; the statistics of the collision products must reflect the Standard Model; each particle has to travel through the detectors and interact with all the elements in its path. Its presence will eventually be reduced to electronic measurements, which, after all, is all we know about it.
The work of the simulation ends somewhere here, and that of the reconstruction starts; namely to go from electronic signals to particles. Reconstruction is a process common to simulation and to the real world. Starting from the tangle of statistical and detector effects that the actual measurements include, the goal is to divine the properties of the initial collision products.
Now, researchers at the Argonne National Laboratory looked into going from the simulated particles as produced in the collisions (aka “truth objects”) directly to the reconstructed ones (aka “reco objects”): bypassing the steps of the detailed interaction with the detectors and of the reconstruction algorithm could make the studies that use simulations much more speedy and efficient.
The team used a neural network which it trained on simulations of the full set. The goal was to have the network learn to produce the properties of the reco objects when given only the truth objects. The process succeeded in producing the transverse momenta of hadronic jets, and looks suitable for any kind of particle and for other kinematic quantities.
More specifically, the researchers began with two million simulated jet events, fully passed through the ATLAS experiment and the reconstruction algorithm. For each of them, the network took the kinematic properties of the truth jet as input and was trained to achieve the reconstructed transverse momentum.
The network was taught to perform multi-categorization: its output didn’t consist of a single node giving the momentum value, but of 400 nodes, each corresponding to a different range of values. The output of each node was the probability for that particular range. In other words, the result was a probability density function for the reconstructed momentum of a given jet.
The final step was to select the momentum randomly from this distribution. For half a million of test jets, all this resulted in good agreement with the actual reconstructed momenta, specifically within 5% for values above 20 GeV. In addition, it seems that the training was sensitive to the effects of quantities other than the target one (e.g. the effects of the position in the detector), as the neural network was able to pick up on the dependencies between the input variables. Also, hadronic jets are complicated animals, so it is expected that the method will work on other objects just as well.
All in all, this work showed the perspective for neural networks to imitate successfully the effects of the detector and the reconstruction. Simulations in large experiments typically take up loads of time and resources due to their size, intricacy and frequent need for updates in the hardware conditions. Such a shortcut, needing only small numbers of fully processed events, would speed up studies such as optimization of the reconstruction and detector upgrades.
Intro to neural networks: https://physicsworld.com/a/neural-networks-explained/
## LHCb’s Flavor Mystery Deepens
Title: Measurement of CP -averaged observables in the B0→ K∗0µ+µ− decay
Authors: LHCb Collaboration
Refference: https://arxiv.org/abs/2003.04831
In the Standard Model, matter is organized in 3 generations; 3 copies of the same family of particles but with sequentially heavier masses. Though the Standard Model can successfully describe this structure, it offers no insight into why nature should be this way. Many believe that a more fundamental theory of nature would better explain where this structure comes from. A natural way to look for clues to this deeper origin is to check whether these different ‘flavors’ of particles really behave in exactly the same ways, or if there are subtle differences that may hint at their origin.
The LHCb experiment is designed to probe these types of questions. And in recent years, they have seen a series of anomalies, tensions between data and Standard Model predictions, that may be indicating the presence of new particles which talk to the different generations. In the Standard Model, the different generations can only interact with each other through the W boson, which means that quarks with the same charge can only interact through more complicated processes like those described by ‘penguin diagrams’.
These interactions typically have quite small rates in the Standard Model, meaning that the rate of these processes can be quite sensitive to new particles, even if they are very heavy or interact very weakly with the SM ones. This means that studying these sort of flavor decays is a promising avenue to search for new physics.
In a press conference last month, LHCb unveiled a new measurement of the angular distribution of the rare B0→K*0μ+μ– decay. The interesting part of this process involves a b → s transition (a bottom quark decaying into a strange quark), where number of anomalies have been seen in recent years.
Rather just measuring the total rate of this decay, this analysis focuses on measuring the angular distribution of the decay products. They also perform this mesaurement in different bins of ‘q^2’, the dimuon pair’s invariant mass. These choices allow the measurement to be less sensitive to uncertainties in the Standard Model prediction due to difficult to compute hadronic effects. This also allows the possibility of better characterizing the nature of whatever particle may be causing a deviation.
The kinematics of decay are fully described by 3 angles between the final state particles and q^2. Based on knowing the spins and polarizations of each of the particles, they can fully describe the angular distributions in terms of 8 parameters. They also have to account for the angular distribution of background events, and distortions of the true angular distribution that are caused by the detector. Once all such effects are accounted for, they are able to fit the full angular distribution in each q^2 bin to extract the angular coefficients in that bin.
This measurement is an update to their 2015 result, now with twice as much data. The previous result saw an intriguing tension with the SM at the level of roughly 3 standard deviations. The new result agrees well with the previous one, and mildly increases the tension to the level of 3.4 standard deviations.
This latest result is even more interesting given that LHCb has seen an anomaly in another measurement (the R_k anomaly) involving the same b → s transition. This had led some to speculate that both effects could be caused by a single new particle. The most popular idea is a so-called ‘leptoquark’ that only interacts with some of the flavors.
LHCb is already hard at work on updating this measurement with more recent data from 2017 and 2018, which should once again double the number of events. Updates to the R_k measurement with new data are also hotly anticipated. The Belle II experiment has also recent started taking data and should be able to perform similar measurements. So we will have to wait and see if this anomaly is just a statistical fluke, or our first window into physics beyond the Standard Model!
Symmetry Magazine “The mystery of particle generations”
Cern Courier “Anomalies persist in flavour-changing B decays”
Lecture Notes “Introduction to Flavor Physcis”
## Letting the Machines Search for New Physics
Article: “Anomaly Detection for Resonant New Physics with Machine Learning”
Authors: Jack H. Collins, Kiel Howe, Benjamin Nachman
Reference : https://arxiv.org/abs/1805.02664
One of the main goals of LHC experiments is to look for signals of physics beyond the Standard Model; new particles that may explain some of the mysteries the Standard Model doesn’t answer. The typical way this works is that theorists come up with a new particle that would solve some mystery and they spell out how it interacts with the particles we already know about. Then experimentalists design a strategy of how to search for evidence of that particle in the mountains of data that the LHC produces. So far none of the searches performed in this way have seen any definitive evidence of new particles, leading experimentalists to rule out a lot of the parameter space of theorists favorite models.
Despite this extensive program of searches, one might wonder if we are still missing something. What if there was a new particle in the data, waiting to be discovered, but theorists haven’t thought of it yet so it hasn’t been looked for? This gives experimentalists a very interesting challenge, how do you look for something new, when you don’t know what you are looking for? One approach, which Particle Bites has talked about before, is to look at as many final states as possible and compare what you see in data to simulation and look for any large deviations. This is a good approach, but may be limited in its sensitivity to small signals. When a normal search for a specific model is performed one usually makes a series of selection requirements on the data, that are chosen to remove background events and keep signal events. Nowadays, these selection requirements are getting more complex, often using neural networks, a common type of machine learning model, trained to discriminate signal versus background. Without some sort of selection like this you may miss a smaller signal within the large amount of background events.
This new approach lets the neural network itself decide what signal to look for. It uses part of the data itself to train a neural network to find a signal, and then uses the rest of the data to actually look for that signal. This lets you search for many different kinds of models at the same time!
If that sounds like magic, lets try to break it down. You have to assume something about the new particle you are looking for, and the technique here assumes it forms a resonant peak. This is a common assumption of searches. If a new particle were being produced in LHC collisions and then decaying, then you would get an excess of events where the invariant mass of its decay products have a particular value. So if you plotted the number of events in bins of invariant mass you would expect a new particle to show up as a nice peak on top of a relatively smooth background distribution. This is a very common search strategy, and often colloquially referred to as a ‘bump hunt’. This strategy was how the Higgs boson was discovered in 2012.
The other secret ingredient we need is the idea of Classification Without Labels (abbreviated CWoLa, pronounced like koala). The way neural networks are usually trained in high energy physics is using fully labeled simulated examples. The network is shown a set of examples and then guesses which are signal and which are background. Using the true label of the event, the network is told which of the examples it got wrong, its parameters are updated accordingly, and it slowly improves. The crucial challenge when trying to train using real data is that we don’t know the true label of any of data, so its hard to tell the network how to improve. Rather than trying to use the true labels of any of the events, the CWoLA technique uses mixtures of events. Lets say you have 2 mixed samples of events, sample A and sample B, but you know that sample A has more signal events in it than sample B. Then, instead of trying to classify signal versus background directly, you can train a classifier to distinguish between events from sample A and events from sample B and what that network will learn to do is distinguish between signal and background. You can actually show that the optimal classifier for distinguishing the two mixed samples is the same as the optimal classifier of signal versus background. Even more amazing, this technique actually works quite well in practice, achieving good results even when there is only a few percent of signal in one of the samples.
The technique described in the paper combines these two ideas in a clever way. Because we expect the new particle to show up in a narrow region of invariant mass, you can use some of your data to train a classifier to distinguish between events in a given slice of invariant mass from other events. If there is no signal with a mass in that region then the classifier should essentially learn nothing, but if there was a signal in that region that the classifier should learn to separate signal and background. Then one can apply that classifier to select events in the rest of your data (which hasn’t been used in the training) and look for a peak that would indicate a new particle. Because you don’t know ahead of time what mass any new particle should have, you scan over the whole range you have sufficient data for, looking for a new particle in each slice.
The specific case that they use to demonstrate the power of this technique is for new particles decaying to pairs of jets. On the surface, jets, the large sprays of particles produced when quark or gluon is made in a LHC collision, all look the same. But actually the insides of jets, their sub-structure, can contain very useful information about what kind of particle produced it. If a new particle that is produced decays into other particles, like top quarks, W bosons or some a new BSM particle, before decaying into quarks then there will be a lot of interesting sub-structure to the resulting jet, which can be used to distinguish it from regular jets. In this paper the neural network uses information about the sub-structure for both of the jets in event to determine if the event is signal-like or background-like.
The authors test out their new technique on a simulated dataset, containing some events where a new particle is produced and a large number of QCD background events. They train a neural network to distinguish events in a window of invariant mass of the jet pair from other events. With no selection applied there is no visible bump in the dijet invariant mass spectrum. With their technique they are able to train a classifier that can reject enough background such that a clear mass peak of the new particle shows up. This shows that you can find a new particle without relying on searching for a particular model, allowing you to be sensitive to particles overlooked by existing searches.
This paper was one of the first to really demonstrate the power of machine-learning based searches. There is actually a competition being held to inspire researchers to try out other techniques on a mock dataset. So expect to see more new search strategies utilizing machine learning being released soon. Of course the real excitement will be when a search like this is applied to real data and we can see if machines can find new physics that us humans have overlooked!
1. Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles”
2. Blog Post on the CWoLa Method “Training Collider Classifiers on Real Data”
3. Particle Bites Post “Going Rogue: The Search for Anything (and Everything) with ATLAS”
4. Blog Post on applying ML to top quark decays “What does Bidirectional LSTM Neural Networks has to do with Top Quarks?”
5. Extended Version of Original Paper “Extending the Bump Hunt with Machine Learning”
## LIGO and Gravitational Waves: A Hep-ex perspective
The exciting Twitter rumors have been confirmed! On Thursday, LIGO finally announced the first direct observation of gravitational waves, a prediction 100 years in the making. The media storm has been insane, with physicists referring to the discovery as “more significant than the discovery of the Higgs boson… the biggest scientific breakthrough of the century.” Watching Thursday’s press conference from CERN, it was hard not to make comparisons between the discovery of the Higgs and LIGO’s announcement.
Long standing Searches for well known phenomena
The Higgs boson was billed as the last piece of the Standard Model puzzle. The existence of the Higgs was predicted in the 1960s in order to explain the mass of vector bosons of the Standard Model, and avoid non-unitary amplitudes in W boson scattering. Even if the Higgs didn’t exist, particle physicists expected new physics to come into play at the TeV Scale, and experiments at the LHC were designed to find it.
Similarly, gravitational waves were the last untested fundamental prediction of General Relativity. At first, physicists remained skeptical of the existence of gravitational waves, but the search began in earnest with Joseph Webber in the 1950s (Forbes). Indirect evidence of gravitational waves was demonstrated a few decades later. A binary system consisting of a pulsar and neutron star was observed to release energy over time, presumably in the form of gravitational waves. Using Webber’s method for inspiration, LIGO developed two detectors of unprecedented precision in order to finally make direct observation.
Unlike the Higgs, General Relativity makes clear predictions about the properties of gravitational waves. Waves should travel at the speed of light, have two polarizations, and interact weakly with matter. Scientists at LIGO were even searching for a very particular signal, described as a characteristic “chirp”. With the upgrade to the LIGO detectors, physicists were certain they’d be capable of observing gravitational waves. The only outstanding question was how often these observations would happen.
The search for the Higgs involved more uncertainties. The one parameter essential for describing the Higgs, its mass, is not predicted by the Standard Model. While previous collider experiments at LEP and Fermilab were able to set limits on the Higgs mass, the observed properties of the Higgs were ultimately unknown before the discovery. No one knew whether or not the Higgs would be a Standard Model Higgs, or part of a more complicated theory like Supersymmetry or technicolor.
Monumental scientific endeavors
Answering the most difficult questions posed by the universe isn’t easy, or cheap. In terms of cost, both LIGO and the LHC represent billion dollar investments. Including the most recent upgrade, LIGO cost a total $1.1 billion, and when it was originally approved in 1992, “it represented the biggest investment the NSF had ever made” according to France Córdova, NSF director. The discovery of the Higgs was estimated by Forbes to cost a total of$13 billion, a hefty price to be paid by CERN’s member and observer states. Even the electricity bill costs more than \$200 million per year.
The large investment is necessitated by the sheer monstrosity of the experiments. LIGO consists of two identical detectors roughly 4 km long, built 3000 km apart. Because of it’s large size, LIGO is capable of measuring ripples in space 10000 times smaller than an atomic nucleus, the smallest scale ever measured by scientists (LIGO Fact Page). The size of the LIGO vacuum tubes is only surpassed by those at the LHC. At 27 km in circumference, the LHC is the single largest machine in the world, and the most powerful particle accelerator to date. It only took a handful of people to predict the existence of gravitational waves and the Higgs, but it took thousands of physicists and engineers to find them.
Life after Discovery
Even the language surrounding both announcements is strikingly similar. Rumors were circulating for months before the official press conferences, and the expectations from each respective community were very high. Both discoveries have been touted as the discoveries of the century, with many experts claiming that results would usher in a “new era” of particle physics or observational astronomy.
With a few years of hindsight, it is clear that the “new era” of particle physics has begun. Before Run I of the LHC, particle physicists knew they needed to search for the Higgs. Now that the Higgs has been discovered, there is much more uncertainty surrounding the field. The list of questions to try and answer is enormous. Physicists want to understand the source of the Dark Matter that makes up roughly 25% of the universe, from where neutrinos derive their mass, and how to quantize gravity. There are several ad hoc features of the Standard Model that merit additional explanation, and physicists are still searching for evidence of supersymmetry and grand unified theories. While the to-do list is long, and well understood, how to solve these problems is not. Measuring the properties of the Higgs does allow particle physicists to set limits on beyond the Standard Model Physics, but it’s unclear at which scale new physics will come into play, and there’s no real consensus about which experiments deserve the most support. For some in the field, this uncertainty can result in a great deal of anxiety and skepticism about the future. For others, the long to-do list is an absolutely thrilling call to action.
With regards to the LIGO experiment, the future is much more clear. LIGO has only published one event from 16 days of data taking. There is much more data already in the pipeline, and more interferometers like VIRGO and (e)LISA, planning to go online in the near future. Now that gravitational waves have been proven to exist, they can be used to observe the universe in a whole new way. The first event already contains an interesting surprise. LIGO has observed two inspriraling black holes of 36 and 29 solar masses, merging into a final black hole of 62 solar masses. The data thus confirmed the existence of heavy stellar black holes, with masses more than 25 times greater than the sun, and that binary black hole systems form in nature (Atrophysical Journal). When VIRGO comes online, it will be possible to triangulate the source of these gravitational waves as well. LIGO’s job is to watch, and see what other secrets the universe has in store.
|
2023-01-27 07:33:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5912455916404724, "perplexity": 852.6397043189999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494974.98/warc/CC-MAIN-20230127065356-20230127095356-00019.warc.gz"}
|
https://mospace.umsystem.edu/xmlui/browse?value=Ha%2C+Huy+Tai&type=author
|
Now showing items 1-2 of 2
• #### The depth of the associated graded ring of ideals with any reduction number
(2002-12)
Let R be a local Cohen-Macaulay ring, let I be an R-ideal, and let G be the associated graded ring of I. We give an estimate for the depth of G when G is not necessarily Cohen-Macaulay. We assume that I is either equimultiple, ...
• #### Homology multipliers and the relation type of parameter ideals
(2005-01)
We study the relation type question, raised by C. Huneke, which asks whether for a complete equidimensional local ring R there exists a uniform bound for the relation type of parameter ideals. Wang gave a positive answer ...
|
2015-12-01 02:49:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8415525555610657, "perplexity": 591.3585168861346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464396.48/warc/CC-MAIN-20151124205424-00322-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://terahertztechnology.blogspot.com/2017/09/abstract-graphene-based-near-field.html
|
Monday, September 18, 2017
Abstract-Graphene-based near-field optical microscopy: high-resolution imaging using reconfigurable gratings
Sandeep Inampudi, Jierong Cheng, and Hossein Mosallaei
https://www.osapublishing.org/ao/abstract.cfm?uri=ao-56-11-3132&origin=search
High-resolution and fast-paced optical microscopy is a requirement for current trends in biotechnology and materials industry. The most reliable and adaptable technique so far to obtain higher resolution than conventional microscopy is near-field scanning optical microscopy (NSOM), which suffers from a slow-paced nature. Stemming from the principles of diffraction imaging, we present fast-paced graphene-based scanning-free wide-field optical microscopy that provides image resolution that competes with NSOM. Instead of spatial scanning of a sharp tip, we utilize the active reconfigurable nature of graphene’s surface conductivity to vary the diffraction properties of a planar digitized atomically thin graphene sheet placed in the near field of an object. Scattered light through various realizations of gratings is collected at the far-field distance and postprocessed using a transmission function of surface gratings developed on the principles of rigorous coupled wave analysis. We demonstrate image resolutions of the order of ${\lambda }_{0}/16$ using computational measurements through binary graphene gratings and numerical postprocessing. We also present an optimization scheme based on the genetic algorithm to predesign the unit cell structure of the gratings to minimize the complexity of postprocessing methods. We present and compare the imaging performance and noise tolerance of both grating types. While the results presented in this article are at terahertz frequencies (${\lambda }_{0}=10\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{\mu m}$), where graphene is highly plasmonic, the proposed microscopy principle can be readily extended to any frequency regime subject to the availability of tunable materials.
© 2017 Optical Society of America
|
2018-02-20 03:46:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5948228240013123, "perplexity": 2705.8268184854496}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812873.22/warc/CC-MAIN-20180220030745-20180220050745-00378.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/a-small-telescope-has-objective-lens-focal-length-144-cm-eyepiece-focal-length-60-cm-what-magnifying-power-telescope-what-separation-between-objective-eyepiece-optical-instruments-telescope_11492
|
A Small Telescope Has an Objective Lens of Focal Length 144 Cm and an Eyepiece of Focal Length 6.0 Cm. What is the Magnifying Power of the Telescope? What is the Separation Between the Objective and the Eyepiece? - Physics
A small telescope has an objective lens of focal length 144 cm and an eyepiece of focal length 6.0 cm. What is the magnifying power of the telescope? What is the separation between the objective and the eyepiece?
Solution
Focal length of the objective lens, fo = 144 cm
Focal length of the eyepiece, fe = 6.0 cm
The magnifying power of the telescope is given as:
m =(f_@)/f_e
= 144/6 = 24
The separation between the objective lens and the eyepiece is calculated as:
f_@ + f_e
= 144 + 6 = 150 cm
Hence, the magnifying power of the telescope is 24 and the separation between the objective lens and the eyepiece is 150 cm.
Is there an error in this question or solution?
APPEARS IN
NCERT Class 12 Physics Textbook
Chapter 9 Ray Optics and Optical Instruments
Q 13 | Page 346
|
2021-03-06 02:50:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.780210554599762, "perplexity": 661.7716396118532}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374217.78/warc/CC-MAIN-20210306004859-20210306034859-00294.warc.gz"}
|
http://math.stackexchange.com/questions/342792/disable-one-angle-of-rotation
|
# Disable one angle of rotation
I'd like to disable one angle of rotation of an object rotating in 3D space. Imagine a camera rotating around and displaying objects as they are in space. I'd like this object to be fixed on the horizontal axis (always in the center of the camera view) and follow the camera rotation on other two angles ( yaw and pitch).
Before multiplying the position matrix with the view matrix, I tried to anull the roll rotation by extracting Euler angles from the view matrix and then recreating it with roll value of zero. Something along these lines:
1. Take view matrix and extract Euler angles
2. Create the same matrix by replacing roll with zero
It's giving somewhat strange results, and there must be a better way to do this.
-
So in other words, if there was a horizontal line across the camera viewer, it would always appear to be parallel to the horizon, correct? Is it critical you do this with Euler angles or would you be willing to do it with quaternions? – rschwieb Mar 27 '13 at 17:23
A quaternion solution which can compute a rotation quaternion as a composition of first a yaw turn then a pitching turn:
I'm thinking of the usual right-handed $i,j,k$ axes in three space. We suppose that the camera begins looking along the $i$ axis, and that the $i,j$ plane is horizontal.
To accomplish a yaw turn through an angle of $\psi$ radians, we can apply the transformation $x\mapsto qxq^{-1}$ where $q=\cos(\psi/2)+\sin(\psi/2)k$.
At that point, we could rotate around $qjq^{-1}$ to perform a pitch turn. Set $h=qjq^{-1}$. A pitch up by $\theta$ radians is accomplished by the transformation $x\mapsto pxp^{-1}$ where $p=\cos(\theta/2)+\sin(\theta/2)h$.
The composition would just be given by $R=pq$, mapping $x\mapsto RxR^{-1}$. One would have to multiply out what $pq$ is in terms of $\psi$ and $\theta$, but that isn't too hard.
Unfortunately I am not adept at converting this solution to Euler angles or rotation matrices, but fortunately there is a wiki devoted to that subject.
-
Thanks, but my quaternion knowledge is rather limited, unfortunately. – user1304844 Mar 28 '13 at 10:26
@user1304844 Yeah, sorry if it doesn't directly help. It's the least complicated theoretical solution that I'm handy with. What texts do you have to help you solve the problem? – rschwieb Mar 28 '13 at 12:06
Actually you did help. I ended up reading your solution over and over again and studying quaternions for a day. What I did was: 1.Take rotation matrix and convert to quaternion; 2. Set Y component to zero (since the y component get multiplied with the Y axis of the vector. If there is no rotation, y is 0); 3. Normalize the quaternion; 4. OPTIONAL: Convert it back to a matrix4f (since openGL works very well with matrices) and set translation values from the original matrix ( the last column of the original matrix gets copied) – user1304844 Mar 29 '13 at 10:31
@user1304844 Awesome! Good luck with your studies! – rschwieb Mar 29 '13 at 13:14
|
2014-09-23 14:44:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6391043066978455, "perplexity": 666.8096887714992}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657138980.37/warc/CC-MAIN-20140914011218-00219-ip-10-234-18-248.ec2.internal.warc.gz"}
|
https://ncatlab.org/nlab/show/place%20at%20infinity/cite
|
# nLab Cite — place at infinity
### Overview
We recommend the following .bib file entries for citing the current version of the page place at infinity. The first is to be used if one does not have unicode support, which is likely the case if one is using bibtex. The second can be used if one does has unicode support. If there are no non-ascii characters in the page name, then the two entries are the same.
In either case, the hyperref package needs to have been imported in one's tex (or sty) file. There are no other dependencies.
The author field has been chosen so that the reference appears in the 'alpha' citation style. Feel free to adjust this.
### Bib entry — Ascii
@misc{nlab:place_at_infinity,
author = {{nLab authors}},
title = {place at infinity},
howpublished = {\url{http://ncatlab.org/nlab/show/place%20at%20infinity}},
note = {\href{http://ncatlab.org/nlab/revision/place%20at%20infinity/2}{Revision 2}},
month = sep,
year = 2019
}
### Bib entry — Unicode
@misc{nlab:place_at_infinity,
author = {{nLab authors}},
title = {place at infinity},
howpublished = {\url{http://ncatlab.org/nlab/show/place%20at%20infinity}},
note = {\href{http://ncatlab.org/nlab/revision/place%20at%20infinity/2}{Revision 2}},
month = sep,
year = 2019
}
### Problems?
Please report any problems with the .bib entries at the nForum.
|
2019-09-16 15:27:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9175422191619873, "perplexity": 4729.325850511438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572744.7/warc/CC-MAIN-20190916135948-20190916161948-00528.warc.gz"}
|
https://gamedev.stackexchange.com/questions/65679/bouncing-off-a-circular-boundary-with-multiple-balls
|
# Bouncing off a circular Boundary with multiple balls?
I am making a game like this :
Yellow smiley has to escape from red smileys, when yellow smiley hits the boundary game is over, when red smileys hit the boundary they should bounce back with the same angle they came, like shown below:
Every 10 seconds a new red smiley comes in the big circle, when red smiley hits yellow, game is over, speed and starting angle of red smileys should be random. I control the yellow smiley with arrow keys. The biggest problem I have reflecting the red smileys from the boundary with the angle they came. I don't know how I can give a starting angle to a red smiley and bouncing it with the angle it came. After some tips I did the reflection, looks reasonable but I am not sure if it is working because sometimes it just bounces to the way they came from.
Now the problem is after each bounce the speed of the red smiley increases and after 4-5 bounces they the speed goes to infinity and balls disappear.
How can I overcome this?
My js source code :
//Smiley.js
var canvas = document.getElementById("mycanvas");
var ctx = canvas.getContext("2d");
var vx;
var vy;
var twiceProjFactor;
// Object containing some global Smiley properties.
var SmileyApp = {
xspeed: 0,
yspeed: 0,
xpos:200, // x-position of smiley
ypos: 200 // y-position of smiley
};
var SmileyRed = {
xspeed: 0,
yspeed: 0,
xpos:350, // x-position of smiley
ypos: 67 // y-position of smiley
};
var SmileyReds = new Array();
for (var i=0; i<5; i++){
SmileyReds[i] = {
xspeed: 0,
yspeed: 0,
xpos:350, // x-position of smiley
ypos: 67 // y-position of smiley
};
SmileyReds[i].xspeed = Math.floor((Math.random()*50)+1);
SmileyReds[i].yspeed = Math.floor((Math.random()*50)+1);
}
function drawBigCircle() {
var centerX = canvas.width / 2;
var centerY = canvas.height / 2;
ctx.beginPath();
ctx.arc(centerX, centerY, radiusBig, 0, 2 * Math.PI, false);
// context.fillStyle = 'green';
// context.fill();
ctx.lineWidth = 5;
// context.strokeStyle = '#003300'; // green
ctx.stroke();
}
function lineDistance( positionx, positiony )
{
var xs = 0;
var ys = 0;
xs = positionx - 350;
xs = xs * xs;
ys = positiony - 350;
ys = ys * ys;
return Math.sqrt( xs + ys );
}
function drawSmiley(x,y,r) {
// outer border
ctx.lineWidth = 3;
ctx.beginPath();
ctx.arc(x,y,r, 0, 2*Math.PI);
//red ctx.fillStyle="rgba(255,0,0, 0.5)";
ctx.fillStyle="rgba(255,255,0, 0.5)";
ctx.fill();
ctx.stroke();
// mouth
ctx.beginPath();
ctx.moveTo(x+0.7*r, y);
ctx.arc(x,y,0.7*r, 0, Math.PI, false);
// eyes
var reye = r/10;
var f = 0.4;
ctx.moveTo(x+f*r, y-f*r);
ctx.arc(x+f*r-reye, y-f*r, reye, 0, 2*Math.PI);
ctx.moveTo(x-f*r, y-f*r);
ctx.arc(x-f*r+reye, y-f*r, reye, -Math.PI, Math.PI);
// nose
ctx.moveTo(x,y);
ctx.lineTo(x, y-r/2);
ctx.lineWidth = 1;
ctx.stroke();
}
function drawSmileyRed(x,y,r) {
// outer border
ctx.lineWidth = 3;
ctx.beginPath();
ctx.arc(x,y,r, 0, 2*Math.PI);
//red
ctx.fillStyle="rgba(255,0,0, 0.5)";
//yellow ctx.fillStyle="rgba(255,255,0, 0.5)";
ctx.fill();
ctx.stroke();
// mouth
ctx.beginPath();
ctx.moveTo(x+0.4*r, y+10);
ctx.arc(x,y+10,0.4*r, 0, Math.PI, true);
// eyes
var reye = r/10;
var f = 0.4;
ctx.moveTo(x+f*r, y-f*r);
ctx.arc(x+f*r-reye, y-f*r, reye, 0, 2*Math.PI);
ctx.moveTo(x-f*r, y-f*r);
ctx.arc(x-f*r+reye, y-f*r, reye, -Math.PI, Math.PI);
// nose
ctx.moveTo(x,y);
ctx.lineTo(x, y-r/2);
ctx.lineWidth = 1;
ctx.stroke();
}
// --- Animation of smiley moving with constant speed and bounce back at edges of canvas ---
var tprev = 0; // this is used to calculate the time step between two successive calls of run
function run(t) {
requestAnimationFrame(run);
if (t === undefined) {
t=0;
}
var h = t - tprev; // time step
tprev = t;
SmileyApp.xpos += SmileyApp.xspeed * h/1000; // update position according to constant speed
SmileyApp.ypos += SmileyApp.yspeed * h/1000; // update position according to constant speed
for (var i=0; i<SmileyReds.length; i++){
SmileyReds[i].xpos += SmileyReds[i].xspeed * h/1000; // update position according to constant speed
SmileyReds[i].ypos += SmileyReds[i].yspeed * h/1000; // update position according to constant speed
}
// change speed direction if smiley hits canvas edges
if (lineDistance(SmileyApp.xpos, SmileyApp.ypos) + SmileyApp.radius > 300) {
}
for (var i=0; i<SmileyReds.length; i++){
if (lineDistance(SmileyReds[i].xpos, SmileyReds[i].ypos) + SmileyReds[i].radius > 300) {
// Red Smiley collusion
//SmileyReds[i].xpos
//SmileyReds[i].xspeed
//SmileyReds[i].ypos
//SmileyReds[i].yspeed
// r = v − [2 (n · v) n] formula
//n calculation
nx = 350 - SmileyReds[i].xpos ;
ny = 350 - SmileyReds[i].ypos ;
nx = nx / (Math.sqrt(nx * nx + ny * ny));
ny = ny / (Math.sqrt(nx * nx + ny * ny));
//new calc
v_newx = SmileyReds[i].xspeed - (2 *( nx * SmileyReds[i].xspeed + ny * SmileyReds[i].yspeed ) ) * nx;
v_newy = SmileyReds[i].yspeed - (2 *( nx * SmileyReds[i].xspeed + ny * SmileyReds[i].yspeed ) ) * ny;
SmileyReds[i].xspeed = v_newx;
SmileyReds[i].yspeed = v_newy;
//to calculate "n," you do (626/L, 282/L) where L=sqrt(xpos^2+ypos^2)
}
}
/* Square canvas
if ((SmileyApp.xpos + SmileyApp.radius > canvas.width) ||
(SmileyApp.xpos - SmileyApp.radius) < 0) {
SmileyApp.xspeed = -SmileyApp.xspeed;
}
if ((SmileyApp.ypos + SmileyApp.radius > canvas.height) ||
(SmileyApp.ypos - SmileyApp.radius) < 0) {
SmileyApp.yspeed = -SmileyApp.yspeed;
}
*/
// redraw smiley at new position
ctx.clearRect(0,0,canvas.height, canvas.width);
drawBigCircle();
for (var i=0; i<SmileyReds.length; i++){
}
}
// uncomment these two lines to get every going
// SmileyApp.speed = 100;
run();
// --- Mouse wheel event handler to grow and shrink smiley
function mousewheelCB(event){
event.preventDefault();
event.stopPropagation();
}
mousewheelCB,
false);
// --- Control smiley motion with left/right arrow keys
function arrowkeyCB(event) {
event.preventDefault();
if (event.keyCode === 37) { // left arrow
SmileyApp.xspeed = -100;
SmileyApp.yspeed = 0;
} else if (event.keyCode === 39) { // right arrow
SmileyApp.xspeed = 100;
SmileyApp.yspeed = 0;
} else if (event.keyCode === 38) { // up arrow
SmileyApp.yspeed = -100;
SmileyApp.xspeed = 0;
} else if (event.keyCode === 40) { // right arrow
SmileyApp.yspeed = 100;
SmileyApp.xspeed = 0;
}
}
/*
function run(){
console.log("Here is run");
console.log(Date.now());
ctx.clearRect(0,0,canvas.width, canvas.height);
xpos = 200;
drawSmiley2(100,100,20);
xpos = xpos-50;
// decent animation 30 pictures per second
}
setInterval(run, 50);
*/
JSFiddle : http://jsfiddle.net/gj4Q7/2/
You can do this using vector math. There's a standard formula for reflecting an incoming vector (v₁) off a normal, which you can see derived on this page. The formula is
v₂ = v₁ - 2 (v₁ · n) n
Since it's a circle, the normal is just the normalized vector from the collision point toward the center of the circle. This is a good formula to put in a utility function in your math library, since it will come in handy all over the place when programming games.
• thanks for the reply firstly but in my Red Smileys I only have x,y coordinates, nothing else ... Don't we need at least 2 points for calculating a vector? From my x,y how can I do this? Or does my Red Smiley object need more variables? I'm not good at maths :( – Anarkie Nov 10 '13 at 21:32
• @Anarkie The velocity x,y already is a vector. You might want to read this article series to learn the basics of vectors. They're not very complicated, and you'll use them all the time in game programming. – Nathan Reed Nov 10 '13 at 21:34
• I read the link you gave, but still can't figure out, can you give an example calculation for example (x=340,y=320) when the smiley hits the boundary? – Anarkie Nov 10 '13 at 22:07
• in this example, the vector v1 is (xspeed,yspeed) and n=(circlex-xpos,circley-ypos), where (circlex,circle) is the position of the centre of the circle. v2 will be the new (xspeed,yspped) – Ken Nov 11 '13 at 17:49
• @Ken if I understood correct my center position is 350,350 so n should be : (350-xposition, 350-yposition)? After a little bit better understnading the vectors my questions is "Lets say a ball with xspeed: 14, yspeed: 16 hits the circular edge at xposition:626 yposition:382" then how would we calculate this? Do wee need to find out teta?If yes how :( – Anarkie Nov 11 '13 at 21:20
I wanted to comment on the answer given by @Nathan Reed (which is pretty good, btw - I do not wish to make his answer less relevant by answering again), but I do not have enough reputation yet. Anyway, I just wanted to point out that you can try to use this Vector2D class from Kevin Lindsey: http://www.kevlindev.com/gui/math/vector2d/index.htm Include that class in your project and then just do as Nathan suggested. You can find many other implementations of vector math, just google for "Vector2D Javascript".
Also, it might be useful for you to study a little about steering behaviours. These links have relevant information:
|
2019-10-19 12:49:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26720935106277466, "perplexity": 8677.700522966139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986693979.65/warc/CC-MAIN-20191019114429-20191019141929-00027.warc.gz"}
|
https://www.nature.com/articles/s41597-022-01339-w?error=cookies_not_supported&code=49d0b4de-567c-43a3-bf8d-92ccd7f56542
|
## Background & Summary
Over the last decades, the topic of induced seismicity has become increasingly important, in response to the growing concern that industrial activities could induce or trigger damaging earthquakes. The occurrence of felt and damaging events has significant consequences on social acceptance of activities that may produce these events1. A recent notable case is the Mw 5.5 November 2017 Pohang (South Korea) earthquake that has been linked to geothermal energy exploitation operations close to the epicentral area2,3,4. This case highlights the need for new paradigms to manage the risk posed by induced seismicity4,5,6. Within this context, the project COntrol SEISmicity and Manage Induced earthQuakes (COSEISMIQ) aimed to test new generations of real-time induced seismicity management tools5,6 using sophisticated real-time seismic monitoring techniques, geomechanical models and seismic hazard and risk analysis methods. The site selected to test these methods is the Hengill region in Iceland (Fig. 1), where geothermal energy has been exploited for electrical power and heat production since the late 1960s7. The Hengill geothermal area is located in SW Iceland on the plate boundary between the North American and Eurasian plates. In particular it is located in the triple junction between the oblique spreading-type Reykjanes Peninsula (RP), the Western Volcanic Zone (WVZ), and the transform-type South Iceland Seismic Zone (SISZ) (see Fig. 1). From a seismological point of view this is one of the most active zones on Earth, with many thousands of earthquakes being recorded every year. The Hengill region also hosts the two largest geothermal power plants in Iceland, the Nesjavellir and the Hellisheidi power stations (Fig. 1), thus also the presence of induced seismicity characterizes this area.
The Nesjavellir power plant produces about 120 MW of electricity and supplies hot water to Reykjavik. The production of hot water began in 1990, with electricity production starting from 1998. Re-injection into shallow wells that were drilled and tested in early 2001 started in 2004, with the water entering the rock formation between 400–550 m depth. Since 2000, earthquake activity has mostly been confined to the production and re-injection area of the power plant with several earthquakes up to magnitude 3.58.
The Hellisheidi power plant is the third largest geothermal power plant in the world, producing about 300 MW of electricity and provides heat for domestic heating in Reykjavik9. The production began in 2006 and to maintain reservoir pressure, wastewater re-injection in the geothermal reservoir is necessary. Injection operations started in 2006 and increased in the fall of 2011 when a new injection site came into use. The new injection wells were drilled at the periphery of the geothermal field about 1 km northwest of the power plant, targeting the major SSW-NNE faults forming the westernmost part of the graben. Seismic activity occurred during drilling and testing operations of most of the injection wells10. The injection at this site received special attention for having triggered several earthquake swarms including two Ml 3.8 earthquakes in October 2011, a few weeks after it was initiated with a flow rate of around 550 L/s11. Since this region is also seismically active the problem of discrimination between natural and induced seismicity is also relevant12.
In this paper, we announce the release of about 2-years (from 2018/12/01 to 2021/01/31) of high-quality seismic data collected and analyzed during the COSEISMIQ project. The released dataset includes the raw continuous seismic waveforms and seismicity catalogues. The manuscript also describes the methods used to generate the seismicity catalogues. The seismic network comprises stations from a dense temporary deployment comprising broadband and short period sensors operated by the COSEISMIQ project partners, as well as from the background permanent monitoring stations operated by Iceland GeoSurvey (ISOR) and Icelandic Meteorological Office (IMO). All waveform data is distributed via the European Integrated Data Archive (EIDA; http://www.orfeus-eu.org/data/eida/). The catalogues are distributed via ETH-Zurich. All information is openly available through community standard FDSN webservices.
This large dataset is particularly valuable since a very dense network was deployed in a seismically active region where both induced and natural seismicity are occurring. The dataset includes moderate size earthquakes (Mw > 4). For this reason the collected dataset can be used within a broad range of research topics in seismology. In addition, due to the large number of recorded earthquakes within the selected period (about 12000 manually located events, roughly 16/day) this dataset is very well suited for testing new developed seismic analysis methods and is a perfect playground for the development of data intensive techniques such as waveforms or machine learning based methods.
## Methods
### Data processing
For the analysis of natural and induced seismicity recorded at the Hengill site in Iceland, we used an optimally tuned SeisComP-based processing server to produce automated seismicity catalogues. SeisComP is a widely used open-source software suite for data acquisition, processing, archiving and visualization of seismic data at global and regional scales13, and more recently, also used for microseismic monitoring operations14. To create catalogues of seismic events with absolute locations, SeisComP modules for phase detection, phase association, event detection, location, magnitude estimation and quality (score) evaluation are applied in sequential order with the output of each module in general contributing as input for the subsequent module. In a subsequent step, a catalogue of absolute location is used to generate a double difference catalogue using a new SeisComp module, rtDD. In general, SeisComp processing can be performed both in real-time and off-line mode. In this manuscript we only report catalogue information generated from off-line data reprocessing, since the real-time processing was only performed in the last months of the project outside the time-frame of this dataset. Our pipeline starts with the automatic phase picking module using an Akaike Information Criteria (AIC) picker for both P and S phases (although for S ones the picking process starts only after a detection of the P phase)15. Phase association and event detection is then performed using the module Scanloc14. A refined location is estimated using the Screloc module, which uses the NonLinLoc algorithm16 combined with a region-specific minimum 1‐D velocity model17,18 developed within the COSEISMIQ project (Table 2)19. This model is based on the inversion of about 3000 P-phases and 2200 S-phases manually picked for about 91 seismic events that were recorded during the first 12 months of the COSEISMIQ project. Finally, the local or Richter magnitude (ML) and a location quality score are calculated and the event is added to the catalogue. An important issue we encountered when processing the seismic data from the Hengill area is related to the strong ambient noise contamination of the broadband waveforms that affects local magnitude computation, where a Wood-Anderson filter is applied to the data. Iceland is surrounded by strong oceanic activity that produces an intense environmental noise in the period of 5s–12s12. This makes magnitude estimation challenging and, without addressing this issue, for events below Ml 1.0 the energy content of the noise is generally larger than that from the events, even considering the very short hypocentral distances often under 10 km that are typical in leading to an overestimation of station magnitudes if no additional high pass filter is applied to suppress the long period energy. In the catalogues presented here, in order to reduce the impact of the strong microseismic noise, we used a cosine taper filter in the range of 2–50 Hz, implemented within SeisComP. The importance of this filtering process is illustrated for a recording from a earthquake in Fig. 3. Nevertheless, the use of such a filter will lead to underestimation of station magnitudes for larger events because a considerable amount of the event energy can be removed by the filter. The magnitude where this effect becomes significant depends on the corner frequency of the high-pass filter, the 2 Hz corner used here begins to have an effect for local events with Ml above 3.0.
A common challenge, particularly in the case of automated catalogues, is providing robust estimates for the quality of an origin. To reduce the number of poor locations or even false detections in the area of interest we adopt a quality score metric (from now on termed ‘quality score’) that has been developed at the Swiss Seismological Service. The quality score, S, combines multiple key quality parameters of the origin - the azimuthal gap (G, in degrees); the number of P and S phases used, excluding gross outliers (N); the origin RMS (E in s); the minimum source-station distance (D in km); as well as the residual of the pick that corresponds to the 75th percentile (Q). The quality score, S, is then calculated using the following formula:
$$S=-1\left(Q+{\left(\frac{G}{{G}_{cr}}\right)}^{a}+{\left(\frac{E}{{E}_{cr}}\right)}^{b}+{\left(\frac{{N}_{cr}}{0.75N}\right)}^{c}+{\left(\frac{D}{{D}_{cr}}\right)}^{d}\right)$$
(1)
Gcr, Ecr, Ncr and Dcr are critical values. The larger a, b, c and d the more “step-wise” the shape. Also note that the score value is negative, a “higher score” is therefore “less negative” and closer to zero. The quality score must be properly tuned by considering the type of application and the area of interest. We optimally tuned the scoring system for the microseismic monitoring operations in the Hengill area. The score threshold and the related parameters are tuned in order to ensure that seismic events with a high-reliable location and relevant for the monitoring purposes (i.e. within the seismic network) are associated with a score ≥ −1.0. On the other hand, seismic events with a score < −5 and at least 10 seismic phases are considered low-quality events with uncertainties of the order of several kilometers and with several outlier picks. Events associated with a score between these two values are considered of intermediate quality and can be associated with small events within the network (M< −5) or events located at the edge of the network. This tuning process is generally performed by following a trial and error optimization scheme, a detailed description on how tune and use the SeisComP quality score module can be found in the official module repository at https://gitlab.seismo.ethz.ch/sed-sc3/evscore/. The equation of the quality score for this specific application is the following:
$$S=-1\left(Q+{\left(\frac{G}{225}\right)}^{5}+{\left(\frac{E}{0.15}\right)}^{5}+{\left(\frac{5}{0.75N}\right)}^{5}+{\left(\frac{D}{4}\right)}^{8}\right)$$
(2)
We use the quality score to create three different absolute catalogues of different quality as illustrated in Fig. 4 and as summarized in Table 3.
Each catalogue only contains the events located within the following geographical region: 63.9° ≤ Latitude (North) ≤ 64.2° and −21.7° ≤ Longitude (East) ≤ −20.9°. The temporal evolution of the seismicity in the Hengill area is illustrated in Fig. 5 which represents both in the magnitude and cumulative number of events versus time for the high, medium and low quality catalogues respectively.
In a final step, we further improve the quality of our automated seismic catalogue by using a double-difference relocation algorithm20,21 now integrated into SeisComP with the module rtDD22. This new module allows both real-time and offline data processing and has been already tested for real-time and offline relocation in Switzerland. In real-time mode, the module adopts the strategy implemented in RT-HypoDD21 and it uses waveform cross-correlation and double-difference methods to rapidly relocate new seismic events with high precision using the historical events with accurately known locations (background catalogue). In order to create such a background catalogue, these high-quality events can be relocated using a multi-event double-difference relative relocation procedure (i.e. using rtDD in offline mode). We create a double-difference catalogue using the multi-event procedure restricted only to events in the high quality catalogue that have been relocated by using rtDD in offline mode (Fig. 6). Note the significantly enhanced clustering and emergence of lineaments for the double difference catalogue.
## Data Records
The datasets are provided in formats and through services following seismological community standards defined by the International Federation of Digital Seismograph Networks (FDSN, https://www.fdsn.org). Data can be accessed through the the following FDSN web services:
fdsnws-station service to access the station metadata in text and XML format
fdsnws-dataselect service to access the waveform data in miniSEED format
fdsnws-event service to access the event parameters in text and QuakeML format
The continuous raw seismic waveforms are avaialble as binary files in miniSEED format, which is derived from the SEED (Standard for the Exchange of Earthquake Data) data format. While a SEED file consists in both time series values and metadata, the miniSEED format contains only the time series values (binary) and a very limited metadata (identification information). The complete metadata (i.e. station and instrument response information) is stored in a separate file called DATALESS. The metadata describing the stations is available in ascii (i.e. text) and stationXML format (https://stationxml-doc.readthedocs.io/en/release-1.1.0/). The catalogues are available in ascii and quakeMl format (https://quake.ethz.ch/quakeml/). Waveforms, station metadata and seismicity catalogues are available using standard FDSN webservices (https://www.fdsn.org/webservices/). The majority of the temporary COSEISMIQ stations are assigned to a temporary FDSN network code (https://www.fdsn.org/networks) 2 C23. For the small aperture array managed by GFZ, the network code is 4Q24. The existing stations operated by ISOR use network code OR25, and those operated by IMO use network code VI.
Waveform data and its associated metadata from 2 C are permanently hosted at the ETH node of the European Integrated Data Archive (EIDA, https://www.orfeus-eu.org/data/eida/). Data from OR and VI are temporarily hosted at the ETH node, and will be moved to a Icelandic node once it is created. Waveform data and station information can be transparently accessed using the EIDA Federator, which provides direct access to the data irrespective of the actual location of the data. Data from the 4Q network are archived at the GFZ EIDA node. Data at the ETH and GFZ EIDA nodes are stored using the SeisComP Data Structure (SDS, https://www.seiscomp3.org/doc/applications/slarchive/SDS.html), where folders are hierarchically organized by year, network code, station names, and channels. Each miniSEED file is 1-day long is named to uniquely identify the time series. The name of each file includes the network code; the station name; the channel; and the Julian date. The catalogues are available using a persistent ETH endpoint. The Table 4 we show few examples on how to access to the data using the different services. More specifically, the query in Table 4 associated with the fdsnws-station can be used to provide a list of all the COSEISMIQ stations. This query returns a text file as the format parameter is set to text. The location of the station and the temporal duration of available data is indicated. For the permanent networks OR and VI, only data recorded during the COSEISMIQ project is available, the entire dataset will be made available once an Icelandic EIDA node is created. Information at the network and the channel level can be obtained by setting the parameter level equal to network or channel, respectively. Custom requests can be performed by adding or modifying query parameters (more detail in the FDSN webservice site).
The second query in Table 4 associated with the fdsnws-dataselect service describes, with a simple example, the access to waveform data. This request will return the waveform plotted in Fig. 3.
Finally, the last query of Table 4 and associated with fdsnws-event service explains how to access the different seismicity catalogues. With this example we retrieve information about the 3 events included in the high quality catalogue on the date 1.1.2019 and in text format. By changing the contributor parameter events from the other available catalogues can be retrieved. There are 5 different catalogues that can be requested - SED_auto_LQ, SED_auto_MQ, SED_auto_HQ, SED_auto_HQ_MEDD, ISOR_manual as summarized in the Table 5. These catalogues are also accessible through the figshare repository associated with this paper26. The figshare repository also contains shell scripts containing pre-compiled FDSN queries allowing to download both continuous waveforms (full dataset) and event waveforms for each seismic catalogue previously described.
## Technical Validation
Quality checking of the recorded waveforms has been performed by looking at data completeness and noise analysis. We analyse catalogue quality and completeness by comparing with the manual ISOR catalogue from the same period. The data completeness for each station of the network (within the entire time-frame of the project) is presented in Figure e.1 (in the electronic supplement) that presents the data availability for each station and the percentage of data completeness. In addition, we calculated the Power Spectral Density (PSD) of the noise at each station of the network. These PSDs are accessible at http://www.seismo.ethz.ch/en/research-and-teaching/products-software/station-information/noise-coseismiq/). We observed that high noise levels affect broadband waveforms within the band frequency 0.1–1.0 Hz (mainly related to the primary and secondary microseisms), hence to correctly determine the magnitude of the seismic events we filtered the waveforms with a bandpass filter in the frequency range 2–50 Hz. To evaluate the overall performance of our automatically generated catalogues, we compare them with the manually reviewed catalogue provided by ISOR. In order to match automatically and manually located events we selected the following matching parameters: 1) origin time difference between two events less than 30 seconds and 2) latitude and longitude difference less than 0.1 degrees. If multiple events satisfy this condition we chose the event pair with the smallest origin-time difference. Figure 7 compares the locations of matching events between each of the low, medium and high quality automated catalogues and the manual catalogue.
From Fig. 7 it is clear that the low quality catalogue includes events with significant location errors, while the medium and high quality catalogues are more consistent with the manual catalogue provided by ISOR. The average location error (i.e. average difference between the manual and automated locations) for the low, medium and high quality catalogues are 2.6, 1.2 and 0.7 km respectively. Due to large errors on hypocentral coordinates and origin time of the low quality catalogue, we were not able to find a match with all the manually inspected earthquakes. An overview of the location errors for each automatic catalogue with respect to matched locations from the manual ISOR catalogue is shown in Fig. 8, showing that for about 80% of the events the hypocentral location difference between the automated (any quality) and matching manual locations is within 1 km. It is important to note that the the low, medium and high quality catalogues are obtained using fully automated procedures and the quality based classification has been performed by filtering the raw catalogue using the quality score and the number of phases as described in the previous section.
## Usage Notes
The Hengill region is characterized by an intense seismic activity, and using the dense seismic network that operated across the 26 months analysed in this manuscript, more than 10,000 events have been detected. The COSEISMIQ seismic network, comprising about 40 stations deployed with an average inter-station distance of about 2 km, is a unique dataset for its genre. The massive number of earthquakes that have occurred in the area, combined with the presence of many seismic sequences characterized by very short inter-event times (about 10 s) makes the analysis of this dataset particularly challenging, and hence is a perfect playground for data intensive techniques such as full-waveform or machine learning based analysis methods27. The seismic catalogues (both manual and automated) accompanying this paper can be used as reference to evaluate the performance of newly tested methods. In addition, due to the complex geology of this region, the dataset presented within this paper can be a valuable asset to better studying the natural and induced seismicity of the area. In publishing this dataset (consisting of both continuous raw waveforms and seismicity catalogues) one of our main aim is to provide a baseline for the comparison of fully automated methods for the analysis of seismicity, hence our automatic catalogues have been only sorted by quality score and not manually inspected after their generation. For this reason if not used for the benchmark of newly developed methods, these catalogues should be handled with caution, this is particularly true for the low quality catalogue that includes events with large location errors and false events. The medium and the high quality catalogue (and, of course, the double difference catalogue), on the other hand, are better suited to be used as a starting point for additional seismological analyses (e.g. focal mechanism determination, b-value analysis etc.) or interpretation.
|
2023-03-25 03:02:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30982938408851624, "perplexity": 2158.755866010111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00247.warc.gz"}
|
http://mathoverflow.net/feeds/question/28277
|
Differential operators preserving the space of harmonic functions (aka higher symmetries of the Laplacian) - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-18T19:32:15Z http://mathoverflow.net/feeds/question/28277 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/28277/differential-operators-preserving-the-space-of-harmonic-functions-aka-higher-sym Differential operators preserving the space of harmonic functions (aka higher symmetries of the Laplacian) robot 2010-06-15T16:58:27Z 2010-07-13T23:22:16Z <p>The article <a href="http://arxiv.org/abs/hep-th/0206233" rel="nofollow">http://arxiv.org/abs/hep-th/0206233</a> (published in Ann. of Math. (2) 161 (2005), no. 3) deals with linear differential operators $D$ for which there exists another linear differential operator $\delta$ such that $\Delta D = \delta \Delta$. Obviously these operators preserve the kernel of $\Delta$, i.e. the space of harmonic functions. The mentioned article finds essentially all such operators $D$. The result is that up to trivial operators $D = P\Delta$ all the operators $D$ have polynomial coefficients and are generated by sums of compositions of first order operators of this kind.</p> <p>First question: Let $D$ be any differential operator preserving the space of harmonic functions. It is easy to see that the operator $\delta = \Delta D (\Delta)^{-1}$ is well defined and satisfies $\Delta D = \delta \Delta$. Is $\delta$ also a differential operator?</p> <p>Second question: Is it true that all differential operators, which preserve the space of harmonic functions, are generated by first order ones with this property?</p> <p>One can also ask these questions only for linear differential operators or for operators from the Weyl algebra (i.e. linear differential operators with polynomial coefficients). For example, by a theorem of Peetre, the answer to the first question is affirmative if the operator $\delta = \Delta D (\Delta)^{-1}$ is local (i.e. the support of $\delta u$ is contained in the support of $u$).</p> <p>Third question: What makes the linked article so interesting that it was published in Annals?</p> http://mathoverflow.net/questions/28277/differential-operators-preserving-the-space-of-harmonic-functions-aka-higher-sym/28311#28311 Answer by mathphysicist for Differential operators preserving the space of harmonic functions (aka higher symmetries of the Laplacian) mathphysicist 2010-06-15T20:29:52Z 2010-06-15T21:05:05Z <p>The answer to your second question (unless I somehow misread it) is <strong>yes</strong> precisely because of the result of the paper you refer to (you may also wish to look at <a href="http://www.springerlink.com/content/r1540864nn27rt97/" rel="nofollow">this paper</a> and the preprint <a href="http://arxiv.org/abs/math-ph/0506002" rel="nofollow">math-ph/0506002</a> which address the same subject). This is the case because if $D$ is a differential operator that preserves the space of harmonic functions then there indeed exists a <strong>differential</strong> operator $\delta$ such that $\Delta D = \delta \Delta$. The latter holds (see e.g. the discussion at p.290 near Eq.(5.5) of the book <em><a href="http://books.google.com/books?id=sI2bAxgLMXYC" rel="nofollow">Applications of Lie groups to Differential Equations</a></em> by P.J. Olver) because the equation $\Delta f=0$ is totally nondegenerate in the sense of Definition 2.83 of the same book. In spite of the rather technical language the idea behind all this is very simple: if you have a submanifold $N$ of an manifold $M$ defined by the equations $F_1=0, \dots, F_k=0$ with smooth $F$'s and $k<\mathrm{dim}\ M$, then a smooth function $h$ vanishes on $N$ iff there exist smooth functions <i>h<sub>j</sub></i> on $M$ such that $$h=h_{1} F_1+\cdots+h_k F_k$$ provided $dF_1\wedge \dots \wedge dF_k\neq 0$ on $N$ (see Proposition 2.10 of the same book). In a sense, this is a smooth counterpart of the famous Hilbert's Nullstellensatz in the form stated e.g. <a href="http://mathworld.wolfram.com/HilbertsNullstellensatz.html" rel="nofollow">here</a>. This result is then applied to the case when $M$ is a <a href="http://en.wikipedia.org/wiki/Jet_bundle" rel="nofollow">jet bundle</a> and $N$ is a submanifold thereof defined by a system of differential equations and all its differential consequences (more precisely, one should rather consider the consequences only up to a certain order, to avoid dealing with infinitely many equations), et voila.</p>
|
2013-05-18 19:32:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9453188180923462, "perplexity": 268.6912489705993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382764/warc/CC-MAIN-20130516092622-00070-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://rpg.stackexchange.com/questions/59243/would-the-detect-magic-spell-reveal-a-sorcerer?noredirect=1
|
# Would the Detect Magic spell reveal a sorcerer? [duplicate]
Since sorcerers are inherently magical (or maybe more accurately, inherently contain magic), would the detect magic spell reveal an aura around the sorcerer?
If so, would this also be the case with a warlock? I would assume wizards to be a definite no, since their magic is more of a harnessing of magic than any personal ability.
I believe this is unrelated to any possible aura attributed to prepared spells. To quote my below comment:
To my mind a prepared spell is just that, the preparation of materials and maybe a quick study of somatics/verbal. This wouldn't make the spellcaster any more or less magical. This is why I specifically reference the Sorcerer
## marked as duplicate by V2Blast♦, user17995, Purple Monkey dnd-5e StackExchange.ready(function() { if (StackExchange.options.isMobile) return; $('.dupe-hammer-message-hover:not(.hover-bound)').each(function() { var$hover = $(this).addClass('hover-bound'),$msg = $hover.siblings('.dupe-hammer-message');$hover.hover( function() { $hover.showInfoMessage('', { messageElement:$msg.clone().show(), transient: false, position: { my: 'bottom left', at: 'top center', offsetTop: -7 }, dismissable: false, relativeToBody: true }); }, function() { StackExchange.helpers.removeMessages(); } ); }); }); Jul 25 '18 at 5:52
Excerpt from Detect Magic
For the duration, you sense the presence of magic within 30 feet of you. If you sense magic in this way, you can use your action to see a faint aura around any visible creature or object in the area that bears magic, and you learn its school of magic, if any.
Except from Sorcerer description
Magic is a part of every sorcerer, suffusing body, mind, and spirit with a latent power that waits to be tapped. Some sorcerers wield magic that springs from an ancient bloodline infused with the magic of dragons. Others carry a raw, uncontrolled magic within them, a chaotic storm that manifests in unexpected ways.
So obviously, while considering that 5e does not separate flavor text and official rules the RAW is yes. (DM's can decide how to handle it on an individual basis obviously)
As for Warlocks:
A warlock is defined by a pact with an otherworldly being. Sometimes the relationship between warlock and patron is like that of a cleric and a deity, though the beings that serve as patrons for warlocks are not gods.
So not necessarily, however once you have been blessed with an ongoing magic power like darkvision things change.
• Are you suggesting that once an invocation has permanent effect, like devil's sight, then the Warlock will always emanate a magical aura? – KorvinStarmast Jun 5 '16 at 14:19
# No, detect magic doesn't detect sorcerers (or other spellcasters) as magical
### Is the breath weapon of a dragon magical?
If you cast antimagic field, don armor of invulnerability, or use another feature of the game that protects against magical or non-magical effects, you might ask yourself, “Will this protect me against a dragon’s breath?” The breath weapon of a typical dragon isn’t considered magical, so antimagic field won’t help you but armor of invulnerability will.
You might be thinking, “Dragons seem pretty magical to me.” And yes, they are extraordinary! Their description even says they’re magical. But our game makes a distinction between two types of magic:
• the background magic that is part of the D&D multiverse’s physics and the physiology of many D&D creatures
• the concentrated magical energy that is contained in a magic item or channeled to create a spell or other focused magical effect
In D&D, the first type of magic is part of nature. It is no more dispellable than the wind. A monster like a dragon exists because of that magic-enhanced nature. The second type of magic is what the rules are concerned about. When a rule refers to something being magical, it’s referring to that second type. Determining whether a game feature is magical is straightforward. Ask yourself these questions about the feature:
• Is it a magic item?
• Is it a spell? Or does it let you create the effects of a spell that’s mentioned in its description?
• Is it a spell attack?
• Is it fueled by the use of spell slots?
• Does its description say it’s magical?
If your answer to any of those questions is yes, the feature is magical.
Let’s look at a white dragon’s Cold Breath and ask ourselves those questions. First, Cold Breath isn’t a magic item. Second, its description mentions no spell. Third, it’s not a spell attack. Fourth, the word “magical” appears nowhere in its description. Our conclusion: Cold Breath is not considered a magical game effect, even though we know that dragons are amazing, supernatural beings.
Detect magic, like other game mechanics, operates by this same logic with regard to what is considered magical. The spellcasting abilities of creatures (innate or otherwise) are considered "the background magic that is part of [...] the physiology of many D&D creatures". Detect magic is designed to detect magical effects, not the background magic that suffuses creatures or the universe.
Can detect magic detect magic potential of spellcasters even if they're not actively casting a spell?
It's not a wizard detector, if that's what you mean.
Given that the question he's responding to asks about spellcasters in general, it seems clear that his response is not specific to wizards - he's suggesting that the spell doesn't automatically detect spellcasters simply due to their magical abilities.
|
2019-07-23 13:56:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34039488434791565, "perplexity": 5371.975948440248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529406.97/warc/CC-MAIN-20190723130306-20190723152306-00462.warc.gz"}
|
http://miun.diva-portal.org/smash/resultList.jsf?af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22authority-person%3A53816+OR+0000-0001-9372-3416%22%7D%5D%5D&aqe=%5B%5D&aq2=%5B%5B%5D%5D&language=en&query=
|
miun.sePublications
Change search
Refine search result
1 - 20 of 20
Cite
Citation style
• apa
• ieee
• modern-language-association-8th-edition
• vancouver
• Other style
More styles
Language
• de-DE
• en-GB
• en-US
• fi-FI
• nn-NO
• nn-NB
• sv-SE
• Other locale
More languages
Output format
• html
• text
• asciidoc
• rtf
Rows per page
• 5
• 10
• 20
• 50
• 100
• 250
Sort
• Standard (Relevance)
• Author A-Ö
• Author Ö-A
• Title A-Ö
• Title Ö-A
• Publication type A-Ö
• Publication type Ö-A
• Issued (Oldest first)
• Created (Oldest first)
• Last updated (Oldest first)
• Disputation date (earliest first)
• Disputation date (latest first)
• Standard (Relevance)
• Author A-Ö
• Author Ö-A
• Title A-Ö
• Title Ö-A
• Publication type A-Ö
• Publication type Ö-A
• Issued (Oldest first)
• Created (Oldest first)
• Last updated (Oldest first)
• Disputation date (earliest first)
• Disputation date (latest first)
Select
The maximal number of hits you can export is 250. When you want to export more records please use the Create feeds function.
• 1.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
A Parameter Tuning Framework for Metaheuristics Based on Design of Experiments and Artificial Neural Networks2010In: Proceeding of the International Conference on Computer Mathematics and Natural Computing 2010 / [ed] B. Brojack, WASET , 2010Conference paper (Refereed)
In this paper, a framework for the simplification andstandardization of metaheuristic related parameter tuning by applyinga four phase methodology, utilizing Design of Experiments andArtificial Neural Networks, is presented. Metaheuristics are multipurposeproblem solvers that are utilized on computational optimizationproblems for which no efficient problem-specific algorithmexists. Their successful application to concrete problems requires thefinding of a good initial parameter setting, which is a tedious andtime-consuming task. Recent research reveals the lack of approachwhen it comes to this so called parameter tuning process. In themajority of publications, researchers do have a weak motivation fortheir respective choices, if any. Because initial parameter settingshave a significant impact on the solutions quality, this course ofaction could lead to suboptimal experimental results, and therebya fraudulent basis for the drawing of conclusions.
• 2.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
A parameter-tuning framework for metaheuristics based on design of experiments and artificial neural networks2010In: World Academy of Science, Engineering and Technology: An International Journal of Science, Engineering and Technology, ISSN 2010-376X, E-ISSN 2070-3740, Vol. 64, p. 213-216Article in journal (Refereed)
In this paper, a framework for the simplification and standardization of metaheuristic related parameter-tuning by applying a four phase methodology, utilizing Design of Experiments and Artificial Neural Networks, is presented. Metaheuristics are multipurpose problem solvers that are utilized on computational optimization problems for which no efficient problem specific algorithm exist. Their successful application to concrete problems requires the finding of a good initial parameter setting, which is a tedious and time consuming task. Recent research reveals the lack of approach when it comes to this so called parameter-tuning process. In the majority of publications, researchers do have a weak motivation for their respective choices, if any. Because initial parameter settings have a significant impact on the solutions quality, this course of action could lead to suboptimal experimental results, and thereby a fraudulent basis for the drawing of conclusions.
• 3.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
An Adaptive, Searchable and Extendable Context Model,enabling cross-domain Context Storage, Retrieval and Reasoning: Architecture, Design, Implementation and Discussion2009Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
The specification of communication standards and increased availability of sensors for mobile phones and mobile systems are responsible for a significantly increasing sensor availability in populated environments. These devices are able to measure physical parameters and make this data available via communication in sensor networks. To take advantage of the so called acquiring information for public services, other parties have to be able to receive and interpret it. Locally measured datacould be seen as a means of describing user context. For a generic processing of arbitrary context data, a model for the specification ofenvironments, users, information sources and information semantics has to be defined. Such a model would, in the optimal case, enable global domain crossing context usage and hence a broader foundation for context interpretation and integration.This thesis proposes the CII-(Context Information Integration) model for the persistence and retrieval of context information in mobile, dynamically changing, environments. It discusses the terms context and context modeling under the analysis of former publications in thefield. Further-more an architecture and prototype are presented.Live and historical data are stored and accessed by the same platform and querying processor, but are treated in an optimized fashion.Optimized retrieval for closeness in n-dimensional context-spaces is supported by a dedicated method. The implementation enables self-aware,shareable agents that are able to reason or act based upon the global context,including their own. These agents can be considered as being a part of the wholecontext, being movable and executable for all context-aware applications.By applying open source technology, a gratifying implementation of CII is feasible. The document contains a thorough discussion concerning the software design and further prototype development. The use cases at the end of the document show the flexibility and extendability of the model and its implementation as a context-base for three entirely different applications.
• 4.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
An experimental study on robust parameter settings2010In: Proceedings of the 12th annual conference comp on Genetic and evolutionary computation, ACM Press, 2010, p. 1999-2002Conference paper (Refereed)
That there is no best initial parameter setting for a metaheuristicon all optimization problems is a proven fact (nofree lunch theorem). This paper studies the applicability ofso called robust parameter settings for combinatorial optimizationproblems. Design of Experiments supported parameterscreening had been carried out, analyzing a discreteParticle Swarm Optimization algorithm on three demographicallyvery dissimilar instances of the Traveling SalesmenProblem. First experimental results indicate that parametersettings produce varying performance quality forthe three instances. The robust parameter setting is outperformedin two out of three cases. The results are evensignicantly worse when considering quality/time trade-o.A methodology for problem generalization is referred to asa possible solution.
• 5.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
Automatic Instance-based Tailoring of Parameter Settings for Metaheuristics2011Licentiate thesis, comprehensive summary (Other academic)
Many industrial problems in various fields, such as logistics, process management, orproduct design, can be formalized and expressed as optimization problems in order tomake them solvable by optimization algorithms. However, solvers that guarantee thefinding of optimal solutions (complete) can in practice be unacceptably slow. Thisis one of the reasons why approximative (incomplete) algorithms, producing near-optimal solutions under restrictions (most dominant time), are of vital importance.
Those approximative algorithms go under the umbrella term metaheuristics, each of which is more or less suitable for particular optimization problems. These algorithmsare flexible solvers that only require a representation for solutions and an evaluation function when searching the solution space for optimality.What all metaheuristics have in common is that their search is guided by certain control parameters. These parameters have to be manually set by the user andare generally problem and interdependent: A setting producing near-optimal resultsfor one problem is likely to perform worse for another. Automating the parameter setting process in a sophisticated, computationally cheap, and statistically reliable way is challenging and a significant amount of attention in the artificial intelligence and operational research communities. This activity has not yet produced any major breakthroughs concerning the utilization of problem instance knowledge or the employment of dynamic algorithm configuration.
The thesis promotes automated parameter optimization with reference to the inverse impact of problem instance diversity on the quality of parameter settings with respect to instance-algorithm pairs. It further emphasizes the similarities between static and dynamic algorithm configuration and related problems in order to show how they relate to each other. It further proposes two frameworks for instance-based algorithm configuration and evaluates the experimental results. The first is a recommender system for static configurations, combining experimental design and machine learning. The second framework can be used for static or dynamic configuration,taking advantage of the iterative nature of population-based algorithms, which is a very important sub-class of metaheuristics.
A straightforward implementation of framework one did not result in the expected improvements, supposedly because of pre-stabilization issues. The second approach shows competitive results in the scenario when compared to a state-of-the-art model-free configurator, reducing the training time by in excess of two orders of magnitude.
• 6.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
End-to-End Quality of Service Guarantees for Wireless Sensor Networks2015Doctoral thesis, comprehensive summary (Other academic)
Wireless sensor networks have been a key driver of innovation and societal progressover the last three decades. They allow for simplicity because they eliminate ca-bling complexity while increasing the flexibility of extending or adjusting networksto changing demands. Wireless sensor networks are a powerful means of fillingthe technological gap for ever-larger industrial sites of growing interconnection andbroader integration. Nonetheless, the management of wireless networks is difficultin situations wherein communication requires application-specific, network-widequality of service guarantees. A minimum end-to-end reliability for packet arrivalclose to 100% in combination with latency bounds in the millisecond range must befulfilled in many mission-critical applications.The problem addressed in this thesis is the demand for algorithmic support forend-to-end quality of service guarantees in mission-critical wireless sensor networks.Wireless sensors have traditionally been used to collect non-critical periodic read-ings; however, the intriguing advantages of wireless technologies in terms of theirflexibility and cost effectiveness justify the exploration of their potential for controland mission-critical applications, subject to the requirements of ultra-reliable com-munication, in harsh and dynamically changing environments such as manufactur-ing factories, oil rigs, and power plants.This thesis provides three main contributions in the scope of wireless sensor net-works. First, it presents a scalable algorithm that guarantees end-to-end reliabilitythrough scheduling. Second, it presents a cross-layer optimization/configurationframework that can be customized to meet multiple end-to-end quality of servicecriteria simultaneously. Third, it proposes an extension of the framework used toenable service differentiation and priority handling. Adaptive, scalable, and fast al-gorithms are proposed. The cross-layer framework is based on a genetic algorithmthat assesses the quality of service of the network as a whole and integrates the phys-ical layer, medium access control layer, network layer, and transport layer.Algorithm performance and scalability are verified through numerous simula-tions on hundreds of convergecast topologies by comparing the proposed algorithmswith other recently proposed algorithms for ensuring reliable packet delivery. Theresults show that the proposed SchedEx scheduling algorithm is both significantlymore scalable and better performing than are the competing slot-based schedulingalgorithms. The integrated solving of routing and scheduling using a genetic al-vvigorithm further improves on the original results by more than 30% in terms of la-tency. The proposed framework provides live graphical feedback about potentialbottlenecks and may be used for analysis and debugging as well as the planning ofgreen-field networks.SchedEx is found to be an adaptive, scalable, and fast algorithm that is capa-ble of ensuring the end-to-end reliability of packet arrival throughout the network.SchedEx-GA successfully identifies network configurations, thus integrating the rout-ing and scheduling decisions for networks with diverse traffic priority levels. Fur-ther, directions for future research are presented, including the extension of simula-tions to experimental work and the consideration of alternative network topologies.
• 7.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
Finding Optimal Size TDMA Schedules using Integer ProgrammingManuscript (preprint) (Other academic)
The problem of finding a shortest TDMA is formally described as anInteger Program (IP). A brief user manual explains how the attached implementation can be used to find an optimal size TDMA for any givenWSN and routing table, fulfilling the validity criteria.
• 8.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
InPUT: The Intelligent Parameter Utilization Tool2012In: GECCO Companion 12: Proceedings of the fourteenth international conference on Genetic and evolutionary computation conference companion, New York, NY, USA: ACM Press, 2012, p. 149-156Conference paper (Refereed)
Computer experiments are part of the daily business formany researchers within the area of computational intelligence. However, there is no standard for either human orcomputer readable documentation of computer experiments.Such a standard could considerably improve the collaboration between experimental researchers, given it is intuitiveto use. In response to this deficiency the Intelligent Param-eter Utilization Tool ( InPUT ) is introduced. InPUT offers ageneral and programming language independent format forthe definition of parameters and their ranges. It providesservices to simplify the implementation of algorithms andcan be used as a substitute for input mechanisms of existing frameworks. InPUT reduces code-complexity and increases the reusability of algorithm designs as well as the reproducibility of experiments. InPUT is available as open-source for Java and this will soon also be extended to C++, two ofthe predominant languages of choice for the development ofevolutionary algorithms.
• 9.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
Iteration-wise parameter learning2011In: 2011 IEEE Congress of Evolutionary Computation, CEC 2011, New Orleans, LA: IEEE conference proceedings, 2011, p. 455-462Conference paper (Refereed)
Adjusting the control parameters of population-based algorithms is a means for improving the quality of these algorithms' result when solving optimization problems. The difficulty lies in determining when to assign individual values to specific parameters during the run. This paper investigates the possible implications of a generic and computationally cheap approach towards parameter analysis for population-based algorithms. The effect of parameter settings was analyzed in the application of a genetic algorithm to a set of traveling salesman problem instances. The findings suggest that statistics about local changes of a search from iteration i to iteration i + 1 can provide valuable insight into the sensitivity of the algorithm to parameter values. A simple method for choosing static parameter settings has been shown to recommend settings competitive to those extracted from a state-of-the-art parameter tuner, paramlLS, with major time and setup advantages.
• 10.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
Recent Development in Automatic Parameter Tuning for Metaheuristics2010In: Proceedings of the 19th Annual Conference of Doctoral Students - WDS 2010 / [ed] J. Safrankova and J. Pavlu, 2010, p. -10Conference paper (Refereed)
Parameter tuning is an optimization problem with the objective of finding good static pa-rameter settings before the execution of a metaheuristic on a problem at hand. The requirementof tuning multiple control parameters, combined with the stochastic nature of the algorithms,make parameter tuning a non-trivial problem. To make things worse, one parameter vector allowing the algorithm to solve all optimization problems to the best of its potential is verifiable non-existent, as can be inferred from the no free lunch theorem of optimization. Manual tuning can be conducted, with the drawback of being very time consuming and failure prone. Hence, means for automated parameter tuning are required. This paper serves as an overview about recent work within the field of automated parameter tuning.
• 11.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Challenges for the use of data aggregation in industrial Wireless Sensor Networks2015In: IEEE International Conference on Automation Science and Engineering, IEEE Computer Society, 2015, p. 138-144Conference paper (Refereed)
The provision of quality of service for Wireless Sensor Networks is more relevant than ever now where wireless solutions with their flexibility advantages are considered for the extension/substitution of wired networks for a multitude of industrial applications. Scheduling algorithms that give end-to-end guarantees for both reliability and latency exist, but according to recent investigations is the achieved quality of service insufficient for most control applications. Data aggregation is an effective tool to significantly improve on end-to-end contention and energy efficiency compared to single packet transmissions. In practice, though, it is not extensively used for process data processing on the MAC layer. In this paper, we outline the challenges for the use of data aggregation in Industrial Wireless Sensor Networks. We further extend SchedEx, a reliability-aware scheduling algorithm extension, for packet aggregation. Our simulations for scheduling algorithms from the literature show its great potential for industrial applications. Features for the inclusion of data aggregation into industrial standards such as WirelessHART are suggested, and remaining open issues for future work are presented and discussed.
• 12.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
An Object-Oriented Model in Support of Context-Aware Mobile Applications2010Conference paper (Refereed)
Intelligent and context-aware mobile services require usersand applications to share information and utilize services from remotelocations. Thus, context information from the users must be structuredand be accessible to applications running in end-devices. In response tothis challenge, we present a shared object-oriented meta model for a persistentagent environment. The approach enables agents to be contextawarefacilitating the creation of ambient intelligence demonstrated bya sensor-based scenario. The agents are context-aware as agent actionsare based upon sensor information, social information, and the behaviorof co-agents.
• 13.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Latency Improvement Strategies for Reliability-Aware Scheduling in Industrial Wireless Sensor Networks2015In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, article id 178368Article in journal (Refereed)
In this paper, we propose novel strategiesfor end-to-end reliability-aware scheduling in Industrial WirelessSensor Networks (IWSN). Because of stringent reliability requirements inindustrial applications where missed packets may have disastrous or lethalconsequences, all IWSN communication standards are based on TimeDivision Multiple Access (TDMA), allowing for deterministic channel access onthe MAC layer. We therefore extend an existing generic and scalablereliability-aware scheduling approach by name SchedEx. SchedEx has proven toquickly produce TDMA schedules that guarantee auser-defined end-to-end reliability level $\underline\rho$ for all multi-hopcommunication in a WSN. Moreover, SchedEx executes orders of magnitude fasterthan recent algorithms in the literature while producing schedules withcompetitive latencies.We generalize the original problem formulation from single-channel tomulti-channel scheduling and propose a scalable integration into the existingSchedEx approach.We further introduce a novel optimal bound that produces TDMAschedules with latencies around 20\% shorter than the original SchedExalgorithm. Combining the novel strategies with multiple sinks, multiplechannels, and the introduced optimal bound, we could through simulationsverify latency improvements by almost an order of magnitude, reducingthe TDMA super-frame execution times from tens of seconds to seconds only, whichallows for a utilization of SchedEx for many time-critical control applications.
• 14.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
A Reliability-Aware Cross-layer Optimization Framework for Wireless Sensor Networks.Manuscript (preprint) (Other academic)
One of the biggest obstacles for a broad deploymentof Wireless Sensor Networks for industrial applications is the dif-ficulty to ensure end-to-end reliability guarantees while providingas tight latency guarantees as possible. In response, we proposea novel centralized optimization framework for Wireless SensorNetworks that identifies TDMA schedules and routing combi-nations in an integrated manner. The framework is shown toguarantee end-to-end reliability for all events send in a schedulingframe while minimizing the delay of all packet transmissions. Itcan further be applied using alternative Quality of Service ob-jectives and constraints including energy efficiency and fairness.We consider network settings with multiple channels, multiplesinks, and stringent reliability constraints for data collectingflows. We compare the results to those achieved by the onlyscalable reliability-aware TDMA scheduling algorithm to ourknowledge, SchedEx, which conducts scheduling only. By makingrouting part of the problem and by introducing the conceptof source-aware routing, we achieve latency improvements forall topologies, with a notable average improvement of up to31percent.
• 15.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
End-to-End Reliability-aware Scheduling for Wireless Sensor Networks2016In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 12, no 2, p. 758-767Article in journal (Refereed)
Wireless Sensor Networks (WSN) are gaining popularity as a flexible and economical alternative to field-bus installations for monitoring and control applications. For missioncritical applications, communication networks must provide endto- end reliability guarantees, posing substantial challenges for WSN. Reliability can be improved by redundancy, and is often addressed on the MAC layer by re-submission of lost packets, usually applying slotted scheduling. Recently, researchers have proposed a strategy to optimally improve the reliability of a given schedule by repeating the most rewarding slots in a schedule incrementally until a deadline. This Incrementer can be used with most scheduling algorithms but has scalability issues which narrows its usability to offline calculations of schedules, for networks that are rather static. In this paper, we introduce SchedEx, a generic heuristic scheduling algorithm extension which guarantees a user-defined end-to-end reliability. SchedEx produces competitive schedules to the existing approach, and it does that consistently more than an order of magnitude faster. The harsher the end-to-end reliability demand of the network, the better SchedEx performs compared to the Incrementer. We further show that SchedEx has a more evenly distributed improvement impact on the scheduling algorithms, whereas the Incrementer favors schedules created by certain scheduling algorithms.
• 16.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
QoS Assessment for Mission-critical Wireless Sensor Network Applications2013In: Proceedings - Conference on Local Computer Networks, LCN / [ed] Matthias Wählisch, IEEE Xplore , 2013, p. 663-666Conference paper (Refereed)
Wireless sensor networks (WSN) must ensure worst-case end-to-end delay and reliability guarantees for mission-critical applications.TDMA-based scheduling offers delay guarantees, thus it is used in industrial monitoring and automation. We propose to evolve pairs of TDMA schedule and routing-tree in a cross-layer in order to fulfill multiple conflicting QoS requirements,exemplified by latency and reliability.The genetic algorithm we utilize can be used as an analytical tool for both the feasibility and expected QoS in production. Near-optimal cross-layer solutions are found within seconds and can be directly enforced into the network.
• 17.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
QoS-Aware Cross-layer Configuration for Industrial Wireless Sensor Networks2016In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 12, no 5, p. 1679-1691, article id 7485858Article in journal (Refereed)
In many applications of Industrial Sensor Networks, stringentreliability and maximum delay constraints paired with priority demands ona sensor-basis are present. These QoS requirements pose tough challenges forIndustrial Wireless Sensor Networks that are deployed to an ever largerextent due to their flexibility and extendibility.In this paper, we introduce an integrated cross-layer framework, SchedEx-GA, spanning MAC layer and networklayer. SchedEx-GA attempts to identify a network configuration that fulfills all application-specific process requirements over a topology including the sensorpublish rates, maximum acceptable delay, service differentiation, and eventtransport reliabilities. The network configuration comprisesthe decision for routing, as well as scheduling.
For many of the evaluatedtopologies it is not possible to find a valid configuration due to the physicalconditions of the environment. We therefore introduce a converging algorithm on top of the frameworkwhich configures a given topology by additional sink positioning in order tobuild a backbone with the gateway that guaranteesthe application specific constraints.The results show that, in order to guarantee a high end-to-end reliability of 99.999% for all flows in a network containing emergency, control loop, andmonitoring traffic, a backbone with multiple sinks is often required for thetested topologies. Additional features, such as multi-channel utilization andaggregation, though, can substantially reduce the demand for required sinks.In its present version, the framework is used for centralized control, butwith the potential to be extended for de-centralized control in future work.
• 18.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
Decision support system for variable speed regulation2012Conference paper (Refereed)
The problem of recommending a suitable speed limit for roads is important for road authorities in order to increase traffic safety. Nowadays, these speed limits can be given more dynamically, with digital speed regulation signs. The challenge here is input from the environment, in combination with probabilities for certain events. Here we present a decision support model based on a dynamic Bayesian network. The purpose of this model is to predict the appropriate speed on the basis of weather data, traffic density and road maintenance activities. The dynamic Bayesian network principle of using uncertainty for the involved variables gives a possibility to model the real conditions. This model shows that it is possible to develop automated decision support systems for variable speed regulation.
• 19.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
Road Condition Imaging: Model Development2015Conference paper (Refereed)
• 20.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. ABB Corp Res, Vasterås, Sweden. Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
SAS-TDMA: A Source Aware Scheduling Algorithm for Real-Time Communication in Industrial Wireless Sensor Networks2013In: Wireless networks, ISSN 1022-0038, E-ISSN 1572-8196, Vol. 19, no 6, p. 1155-1170Article in journal (Refereed)
Scheduling algorithms play an importantrole for TDMA-based wireless sensor networks. ExistingTDMA scheduling algorithms address a multitude of objectives.However, their adaptation to the dynamics of a realistic wirelesssensor network has not been investigated in a satisfactorymanner. This is a key issue considering the challenges withinindustrial applications for wireless sensor networks, given thetime-constraints and harsh environments.In response to those challenges, we present SAS-TDMA, asource-aware scheduling algorithm. It is a cross-layer solutionwhich adapts itself to network dynamics. It realizes a tradeoffbetween scheduling length and its configurational overheadincurred by rapid responses to routes changes. We implementeda TDMA stack instead of the default CSMA stack and introduceda cross-layer for scheduling in TOSSIM, the TinyOS simulator.Numerical results show that SAS-TDMA improves the qualityof service for the entire network. It achieves significant improvementsfor realistic dynamic wireless sensor networks whencompared to existing scheduling algorithms with the aim tominimize latency for real-time communication.
1 - 20 of 20
Cite
Citation style
• apa
• ieee
• modern-language-association-8th-edition
• vancouver
• Other style
More styles
Language
• de-DE
• en-GB
• en-US
• fi-FI
• nn-NO
• nn-NB
• sv-SE
• Other locale
More languages
Output format
• html
• text
• asciidoc
• rtf
|
2020-04-04 18:10:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3257208466529846, "perplexity": 2908.891492330912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524604.46/warc/CC-MAIN-20200404165658-20200404195658-00041.warc.gz"}
|
https://mike-martinez.com/docs/837e64-quantitative-finance-cfa
|
However, I managed to pass Level 2 and 3 both on my first try and I received the charter last year. A Bernoulli Random Variable is one that can have only two outcomes. This site uses Akismet to reduce spam. This status is granted to institutions whose degree programs incorporate at least 70 percent of the CFA Program Candidate Body of Knowledge (CBOK), which provide students with a solid grounding in the CBOK and positions them well to sit for the CFA exams. How does courses at IAQS compliment Quantitative Studies? That having been said, for specialists studying for the CFA is almost certainly not going to be as enjoyable as building stochastic models and using neural networks to approximate credit risk, which is why taking a long-term view is essential. Yet, some practitioners, such as Attilio Meucci, are evolving new ways for coping with the real world's complexities. Stuart, I could not agree more! This is based on the principle that an optimal portfolio is one that minimizes the probability of a returning less than a given threshold level. All rights reserved. The U.S. Congress’s political gridlock and inability to resolve the U.S…. Excess\quad Kurtosis=\left( \frac { \Sigma { \left( { X }_{ i }-X \right) }^{ 4 } }{ { s }^{ 4 } } \ast \frac { 1 }{ n } \right) -3 D’abord reconnu uniquement en Amérique du Nord, il devient mondialement reconnu dans les années 1990-2000. Stevens business students bring a versatile blend of business and technology skills to their internships. $$x = sample \quad mean$$ Interpret Data and suggest ways in which the inferences can be used for the business. Quantitative Finance is Not Dead, Just Evolving (Video) By Jason Voss, CFA After the Great Recession, some quantitative finance watchers stated that the discipline wass dead. I'm not saying either method of study is better; I've learnt a lot doing the CFA Level I exam just as I did during my degrees. $$n= sample \quad size$$. There are three primary scenarios for calculating the confidence intervals in the CFA curriculum. Chaque année, il y a deux examens organisés pour le CFA I (en janvier et en juin) et un seul pour le CFA II et le CFA III (en juin uniquement). When New York City's sanitation department needed help getting its data in order, it got an important assist from Jeet Kothari. Finance and technology programs at Stevens benefit from the Hanlon Financial Systems Center, a one-of-a-kind teaching and research facility that's home to two labs outfitted with the latest in analytical and visualization tools. Turing Finance | L’éthique et les standards professionnels (Ethical and Professional Standards) 2. A big part of the statistics part of the curriculum deals with the use of Confidence Intervals. Here are the calculations that you need to know: The discipline spans the management of pension funds and insurance companies to the control of operational risks for manufacturing and consumer products companies and how to model the behavior of financial markets. Quant au niveau trois, il comporte des QCM mais également une partie « essay » avec des questions ouvertes. The curriculum specifies several methods of tracking and managing risk exposures using quantitative means. Don’t forget that the Standard Deviation is the square root of the Variance. A similar issue that you’ll have to deal with involves ongoing cash flows such as annuities and perpetuities. A core tenet of statistics is using samples of data to find information about the entire population of a data set. Au 14 mars, les inscriptions pour cette session seront définitivement closes. Yet unfortunately this occurred just too late, so I will be giving it a second go this December. I read with respect and clear understanding as to where everything leads to to. Have any queries? Additionally, the Stevens QF program has been accepted into the CFA Institute University Affiliation Program. Il s’adresse généralement aux étudiants (pour le niveau I) et à de jeunes actifs pour les niveaux II et III, qui comptent bien entendu faire carrière dans le secteur de la finance. Curriculum overviewCenter for Student SuccessHanlon Financial Systems CenterBusiness MinorsBoard of advisors. More than 90 percent of business students complete at least one internship. As a computer scientist I am used to solving problems and writing code. Furthermore, all information on this blog is for educational purposes and is not intended to provide financial advice. Correlation refers to the ratio of covariance between two variables and the product of their standard deviations. Le CFA certifie notamment l’excellence de ses détenteurs en analyse financière. For hypothesis testing to show whether the population correlation coefficient equals zero (absence of linear relationship between two variables), we use the t-statistic: The decision rule is to reject the null hypothesis (⍴=0)/to uphold the alternative hypothesis (⍴≠0) if t is greater than the critical value from the t-distribution table. more information Accept. © 2020 - Business Cool est la propriété du groupe média & tech 2Empower. Understand market trends to make modeling decisions, Develop and implement complex quantitative models and analytical software/tools, Test new models, products and analytics programs. When the population variance is known, we can use the more accurate z-score to determine the confidence interval instead of the student’s t-score, but the Central Limit Theorem allows us to also use the z-score when the sample size is large enough. 2 comments. Good luck for December I'm sure you will crack it! The Certificate in Quantitative Finance (CQF) is designed to transform your career by equipping you with the specialist quant skills essential to success. Cet article a vocation à répondre à ces deux questions. Students overseeing the fund have nearly half a million dollars under management and put classroom concepts into practice as they aim to improve the fund's performance. Another important formula related to the previous metrics is the Sharpe Ratio. General Tenor: Bearish. So initially I thought of enrolling in msc finance , but a friend in a hedge fund said the job opportunities for graduates of msc quantitative finance are plentiful, while front office jobs are much harder to get with cfa/msc in finance? Oliver Linton provides a good reference work for key financial econometric topics and extensions over the past 20 years. If you continue to use this website without changing your cookie settings or you click "Accept" below then you are consenting to this. For specialists like myself, this view is most probably going to become more and more valuable as I progress through my career. The primary responsibility of a financial engineer/ Quant Analyst is to have thorough knowledge of financial markets, its volatility and knowledge of financial theories. Pour passer l’examen, vous devrez débourser une somme qui varie selon la date à laquelle vous finalisez votre inscription. The joint probability of both events happening (P(AB)), is the conditional probability of A, assuming B occurs (P(A|B)), times the probability of B (P(B)). $$\frac { 4 }{ \frac { 1 }{ 1 } +\frac { 1 }{ 2 } +\frac { 1 }{ 3 } +\frac { 1 }{ 4 } } =1.92$$. Head straight to the below link, Know the course curriculum, fee structure and details of the Bachelors Degree Course, How did you hear about us?Friends and FamilyExisting StudentsCollege DuniaCounsellingCareer FairOthers, Drop in here for any query or discussions. I'm not used to sitting down in front of a pile of textbooks and reading, taking notes, and doing many small worked examples. Almost all of the speakers at the conference expressed concern about the direction of the global economy. During my studies I majored in a challenging application area of Computer Science called, To be more specific, I found that the quantity of work that needed to be studied (and understood) challenging and the nature of the studying required unfamiliar and therefore difficult. An instance where this could be necessary is when there are significant outliers that could cause the mean value of a dataset to be unrepresentative of the data. $$s=standard\quad deviation$$ Meeting recruiters in finance is easy: Wall Street is a 10-minute ride from Stevens, putting your future within easy reach. In this CFA study guide, we’ll make it easier to differentiate between the... 3,000 CFA® Practice Questions – QBank, Mock Exams, and Study Notes, 3,000 FRM® Practice Questions – QBank, Mock Exams, and Study Notes.
|
2022-05-17 17:00:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24274933338165283, "perplexity": 2641.907360033137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00300.warc.gz"}
|
https://math.stackexchange.com/questions/1144513/power-series-interval-of-convergence
|
# Power Series : Interval of Convergence
Find the interval $I$ and radius of convergence $R$ for the given power series. $$\sum_{n=1}^\infty \frac {5^n}{n}x^{n}$$
What I got was that I used the limit as it goes to infinity I ended up with $x = 1/5$ for the radius and the interval I got $[-1/5,1/5)$. But this answer turned out to be wrong.
• Your answer is OK for me ... We have $$-\ln (1-x) =\sum_{n=1}^{\infty}\frac{x^n}{n},\quad |x|<1,$$ and the case $x=-1$ is also OK. – Olivier Oloa Feb 12 '15 at 3:12
• Oh, Ok thanks :) – user214862 Feb 12 '15 at 3:14
|
2019-05-20 08:46:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.918282151222229, "perplexity": 240.96218115542723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255837.21/warc/CC-MAIN-20190520081942-20190520103942-00406.warc.gz"}
|