| url (stringlengths 14–2.42k) | text (stringlengths 100–1.02M) | date (stringlengths 19–19) | metadata (stringlengths 1.06k–1.1k) |
|---|---|---|---|
https://www.physicsforums.com/threads/normal-force-going-to-zero.72682/
|
# Normal Force Going to Zero
1. Apr 22, 2005
### PhysicsinCalifornia
I need help on a physics problem I've been working on...
A skier starts at rest at the top of a large hemispherical hill with radius (height) = R. Neglecting friction, show that the skier will leave the hill and become airborne at the distance of h = R/3 below the top of the hill.
I understand that at the point the skier goes airborne, the normal force is zero, but how do I conceptually show it? When it's at the crest of the hill, there are obviously two vertical forces in play: the weight of the skier (downward) and the normal force (upward).
So the question, once again, is how do I show specifically (to prove it) that the skier goes airborne at height h = R/3.
2. Apr 22, 2005
### jdavel
Physics,
Can you find an equation for the normal force as a function of the skier's angular position, where the angle is measured from the vertical? In other words, he starts at theta = 0 and flies off somewhere between theta = 0 and theta = 90 degrees. You need an equation for the normal force in terms of theta.
3. Apr 23, 2005
### boaz
As jdavel said, you need to know the angle. If you draw a free-body diagram, you will notice that:
$$mg \cdot \cos \theta - N = m \cdot \frac{V^2}{R} \rightarrow \cos \theta = \frac{ \frac{V^2}{R} + \frac{N}{m}}{g}$$
4. Apr 23, 2005
### Staff: Mentor
more hints
Since you are asked to find the point of departure in terms of height h below the hilltop, rewrite $\cos\theta$ in terms of h and R. Hint: You'll need to use conservation of energy.
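Putting the two hints together, a sketch of the full argument (measuring $\theta$ from the vertical, so the drop is $h = R(1 - \cos\theta)$, i.e. $\cos\theta = (R-h)/R$):
$$mg\cos\theta - N = \frac{mv^2}{R}, \qquad mgh = \frac{1}{2}mv^2 \Rightarrow v^2 = 2gh$$
Setting $N = 0$ at the point of departure gives
$$g\,\frac{R-h}{R} = \frac{2gh}{R} \;\Rightarrow\; R - h = 2h \;\Rightarrow\; h = \frac{R}{3}$$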
5. Apr 27, 2005
### PhysicsinCalifornia
*Correction**
The skier goes airborne at height h= R/3
|
2017-01-20 10:34:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5608348846435547, "perplexity": 905.5645040779666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00395-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://groupprops.subwiki.org/w/index.php?title=Quasimorphism&oldid=49206
|
# Quasimorphism
WARNING: POTENTIAL TERMINOLOGICAL CONFUSION: Please don't confuse this with quasihomomorphism of groups
## Definition
Suppose $G$ is a group. A quasimorphism on $G$ is a function $f: G \to \R$ (where $\R$ is the field of real numbers) satisfying the condition that there exists a positive real number $D$ such that for all $x,y \in G$, we have:
$|f(xy) - f(x) - f(y)| \le D$
Note that $D$ depends on $f$, but not on the choice of elements of $G$.
### Homogenization
A homogeneous quasimorphism is a quasimorphism whose restriction to any cyclic subgroup of $G$ is a homomorphism, i.e., $f(x^n) = nf(x)$ for all $x \in G$ and all integers $n$. For any quasimorphism $f$, we can consider its homogenization, defined as $\mu_f := x \mapsto \lim_{n \to \infty} \frac{f(x^n)}{n}$.
## Examples
• Any set map from a group to $\R$ with bounded image is a quasimorphism (a one-line verification is given after these examples). In particular, any continuous map from a compact topological group to $\R$ is a quasimorphism. Examples include coordinate projections from compact manifolds embedded in $\R^n$. Note that the homogenization of any such quasimorphism is the zero quasimorphism, so such quasimorphisms are not interesting up to homogenization.
• The rotation number quasimorphism is a homogeneous quasimorphism.
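To verify the first example above: if $M := \sup_{g \in G} |f(g)|$ is finite, then for all $x, y \in G$,
$|f(xy) - f(x) - f(y)| \le |f(xy)| + |f(x)| + |f(y)| \le 3M$,
so the defining condition holds with $D = 3M$.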
|
2019-06-24 13:58:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9681372046470642, "perplexity": 365.9086096545269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00538.warc.gz"}
|
https://wiki.geodynamics.org/software:specfem3d_globe:start?rev=1388718599
|
# Computational Infrastructure for Geodynamics Wiki
# SPECFEM3D Globe
## Benchmarks
Benchmark results for SPECFEM3D_GLOBE
### Benchmark results for "short-period" simulation
Komatitsch and Tromp [2002a,b] carefully benchmarked the spectral-element simulations of global seismic waves against normal-mode seismograms. Version 4.0 of SPECFEM3D_GLOBE has been benchmarked again following the same procedure.
#### Short-period benchmark
We present here a ‘short-period’ (periods longer than 9 s) simulation of a deep event in transversely isotropic PREM without the ocean layer and including the effects of self-gravitation and attenuation (Figures C.3, C.4 and C.5).
#### Normal-mode synthetics
The normal-mode synthetics are calculated with the code QmXD using mode catalogs with a shortest period of 8 s generated by the code OBANI. No free-air, tilt, or gravitational potential corrections were applied [Dahlen and Tromp, 1998]. We also turned off the effect of the oceans in QmXD.
#### Results
The normal-mode and SEM displacement seismograms are first calculated for a step source-time function, i.e., setting the parameter half duration in the CMTSOLUTION file to zero for the SEM simulations. Both sets of seismograms are subsequently convolved with a triangular source-time function using the processing script UTILS/seis_process/process_syn.pl.
They are also band-pass filtered and the horizontal components are rotated to the radial and transverse directions (with the script UTILS/seis_process/rotate.pl). The match between the normal-mode and SEM seismograms is quite remarkable for the experiment with attenuation, considering the very different implementations of attenuation in the two computations (e.g., frequency domain versus time domain, constant Q versus absorption bands).
To unpack the tarball, type:
tar -zxvf benchmark_examples.tar.gz
Further tests can be found in the EXAMPLES directory, which contains the normal-mode and SEM seismograms along with the parameters (STATIONS, CMTSOLUTION and Par_file) for the SEM simulations.
Figure C.3: Normal-mode (blue) and SEM (red) vertical displacements in transversely isotropic PREM considering the effects of self-gravitation and attenuation for 12 stations at increasing distance from the 1994 June 9th Bolivia event located at 647 km depth. The SEM computation is accurate for periods longer than 9 s. The seismograms have been filtered between 10 s and 500 s. The station names are indicated on the left.
Figure C.4: Same as in Figure C.3 for the transverse displacements
Figure C.5: Seismograms recorded between 130 degrees and 230 degrees, showing in particular the good agreement for core phases such as PKP. This figure is similar to Figure 24 of Komatitsch and Tromp (2002a). The results have been filtered between 15 s and 500 s.
Important remark:
When comparing SEM results to normal-mode results, one needs to convert source and receiver coordinates from geographic to geocentric coordinates: the geographic and geocentric latitudes coincide on the equator but differ everywhere else. Even for spherically symmetric simulations one must perform this conversion, because the source and receiver locations provided by globalCMT.org and IRIS are given in geographic coordinates.
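For illustration only (this is not code from SPECFEM3D_GLOBE), the standard conversion for an ellipsoid of flattening f is tan(phi') = (1 - f)^2 * tan(phi); a minimal Python sketch, assuming the WGS84 flattening:

```python
import math

def geographic_to_geocentric(lat_deg, flattening=1.0 / 298.257223563):
    """Convert geographic (geodetic) latitude in degrees to geocentric latitude.

    Uses tan(phi') = (1 - f)^2 * tan(phi); the two latitudes agree only
    at the equator and the poles. WGS84 flattening assumed by default.
    """
    phi = math.radians(lat_deg)
    phi_prime = math.atan((1.0 - flattening) ** 2 * math.tan(phi))
    return math.degrees(phi_prime)

# The difference is largest near 45 degrees latitude (~0.19 degrees).
print(geographic_to_geocentric(45.0))  # ~44.8076
```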
## References
F. A. Dahlen and J. Tromp. Theoretical Global Seismology. Princeton University Press, Princeton, New-Jersey, USA, 1998.
A. M. Dziewonski and D. L. Anderson. Preliminary reference Earth model. Phys. Earth Planet. In., 25:297-356, 1981.
D. Komatitsch and J. Tromp. Spectral-element simulations of global seismic wave propagation-I. Validation. Geophys. J. Int., 149(2):390–412, 2002a. doi: 10.1046/j.1365-246X.2002.01653.x.
D. Komatitsch and J. Tromp. Spectral-element simulations of global seismic wave propagation-II. 3-D models, oceans, rotation, and self-gravitation. Geophys. J. Int., 150(1):303–318, 2002b. doi: 10.1046/j.1365-246X.2002.01716.x.
|
2020-07-12 00:57:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5001446604728699, "perplexity": 9398.453178387537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129257.81/warc/CC-MAIN-20200711224142-20200712014142-00393.warc.gz"}
|
https://blender.stackexchange.com/questions/141839/if-else-function/142171
|
If else function
I have two objects "ObjectA" and "ObjectB". Object A when selected extrudes up in the z direction. Object B when selected extrudes down in the z direction.
Currently I have two different buttons each having their own script. One for A the other for B.
I would like to have one button. So this is where I need a script and it gets tricky. If Object A is selected, it will run the script, else it does nothing. If object B is selected it runs the script, else it does nothing.
I hope I am being descriptive enough.
import bpy

# Look up the two objects by name (the names in the question's scene)
ObjectA = bpy.data.objects['Cube']
ObjectB = bpy.data.objects['Cone']

class ObjectAOperator(bpy.types.Operator):
    """Tooltip"""
    bl_idname = "object.objecta_operator"
    bl_label = "Simple ObjectA Operator"

    @classmethod
    def poll(cls, context):
        return context.active_object is not None

    def execute(self, context):
        # Move the active object up one unit on the Z axis
        context.active_object.location[2] += 1
        return {'FINISHED'}

class ObjectBOperator(bpy.types.Operator):
    """Tooltip"""
    bl_idname = "object.objectb_operator"
    bl_label = "Simple ObjectB Operator"

    @classmethod
    def poll(cls, context):
        return context.active_object is not None

    def execute(self, context):
        # Move the active object down one unit on the Z axis
        context.active_object.location[2] -= 1
        return {'FINISHED'}

class HelloWorldPanel(bpy.types.Panel):
    """Creates a Panel in the Object properties window"""
    bl_label = "Hello World Panel"
    bl_idname = "OBJECT_PT_hello"
    bl_space_type = 'PROPERTIES'
    bl_region_type = 'WINDOW'
    bl_context = "object"

    def draw(self, context):
        layout = self.layout
        obj = context.object
        row = layout.row()
        row.label(text="Hello world!", icon='WORLD_DATA')
        row = layout.row()
        row.label(text="Active object is: " + obj.name)
        row = layout.row()
        row.prop(obj, "name")
        row = layout.row()
        # Show only the button that applies to the active object
        if obj == ObjectA:
            row.operator("object.objecta_operator")
        if obj == ObjectB:
            row.operator("object.objectb_operator")

def register():
    bpy.utils.register_class(ObjectAOperator)
    bpy.utils.register_class(ObjectBOperator)
    bpy.utils.register_class(HelloWorldPanel)

def unregister():
    bpy.utils.unregister_class(HelloWorldPanel)
    bpy.utils.unregister_class(ObjectAOperator)
    bpy.utils.unregister_class(ObjectBOperator)

if __name__ == "__main__":
    register()
Try this simple example and see whether it satisfies your requirements.
• You can simplify this massively by moving the if statement into the operator. – timodriaan Jun 9 '19 at 11:19
• @timodriaan how exactly? Could you please provide an example? – Andrew Patynko Jun 13 '19 at 6:47
• Delete one of both operators and copy the if/else from draw() into the other operator's execute method. Then replace the call to the operators inside the if/else with the actual code (context.active_object.location[2] += 1 e.g.) – timodriaan Jun 13 '19 at 19:04
• @timodriaan Yes, I agree that in this simple example it makes sense to have less code. But I initially separated it into two operators because the question mentioned "two different buttons each having their own script", and I thought having two operators simplifies understanding of the logic. – Andrew Patynko Jun 14 '19 at 7:10
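For reference, here is a minimal sketch of what timodriaan describes: one operator, with the if/else moved into execute(). The operator name is illustrative, and the Cube/Cone lookups mirror the question's script; this is a sketch, not the accepted answer's code.

import bpy

class MoveObjectOperator(bpy.types.Operator):
    """Move the active object up or down, depending on which object it is"""
    bl_idname = "object.move_object_operator"
    bl_label = "Simple Move Object Operator"

    @classmethod
    def poll(cls, context):
        return context.active_object is not None

    def execute(self, context):
        obj = context.active_object
        if obj == bpy.data.objects['Cube']:    # "ObjectA": move up
            obj.location[2] += 1
        elif obj == bpy.data.objects['Cone']:  # "ObjectB": move down
            obj.location[2] -= 1
        return {'FINISHED'}

Register it with bpy.utils.register_class(MoveObjectOperator) as in the original script; the panel's draw() can then call row.operator("object.move_object_operator") unconditionally.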
|
2020-08-13 14:22:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3799964189529419, "perplexity": 5910.5248900034085}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739046.14/warc/CC-MAIN-20200813132415-20200813162415-00179.warc.gz"}
|
https://learn.careers360.com/ncert/questions/maths-three_dimensional_geometry/?chapter%5B%5D=713
|
G Gautam harsolia
Given equations of planes are and Now, from equation (i) and (ii) it is clear that given planes are parallel to each other Therefore, the correct answer is (B)
G Gautam harsolia
Given equations are and Now, it is clear from equation (i) and (ii) that given planes are parallel We know that the distance between two parallel planes is given by Put the values in this equation we will get, Therefore, the correct answer is (D)
G Gautam harsolia
The equation of plane having a, b and c intercepts with x, y and z-axis respectively is given by The distance p of the plane from the origin is given by Hence proved
P Pankaj Sanodiya
Given Two straight lines in 3D whose direction cosines (3,-16,7) and (3,8,-5) Now the two vectors which are parallel to the two lines are and As we know, a vector perpendicular to both vectors and is , so A vector parallel to this vector is Now as we know the vector equation of the line which passes through point p and parallel to vector d is Here in our question, give point p =...
P Pankaj Sanodiya
Given A point through which line passes two plane And it can be seen that normals of the planes are since the line is parallel to both planes, its parallel vector will be perpendicular to normals of both planes. So, a vector perpendicular to both these normal vector is Now a line which passes through and parallels to is So the required line is
P Pankaj Sanodiya
Given, Equation of a line : Equation of the plane Let's first find out the point of intersection of line and plane. putting the value of into the equation of a plane from the equation from line Now, from the equation, any point p in line is So the point of intersection is SO, Now, The distance between the points (-1,-5,-10) and (2,-1,2) is Hence the required distance is 13.
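The final computation was dropped in extraction, but it can be reconstructed from the two points the answer retains:
$$d = \sqrt{(2-(-1))^2 + (-1-(-5))^2 + (2-(-10))^2} = \sqrt{9 + 16 + 144} = \sqrt{169} = 13$$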
D Divya Prakash Singh
The equation of the plane passing through the line of intersection of the given planes is ............. (1) The plane in equation (1) is perpendicular to the plane, Therefore Substituting in equation (1), we obtain ....................... (4) So, this is the vector equation of the required plane. The Cartesian equation of this plane can be obtained by...
D Divya Prakash Singh
We have the coordinates of the points and respectively. Therefore, the direction ratios of OP are And we know that the equation of the plane passing through the point is where a,b,c are the direction ratios of normal. Here, the direction ratios of normal are and and the point P is . Thus, the equation of the required plane is
D Divya Prakash Singh
So, the given planes are: and The equation of any plane passing through the line of intersection of these planes is ..............(1) Its direction ratios are and = 0 The required plane is parallel to the x-axis. Therefore, its normal is perpendicular to the x-axis. The direction ratios of the x-axis are 1,0, and 0. Substituting in equation (1), we obtain So, the...
D Divya Prakash Singh
Given that the points and are equidistant from the plane So we can write the position vector through the point is Similarly, the position vector through the point is The equation of the given plane is and We know that the perpendicular distance between a point whose position vector is and the plane, and Therefore, the distance between the point and the given plane is ...
P Pankaj Sanodiya
Given two planes x + 2y + 3z = 5 and 3x + 3y + z = 0. the normal vectors of these plane are Since the normal vector of the required plane is perpendicular to the normal vector of given planes, the required plane's normal vector will be : Now, as we know the equation of a plane in vector form is : Now Since this plane passes through the point (-1,3,2) Hence the equation of the plane is
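The formulas in this answer were dropped in extraction, but the retained data determine them; a reconstruction:
$$\vec{n} = \vec{n}_1 \times \vec{n}_2 = (1,2,3) \times (3,3,1) = (-7,\; 8,\; -3)$$
so the plane through $(-1, 3, 2)$ is
$$-7(x+1) + 8(y-3) - 3(z-2) = 0 \;\Rightarrow\; 7x - 8y + 3z + 25 = 0$$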
D Divya Prakash Singh
We know that the equation of the line that passes through the points and is given by the relation; and the line passing through the points, . And any point on the line is of the form. This point lies on the plane, or . Hence, the coordinates of the required point are or .
D Divya Prakash Singh
We know that the equation of the line that passes through the points and is given by the relation; and the line passing through the points, And any point on the line is of the form . So, the equation of ZX plane is Since the line passes through YZ- plane, we have then, or and So, therefore the required point is .
D Divya Prakash Singh
We know that the equation of the line that passes through the points and is given by the relation; and the line passing through the points, And any point on the line is of the form . So, the equation of the YZ plane is Since the line passes through YZ- plane, we have then, or and So, therefore the required point is
D Divya Prakash Singh
Given lines are; and So, we can find the shortest distance between two lines and by the formula, ...........................(1) Now, we have from the comparisons of the given equations of lines. So, and Now, substituting all values in equation (3) we get, Hence the shortest distance between the two given lines is 9 units.
D Divya Prakash Singh
Given that the plane is passing through and is parallel to the plane So, we have The position vector of the point is, and any plane which is parallel to the plane, is of the form, . .......................(1) Therefore the equation we get, Or, So, now substituting the value of in equation (1), we get .................(2) So, this is the required equation...
D Divya Prakash Singh
Given that the plane is passing through the point so, the position vector of the point A is and perpendicular to the plane whose direction ratios are and the normal vector is So, the equation of a line passing through a point and perpendicular to the given plane is given by, , where .
D Divya Prakash Singh
Given both lines are perpendicular so we have the relation; For the two lines whose direction ratios are known, We have the direction ratios of the lines, and are and respectively. Therefore applying the formula, or For, the lines are perpendicular.
|
2020-04-03 18:41:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8363078236579895, "perplexity": 858.5508123846187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370515113.54/warc/CC-MAIN-20200403154746-20200403184746-00479.warc.gz"}
|
https://phys.libretexts.org/Courses/College_of_the_Canyons/Physci_101_Lab%3A_Physical_Science_Laboratory_Investigations_(Ciardi)/38%3A_Investigation_37_-_Pattern_in_the_Periodic_Table/38.4%3A_Procedures
|
# 38.4: Procedures
You will search for patterns of electron configurations.
1. Choose one Group A set of elements (one of the Group A columns) for which you will be responsible. Each person on the team should have one Group A column, initially.
2. Sketch the number of shells, or orbits, your first element would have in the ground state. Use hash marks to show the correct number of pairs for each inner orbit. Count the number of electrons added to the inner orbits, and calculate the number of electrons remaining.
Example:
• Magnesium (Mg) has atomic number 12, so it will normally have 12 electrons
• (12 electrons total) − (10 electrons added to shells) = 2 electrons remaining
3. Determine the number of electrons needed to fill the outermost shell half full. Add the remaining electrons as singles, until the shell is half full, and then pair electrons (if needed) until all electrons for that element have been added to the model.
4. Repeat the process for each element in your first Group A column. Determine and record the number of unpaired electrons in the outermost shell for your first set of Group A elements; each of these elements should have the same trend (except Group IIIA – the trend changes). Also record whether the outermost shell is mostly full or empty.
5. Choose a second Group A set of elements (another Group A column), and complete the entire process for that set of elements.
6. Draw a table in which to record the trend for each vertical group of elements. Read the instructions for completing the table.
Table $$\PageIndex{1}$$: Periodic Table Trends

| Group | Unpaired Electrons Trend | Mostly (full/empty) |
|---|---|---|
| IA | | |
| IIA | | |
| IIIA | | |
| IVA | | |
| VA | | |
| VIA | | |
| VIIA | | |
| VIIIA | | |
7. Collaborate with your team to record the trends for each Group A column (IA, IIA, IIIA, IVA, VA, VIA, VIIA, VIIIA). Record the number of unpaired electrons in the outermost shell, and record whether the outermost shell is mostly empty or mostly full.
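As a side note (not part of the lab manual), the electron bookkeeping in step 2 can be sketched in a few lines of Python for main-group elements, using noble-gas cores for the filled inner shells:

```python
# A simplified sketch of step 2's arithmetic: subtract the nearest
# noble-gas core below Z to get the electrons left for the outer shell.
NOBLE_GAS_CORES = [0, 2, 10, 18, 36, 54, 86]

def outer_electrons(atomic_number):
    """Electrons remaining for the outermost shell of a main-group element."""
    core = max(c for c in NOBLE_GAS_CORES if c < atomic_number)
    return atomic_number - core

print(outer_electrons(12))  # Magnesium: 12 - 10 = 2, as in the example
```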
|
2022-12-09 23:39:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4496158957481384, "perplexity": 1028.9450239313187}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00860.warc.gz"}
|
http://mathoverflow.net/revisions/78192/list
|
A very elementary example (simpler than the ones you've given) is the generating function for the number of partitions of $n$, denoted $p_n$: $$\sum_{n\geq 0} p_n q^n = \prod_{i\geq 1} \frac{1}{1-q^i}.$$
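A quick way to see the identity in action is to expand the product numerically: multiplying in the factor $1/(1-q^i) = 1 + q^i + q^{2i} + \cdots$ corresponds to the inner loop below (a sketch in Python):

```python
def partition_numbers(n_max):
    # p holds the series coefficients; start from the empty product, p[0] = 1.
    p = [1] + [0] * n_max
    for i in range(1, n_max + 1):   # multiply in the factor 1/(1 - q^i)
        for n in range(i, n_max + 1):
            p[n] += p[n - i]
    return p

print(partition_numbers(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```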
|
2013-05-26 06:51:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509172797203064, "perplexity": 85.60292419253052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706635944/warc/CC-MAIN-20130516121715-00064-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://suesauer.blogspot.com/2012/09/
|
## Thursday, September 27, 2012
### Beautiful broken fur
Forgot to set int Groom O_O sooooo pretty
### Cool post from Chris Cunnington :D
Chris Cunnington
http://blog.cunnington.co.za/2012/09/27/stereoscopic-depth-compression/
Chris is the Stereo Guru at Triggerfish. He's also very good at growing orchids *which is quite hard to do O_O*
## Thursday, September 20, 2012
### Change is good...
Change?!
While waiting for renders and caches I looked at my blog and thought to myself, it needs to be simplified a bit :D And now it is.... hopefully it looks better. At least I think it does
## Tuesday, September 18, 2012
### Vector's and ICE
When I started this post I was only going to do an example of Dot Product in ICE, but it got a bit abstract and hard to follow. So I decided to cover the vector basics first in this post. So let's get cracking :D
## A Vector
Vectors are used to represent physical quantities which have length (magnitude) and direction, like wind or gravity.
A vector has direction and magnitude (length)
### A + B = B + A
When adding vectors, imagine you are joining them head to tail, then taking the long side of the triangle they make. This new vector is also known as the resultant vector. *You can add as many vectors as you want together; it will always have a "straight arrow" resultant vector answer.*
When adding vectors it doesn't matter which vector gets added first you will always get the same result.
In the image below I show how this looks in ICE (the dark yellow arrow is the resultant vector of the blue arrow vector and the green arrow vector).
To get the Inverse of A+B(resultant vector) all you need to do is : (-A) - B = Inverse of A + B
In other words, take the negative value of the blue vector, then subtract the green vector from it, and you will get the inverse of A + B (the resultant vector, *purple arrow*).
Here's what the ICE nodes look like...
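Outside ICE, the same arithmetic is easy to check with plain Python (a sketch; the values are illustrative):

```python
A = (2.0, 1.0, 0.0)   # "blue" vector
B = (1.0, 3.0, 0.0)   # "green" vector

add = tuple(a + b for a, b in zip(A, B))        # resultant of A + B
inverse = tuple(-a - b for a, b in zip(A, B))   # (-A) - B, inverse of A + B

print(add)      # (3.0, 4.0, 0.0)
print(inverse)  # (-3.0, -4.0, 0.0)
```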
## Subtract Vectors
### C = B + (-A)
When we are subtracting vectors you have to invert the vector you want to subtract (here A, giving -A), then add the two vectors together like normal.
The order you subtract in matters, meaning if you subtract them in the wrong order you will get a different answer to what you expect.
In ICE you can subtract vectors by just using the subtract node, or you can do it with the proper vector math I showed above, both get the same result as you can see with the two purple values in the image below.
To get the inverse of B + (-A) you have to do the following (-B) - (-A) = C
The light green arrow in the image below shows the inverse value of B + (-A)
Here's what it looks like in the ICE tree...
## Multiply by Scalar
### C = 2(A)
When you multiply a vector by a scalar it's called "scaling", because you don't affect the direction but only the magnitude (length) of the vector. This makes the length "bigger" or "smaller", giving more or less "speed".
In the images below and above, the light blue arrow and values represent the green arrow vector if scaled by the value 2.
## Cross Product
### A × B = |a| × |b| × sin(θ) n
*|a| means the magnitude (length) of vector A*
*|b| means the magnitude (length) of vector B*
Cross product is a method of multiplying two vectors in 3D space, producing a vector that is perpendicular to both initial vectors. In physics it can be used to work out the torque of a force. You can also get the angle between two vectors this way :D
to visualise this we can use the right hand rule like in the image below
In ICE you get a ready made node to help you work out cross product(shown by the red vector arrow in the images below). I show what it would look like if we didn't have the ready made node with the light blue Value and the light Red value.
As you can see from the image below, the yellow value is the normal cross product node. The red value is from A × B. And the light blue value is from |a| × |b| × sin(θ) n. All three give the same answer.
*On a side note, if the angle between two vectors is 0 or 180 degrees the cross product will be zero*
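The same check in a few lines of Python (a sketch; values illustrative, not ICE code):

```python
import math

def cross(a, b):
    # Componentwise cross product of two 3D vectors
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

mag = lambda v: math.sqrt(sum(x * x for x in v))

A, B = (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)   # perpendicular, so sin(theta) = 1
c = cross(A, B)
# |A x B| should equal |a| * |b| * sin(theta)
print(c, mag(c), mag(A) * mag(B) * 1.0)   # (0.0, 0.0, 2.0) 2.0 2.0
```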
## Scalar Product a.k.a Dot Product
#### A scalar product is an operation that combines two vectors(A.B) to create a scalar, a number with no direction.
So by using a dot product we are multiplying (scaling) two vectors with each other, and we end up with a scalar value (note: not a vector value).
You can calculate the Dot Product of two vectors this way:
### A · B = |a| × |b| × cos(θ)
*|a| means the magnitude (length) of vector A*
*|b| means the magnitude (length) of vector B*
**Multiplying by cos(θ) projects one vector onto the direction of the other, so we are only multiplying the parts of A and B that point in the same direction**
(multiply the length of |a| times the length of |b|, then multiply by the cosine of the angle (θ) between A and B)
Here's a fun Calculator I found that shows what the Vectors are doing when we multiply :D I played with it quite a bit.
Vector Calculator
In the images below, although we are multiplying two vectors we end up with a scalar value of -1. This is because we are getting the dot product, not the cross product.
The purple value shows the dot product node's result in ICE, whereas the light green value shows |a| × |b| × cos(θ) and the yellow value shows A · B. As you can see, the answers are the same.
*On a side note when two vectors are at right angles to each other the dot product is zero.*
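And the dot product computed both ways in Python (a sketch; values illustrative, matching the -1 above):

```python
import math

A, B = (1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)   # opposite directions

dot = sum(a * b for a, b in zip(A, B))      # componentwise: -1.0
mag = lambda v: math.sqrt(sum(x * x for x in v))
theta = math.acos(dot / (mag(A) * mag(B)))  # angle between A and B (pi here)
print(dot, mag(A) * mag(B) * math.cos(theta))  # -1.0 -1.0
```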
Ok, ok. So now you're going: this is all really cool math stuff, but how and what do I use this for in ICE, or just in general? :/ 'Cos the vector nodes in ICE pretty much do the math for you, so why care, right? Well, unless you know the math you won't ever really understand how powerful the vector nodes are. Once you get it you can fully utilise the nodes in ICE and use the math in other programs as well.
## Unit Vector
A unit vector is a vector with a magnitude of one. It is also known as a normalized vector.
$$\hat{u} = \frac{\vec{u}}{\|\vec{u}\|}$$
The pink value shows how the normalize node makes the dark green vector value normalized, whereas the light green value shows how to normalize the dark green value using û = u / ||u||
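The same normalization in Python (a sketch; the 3-4-5 values are illustrative):

```python
import math

u = (3.0, 4.0, 0.0)
length = math.sqrt(sum(x * x for x in u))   # ||u|| = 5.0
u_hat = tuple(x / length for x in u)
print(u_hat)  # (0.6, 0.8, 0.0) -- magnitude 1
```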
Vectors get used for all kinds of things in ICE so get to know them as much as you can.
## Backface Culling
In computer graphics backface culling determines whether something is "visible" in the camera's field of view. This process makes rendering fur a lot quicker as you only render the fur that is "seen" instead of rendering the fur that is hidden from the camera.
Image U is what the Camera sees.....
Image U
If there was no backface culling there would be a lot of fur in the scene to calculate. Like in the Image F....
Image F
But if you use ICE to create backface culling you can reduce the amount of fur that the renderer needs to calculate by quite a lot, as you can see from Image Y...
Image Y
#### So how do we do Backface Culling and Clipping in ICE?
The most common way of calculating "backface culling" would be to use a "dot product" to determine the normals of the mesh the fur is grown from, then delete the fur that is not "seen" by the camera.
As the image below named "Clipping" shows, clipping deletes the fur that is outside the F.O.V of the camera, shown by the blue lines. For the actual "culling" I used a similar dot-product approach, but ignoring the edge of the camera's F.O.V. Instead we focus on the side of the object that faces away from the camera, illustrated by the red line in the image called "Culling".
"Clipping"
"Culling"
When you combine the two you get Image Y, shown earlier.
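Before looking at the ICE nodes, the core dot-product test can be sketched in a few lines of Python (an illustration of the idea, not the Khumba setup):

```python
# A surface point faces away from the camera when its normal has a
# non-negative dot product with the view direction toward that point.
def is_backfacing(point, normal, camera_pos):
    view_dir = tuple(p - c for p, c in zip(point, camera_pos))
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return dot >= 0.0   # facing away from the camera: cull the fur here

print(is_backfacing((0, 0, 0), (0, 0, 1), (0, 0, 5)))   # False: faces the camera
print(is_backfacing((0, 0, 0), (0, 0, -1), (0, 0, 5)))  # True: faces away, cull
```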
Let's take a look at ICE and see how this all gets done. Here in Image Q is the node we used on Khumba to cull the fur.
Image Q
The Red Square is a scalar that makes it possible for us to adjust the Culling after we added the original values from the camera, just like a fall off.
The Blue Square is exactly the same except it gives us a fall off for the Clipping.
So within ICE we can adjust the Field Of View (F.O.V) as we need it, without having to touch the camera settings or setup. The blue lines in Images H, J and I represent the actual camera F.O.V, while the red lines show how ICE adjusts the F.O.V.
Image H
Image J
Image I
When we explore inside, the node is made up of three sections: Image A and Image B, brought together with Image C.
Image A
Image A has to do with the Culling. We get the angle between the emitter object and the camera (inside the Blue square), then we get the angle of the Field of View of the camera (inside the Red square), using the greater than or equal to Boolean node (inside the Orange square) we say that anything outside that angle gets deleted. So imagine a Triangle, everything inside the triangle stays, everything outside goes (deleted).
Image B deals with the clipping of the fur. Here we use a standard node in ICE called "Test Visibility From Camera", which we have modified a bit to work for our needs; we obviously don't just want to test visibility from the camera when it comes to culling fur. It works out the length of the four sides of the camera view (inside the Red square), making a square in space, using a dot product sum between the fur (inside the Blue square) and the camera (inside the Green square), then we just say delete everything outside of that square (inside the Yellow square). To do this we need the camera's position in space (inside the Green square) and the F.O.V and aspect ratio of the camera (inside the Purple square).
Image B
Image C brings the Culling and the Clipping together with an IF statement. If there is fur inside the clipping square that is behind the character then that fur will also get deleted.
Image C
This is what it all looks like over time with a moving camera.
And that is culling fur in a nutshell. There are tons of clipping algorithms that you can go look at. :D
## Tuesday, September 11, 2012
### Story Elements ...... He he he
I like this :D
## Monday, September 10, 2012
### Random commercial.....
The Khumba fur nodes were used by Luma for a really funny ad by Cadbury...
It's nice seeing other people using our ICE nodes to make cool stuff
:D
## Friday, September 7, 2012
### The company in Cape Town I work at...
TriggerFish Animation Studio
TA DA!!!! \:D/
## Tuesday, September 4, 2012
### Boolean...
Boolean is a data type in ICE with two values (normally true or false), 0 or 1. They are intended to be used as truth values, *hit or miss* and so on.
Here's what they look like in ICE...
The top Boolean is a true/false node. The ports are orange. I often use this node in connection with an if statement. For example, if the camera sees the fur it will display; if not, it should be deleted.
You also get Exclusive Or, Or, Not, and And nodes, which look like.....
The And, Or, and Not nodes are easy enough to understand...
But when you're working in ICE with a million strands, it is easy to forget even how a Boolean works. >_<
Randomness for the day ^-^
|
2018-05-27 15:58:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5756069421768188, "perplexity": 1194.778638220317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794869272.81/warc/CC-MAIN-20180527151021-20180527171021-00354.warc.gz"}
|
https://questioncove.com/updates/52169b96e4b0450ed75ea1d1
|
OpenStudy (anonymous):
What is the product of 2 times the square root of 35x and 3 times the square root of 14x? Simplify if possible. @radar :)
4 years ago
OpenStudy (anonymous):
$2\sqrt{35x} \times 3\sqrt{14x}$
4 years ago
OpenStudy (anonymous):
answer choices: $294x \sqrt{10}$ $42x \sqrt{10}$ $12x^2\sqrt{10}$ $42\sqrt{10x^2}$
4 years ago
OpenStudy (anonymous):
@Luigi0210
4 years ago
OpenStudy (luigi0210):
Well let's break it apart to make it a bit easier, can you do that?
4 years ago
OpenStudy (anonymous):
Expand it? o.o
4 years ago
OpenStudy (luigi0210):
Yea, like break the numbers inside the sqrt's. Like if we broke this up it would be: [drawing]
4 years ago
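The thread stops there; for reference, carrying the breakdown through (assuming $x \ge 0$):
$$2\sqrt{35x} \times 3\sqrt{14x} = 6\sqrt{490x^2} = 6\sqrt{49x^2 \cdot 10} = 6 \cdot 7x\sqrt{10} = 42x\sqrt{10}$$
which matches the second answer choice.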
|
2017-11-21 08:06:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7967650294303894, "perplexity": 5376.465549216322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806327.92/warc/CC-MAIN-20171121074123-20171121094123-00619.warc.gz"}
|
https://socratic.org/questions/what-is-the-equilibrium-constant-of-citric-acid
|
# What is the equilibrium constant of citric acid?
Jan 14, 2015
Citric acid falls into the category of polyprotic acids, which are acids that have more than one acidic hydrogen that can react with water to produce the hydronium ion, $H_3O^{+}$.
Citric acid's molecular formula is $C_6H_8O_7$, and it's known as a weak organic acid. Citric acid is actually a triprotic acid, which means it has 3 acidic hydrogen atoms in its structure, as you can see below:
When placed in water, citric acid will ionize in a step-wise manner
$$C_6H_8O_{7(aq)} + H_2O_{(l)} \rightleftharpoons C_6H_7O_{7(aq)}^{-} + H_3O_{(aq)}^{+} \quad (1)$$
$$C_6H_7O_{7(aq)}^{-} + H_2O_{(l)} \rightleftharpoons C_6H_6O_{7(aq)}^{2-} + H_3O_{(aq)}^{+} \quad (2)$$
$$C_6H_6O_{7(aq)}^{2-} + H_2O_{(l)} \rightleftharpoons C_6H_5O_{7(aq)}^{3-} + H_3O_{(aq)}^{+} \quad (3)$$
For each of these three steps we have a different value for the acid's dissociation constant, $K_a$. Thus,
Step (1): $K_{a1} = 7.5 \cdot 10^{-4}$
Step (2): $K_{a2} = 1.7 \cdot 10^{-5}$
Step (3): $K_{a3} = 4.0 \cdot 10^{-7}$
Notice that all three dissociation constants are smaller than 1, which is characteristic of a weak acid. Another interesting observation is that the dissociation constant for step (3) is very, very small, which means that the number of acid molecules that undergo ionization in this stage is, for all intents and purposes, zero.
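As an illustration of how these constants get used (not part of the original answer; the 0.10 M concentration is an assumption), the pH of a citric acid solution can be estimated from $K_{a1}$ alone, since the later steps contribute negligibly:

```python
import math

Ka1 = 7.5e-4
C = 0.10  # mol/L, an assumed concentration for illustration

# First ionization only: x^2 / (C - x) = Ka1, i.e. x^2 + Ka1*x - Ka1*C = 0.
# Take the positive root of the quadratic.
x = (-Ka1 + math.sqrt(Ka1**2 + 4 * Ka1 * C)) / 2
print(f"[H3O+] ~ {x:.4f} M, pH ~ {-math.log10(x):.2f}")  # ~0.0083 M, pH ~2.08
```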
|
2019-04-23 06:25:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8928853869438171, "perplexity": 1591.5509993141934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578593360.66/warc/CC-MAIN-20190423054942-20190423080942-00301.warc.gz"}
|
https://www.physicsforums.com/threads/p-implicit-vs-explicit-function-of-time.8285/
|
# P implicit vs. explicit function of time
mmwave
As part of proving that $\frac{d}{dt}\langle p \rangle = \langle -\nabla V \rangle$ you have to use the fact that $\langle \partial p/\partial t \rangle = 0$ when $p$ is not an explicit function of time.
I'm not clear on what this means. Any insights to share?
From classical mechanics, if there is a potential $V = -k/r$ then there will be a force on a particle and the momentum will evolve over time. In QM, this is an implicit function of time and so still $\langle \partial p/\partial t \rangle = 0$.
An explicit function of time would be $V = -k\cos(t)/r$, and now
$\langle \partial p/\partial t \rangle$ is not equal to zero. $p$ still evolves over time.
I can't see why implicit or explicit function makes any difference mathematically.
Homework Helper
Does your textbook actually use the words "implicit" and "explicit"? Ah, now I see, you are talking about measurable quantities in quantum mechanics.
Notice that <dp/dt> is NOT just the derivative with respect to time. It is, rather, an operator. In order to calculate that derivative, you do "dp/dt" symbolically- that is if p= f(x,v) (x, v are position and velocity) then dp/dt= 0 even though x and v themselves depend on time- you don't use the chain rule here.
mmwave
Originally posted by HallsofIvy
Notice that <dp/dt> is NOT just the derivative with respect to time. It is, rather, an operator. In order to calculate that derivative, you do "dp/dt" symbolically- that is if p= f(x,v) (x, v are position and velocity) then dp/dt= 0 even though x and v themselves depend on time- you don't use the chain rule here.
There is a subtlety here that escapes me - why being an operator means do not apply the chain rule.
The complete relation used is
$$\frac{d}{dt}\langle \hat{Q} \rangle = \frac{i}{\hbar}\langle [\hat{H}, \hat{Q}] \rangle + \left\langle \frac{\partial \hat{Q}}{\partial t} \right\rangle$$
where $[\cdot,\cdot]$ is the commutator, $\hat{Q}$ is any operator, and the last derivative is a partial with respect to $t$. In my example, $\hat{Q} = \hat{p}$, the momentum operator $\frac{\hbar}{i}\frac{\partial}{\partial x}$.
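Spelled out for this example (a sketch): $\hat{p} = \frac{\hbar}{i}\frac{\partial}{\partial x}$ contains no explicit $t$, so $\langle \partial \hat{p}/\partial t \rangle = 0$ and the general relation reduces to
$$\frac{d}{dt}\langle \hat{p} \rangle = \frac{i}{\hbar}\langle [\hat{H}, \hat{p}] \rangle = \left\langle -\frac{\partial V}{\partial x} \right\rangle,$$
using $[\hat{H}, \hat{p}] = [V, \hat{p}] = i\hbar\,\frac{\partial V}{\partial x}$.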
|
2022-09-26 06:25:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8356081247329712, "perplexity": 1178.5189659164541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00636.warc.gz"}
|
https://prolactinefrance.com/dbnb3rtz/how-to-add-radicals-with-different-radicands-ad4a2b
|
Students add and subtract radical expressions with different radicands. The key fact: you can only add or subtract radicals that have the same index and the same radicand; such terms are called like radicals. (Recall that the radicand is the number or expression under the radical sign: in $\sqrt{23x^2y^5z}$, the radicand is $23x^2y^5z$.)

Adding like radicals works just like adding like terms with variables. We know that $3x + 8x = 11x$; in the same way, $3\sqrt{x} + 8\sqrt{x} = 11\sqrt{x}$. You add the coefficients and the radical part stays the same. For example, $5\sqrt{2} + 2\sqrt{2} + \sqrt{3} + 4\sqrt{3} = 7\sqrt{2} + 5\sqrt{3}$: the $\sqrt{2}$ terms combine, the $\sqrt{3}$ terms combine, and the result cannot be simplified further because $\sqrt{2}$ and $\sqrt{3}$ have different radicands.

If the radicands (or indices) look different, don't panic: simplify each radical first, since this often reveals like radicals. To simplify, look for perfect-square factors in the radicand (or, generally, powers matching the index) and pull them out; a radical is in simplest form when the radicand has no such factors and is not a fraction. Example 1: add or subtract to simplify $2\sqrt{12} + \sqrt{27}$. Since $\sqrt{12} = 2\sqrt{3}$ and $\sqrt{27} = 3\sqrt{3}$, the expression becomes $4\sqrt{3} + 3\sqrt{3} = 7\sqrt{3}$. Similarly, $5\sqrt{20} + 4\sqrt{5}$ looks like it cannot be combined, but $\sqrt{20} = 2\sqrt{5}$, so it equals $10\sqrt{5} + 4\sqrt{5} = 14\sqrt{5}$. If, after simplifying, the radicands are still different (as with $2\sqrt{2} + 4\sqrt{3}$), the expression cannot be combined, and you simply leave it as it is.

Multiplication follows different rules. Radicals with the same index can be multiplied even when their radicands differ: you multiply the coefficients and multiply the radicands. For fourth roots, for instance, multiply the radicands and then look for powers of 4 to pull out. To multiply radicals with different indices, rewrite them as powers with fractional exponents and reduce to a common index first. The product and quotient properties of radicals let you rewrite expressions in the search for like radicands; the quotient property is especially useful when the radicand is a fraction.

Two final notes on variables and signs. When adding radicals containing variables, such as $\sqrt{x}$, the domain is taken to be $x \ge 0$, so you can assume $|x| = x$. And if a radicand is negative, the square root is not a real number; you would be dealing with imaginary numbers.
Some Necessary Vocabulary not going to be dealing with imaginary numbers how simple adding radical can... Problem using root symbols and then simplify similar radicals can not be added or subtracted exactly... The previous example is simplified, so this expression can not be simplified for example you... Rational exponents 5 √ 3 5 2 + 3 have to have the same rule goes how to add radicals with different radicands. Students solve 18 short answer problems same roots and their terms can simplified. One can compute because both radicals have the same radicands so they ’ re easier to see is. The radicand is the first and last terms: the radicands have been multiplied, look for. Click here to review the steps in adding and subtracting radical expressions like! Answer problems Perfect square rearrange terms so that like radicals in the radical is a two... Radicand the same radicand answer Jim H Mar 22, 2015 make indices! Positive, we first rewrite the roots as rational exponents have an x we! Subtracted with different radicands is like trying to add or subtract radicals with the same radicand and be. Is just like adding like terms we add and subtract terms containing radicals and we have x. Of each like radical we could leave it just like that, but might... Root have the same, then add or how to add radicals with different radicands radicals with different radicands and different radicals.. 1 with numbers! Radicands differ and are already simplified, or in its simplest form, when the radicand refers to number... Roots -- the radicands must be the same index and radicand are exactly the same index have common. Next I ’ ll talk about how to factor unlike radicands before you can use the rule multiply... To take whatever path through the steps for adding square roots these unique make! Are the same, then add or subtract the terms in the denominator other than that 4 is...
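A short worked example (with numbers chosen purely for illustration) puts the steps together in LaTeX:
% simplify each radical first, then combine the like radicals
\begin{align*}
3\sqrt{8} + 5\sqrt{20} - \sqrt{2} + 4\sqrt{5}
&= 3 \cdot 2\sqrt{2} + 5 \cdot 2\sqrt{5} - \sqrt{2} + 4\sqrt{5}\\
&= (6 - 1)\sqrt{2} + (10 + 4)\sqrt{5}\\
&= 5\sqrt{2} + 14\sqrt{5}
\end{align*}
The $\sqrt{2}$ and $\sqrt{5}$ terms keep different radicands even after simplifying, so the final answer remains an indicated sum.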
|
2021-03-04 16:42:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7628312706947327, "perplexity": 884.6739459863003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369420.71/warc/CC-MAIN-20210304143817-20210304173817-00022.warc.gz"}
|
https://tex.stackexchange.com/questions/4009/latex-beamer-how-to-get-distinct-page-numbers-when-using-overlays
|
# LaTeX Beamer: How to get distinct page numbers when using overlays?
## The question
When I create slides for presentations in LaTeX with the Beamer class, I like to use constructions with overprint like this:
\begin{frame}
\begin{block}{}
\begin{itemize}
\item Some general remarks about ALL plots
\end{itemize}
\end{block}
\begin{block}{}
\begin{center}
\begin{overprint}
\only<+>{
\includegraphics[width=7cm]{plot1.pdf}\\
Explanation of Figure 1
}
\only<+>{
\includegraphics[width=7cm]{plot2.pdf}\\
Explanation of Figure 2
}
% maybe more plots...
\end{overprint}
\end{center}
\end{block}
\end{frame}
The problem is that, this way, all slides created from one frame are assigned the same page number, which makes people complain that it's difficult to follow which slide I am on (when giving the talk via phone, etc.).
How can I change this?
## A minimal example to reproduce the problem
\documentclass{beamer}
\begin{document}
\begin{frame}
Common text\\
\begin{overprint}
\only<+>{
Figure 1 and explanation of Figure 1
}
\only<+>{
Figure 2 and explanation of Figure 2
}
\end{overprint}
\end{frame}
\end{document}
One can mess around with \setbeamertemplate and use \insertpagenumber{} instead of \insertframenumber{}, but like this you lose generality (e.g. you cannot switch between themes easily).
Maybe there is a better solution?
• I use \setbeamertemplate{footline}{\hfill\large\insertpagenumber} – Konrad Swanepoel Oct 12 '10 at 5:34
Good question. There seems to be no built-in functionality to do what you want. There must be a counter that holds the slide number, but the manual doesn't seem to expose it and I don't want to delve into the source.
My solution might be classified as "messing around", but here goes:
\documentclass{beamer}
\newcounter{slidenumber}
\defbeamertemplate*{footline}{infolines theme frame plus slide}{
\setcounter{slidenumber}{\insertpagenumber}%
\leavevmode%
\hbox{%
% The contents of the three boxes were lost in extraction; what follows is the
% standard infolines footline with the slide counter appended to the last box.
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{author in head/foot}%
\usebeamerfont{author in head/foot}\insertshortauthor%
\end{beamercolorbox}%
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{title in head/foot}%
\usebeamerfont{title in head/foot}\insertshorttitle%
\end{beamercolorbox}%
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,right]{date in head/foot}%
\usebeamerfont{date in head/foot}\insertframenumber{}.\theslidenumber\hspace*{2ex}%
\end{beamercolorbox}}%
\vskip0pt%
}
\setbeamertemplate{footline}[infolines theme frame plus slide]
\begin{document}
\begin{frame}
Common text\\
\begin{overprint}
\only<+>{
Figure 1 and explanation of Figure 1
}
\only<+>{
Figure 2 and explanation of Figure 2
}
\end{overprint}
\end{frame}
\begin{frame}
Common text on frame 2\\
\begin{overprint}
\only<+>{
Figure 1 and explanation of Figure 1
}
\only<+>{
Figure 2 and explanation of Figure 2
}
\end{overprint}
\end{frame}
\end{document}
Any theme which uses the infolines outer theme can thus be modified. If you didn't want to touch any templating, you could redefine \insertframenumber to return "frame number.slide number", but I don't know if that's going to break anything else.
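For reference, a minimal sketch of that redefinition (untested, and subject to the same caveat) might be:
\makeatletter
% print "frame.slide"; \beamer@slideinframe is beamer's internal
% count of the current slide within the frame
\renewcommand{\insertframenumber}{\the\c@framenumber.\the\beamer@slideinframe}
\makeatother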
• I have played around with redefining \insertframenumber before; if I recall correctly, it does have the potential to break things (especially if you are using hyperlinks between frames). – ESultanik Nov 19 '10 at 18:49
I had this same question and approached it using the built-in beamer macros \insertpagenumber and \insertsectionendpage or \insertpresentationendpage to show the current page number out of the total number of pages in the section, or presentation respectively. The page number is incremented using overlays, while the frame number is not. You can use this command within footlines. An MWE is below. I know this question is old but it took me a while to find these commands so I wanted to draw attention to them.
\documentclass{beamer}
\setbeamertemplate{footline}{% set footline options
\insertpagenumber/\insertsectionendpage
}
\begin{document}
\begin{frame}
Common text\\
Page - \insertpagenumber{} of \insertpresentationendpage \\
\begin{overprint}
\only<+>{
Figure 1 and explanation of Figure 1
}
\only<+>{
Figure 2 and explanation of Figure 2
}
\end{overprint}
\end{frame}
\end{document}
Although the question is very old, I think I might contribute. Just increase the frame number manually by one each time it is needed. Do this by writing \addtocounter{framenumber}{1}. Be sure you do this within the scope of an \only{}, otherwise it will apply to all the slides. And be sure it is the correct \only{}. Your MWE can be modified as:
\documentclass{beamer}
\begin{document}
\begin{frame}
Common text\\
\begin{overprint}
\only<+>{
Figure 1 and explanation of Figure 1
}
\only<+>{
\addtocounter{framenumber}{1}
Figure 2 and explanation of Figure 2
}
\end{overprint}
\end{frame}
\end{document}
This will also increase your total page count.
Let's make it clear, because it is important: it MUST be within an \only{}; using \onslide or \pause is not enough, as only \only{} ensures the code is NOT executed in the other slides. If you are achieving the overlays using means other than \only{}, in the worst case you can just add a series of \only{} with the exact page numbers, like:
\only<2>{\addtocounter{framenumber}{1}}
and so on.
EDIT:
rather than \addtocounter{framenumber}{1}, you can directly use \stepcounter{framenumber}. Further, to have each slide numbered as a separate frame, rather than inserting the whole sequence as I pointed out above you can write:
\only<2-|handout:0>{\stepcounter{framenumber}}
because this command is applied on every slide after the first one. I added handout:0 to avoid a mess for handout mode.
SO, IN SUMMARY:
If you want to have all slides numbered as separate frames, just add \only<2-|handout:0>{\stepcounter{framenumber}} at the top and you'll be fine. Just be careful to use it only where you are ACTUALLY using overlays, because otherwise the \only<2-> will cause an overlay and the frame will be duplicated in two identical slides.
• Works. Like. Magic! Also, so elegant, many thanks, @Gabriele B :-) – s0nata Dec 27 '17 at 11:00
The number of the current slide within the frame is stored in the TeX count register \beamer@slideinframe. You could turn it into a LaTeX counter with
\makeatletter
\newcounter{currentslide}% the counter must be declared before \setcounter
\setcounter{currentslide}{\the\beamer@slideinframe}
\makeatother
if you like.
• I would rather use \makeatletter\def\c@slideinframe{\beamer@slideinframe}\makeatother to create an LaTeX counter alias named slideinframe (or using another name) which is always in sync with the internal beamer counter. However, the issue is that the counter doesn't hold the correct value inside the footer. I always get 2 for both slides. – Martin Scharrer May 11 '11 at 15:08
I am not sure if this is exactly what you want, but you could also use \begin{frame} and \againframe with labels and overlay specifications:
\documentclass{beamer}
\begin{document}
\begin{frame}<1>[label=somelabel]
Common text\\
\begin{overprint}
\only<+>{
Figure 1 and explanation of Figure 1
}
\only<+>{
Figure 2 and explanation of Figure 2
}
\end{overprint}
\end{frame}
\againframe<2>{somelabel}
\end{document}
With this solution you can at least choose which overlays should have the same frame number and which not.
I came upon this solution while searching for the opposite problem---combining the \visible command with a custom footer defined using page number creates separate slide numbers for additional text on the same frame.
This is a similar answer to one above, but using a different way to display text/figures, so I thought I'd include it as well.
\documentclass{beamer}
\defbeamertemplate{footline}{centered page number}
{
\hspace*{\fill} %Put page counter on the right
\usebeamercolor[fg]{page number in head/foot} %Color the page number
\usebeamerfont{page number in head/foot} %Change the font for page number
%======Choose one of the following two options=====%
\insertpagenumber\,/\,\insertpresentationendpage %Manually style page number out of total pages
%\insertpagenumber\, %Manually style page number only
%==================================================%
\hspace*{.3cm}\vskip8pt %Move the page counter slightly
}
\setbeamertemplate{footline}[centered page number]
\begin{document}
\begin{frame}
This text is always visible
\visible<2>{New text now available on the same frame, but counted as second slide; only available second showing of slide}
\visible<3>{\includegraphics{Images/foo.png}
This image (and text) only present on the third showing on slide; counted as a third slide number. }
\end{frame}
\end{document}
A very simple solution is to place the \addtocounter{framenumber}{1} somewhere in the "common" space of the frame. Every time the frame is read, the counter is increased. Of course beamer itself increases the counter one time too, so we have to undo one increase.
\documentclass{beamer}
\begin{document}
\begin{frame}
\only<1>{\addtocounter{framenumber}{-1}}% one possible way to undo beamer's own increase (see text above)
\addtocounter{framenumber}{1}% in the "common" space, so it runs once per slide
Common text\\
\begin{overprint}
\only<+>{
Figure 1 and explanation of Figure 1
}
\only<+>{
Figure 2 and explanation of Figure 2
}
\end{overprint}
\end{frame}
\end{document}
I would think the easiest way is to avoid the overprint environment altogether:
\documentclass{beamer}
\begin{document}
\begin{frame}
Common text\\
\includegraphics{fig1.png} \\
Figure 1 and explanation of Figure 1
\end{frame}
\begin{frame}
Common text\\
\includegraphics{fig2.png} \\
Figure 2 and explanation of Figure 2
\end{frame}
\end{document}
This will give you two identical slides, but with different figures and different slide numbers. The only issue I can think of is that compiling in handout mode will not work properly.
|
2021-07-26 16:22:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814868688583374, "perplexity": 2052.6251120846487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00153.warc.gz"}
|
https://physicscatalyst.com/Class10/CG-x.php
|
# Class 10 Maths notes for Coordinate geometry
Table of Content
## Introduction
• We require two perpendicular axes to locate a point in the plane. One of them is horizontal and the other is vertical. The plane is called the Cartesian plane and the axes are called the coordinate axes
• The horizontal axis is called the x-axis and the vertical axis is called the y-axis
• The point of intersection of the axes is called the origin.
• The distance of a point from the y-axis is called the x-coordinate or abscissa, and the distance of the point from the x-axis is called the y-coordinate or ordinate
• The x-coordinate and y-coordinate of a point in the plane are written as (x, y) and are called the coordinates of the point
• All of the above can be read in detail in the Class IX Maths Coordinate Geometry notes
## Distance formula
Distance between the points $A(x_1,y_1)$ and $B(x_2,y_2)$ is given by
$AB=\sqrt {(x_2 - x_1)^2 + (y_2 - y_1)^2}$
Distance of Point A(x,y) from the Origin is given by
$OA=\sqrt {x^2 + y^2}$
(1) Find the distance of point (3,4) and ( 4,3) from Origin?
Solution
Distance from Origin is given by
$D=\sqrt {x^2 + y^2}$
for (3,4),
$D=\sqrt {3^2 + 4^2} =5$
for (4,3),
$D=\sqrt {4^2 + 3^2} =5$
Practice Questions
(1) Arrange these points in ascending order of distance from origin
(a) (1,2)
(b) (5,5)
(c) (-1, 3)
(d) (1/2, 1)
(e) (.5 ,.5)
(f) (-1,1)
(2) Find the distance between the points
(a) (1,2) and (3,4)
(b) (5,6) and ( 1,1)
(c) (9,3) and ( 3,1)
## Section Formula
A point P(x,y) which divides the line segment AB joining $A(x_1,y_1)$ and $B(x_2,y_2)$ internally in the ratio $m_1 : m_2$ is given by
$P= \left ( \frac {m_1x_2 + m_2x_1}{m_1 + m_2} , \frac {m_1y_2 + m_2y_1}{m_1 + m_2} \right )$
The mid point P is given by
$P= \left ( \frac {x_1 + x_2}{2} , \frac {y_1 + y_2}{2} \right )$
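For illustration, the point dividing the segment joining $A(2,3)$ and $B(5,9)$ in the ratio $1:2$ is
$P= \left ( \frac {1 \cdot 5 + 2 \cdot 2}{1+2} , \frac {1 \cdot 9 + 2 \cdot 3}{1+2} \right ) = (3,5)$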
## Area of Triangle ABC
Area of triangle ABC with vertices $A(x_1,y_1)$ , $B(x_2,y_2)$ and $C(x_3,y_3)$ is given by
$A=\frac {1}{2} \left | x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) \right |$
For points A, B and C to be collinear, the value of A should be zero
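For example, the triangle with vertices $(0,0)$, $(4,0)$ and $(0,3)$ has area
$A=\frac {1}{2} \left | 0(0-3) + 4(3-0) + 0(0-0) \right | = 6$ square units, and since $A \ne 0$ these three points are not collinear.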
## How to Solve the line segment bisection, trisection and four-section problems
1. You will be given the coordinates of the two points A and B
2. If the problem is to find the bisection, you can simply find the mid point using the mid point formula above
3. If the problem is to find the trisection (three equal parts of the line), let us assume the points are P and Q, so that AP=PQ=QB
Now P divides the line AB in the ratio 1:2
While Q divides the line AB in the ratio 2:1
So we can use the section formula to get the coordinates of points P and Q (see the worked example below)
4. If the problem is to find four equal parts, let us assume the points are P, Q and R such that AP=PQ=QR=RB
Now P divides the line AB in the ratio 1:3
Q divides the line AB in the ratio 1:1
R divides the line AB in the ratio 3:1
So we can use the section formula to get the coordinates of points P, Q and R
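Worked example (with illustrative endpoints): trisecting the segment joining $A(0,0)$ and $B(6,3)$ gives
$P= \left ( \frac {1 \cdot 6 + 2 \cdot 0}{3} , \frac {1 \cdot 3 + 2 \cdot 0}{3} \right ) = (2,1)$ and $Q= \left ( \frac {2 \cdot 6 + 1 \cdot 0}{3} , \frac {2 \cdot 3 + 1 \cdot 0}{3} \right ) = (4,2)$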
## How to Prove three points are collinear
1) Assume that if the points are not collinear, they should be able to form a triangle.
We then calculate the area of the triangle: if it comes out zero, no triangle can be formed and the points are collinear (see the example below)
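For instance, for the points $(1,1)$, $(2,2)$ and $(3,3)$:
$A=\frac {1}{2} \left | 1(2-3) + 2(3-1) + 3(1-2) \right | = \frac {1}{2} \left | -1 + 4 - 3 \right | = 0$
so the three points are collinear.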
## How to solve general Problems of Area in Coordinate geometry
| Problem | What is given | How to find the area |
| --- | --- | --- |
| Area of triangle | Three vertices | Calculate the area directly using the formula |
| Area of square | Two vertices | Calculate either the side or the diagonal, depending on the vertices given, and apply the square area formula |
| Area of rhombus | All the vertices' coordinates | Two ways: (1) divide the rhombus into two triangles, calculate the area of both and sum them; (2) calculate the diagonals and apply the area formula |
| Area of parallelogram | Three vertices (sufficient) | Calculate the area of the triangle formed by the three vertices and double it |
| Area of quadrilateral | All the vertices' coordinates | Divide into two triangles, calculate the areas separately and sum them |
## Solved Examples
Question 1
The points (4, 5), (7, 6) and (6, 3) are collinear. True or false
Solution
False. Since the area of the triangle formed by the points is 4 sq. units, the points are not collinear
Question 2
If the mid-point of the line segment joining the points A (3, 4) and B (k, 6) is P (x, y) and
$x + y - 10 = 0$, find the value of k.
Solution
Mid-point of the line segment joining A (3, 4) and B (k, 6) = $\left ( \frac {3+k}{2} , 5 \right )$
Now the mid-point is P(x,y)
$x=\frac {(3+k)}{2}$ or 3+k=2x
y=5
Since $x + y - 10 = 0$, we have
$\frac {3+k}{2} +5 - 10 = 0$
3 + k = 10
or k=7
Question 3
ABCD is a parallelogram with vertices A (x1, y1), B (x2, y2) and C (x3, y3). Find the coordinates of the fourth vertex D in terms of x1,x2,x3,y1,y2,y3
Solution
Let the coordinates of D be (x, y). We know that diagonals of a parallelogram bisect each other.
Therefore, mid-point of AC = mid-point of BD
$[ \frac{(x_1 + x_3)}{2} ,\frac{(y_1 + y_3)}{2}]=[ \frac{(x + x_2)}{2} ,\frac {(y + y_2)}{2}]$
i.e $x_1 + x_3 = x_2 + x$ and $y_1 + y_3 = y_2 + y$
$x= x_1 + x_3 - x_2$ and $y=y_1+ y_3 -y_2$
So coordinates are $(x_1 + x_3 - x_2, y_1+ y_3 - y_2)$
Question 4
The area of a triangle with vertices (a, b + c), (b, c + a) and (c, a + b) is
Solution
$A=\frac {1}{2} [a(c+a -a-b) + b(a+b-b-c) + c( b+c -c-a)]$
$A=\frac {1}{2} [(ac-ab) + (ab-bc) + (bc-ac)]$
A=0
### Quiz Time
Question 1 Point on y axis and x axis has coordinates: ?
A) (a,0) & (0,b)
B) (0,a) & (0,b)
C) (a,b) & (a,b)
D) (0,a) & (b,0)
Question 2 The perimeter of a PQR triangle with vertices P(0, 4), Q(0, 0) and R(3, 0) is: ?
A) 6
B) 7
C) 11
D) 12
Question 3 Which point is at minimum distance from origin
A) (2, -3)
B) (2,2)
C) (6,8)
D) (6,6)
Question 4 The end points of diameter of circle are (0, 0) & (24, 7). The radius of the circle is:
A) 12.5
B) 15.5
C) 12
D) 10
Question 5 The mid point of point A(4,5) and B(-8,7) lies in quadrant?
A) I
B) II
C)III
D) IV
Question 6 The point on the x-axis which is equidistant from (-2,5) and (2, -3) is?
A)(0,3)
B) (0,0)
C)(5,0)
D) (-2,0)
Crossword Puzzle
Across
1. the distance of point(3,4) from origin
3. x-coordinate is called
5. the y -axis is a .....line
6. The (-6,-2) lies in .....quadrant
7. the x-axis is a ......line
9. The point of intersection of axis
10. The figure formed by joining the point (0,0), (2,0),(2,2) and (0,2)
Down
2. the coordinates (3,0) (-3,0) and $(0,3 \sqrt {3})$ formed an ...... triangle
4. The three points are said to be ....if the area of triangle formed by them is zero
8. y-coordinate is called
### Practice Question
Question 1 What is $1 - \sqrt {3}$ ?
A) Non terminating repeating
B) Non terminating non repeating
C) Terminating
D) None of the above
Question 2 The volume of the largest right circular cone that can be cut out from a cube of edge 4.2 cm is?
A) 19.4 cm³
B) 12 cm³
C) 78.6 cm³
D) 58.2 cm³
Question 3 The sum of the first three terms of an AP is 33. If the product of the first and the third term exceeds the second term by 29, the AP is ?
A) 2 ,21,11
B) 1,10,19
C) -1 ,8,17
D) 2 ,11,20
|
2020-02-26 09:56:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6786274909973145, "perplexity": 1321.4790841860818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146341.16/warc/CC-MAIN-20200226084902-20200226114902-00029.warc.gz"}
|
https://revisenow.net/start/revision/265-periodic-table
|
#### TOPIC: Periodic Table
Question 1 out of 5
Sodium, Potassium and Calcium are in the same group of the periodic table.
|
2022-10-04 14:13:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8710328936576843, "perplexity": 4469.759899806207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00749.warc.gz"}
|
https://zenodo.org/record/3385940/export/csl
|
Journal article Open Access
# Analyzing the usability of the WYRED Platform with undergraduate students to improve its features
García-Peñalvo, F. J.; Vázquez-Ingelmo, A.; García-Holgado, A.; Seoane-Pardo, A. M.
### Citation Style Language JSON Export
{
"DOI": "10.1007/s10209-019-00672-z",
"language": "eng",
"title": "Analyzing the usability of the WYRED Platform with undergraduate students to improve its features",
"issued": {
"date-parts": [
[
2019,
9,
4
]
]
},
"abstract": "<p>The WYRED ecosystem is a technological ecosystem developed as part of WYRED (netWorked Youth Research for Empow-erment in the Digital society), a European Project funded by the Horizon 2020 program. The main aim of the project is to provide a framework for research in which children and young people can express and explore their perspectives and interests concerning digital society. The WYRED ecosystem supports this framework from a technological point of view. The WYRED Platform is one of the main software components of this complex technological solution; it is focused on supporting the social dialogues that take place between children, young people and stakeholders. The ecosystem, and in particular the Platform, are already developed, but it is vital to ensure the acceptance by the final users, the children and young people mainly. This work presents the usability test carried out to evolve the Platform through the System Usability Scale. This usability test allows the identification of the weaknesses of the Platform regarding its characteristics, also allowing the corresponding improvement of the WYRED Platform, and it will serve as a reference for further usability testing.</p>",
"author": [
{
"family": "Garc\u00eda-Pe\u00f1alvo, F. J."
},
{
"family": "V\u00e1zquez-Ingelmo, A."
},
{
"family": "Garc\u00eda-Holgado, A."
},
{
"family": "Seoane-Pardo, A. M."
}
],
"version": "1.0",
"type": "article-journal",
"id": "3385940"
}
|
2020-08-15 08:37:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21824514865875244, "perplexity": 4717.584640085376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740733.1/warc/CC-MAIN-20200815065105-20200815095105-00388.warc.gz"}
|
https://courses.engr.illinois.edu/cs374/sp2019/hw_policies.html
|
# CS/ECE 374 sp19: Homework and Exam Policies
The course staff must critically examine close to ten thousand pages of homework submissions this semester! We desperately need your help to make sure homeworks are graded and returned quickly. If you have any questions or concerns about these policies, please don't hesitate to ask in lecture, during office hours, or on Piazza.
I apologize in advance for the length of this document. Most of this stuff is obvious to almost everybody, but after teaching this class for many years, I've seen a lot of strange things.
### Homework Logistics: How to submit
• All homework solutions must be submitted electronically via Gradescope. Submit one PDF file for each numbered homework problem. Gradescope will not accept other file formats such as plain text, HTML, LaTeX source, or Microsoft Word (.doc or .docx).
• Homework solutions may be submitted by groups of at most three students. We strongly encourage (but will not require) every student to work in a group with at least one other student. Students are are responsible for forming their own homework groups. Groups may be different for each numbered homework problem.
• For group solutions, exactly one member of each group submits the solution to each problem. Even if the groups are identical, the submitter may be different for each numbered homework problem.
• Whoever submits any group solution must also submit the names of the other group members via Gradescope. Gradescope will then automatically apply the grade for that submission to all group members. If this information is not entered correctly, the other group members' grades will be delayed or possibly lost entirely.
• If you discover that your name was omitted from a group homework submission, please submit a regrade request.
For error correction, each submitted homework solution should include the following information in large friendly letters at the top of the first page.
• The homework number
• The problem number
For group solutions, include the Gradescope name and email address of every group member. If you are typesetting your solutions with LaTeX, please use our solution template.
• We will not accept late homework for any reason. To offset this rather draconian policy, we will count only the homework problems you actually submit toward your final course grade. Specifically:
• If you submit more than 24 homework problems, only your highest 24 problem scores count. (If you submit everything, this is equivalent to dropping almost two complete homework sets.)
• If you submit fewer than 24 homework problems, your exams will have (slightly) larger weight in the final grade calculation.
• You must submit at least half the homework to pass the class.
We may forgive coursework under extreme circumstances, such as documented illness or injury. Forgiving homework requires a serious long-term issue that prevents submission of multiple homework sets; the regular homework policies already allow missing a few submissions without serious penalty. “Extreme circumstances” for exams do not include travel for job interviews. We will compute your final course grade as if your forgiven work simply did not exist; your other work will have more weight. Please ask Jeff for details.
### Form: How to write
Please make it easy for the graders to figure out what you mean in the short time they have to grade your solution. If your solutions are difficult to read or understand, you will lose points.
#### Be Honest
• Write everything in your own words, and properly cite every outside source you use. We strongly encourage you to use any outside source at your disposal, provided you use your sources properly and give them proper credit. If you get an idea from an outside source, citing that source will not lower your grade. Failing to properly cite an outside source—thereby taking credit for ideas that are not your own—is plagiarism.
The only sources that you are not required to cite are the official course materials (lectures, lecture notes, homework and exam solutions from this semester) and sources for prerequisite material (which we assume you already know by heart).
• List everyone you worked with on each homework problem. Again, we strongly encourage you to work together, but you must give everyone proper credit. If you work in a group of 20 students, then all 20 names should appear on your homework solution. If someone was particularly helpful, describe their contribution. Be generous; if you're not sure whether someone should be included in your list of collaborators, include them. For discussions in class, in section, or in office hours, where collecting names is impractical, it's okay to write something like "discussions in class".
#### Be Clear
• Write legibly. If we can't read your solution, we can't give you credit for it. If you have sloppy handwriting, use LaTeX. Please don't submit your first draft. Writing legibly also helps you think more clearly.
• We strongly recommend typesetting your homework using LaTeX. (In particular, we recommend TeXShop for Mac OS X, TeX Live for Linux (already included in most distributions), and MiKTeX for Windows.) We will provide a LaTeX template for homework solutions.
• You are welcome to submit scans of hand-written homework solutions, but please write clearly using a black pen on plain white unlined paper, and please use a high-quality scanning app (or an actual high-quality scanner). We recommend printing your scanned document to check for readability before submitting.
• Write sensibly. You will lose points for poor spelling, grammar, punctuation, arithmetic, algebra, logic, and so on. This rule is especially important for students whose first language is not English. Writing sensibly also helps you think sensibly.
• Write carefully. We can only grade what you actually write, not what you mean. We will not attempt to read your mind. If your answer is ambiguous, the graders are explicitly instructed to choose an interpretation that makes it wrong. Writing carefully also helps you think carefully.
• Avoid the Three Deadly Sins. Yes, we are completely serious about these. We reserve the right to add more Deadly Sins later in the course.
1. Write solutions, not examples. Don't describe algorithms by showing the first two or three iterations and then writing "and so on". Similarly, don't try to prove something by demonstrating it for a few small examples and then writing “do the same thing for all $n$”. Any solution that includes phrases like “and so on”, “etc.”, “do this for all $n$”, or “repeat this process” automatically gets a score of zero. Those phrases indicate precisely where you should have used iteration, recursion, or induction but didn’t.
2. Declare all your variables. Whenever you use a new variable or non-standard symbol for the first time, you must specify both its type and its meaning, in English. Similarly, when you describe any algorithm, you must first describe in English precisely what the algorithm is supposed to do (not just how it works). Any solution that contains undeclared variables automatically gets a score of zero, unless it is otherwise perfect. This rule is especially important for dynamic programming problems.
3. Never use weak induction! Always, always, always use a strong induction hypothesis, even in proofs that only apply the induction hypothesis at $n-1$. Why would you even want to tie $n-2$ hands behind your back? Any proof that uses a weak induction hypothesis automatically gets a score of zero, unless it is otherwise perfect. Basically, weak induction should die in a fire.
• State your assumptions. If a problem statement is ambiguous, explicitly state any additional assumptions that your solution requires. (Please also ask for clarification in class, in office hours, or on Piazza!) For example, if the performance of your algorithm depends on how the input is represented, tell us exactly what representation you require.
• Don't submit code. Describe your algorithms using clean, human-readable pseudocode. Your description should allow a bright student in CS 225 to easily implement your algorithm in their favorite language.
• Don't submit your first draft. Revise, revise, revise. After you figure out the solution, then think about the right way to present it, and only then start writing what you plan to submit. Yes, even on exams; do your initial scratch work on the back of the page.
#### Be Concise
• Keep it short. Every homework problem can be answered completely in at most two typeset pages or five handwritten pages; most problems require considerably less. Yes, I am aware of the crushing irony.
• Omit irrelevant details. Don't write "red-black tree" when you mean "balanced binary tree" or "dictionary". Don't submit code; We want to see your ideas, not syntactic sugar. If your solution requires more than two typeset pages, you are providing too many irrelevant details.
• Don't regurgitate. Don't explain binary search; just write "binary search". Don't write the pseudocode for Dijkstra's algorithm; just write "Disjktra's algorithm". If the solution appears on page 6 of Jeff's notes, just write "See page 6 of Jeff's notes." If your answer is similar to something we've seen in class, just say so and (carefully!) describe your changes. You will lose points for vomiting.
• Automatic zero: We will give an automatic zero to answers that we consider too long, unclear, or repetitious, or whose logic we cannot follow. We might even do so without reading them. If you do not know the answer, then write IDK - don't waste your time and don't waste our time.
### Content: What to write
• Answer the right question. No matter how clear and polished your solution is, it's worthless if it doesn't answer the question we asked. Make sure you understand the question before you start thinking about how to answer it. If something is unclear, ask for clarification! This is especially important on exams.
• Justify your answers. You must provide a brief justification for your solutions, as evidence that you understand why they are correct. Unless we explicitly say otherwise, we generally do not want a complete proof of correctness—because complete proofs would be too long, tedious, and unenlightening—but rather a high-level sketch of the major steps in the proof. Proofs/justifications are only required on exams if we specifically ask for them.
• By default, if a homework or exam problem asks you to describe an algorithm, you need to do several things to get full credit:
• If necessary, formally restate the problem in terms of combinatorial objects such as sets, sequences, lists, graphs, or trees. In other words, tell us what the problem is really asking for. This is often the hardest part of designing an algorithm.
• Give a concise pseudocode description of your algorithm. Don't regurgitate, and don't turn in code!
• Describe a correct algorithm.
• Justify the correctness of your algorithm. You usually won't have to do this on exams.
• Analyze your algorithm's running time. This may be as simple as saying "There are two nested loops from 1 to n, so the running time is O(n²)." Or it may require setting up and solving a recurrence, in which case you'll also have to justify your solution.
• Describe the fastest correct algorithm you can, even if the problem does not include the words "fast" or "efficient". Faster algorithms are worth more points; brute force is usually not worth much. We will not always tell you what time bound to shoot for; that's part of what you're trying to learn. However, if your algorithm is incorrect, you won't get any points, no matter how fast it is!
Some problems may deviate from these default requirements. For example, we may ask you for an algorithm that uses a particular approach, even though another approach may be more efficient. (Answer the right question!)
|
2019-02-18 13:01:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5787704586982727, "perplexity": 1161.5873772890477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486480.6/warc/CC-MAIN-20190218114622-20190218140622-00398.warc.gz"}
|
https://www.physicsforums.com/threads/joint-probability-of-partitioned-vectors.763723/
|
# Joint probability of partitioned vectors
1. Jul 28, 2014
### scinoob
Hi everybody, I apologize if this question is too basic but I did 1 hour of solid Google searching and couldn't find an answer and I'm stuck.
I'm reading Bishop's Pattern Recognition and Machine Learning and in the second chapter he introduces partitioned vectors. Say, if X is a D-dimensional vector, it can be partitioned like:
X = [Xa, Xb] where Xa is the first M components of X and Xb is the remaining D-M components of X.
I have no problem with this simple concept. Later in the same chapter he talks about conditional and marginal multivariate Gaussian distributions and he uses the notation p(Xa, Xb). I'm trying to understand how certain integrals involving this notation are expanded but I'm actually struggling to understand even this expression. It seems to suggest that we're denoting the joint probability of the components of Xa and the components of Xb. But those are just the components of X anyway!
What is the difference between P(Xa, Xb) and P(X)?
It will be more helpful for me if we considered a more concrete example. Say, X = [X1, X2, X3, X4] and Xa = [X1, X2] while Xb = [X3, X4]. Now, the joint probability P(X) would simply be P(X1, X2, X3, X4), right? What is P(Xa, Xb) in this case?
2. Jul 28, 2014
### mathman
My guess: in later chapters he discusses Xa and Xb as separate entities.
3. Jul 28, 2014
### gill1109
There is no difference between p(Xa, Xb) and p(X), because X = (Xa, Xb). It starts to get interesting when we introduce marginal and conditional probability densities e.g. p(Xa | Xb) and p(Xb). Obviously, p(Xa, Xb) = p(X) = p(Xa | Xb) . p(Xb)
You get p(Xb) from p(X) by integrating out over Xa.
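To make that last step concrete in the four-component example from the question: p(Xb) = p(x3, x4) = ∫∫ p(x1, x2, x3, x4) dx1 dx2, integrating out both components of Xa.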
NB use small "p" for probability density. Use capital "P" for probability.
|
2018-01-22 09:06:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8601489067077637, "perplexity": 1195.0214842257838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891196.79/warc/CC-MAIN-20180122073932-20180122093932-00030.warc.gz"}
|
https://direct.mit.edu/rest/article/102/4/766/96789/International-Transfer-Pricing-and-Tax-Avoidance
|
## Abstract
This paper employs unique data on export transactions and corporate tax returns of UK multinational firms and finds that firms manipulate their transfer prices to shift profits to lower-taxed destinations. It shows that the 2009 tax reform in the United Kingdom, which changed the taxation of corporate profits from a worldwide to a territorial system, led to a substantial increase in transfer mispricing. It also provides evidence for a trade creation effect of transfer mispricing and estimates substantial transfer mispricing in non-tax-haven countries with low- to medium-level corporate tax rates, and in R&D intensive firms.
## I. Introduction
GLOBALIZATION has led to the concentration of economic activity within a small number of multinational corporations (MNCs), a development that has made it more challenging for governments to raise revenue from corporate income tax, as MNCs can shift their profits across borders to reduce their worldwide tax bills.1 A key instrument that MNCs use to shift profits is undercharging or overcharging transfer prices on transactions between related parties within the MNC group (transfer mispricing). For example, to reduce its pretax profits (and hence corporate taxes), an MNC can charge artificially low prices for exports sold to a related party in a low-tax country or can pay artificially high prices when buying from a related party in a low-tax country. Tax-motivated transfer mispricing can take place in trade in real goods as well as in services and, in particular, in the form of royalty and licensing payments on intellectual property rights held abroad.
In recent years, policymakers have become increasingly concerned about this issue as the extent of profit shifting has intensified and the potential revenue at stake is substantial (Zucman, 2014; Beer et al., 2020). At the same time, there is a trend among countries to change from a worldwide to a territorial taxation of profits.2 Both the United Kingdom and Japan switched to territorial taxation in 2009. Following the passage of the Tax Cuts and Jobs Act (TCJA) in December 2017, the United States has also moved toward a territorial system, excluding from US taxation the active business income that is earned abroad.3
This paper presents new evidence on tax-motivated transfer mispricing in real goods. It uses a unique data set that combines the tax records of UK MNCs in manufacturing and their international trade transactions from 2005 to 2011. We use two distinct approaches to identify the causal effect of the corporate income tax differential between the destination country and the United Kingdom on the unit price of exports by UK MNCs. The first empirical approach exploits variation from the differential change in the price charged by a UK multinational with a subsidiary relative to the price charged by a UK MNC without a subsidiary in the same country in response to a change in the tax rate difference between the destination country and the United Kingdom. It controls for omitted variable bias by including a full set of firm–market–product fixed effects, product–market–year fixed effects, and firm–product–year fixed effects in a triple-difference regression. The second approach relies on a different source of variation, namely, the larger incentives to shift profit following the 2009 UK territorial tax reform, to quantify the effects of a shift from a worldwide to a territorial treatment of foreign profits on transfer mispricing.
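Schematically (in our notation; the authors' exact specification may differ), the first approach amounts to estimating a triple-difference regression of the form
$\ln p_{fcgt} = \beta \, (\tau_{ct} - \tau_{UK,t}) \times Related_{fc} + \mu_{fcg} + \lambda_{cgt} + \eta_{fgt} + \varepsilon_{fcgt}$
where $f$ indexes firms, $c$ destination markets, $g$ products, and $t$ years; $\mu_{fcg}$, $\lambda_{cgt}$, and $\eta_{fgt}$ are the firm–market–product, product–market–year, and firm–product–year fixed effects described above; and $\beta < 0$ indicates lower related-party export prices to lower-taxed destinations.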
We find strong evidence for tax-motivated transfer mispricing in manufacturing exports to low-tax destinations. A 1 percentage point increase in the UK-destination country tax differential reduces related-party export prices relative to arm's-length export prices by 3%. The extent of tax-motivated transfer mispricing increased substantially after the UK territorial tax reform in 2009. Under the territorial tax system, a 1 percentage point increase in the tax difference reduces related-party export prices relative to arm's-length export prices by another 1.5%. Our findings uncover heterogeneous transfer mispricing across countries and firms: we detect transfer mispricing in countries that are not classified as tax havens and have low to intermediate tax rates.4 Moreover, there is more transfer mispricing in firms with high R&D intensities, with the marginal effect of the tax differential rising to 6.4% for those undertaking the most R&D.
Our benchmark findings are comparable to the size of the tax effects on transfer prices estimated in Clausing (2003), Bernard, Jensen, and Schott (2006), and Flaaen (2017) but are larger than the effects found in Davies et al. (2018), Vicard (2015), and Cristea and Nguyen (2016). We show that some of the differences can be attributed to omitted variable bias, as previous studies used smaller sets of fixed effects in the empirical analysis and other differences in the empirical approach such as the way that the key variables of interest are measured and the type of variation that is exploited for identification. Ultimately, the semielasticity of transfer prices (and, more generally, reported profits by MNCs) with respect to the international tax differential is not a structural parameter and can vary with the design of the corporate and the overall tax system in a country. Our paper adds to the literature in five distinct ways. First, we show in a simple model that a shift from a worldwide to a territorial system leads to stronger transfer mispricing, and we provide empirical evidence that corroborates the theoretical prediction. Second, we find substantial transfer mispricing in tangible goods by UK MNCs to non-tax-haven countries. Third, we show theoretically that tax incentives can lead to trade creation for tax purposes and find evidence for this channel around the 2009 tax reform. Fourth, our results suggest that transfer mispricing is concentrated in the most R&D-intensive firms, a finding that is robust to controlling for differential effects by firm size and the type of product traded and holds that R&D investment facilitates transfer mispricing by making goods more specific. Finally, thanks to the rich data and the relatively large number of MNCs headquartered in the United Kingdom, our regression specifications include more fixed effects than previous studies and allow for a clean identification of tax-motivated transfer mispricing. Moreover, the 2009 UK reform in the taxation of cross-border corporate income provides us with a quasi-natural experiment that introduced exogenous changes in the tax incentives of profit shifting that are unrelated to the level of the tax rate differential, corroborating the causal effect of taxes on transfer prices.
Our findings have several implications for tax policy design. First, transfer mispricing in tangible goods by UK MNCs is an area of revenue leakage that warrants further attention by the UK tax authority. While the quantitative evidence is UK specific, the empirical analysis can be extended to other countries with the suitable data in order to help uncover the extent of tax-motivated transfer mispricing elsewhere. Second, increases in tax-motivated transfer mispricing represent a relevant revenue cost of moving from a worldwide to a territorial system. Third, policymakers should be mindful of potential revenue losses to any trading partners that have lower statutory corporate income tax rates, including those that are not tax havens. Finally, tax-motivated transfer mispricing is not uniform across firms; it is concentrated in the most R&D-intensive ones, which provides useful guidance to tax authorities.
Several papers have analyzed transfer-pricing behavior of multinational firms. Early work, including Grubert and Mutti (1991), Harris, Morck, and Slemrod (1993), Hines and Rice (1994), and Collins, Kemsley, and Lang (1998), provided indirect evidence for tax-motivated profit shifting by MNCs, showing that their pretax profits are systematically correlated with tax differentials across countries. Heckemeyer and Overesch (2017) and Beer et al. (2020) survey the recent empirical literature on tax-motivated profit shifting, quantifying the consensus estimate of the semielasticity of reported profits with respect to the international tax differential of around 0.8% and 1%, respectively.
Clausing (2003) was the first to provide direct evidence on manipulated prices, using item-level data on prices of US international trade. In another seminal paper, Bernard et al. (2006) employed transaction-level data from the US Census to study a wide set of factors that can lead to manipulated transfer prices, including corporate taxes and tariffs. More recently, Flaaen (2017) uses the same data to study transfer-price manipulation by US multinationals in response to the 2004 Home Investment Act.
Closely related to our work are three papers that also use detailed trade data to study transfer-price manipulations for a set of different countries. Davies et al. (2018) and Vicard (2015) exploit information on French firms, whereas Cristea and Nguyen (2016) employ Danish data. We discuss differences across these papers in detail in section V. Finally, Hebous and Johannesen (2015) analyze firm-level trade data on German MNCs, providing evidence that they shift profits to tax havens through services trade.
The remainder of the paper is structured as follows. Section II provides background on transfer pricing and the 2009 tax reform. Section III explains the empirical approach, section IV describes the data, section V presents the main empirical results, and section VI presents the heterogeneity results. Section VII concludes with a discussion of policy implications and avenues for future research.
## II. Institutional Background
This section provides an overview of transfer pricing, explaining the arm's-length principle that generally guides the setting of transfer prices and several weaknesses of this approach. It then discusses the 2009 tax reform that changed the UK taxation of foreign profits from a worldwide to a territorial system.
### A. Transfer Pricing
Transfer pricing is the setting of prices for internal (intrafirm) transactions in goods, services, intangibles, and capital flows within an MNC. This pricing affects the allocation of pretax profits that each party earns from a cross-border transaction within an MNC and the amount of corporate tax that is due in both countries. Consider a UK pharmaceutical group that buys raw material from a subsidiary in China. How much the UK parent pays its Chinese subsidiary for each unit of the raw material, the transfer price, determines how much profit the Chinese affiliate earns and how much local tax it pays, as well as the profit and corporate tax liability of the UK parent company.
Most tax authorities, including Her Majesty's Revenue & Customs (HMRC) in the United Kingdom, use the arm's-length principle to guide transfer pricing.5 The principle stipulates that a transfer price should be the same as if the two parties involved were independent companies, that is, the same as in a comparable market transaction. Given the nature of related-party transactions, a range of arm's-length prices may exist for the same transaction. Conceptually, there may even be no “correct” arm's-length price if there are no comparable third-party transactions. Comparable transactions may also be costly for the tax authority to observe due to information asymmetry, and if comparable arm's-length prices are not directly observable, they may be difficult to infer. Given these weaknesses in the implementation of the arm's-length principle, MNCs may be able to charge artificially low prices for exports sold to low-tax countries or artificially high prices for inputs coming from low-tax countries to reduce their global tax liability.
Many countries implement transfer-pricing regulations as a countermeasure to mitigate revenue losses from transfer mispricing. The tightness of these regulations varies from mere acknowledgment of the arm's-length principle to requiring detailed transfer-pricing reports. Strict regulations increase the cost of transfer mispricing and are found to be somewhat effective in curbing the extent of profit shifting in developed countries.6 In the United Kingdom, transfer-pricing documentation requirements are set out in domestic law, which specifies that documentation must be available on request. Unlike most other OECD countries, the United Kingdom does not have a prescribed list of documentation requirements, and detailed disclosures are not currently required as part of corporate tax records.
### B. The 2009 Tax Reform
#### Worldwide versus territorial taxation.
Domestic taxation of foreign earnings is a key consideration for MNCs when setting their transfer prices, as it affects their global corporate tax bill. Countries typically use one of two predominant approaches, worldwide or territorial, in taxing the foreign earnings of their MNCs. Under worldwide taxation, an MNC pays taxes on its active business income earned both domestically and abroad, though taxation of foreign earnings is usually deferred until they are brought back to the home country, and a credit is often given for foreign taxes paid to avoid double taxation. Under territorial taxation, an MNC pays taxes only on profits in the source country, with no tax levied on repatriated profits.7
#### The 2009 reform of taxing foreign profits.
Until 2009, UK-based MNCs were taxed on their worldwide income, although taxation of foreign income was deferred until repatriation as dividends. In 2009, the United Kingdom switched from worldwide to territorial taxation by exempting UK-based MNCs from UK tax on all dividends and distributions received from foreign affiliates. This fundamental change of the tax system made repatriation of profits less costly and should therefore increase the extent of transfer mispricing by UK MNCs. It is plausible that, before 2009, part of the foreign earnings was already brought back to the United Kingdom in some other, more complicated nontaxable way. To the extent that such tax planning activities are costly, the tax savings from profit shifting net of costs remain larger under the territorial tax system.
The territorial regime was a key element of the foreign profits package introduced in the 2009 Finance Bill, with exemptions applying to dividends received from July 1, 2009, onward. Unlike the TCJA, which imposed a deemed repatriation tax on undistributed foreign earnings of US MNCs, the exemption in the UK reform was 100% and did not impose any tax on undistributed foreign profits. In addition to dividend exemptions, the package included two other elements with important implications for UK MNCs. First, a worldwide debt cap on the finance expenses of companies was introduced as an extension of the UK thin-capitalization rules. The debt cap limits tax deductions for interest expenses by these MNCs to the external gross interest expense of the worldwide group. The worldwide debt cap rule became effective on January 1, 2010, and is expected to restrict the extent of debt shifting by the small number of companies for which the cap is binding.
The other change was a tightening of the controlled foreign company (CFC) regime. Under the existing CFC regime, both active and passive income were liable to UK taxation if a subsidiary was defined as a CFC. However, among a series of exemptions from being defined as a CFC was an exemption for actively trading subsidiaries. One way to avoid UK taxes was to mix passive income with active income in a trading subsidiary so that the former goes untaxed in the United Kingdom. Under the newly proposed CFC regime, all passive income is liable for UK taxation, including all passive income in active subsidiaries. The reform of the CFC regime, however, was perceived as hurting the ability of the United Kingdom to attract MNCs. In response to these concerns, only minimal changes were made to the CFC regime in 2009; the new CFC regime took effect only in January 2013, after our period of analysis. While the United Kingdom first shifted to a territorial system and only later strengthened its antiavoidance rules, the recent US territorial reform directly included a series of antiavoidance measures to limit profit shifting under the new regime.
Neither of the two rules discussed is expected to have a first-order effect on MNCs' transfer mispricing behavior. The full reform strengthening CFC rules took place only after our sample period, and the worldwide debt cap had a negligible effect, as it affected only a very small fraction of UK MNCs. To the extent that the worldwide debt cap had an effect, it likely strengthened the incentive to shift profits through transfer pricing (as a substitute for debt shifting) and thus might explain a small part of the increase in transfer mispricing that we observe.
Finally, a useful feature of the territorial reform is that its exact announcement and implementation dates were not known in advance. We can therefore exploit the variation in profit-shifting incentives generated by the reform to study transfer mispricing in a quasi-experimental setting.
### C. Testable Predictions
In the following, we discuss the three main testable predictions that we take to the data and briefly explain the intuition behind them. In online appendix C we show how to formally derive these predictions in an extension of the standard transfer-pricing model.
Prediction 1.
Transfer Mispricing. The transfer price for exports to low-tax destinations is below the arm's-length price and falls in the tax rate difference.
Suppose an MNC sells the same product to a lower-tax destination both at arm's length and to a related party. To lower its tax bill, the MNC has an incentive to underprice its related-party exports. The government applies the arm's-length principle and imposes a fine on the MNC that increases in the difference between the arm's-length price and the related-party price. The MNC selects a transfer price that optimally trades off the tax savings from underpricing related-party exports against the expected size of the fine imposed by the government.
Prediction 2.
Tax Systems. For the same tax rate difference, when selling to lower-tax destinations, MNCs manipulate their transfer prices by more under a territorial system than under a worldwide system.
Now consider a tax reform that changes the treatment of corporate profits from a worldwide system with deferral to a territorial system. Under the worldwide system, repatriating profits back to the headquarters is costly due to repatriation taxes. While tax payments can be deferred by reinvesting profits abroad, this still represents a second-best solution. For this reason, under worldwide taxation, a pound of posttax profits abroad is less valuable than a pound of posttax profits at home. In contrast, under territorial taxation, repatriation is costless, and after-tax profits abroad and after-tax profits at home are equally valuable to the MNC. A shift from a worldwide system with deferral to a territorial system should therefore increase the incentives for profit shifting and thus for transfer mispricing.
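To make this intuition concrete with a stylized calculation (ours, not the model's): under a worldwide system with a nonrefundable foreign tax credit, a pound of pretax foreign profit repatriated from a country with tax rate $\tau_j$ leaves the MNC with $1 - \max(\tau_j, \tau_{UK})$ after tax, whereas under a territorial system it leaves $1 - \tau_j$. At the 2010 UK rate of $\tau_{UK} = 28\%$ and an illustrative foreign rate of $\tau_j = 12.5\%$, repatriation triggers an extra $28 - 12.5 = 15.5$ pence of UK tax per pound under the worldwide system; the 2009 reform removed this wedge, raising the value of each pound of profit shifted abroad.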
Prediction 3.
Trade Creation. Suppose transfer mispricing incentives are sufficiently strong. Then MNCs export more than the first-best quantities (in the absence of taxation) to destinations where their transfer mispricing incentives are the strongest.
Finally, notice that the amount of profits shifted through transfer mispricing is proportional to the quantity of goods shipped to a destination. A challenge for an MNC that wants to shift profits may be that it has relatively small trade flows to countries with low tax rates. As the extent of transfer mispricing is limited by the fine imposed by the government, a solution to that problem is to create artificial trade flows to low-tax destinations. Because delivering excessively large quantities to a market reduces the MNC's profit margin there, an MNC creates an artificially large trade flow only if transfer mispricing incentives are sufficiently strong, that is, if the tax rate difference is large.
## III. Empirical Strategy
In this section, we present two distinct empirical specifications that are employed in testing the three predictions on transfer mispricing.
### A. Baseline: Testing Prediction 1
Our baseline specification estimates the transfer pricing behavior of MNCs in a triple-difference regression. Specifically, we estimate
$$\ln p_{ijkt} = \alpha_{ijk} + \alpha_{jkt} + \alpha_{ikt} + \left(\beta_{1}\,\Delta\tau_{jt}\times I_{low,t} + \beta_{2}\,\Delta\tau_{jt}\times I_{high,t}\right)\times Aff_{ij} + \varepsilon_{ijkt},$$
(1)
where $p_{ijkt}$ is the average unit price of exports of product $k$ to country $j$ by firm $i$ in year $t$. $\Delta\tau_{jt} \equiv |\tau_{jt} - \tau_{UK,t}|$ is the absolute difference in statutory corporate tax rates between the destination country $j$ and the United Kingdom in year $t$. $I_{low,t}$ ($I_{high,t}$) are indicators that take the value of 1 if the destination country has a lower (higher) statutory tax rate than the United Kingdom in year $t$ and 0 otherwise. $Aff_{ij}$ is a dummy indicator that takes a value of 1 if MNC $i$ has at least one affiliate in country $j$ and 0 otherwise. $\alpha_{ijk}$ is a firm–market–product fixed effect, $\alpha_{jkt}$ is a product–market–year fixed effect, and $\alpha_{ikt}$ is a firm–product–year fixed effect.
With the inclusion of the fixed effects, identification relies on the differential change in the price charged by a multinational on exports to a country where it has a subsidiary, relative to the price charged by a multinational without a subsidiary in the same country, in response to a change in the tax rate difference between that country and the United Kingdom.8 The full set of fixed effects is crucial for isolating the causal effect of tax differences. More specifically, $\alpha_{ijk}$ takes out the average price a firm charges for a product in a given market. This fixed effect is essential, as firms often supply goods of different quality to different destination markets. The second fixed effect, $\alpha_{jkt}$, controls for the average price of a product in a year across all firms, taking out all shocks to the supply and demand of a product that are common across firms. Finally, $\alpha_{ikt}$ controls for the average price a firm charges for a product in a given year, absorbing all shocks to the supply or demand of a firm's product that are common across markets. The coefficients $\beta_1$ and $\beta_2$ therefore capture the causal effect of tax differences on transfer prices, controlling for the main supply and demand factors that could confound the effect of taxes on prices.
As discussed in prediction 1, we expect $\beta_1$ to be negative if MNCs systematically reduce export prices for transactions with their foreign affiliates to shift more profits into low-tax countries in response to an increase in $\Delta\tau_{jt}$. Similarly, we expect $\beta_2$ to be positive if MNCs systematically increase export prices for transactions with their foreign affiliates to shift more profits out of high-tax countries in response to an increase in $\Delta\tau_{jt}$. Predictions with respect to high-tax countries, however, are less clear-cut. For example, if MNCs could claim full tax credits for taxes paid on profits abroad to offset their domestic tax liability under the worldwide system, we would expect $\beta_2$ to be 0. Alternatively, UK MNCs can shift profits directly from subsidiaries in high-tax countries into subsidiaries in low-tax countries, which makes profit shifting into the United Kingdom unnecessary and likewise implies $\beta_2 = 0$. Following Davies et al. (2018), our baseline regression controls for a pricing-to-market determinant by including the interaction between $Aff_{ij}$ and the log of per capita GDP of the foreign country ($\ln GDPPC_{jt}$). This variable also helps control for the extent of vertical FDI, which is likely to be larger in countries with lower corporate tax rates. Our baseline regression does not include any other firm-level or country-level controls, as any variation at that level is absorbed by the fixed effects. To account for possible correlation in export prices among all the UK multinationals trading with the same destination market, we cluster the standard errors by country-year pairs.
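To fix ideas, the following minimal Python sketch (our illustration, not the paper's code) estimates equation (1) by iteratively demeaning over the three fixed-effect groups, the method of alternating projections, and then running OLS on the within-transformed data. All column names (`ln_price`, `tax_diff`, `low_tax`, `high_tax`, `affiliate`, and the identifier columns) are hypothetical, `df` is assumed to be a pandas DataFrame at the firm–product–destination–year level, and the country-year clustered standard errors used in the paper are omitted for brevity.

```python
import numpy as np
import pandas as pd

def demean_hdfe(df, cols, fe_groups, tol=1e-8, max_iter=1000):
    """Within-transform `cols` by alternating projections over the
    high-dimensional fixed-effect groups in `fe_groups`."""
    X = df[cols].astype(float).copy()
    for _ in range(max_iter):
        X_prev = X.copy()
        for keys in fe_groups:
            X = X - X.groupby([df[k] for k in keys]).transform("mean")
        if (X - X_prev).abs().to_numpy().max() < tol:
            break
    return X

# Triple interactions of interest from equation (1).
df["b1_var"] = df["tax_diff"] * df["low_tax"] * df["affiliate"]
df["b2_var"] = df["tax_diff"] * df["high_tax"] * df["affiliate"]

fe_groups = [
    ("firm", "country", "product"),  # alpha_ijk
    ("country", "product", "year"),  # alpha_jkt
    ("firm", "product", "year"),     # alpha_ikt
]
W = demean_hdfe(df, ["ln_price", "b1_var", "b2_var"], fe_groups)

# OLS on the within-transformed data recovers beta_1 and beta_2.
y = W["ln_price"].to_numpy()
X = W[["b1_var", "b2_var"]].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print({"beta1": beta[0], "beta2": beta[1]})
```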
### B. Tax Reform: Testing Prediction 2
We exploit the regime change in the United Kingdom's taxation of foreign profits in 2009 to check whether a shift from worldwide to territorial taxation indeed created stronger incentives for UK multinationals to shift profits into lower-tax destinations. For this, we run the following regression,
$$\begin{aligned} \ln p_{ijkt} = {}& \alpha_{ijk} + \alpha_{jkt} + \alpha_{ikt} + \left(\beta_{1}\,\Delta\tau_{jt}\times I_{low,t} + \beta_{2}\,\Delta\tau_{jt}\times I_{high,t}\right)\times Aff_{ij} \\ & + \left(\beta_{3}\,\Delta\tau_{jt}\times I_{low,t} + \beta_{4}\,\Delta\tau_{jt}\times I_{high,t}\right)\times Aff_{ij}\times Post_{t} + \varepsilon_{ijkt}, \end{aligned}$$
(2)
where $Post_t$ is an indicator that takes the value of 1 if year $t$ is after the tax reform and 0 otherwise. Given that the reform took place in the second half of the fiscal year, we drop observations in 2009 for cleaner identification.9 The main coefficients of interest are now $\beta_3$ and $\beta_4$. If the reform increased incentives for transfer price manipulation, as discussed in prediction 2, we would expect a negative $\beta_3$. We expect the coefficient $\beta_4$ to be either 0, if MNCs avoid shifting profits from high-tax countries to the United Kingdom, or positive, as the territorial system eliminates tax credits on foreign taxes paid and thus might induce UK MNCs to shift profits into the United Kingdom.
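The reform specification is a small extension of the sketch above; a hypothetical fragment with the same assumed column names:

```python
# Drop the 2009 transition year and build the post-reform interactions.
df = df[df["year"] != 2009].copy()
df["post"] = (df["year"] >= 2010).astype(float)
df["aff_post"] = df["affiliate"] * df["post"]  # Aff x Post term reported in table 3
df["b3_var"] = df["b1_var"] * df["post"]
df["b4_var"] = df["b2_var"] * df["post"]
# Demean ["ln_price", "b1_var", "b2_var", "aff_post", "b3_var", "b4_var"]
# over the same fixed-effect groups and run OLS as before; a negative
# b3 coefficient indicates stronger mispricing under the territorial regime.
```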
### C. Trade Creation: Testing Prediction 3
Finally, we test whether UK MNCs trade more with countries into which they shift profits. For this purpose, we rerun specifications (1) and (2), replacing the dependent variable with the log of quantities. Prediction 3 implies a positive and significant coefficient for $\beta_1$ in specification (1) and for $\beta_3$ in specification (2), respectively.
## IV. Data
### A. Data Sources
Our data set is constructed by merging three databases.10 The first database includes transaction-level export data from 2005 to 2011 provided by HMRC. Each record includes, among other fields, the firm's trader ID (anonymized), the product code (fifteen-digit HMRC Integrated Trade Tariff Code), the destination country, the export value in British pounds, and the weight in kilograms. The unit of observation in our empirical analysis is a firm–product–destination–year price; we collapse the transaction data to that level, computing the total export value, total quantity, and average unit price.
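As a hypothetical sketch of this collapse (the file and column names are ours, not HMRC's):

```python
import pandas as pd

tx = pd.read_csv("hmrc_export_transactions.csv")  # hypothetical extract
cell = ["trader_id", "product_code", "destination", "year"]
panel = (
    tx.groupby(cell, as_index=False)
      .agg(export_value=("value_gbp", "sum"), quantity=("weight_kg", "sum"))
)
panel["unit_price"] = panel["export_value"] / panel["quantity"]  # average unit price
```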
The second database, the FAME ownership database of Bureau van Dijk, is also at the firm level and provides information for each company on the name and location of its ultimate parent and subsidiaries, if applicable. Based on the ownership information, we group the population of UK companies into one of the following categories: (a) domestic or unknown,11 (b) stand-alone exporters, (c) subsidiaries of a foreign parent company, and (d) parent companies and subsidiaries of UK-headquartered MNC groups with at least one subsidiary outside the United Kingdom. In the online appendix, panel A in figure B.1 shows the number of UK affiliates in each of the 108 countries that had UK exporting partners in 2011. Table A.1 presents for each category the number of firms, their share in total exports, and their share in total assets within manufacturing. Overall, UK MNCs account for 39% of exports and hold about 13% of total assets within the manufacturing sector.
The third database, also provided by the HMRC on an anonymized basis, consists of firm-level corporation tax records that provide detailed information on the tax position of each company and how it is determined. A lookup table that cross-references the trader IDs and taxpayer identifiers allows us to merge the two databases. We exploit information from this database to test for differential transfer pricing behavior across firms with different R&D intensities and to assess the magnitude of tax revenue loss from transfer mispricing relative to total CIT revenue collected from UK MNCs in manufacturing.
### B. Focus on UK Multinationals
We restrict our comparison to pricing differences between UK multinationals in group (d), as our data are best suited to study their transfer pricing behavior. Domestic firms do not set transfer prices for cross-border transactions. As the typical domestic firm differs substantially from the typical MNC, their arm's-length export prices are also less comparable to those charged by MNCs. Subsidiaries of foreign parents set transfer prices but have to solve a very different tax planning problem. Their transfer pricing decisions in the United Kingdom likely depend not only on the tax rate in the country that they are exporting to but also on the tax rate and tax system in their parent country.
### C. Proxying Related-Party Trade
We use the location of foreign affiliates as a proxy for related-party trade, similar to Vicard (2015), Hebous and Johannesen (2015), and Cristea and Nguyen (2016). By definition, a UK MNC can have related-party trade only with countries where it has an affiliate. Of course, it may also trade with unrelated parties in these countries. Therefore, the price we observe for an MNC that has an affiliate in a given country is the weighted average of the prices charged in all intrafirm and arm's-length transactions. Importantly, this measurement error biases results against us finding any effects, as it makes it harder to identify systematic differences between pure arm's-length prices and our related-party price proxy.12
The FAME database provides a snapshot of the ownership structure of UK firms in 2015. A caveat of using this static information to define the location of foreign affiliates is that one must assume that the affiliate status of firms remained constant over the sample period. In particular, the static definition does not reflect destination countries in which UK MNCs established their first affiliate between 2005 and 2015. To address this limitation, we complement the FAME data on the network of foreign affiliate locations with information on mergers and acquisitions (M&As) by all UK companies between 2005 and 2015 from the Zephyr database, which is also provided by Bureau van Dijk. The detailed procedure for recursively updating the location of foreign affiliates for all company-years between 2005 and 2015 is described in online appendix D. Overall, 102 UK MNCs established a first affiliate in a new country during our sample period, spanning more than thirty countries. After merging in the information from Zephyr, we create a time-varying dummy indicator ($Aff_{ijt}$) that takes the value of 1 if company $i$ has an affiliate in country $j$ in year $t$.
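The gist of the procedure can be sketched as follows (a simplified illustration of online appendix D, with hypothetical file and column names): start from the 2015 snapshot of firm-country affiliate pairs and, for pairs whose first observed acquisition in Zephyr falls inside the sample window, switch the indicator on only from the entry year onward.

```python
import pandas as pd

fame = pd.read_csv("fame_affiliates_2015.csv")  # columns: firm, country
zephyr = pd.read_csv("zephyr_deals.csv")        # columns: firm, country, deal_year

# Year of the first observed acquisition in each firm-country pair.
first_entry = zephyr.groupby(["firm", "country"])["deal_year"].min()

rows = []
for firm, country in fame[["firm", "country"]].itertuples(index=False):
    # If no deal is observed, assume the affiliate existed throughout.
    entry = first_entry.get((firm, country), 2005)
    rows += [(firm, country, t, int(t >= entry)) for t in range(2005, 2012)]

aff_ijt = pd.DataFrame(rows, columns=["firm", "country", "year", "aff_ijt"])
```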
### D. Other Data Sources
We augment the data set with additional data on destination country characteristics and statutory corporate tax rates.13 We obtain information on country-level variables from the World Bank (World Databank, World Development Indicators) and the Penn World Table 8.1. The statutory tax rates are headline corporation tax rates drawn from the KPMG Corporate Tax Rate Tables.
### E. Definitions and Descriptive Statistics
Define $\Delta\tau_{jt} \equiv |\tau_{jt} - \tau_{UK,t}|$ as the absolute value of the difference in the statutory tax rate between the United Kingdom ($\tau_{UK,t}$) and the destination country ($\tau_{jt}$). Furthermore, define a country as a low-tax destination if its statutory corporate tax rate is lower than the UK rate ($\tau_{jt} < \tau_{UK,t}$) and as a high-tax country if its statutory tax rate is equal to or higher than the UK rate ($\tau_{jt} \geq \tau_{UK,t}$). Following this definition, a country can switch from a low-tax to a high-tax destination (and vice versa) when its own tax rate or the UK tax rate changes.
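In code, the wedge and the two indicators amount to a few lines (a sketch reusing the hypothetical column names from section III, where `tau_j` and `tau_uk` hold statutory rates in percent):

```python
df["tax_diff"] = (df["tau_j"] - df["tau_uk"]).abs()
df["low_tax"]  = (df["tau_j"] <  df["tau_uk"]).astype(float)
df["high_tax"] = (df["tau_j"] >= df["tau_uk"]).astype(float)
```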
The merged data set includes 931,773 observations at the firm–product–destination–year level for 1,256 unique companies in manufacturing between 2005 and 2011. Table 1 provides summary statistics for the baseline estimation data set, which has 387,709 observations after inclusion of the full set of fixed effects.14
Table 1.
Summary Statistics
| | Mean (1) | SD (2) | P25 (3) | P50 (4) | P75 (5) | Observations (6) |
|---|---|---|---|---|---|---|
| **Product characteristics** | | | | | | |
| Export value (GBP) | 181,507 | 601,514 | 1,140 | 8,083 | 57,627 | 387,709 |
| Net mass (kilograms) | 24,498 | 104,324 | 12 | 132 | 2,280 | 387,709 |
| Average value (per kilogram) | 337 | 930 | 8.0 | 34.8 | 199 | 387,709 |
| **Firm characteristics** | | | | | | |
| Log sales | 16.6 | 1.8 | 15.4 | 16.4 | 17.5 | 7,420 |
| Intrafirm trade | 0.72 | 0.45 | 0 | 1 | 1 | 7,420 |
| Profit making | 0.39 | 0.49 | 0 | 0 | 1 | 7,420 |
| **Country characteristics** | | | | | | |
| Low-tax country dummy | 0.54 | 0.50 | 0 | 1 | 1 | 686 |
| Low-tax wedge ($\tau_{UK} - \tau_j$, %) | 7.44 | 5.68 | 2.5 | 6 | 11.5 | 187,795 |
| High-tax wedge ($\tau_j - \tau_{UK}$, %) | 5.36 | 4.16 | 2 | 5 | 7.25 | 199,914 |
This table lists the summary statistics for the key variables in this paper's main estimation sample from 2005 to 2011.
In online appendix B, we present several figures that further illustrate the data. Panel B of figure B.1 shows the number of countries classified as low tax and high tax, respectively, over the sample period of 2005 to 2011. Figure B.2 reports the overall annual exports, as well as the share of MNC exports, to countries where the respective MNC has a majority-owned affiliate; on average, around 39% of MNC exports fall in this category.15 Figure B.3 shows the substantial variation in corporate tax rates in both the time series and the cross section. Specifically, panel A shows the histogram of the corporate tax differential for the estimation sample. Panel B shows the number of countries that experienced some change in the corporate tax differential for each year in the sample, separately for the low-tax and high-tax country groups.
## V. Main Empirical Results
This section presents results from our baseline fixed effects regression, results on the 2009 UK tax reform, and results on trade creation. We then quantify the results in terms of forgone tax revenues, present a set of robustness checks, and compare our findings to previous studies. Section VI then examines the heterogeneity of effects in destination country tax rates, destination country tax haven status, and firms' R&D intensities.
### A. Baseline Results
Table 2 presents our baseline regression results based on equation (1). Column 1 shows that the coefficient on the triple interaction for low-tax destinations is negative and highly significant, indicating that MNCs shift profits out of the United Kingdom by underpricing related-party exports to low-tax countries.16 In contrast, the triple interaction for high-tax destinations is insignificant. That is, there is no evidence that MNCs shift profits into the United Kingdom from higher-tax countries through transfer prices. Column 2 controls for pricing-to-market by including an interaction term between destination country per capita GDP and the related-party dummy indicator. The results are very similar. Column 3 checks the robustness of the results to potential mismeasurement in the time-invariant ownership indicator by dropping observations with changing ownership. Column 4 uses the dynamic affiliate indicator. The results are almost identical to the previous columns. Furthermore, results in table A.2, which are discussed in detail shortly, show significant transfer mispricing prior to the territorial tax reform when the sample is restricted to the 2005–2008 prereform period.
Table 2.
Effect of the Tax Differentials on Transfer Pricing by UK MNCs: Baseline Results
| $AFF_{ij(t)} \times$ | (1) | (2) | (3) | (4) |
|---|---|---|---|---|
| $\Delta\tau_{jt} \times I_{low,t}$ | −0.030*** | −0.029*** | −0.029*** | −0.030*** |
| | (0.011) | (0.011) | (0.011) | (0.011) |
| $\Delta\tau_{jt} \times I_{high,t}$ | −0.007 | −0.007 | −0.007 | −0.007 |
| | (0.006) | (0.006) | (0.006) | (0.006) |
| $\ln GDPPC_{jt}$ | | −0.058 | −0.059 | −0.005 |
| | | (0.133) | (0.133) | (0.088) |
| Adjusted $R^2$ | 0.91 | 0.91 | 0.91 | 0.91 |
| $N$ | 387,709 | 384,525 | 312,174 | 384,525 |
This table presents our baseline results, based on equation (1). The dependent variable, $\ln p_{ijkt}$, is the average unit price of exports of product $k$ to country $j$ by firm $i$ in year $t$. $\Delta\tau_{jt}$ is the absolute tax rate difference between country $j$ and the United Kingdom in year $t$. $I_{low,t}$ ($I_{high,t}$) indicates whether a country has a lower (higher) tax rate than the United Kingdom in year $t$. $AFF_{ij(t)}$ indicates if MNC $i$ has at least one affiliate in country $j$ in year $t$. $\ln GDPPC_{jt}$ is the log of per capita GDP in country $j$ in year $t$. Standard errors clustered by country-year pairs are in parentheses. Significant at ***1%, **5%, and *10%.
Effects for low-tax destinations are large. A 1 percentage point larger tax difference, on average, reduces related-party export prices relative to arm's-length export prices by around 3%. Figure 1 illustrates the size of our main estimate relative to those found in previous studies: the magnitude of the tax effect is substantially larger than the effects found in Vicard (2015), Cristea and Nguyen (2016), and Davies et al. (2018), which report price responses between 0.12% and 0.6%. It is more comparable to Clausing (2003) and Bernard et al. (2006), the latter reporting effects between 0.4% and 4.2% depending on the specification.
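As a back-of-the-envelope illustration of this magnitude (our arithmetic, not an estimate reported in the paper): at the mean low-tax wedge of 7.44 percentage points (table 1), the point estimate implies related-party prices roughly $0.03 \times 7.44 \approx 22\%$ below comparable arm's-length prices for the average low-tax destination.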
Figure 1.
Effect Sizes in the Literature
This figure plots coefficients and confidence bands of the semielasticity of log unit price for intragroup exports for past studies and our sample. The studies are ordered on the $x$-axis by the midyear of their sample period. Source: Author's calculation based on studies cited in online appendix table A.6.
There are many differences that could explain the heterogeneity in estimates across the studies. First, it is important to note that for a given firm, the semielasticity of intragroup prices (and, more generally, the semielasticity of reported profits) is not an immutable parameter but depends critically on the tax system. Features of the tax system, including the corporate tax base, the taxation of foreign profits, the extent of integration between the corporate and personal tax bases, and the strength of antiavoidance regulations, all play a role in determining the net benefit from transfer mispricing. For example, credits on corporate income taxes are passed through to shareholders only if a domestic tax has been paid at the corporate level, which gives French firms an incentive to report income domestically and alleviates outward profit shifting from France (Clausing, 2003). Given that existing papers cover several countries (France, Denmark, the United States, and the United Kingdom), one reason for the relatively wide range of estimates may be genuine differences in the tax sensitivity of MNCs' transfer pricing across these countries, reflecting differences in the underlying corporate and overall tax systems.
Another potential explanation is omitted variable bias due to differences in specifications across papers. We study this in detail in section E of the online appendix, showing that this channel can account for some of the differences in estimates (table E.1). As a first pass, table A.9, also in the online appendix, shows the coefficients obtained for our baseline estimation when gradually adding more fixed effects that control for alternative confounding factors of the tax effect. Doing so varies our main coefficient of interest between 0.7% and 6.1%.
Differences in the empirical approach used in different studies, including whether they distinguish related-party from arm's-length trade, how related-party trade is measured, the type of variation exploited (cross section versus time series), and the tax rate variable used to measure the incentive for profit shifting, can each lead to differences in the estimates. For example, Davies et al. (2018) use a precise measure of related-party trade but rely on a cross-section of French firms in a single year for identification. Our paper follows Cristea and Nguyen (2016) and proxies related-party trade by the presence of a majority-owned affiliate. Despite the differences in the data employed, our estimates are quite comparable to the preferred estimates in Bernard et al. (2006), whose table 5 shows price effects between 1.6% and 4.2% per percentage point of tax rate difference.17 Moreover, with the same specification, estimates based on an effective tax rate measure are often two to three times smaller than those based on the statutory tax rate in Bernard et al. (2006). This also highlights the importance of using statutory rates for profit-shifting analysis: effective tax rates in part reflect endogenous choices made by firms, including the amount of profit shifted. In contrast, statutory tax rates are determined by governments and are thus generally exogenous to the firm's decisions, making them a more credible source of identification (Dharmapala, 2014).
Table 3.
Effect of the Tax Differentials on Transfer Pricing by UK MNCs: Tax Reform
| $AFF_{ij(t)} \times$ | (1) | (2) | (3) |
|---|---|---|---|
| $\Delta\tau_{jt} \times I_{low,t}$ | −0.027** | −0.027** | −0.028** |
| | (0.011) | (0.011) | (0.011) |
| $\Delta\tau_{jt} \times I_{high,t}$ | −0.000 | −0.001 | −0.000 |
| | (0.006) | (0.006) | (0.006) |
| $Post_t$ | 0.132*** | 0.130*** | 0.131*** |
| | (0.043) | (0.043) | (0.044) |
| $\Delta\tau_{jt} \times I_{low,t} \times Post_t$ | −0.015*** | −0.015*** | −0.015*** |
| | (0.005) | (0.005) | (0.005) |
| $\Delta\tau_{jt} \times I_{high,t} \times Post_t$ | −0.008 | −0.008 | −0.008 |
| | (0.007) | (0.007) | (0.007) |
| $\ln GDPPC_{jt}$ | | −0.046 | 0.004 |
| | | (0.135) | (0.090) |
| Adjusted $R^2$ | 0.91 | 0.91 | 0.91 |
| $N$ | 315,330 | 312,274 | 312,274 |
This table presents our results on the 2009 tax reform, based on equation (2). $Post_t$ is a dummy indicator equal to 0 until 2008 and equal to 1 from 2010 onward. All other variables are defined in table 2. Standard errors clustered by country-year pairs are in parentheses. Significant at ***1%, **5%, and *10%.
Table 4.
Effect of the Tax Differentials on Trade Diversion by UK Multinationals
Dependent variable: $\ln(Weight)$ in columns 1–2; $\ln(UnitPrice)$ in columns 3–4; $\ln(TotalValue)$ in columns 5–6.

| $AFF_{ij} \times$ | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| $\Delta\tau_{jt} \times I_{low,t}$ | −0.034* | −0.016 | −0.029*** | −0.027** | −0.063*** | −0.043** |
| | (0.020) | (0.020) | (0.011) | (0.011) | (0.023) | (0.022) |
| $\Delta\tau_{jt} \times I_{high,t}$ | −0.010 | −0.004 | −0.007 | −0.001 | −0.018* | −0.005 |
| | (0.009) | (0.011) | (0.006) | (0.006) | (0.010) | (0.012) |
| $Post_t$ | | 0.120 | | 0.130*** | | 0.249** |
| | | (0.101) | | (0.043) | | (0.102) |
| $\Delta\tau_{jt} \times I_{low,t} \times Post_t$ | | 0.018** | | −0.015*** | | 0.003 |
| | | (0.009) | | (0.005) | | (0.010) |
| $\Delta\tau_{jt} \times I_{high,t} \times Post_t$ | | −0.001 | | −0.008 | | −0.009 |
| | | (0.013) | | (0.007) | | (0.013) |
| $\ln GDPPC_{jt}$ | 0.381 | 0.465 | −0.058 | −0.046 | 0.323 | 0.420 |
| | (0.292) | (0.291) | (0.133) | (0.135) | (0.304) | (0.292) |
| Adjusted $R^2$ | 0.92 | 0.92 | 0.91 | 0.91 | 0.89 | 0.89 |
| $N$ | 384,525 | 312,274 | 384,525 | 312,274 | 384,525 | 312,274 |
This table presents regression results on the effect of the tax differential on the quantity of exports (columns 1 and 2), the transfer prices (columns 3 and 4), and the total value of exports (columns 5 and 6) by UK multinationals. All other variables are defined in table 2. Standard errors clustered by country-year pairs are in parentheses. Significant at ***1%, **5%, and *10%.
Table 5.
Heterogeneous Transfer Mispricing in R&D
| $AFF_{ij} \times$ | (1) | (2) | (3) | (4) |
|---|---|---|---|---|
| $\Delta\tau_{jt} \times I_{low,t}$ | | | −0.032 | −0.034 |
| | | | (0.028) | (0.030) |
| $\Delta\tau_{jt} \times I_{low,t} \times R\&D_{low,i}$ | −0.009 | −0.031* | | |
| | (0.015) | (0.018) | | |
| $\Delta\tau_{jt} \times I_{low,t} \times R\&D_{medium,i}$ | 0.000 | −0.023 | 0.014 | 0.016 |
| | (0.017) | (0.019) | (0.015) | (0.015) |
| $\Delta\tau_{jt} \times I_{low,t} \times R\&D_{high,i}$ | −0.063*** | −0.086*** | −0.043* | −0.040* |
| | (0.017) | (0.023) | (0.024) | (0.023) |
| $\Delta\tau_{jt} \times I_{low,t} \times Size_{medium,i}$ | | 0.034** | | −0.021 |
| | | (0.017) | | (0.026) |
| $\Delta\tau_{jt} \times I_{low,t} \times Size_{large,i}$ | | 0.024 | | −0.015 |
| | | (0.022) | | (0.022) |
| $\Delta\tau_{jt} \times I_{low,t} \times Diff_i$ | | | 0.019 | 0.033 |
| | | | (0.034) | (0.032) |
| Adjusted $R^2$ | 0.91 | 0.91 | 0.91 | 0.91 |
| $N$ | 384,525 | 384,525 | 328,941 | 318,484 |
This table shows results on R&D intensity and transfer pricing. R&D intensity and size indicators refer to terciles of the distribution of firm-level R&D expenses over sales and of tangible assets, respectively. Additional controls are $\ln GDPPC_{jt}$ and $AFF_{ij} \times \Delta\tau_{jt} \times I_{high,t}$. $Diff_i$ indicates if a product is differentiated. All other variables are defined in table 2. Standard errors clustered by country-year pairs are in parentheses. Significant at ***1%, **5%, and *10%.
To summarize, our coefficient estimates are substantially larger than those estimated for France and Denmark in the studies cited above and, depending on the interpretation, are more comparable to those estimated by Clausing (2003) and Bernard et al. (2006) for the United States. We hope that future research will shed more light on the question of how much of this heterogeneity in coefficients is driven by genuine differences in the aggressiveness of transfer pricing across countries of varying tax systems and how much of it can be explained by the empirical methods employed in earlier studies.
### B. The Territorial Tax Reform
#### Graphical evidence.
Figure 2 provides graphical evidence on the effects of the tax reform on average unit price residuals.18 Panel A depicts mean residual prices over time for low-tax destinations, separately for related-party and arm's-length transactions. Before the tax reform, these residual prices show similar trends. In 2010, the average related-party price residual drops substantially, whereas the average arm's-length residual is unaffected. In 2011, the related-party residual rebounds but remains below its prereform level. A similar pattern holds in panel B, which compares price residuals for related-party exports between low-tax and high-tax countries. While the pattern is slightly less clear-cut than in panel A, there is a strong drop in residual prices in 2010 with a partial recovery in 2011. This graphical evidence suggests that the tax reform increased transfer mispricing by UK MNCs. In the following, we test this relationship more formally by estimating equation (2).
Figure 2.
Log Unit Price Residuals and the Tax Reform
This figure shows mean log unit price residuals by year. Panel a shows mean residuals for low-tax destinations separately for related party and arm's-length transactions. Panel b shows mean residuals for related-party trade separately for low-tax and high-tax destinations.
#### Main results.
Table 3 presents our regression results on the territorial tax reform. Column 1 shows that the extent of profit shifting through transfer mispricing is larger under the territorial tax system. Before the reform, on average, a 1 percentage point increase in the tax difference led to a 2.7% decrease in the price of related-party exports relative to the price of arm's-length exports. After 2009, the tax effect is more pronounced, reducing the relative export price for low-tax destinations by an additional 1.5% per percentage point of tax difference. The increase in the strength of transfer mispricing following the UK tax reform is significant at the 1% level.19 Column 2 adds the interaction term between destination country per capita GDP and the related-party dummy indicator. Column 3 tests the robustness of the results by replacing $AFF_{ij}$ with the time-varying ownership indicator $Aff_{ijt}$. The results remain very similar.
#### Placebo test.
The identification of the effect of the territorial tax reform on transfer mispricing rests critically on the assumption that there are no differential changes in the pricing behavior of the two comparison groups prior to the reform, other than the main supply and demand factors that are already controlled for with the full set of fixed effects. We perform a placebo test to check this assumption, restricting the data sample to the prereform period of 2005 to 2008. We assume a counterfactual year for the switch in the tax regime, captured by the $Post_t$ dummy indicator that takes the value of 1 for all years after 2006, 2007, and 2008 in columns 1 to 3 of table A.2, respectively. We estimate equation (2) on this restricted sample and report the results in table A.2. The estimated coefficients concerning the effect of the tax reform ($\beta_3$ and $\beta_4$) are not statistically different from 0.
This placebo test also helps us assess the potential bias that the time-invariant ownership status imposes on the triple interaction term with the tax policy change indicator. Misclassifying arm's-length pricing as related-party pricing should be more frequent in the first part of the sample than in later years, given that the network of foreign affiliates in later periods is more likely to resemble the network observed in 2015. If the estimated effect of the tax reform merely reflected a gradual improvement in the measurement of related-party trade over time, the placebo coefficients would differ from 0 in the prereform periods; instead, they are 0 in all of them in table A.2.
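A hypothetical sketch of the placebo loop, reusing the variables built in section III (the exact timing convention for the counterfactual $Post_t$ is our assumption):

```python
pre = df[df["year"] <= 2008]
for placebo_year in (2006, 2007, 2008):
    sample = pre.copy()
    sample["post"] = (sample["year"] >= placebo_year).astype(float)
    sample["b3_var"] = sample["b1_var"] * sample["post"]
    sample["b4_var"] = sample["b2_var"] * sample["post"]
    # Re-estimate equation (2) on this prereform sample; the placebo passes
    # if the "reform" coefficients are statistically indistinguishable from 0.
```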
### C. Trade Creation and Quantification
Table 4 examines the effect of the tax differential on the quantity and value of exports by UK MNCs. The dependent variable in columns 1 and 2 of table 4 is the quantity of exports measured by weight, whereas columns 3 to 4 and 5 to 6 focus on the unit price and the total value of transactions, respectively.
Column 1 shows that overall, there is a weak negative effect of the tax differential interacted with affiliate status on export quantities, which is significant at the 10% level. While this negative correlation goes against the trade creation channel (prediction 3), results concerning the tax reform in column 2 provide some evidence in favor of the trade creation mechanism: the interaction with the postreform dummy is positive and highly significant. That is, UK MNCs increased their related-party export quantities to low-tax countries, in line with their profit-shifting incentives, following the reform. For ease of comparison, columns 3 and 4 reproduce the main results on transfer prices in, respectively, table 2 column 4, and table 3 column 2.
Due to the offsetting price and quantity effects on total export value, column 5 shows that the overall effect of the tax differential on the value of related-party exports is negative. Column 6 disentangles the effect before and after the tax reform, showing that a 1 percentage point increase in the tax differential on average depresses the value of intrafirm exports to low-tax countries by 4.3% relative to arm's-length exports prior to the reform. The effect of the tax differential on total related-party export values to low-tax countries did not change significantly after the tax reform. In summary, while we find mixed evidence on the trade creation channel, the postreform quantity response is intriguing and merits further study.20
#### Quantification of effects.
We now discuss the quantitative importance of our findings, computing estimates of total shifted profits and forgone tax revenues to the United Kingdom based on our preferred coefficient estimates in table 3, column 3. Specifically, we calculate total shifted profits as
$$\sum_{c=1}^{C}\left(\hat{\beta}_{1}+\hat{\beta}_{3}\right)\times I_{low,c}\times\Delta\tau_{c}\times exp_{c},$$
(3)
where $\hat{\beta}_1$ and $\hat{\beta}_3$ are the coefficient estimates from equation (2) (estimated at $-0.028$ and $-0.015$, respectively), $exp_c$ is the volume of related-party exports to country $c$, and $\Delta\tau_c$ is the tax difference between the United Kingdom and country $c$.
We estimate that in 2010, UK multinationals shifted about 840.97 million pounds toward low-tax jurisdictions via transfer mispricing, with Ireland being the top destination. At the 2010 tax rate of 28%, this finding implies forgone tax revenues of 235.5 million pounds due to transfer mispricing in exports by UK MNCs in manufacturing. The forgone tax revenues represent about 7.8% of the total corporate tax revenue collected from the UK MNCs in manufacturing in 2010. As a share of total corporate income tax revenues, our estimates are comparable to those of Davies et al. (2018), who calculate that French firms would have paid about 1% (333 million euros out of 36 billion euros) more corporate income tax in the absence of tax-motivated transfer mispricing.
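A small numeric sketch of equation (3) follows; the country aggregates below are illustrative placeholders, not the paper's data, and only the two coefficient estimates come from table 3.

```python
beta1_hat, beta3_hat = -0.028, -0.015  # estimates from equation (2)

# Hypothetical related-party exports (GBP millions) and tax wedge
# (percentage points) for a handful of low-tax destinations.
low_tax_flows = {"A": (800.0, 15.5), "B": (500.0, 3.0), "C": (250.0, 8.0)}

semi = abs(beta1_hat + beta3_hat)  # price response per percentage point
shifted = sum(semi * wedge * exports for exports, wedge in low_tax_flows.values())
forgone = 0.28 * shifted           # 2010 UK statutory CIT rate of 28%
print(f"Shifted profits: GBP {shifted:.0f}m; forgone revenue: GBP {forgone:.0f}m")
```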
## VI. Heterogeneous Effects in Transfer Mispricing
This section presents evidence for heterogeneity in transfer mispricing across countries with different tax rates and tax haven statuses and across firms with different R&D intensities.
### A. Transfer Mispricing and the Destination Country
#### Tax rate groups.
We first study whether transfer mispricing is concentrated in the lowest-tax destinations. For this analysis, we split the estimation sample into quintile bins based on the difference in tax rates between the United Kingdom and the destination country. We then replace our main variable of interest ($\Delta\tau_{jt} \times I_{low,t} \times AFF_{ij}$) by interactions with dummy indicators for each quintile ($Q_{\tau} \times I_{low,t} \times AFF_{ij}$). Results are presented in panel A of figure 3. The left $y$-axis shows the estimated coefficients with the corresponding 90% confidence intervals; the right $y$-axis shows the fraction of countries that changed their CIT rate at least once within each quintile. The extent of transfer price manipulation is roughly proportional to the tax difference, although for countries with the largest tax wedge, the standard error is large and the tax effect is not statistically different from 0. Panel B splits countries into tax havens and non-tax havens; again, we find significant effects only for nonhaven countries. For tax havens, there is so little variation (i.e., changes in tax rates) in the data that we are not able to estimate coefficients for the two groups with the highest tax rate differences.
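A hypothetical sketch of the binned specification (same assumed columns as in the earlier sketches):

```python
import pandas as pd

low = df[df["low_tax"] == 1].copy()
low["wedge_q"] = pd.qcut(low["tax_diff"], q=5, labels=False)  # quintiles 0..4
for q in range(5):
    low[f"q{q}_aff"] = (low["wedge_q"] == q).astype(float) * low["affiliate"]
# Demean ln_price and q0_aff..q4_aff over the fixed-effect groups and run
# OLS as before; each coefficient traces the tax effect within its quintile.
```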
Figure 3.
Nonlinear Transfer Mispricing in Low-Tax Countries
This figure plots the point estimate of the tax coefficient $\beta_1$ as in equation (1) and the corresponding 90% confidence intervals at each quintile of the tax wedge $\Delta\tau_{jt}$ in the low-tax countries. Panel a also shows the fraction of countries that changed tax rates at least once on the right $y$-axis. Panel b shows the results separately for haven and nonhaven countries. The $x$-axis denotes the average value of the tax wedge in each quintile.
#### Tax haven status.
In a recent study on transfer mispricing, Davies et al. (2018) found that price manipulation by French firms is concentrated in trade with tax havens and very low-tax countries. We test to what extent the same patterns hold for UK MNCs by splitting the sample into tax havens and countries that are not tax havens, following the classification in Hines (2005).21 Results are presented in columns 1 and 2 of table A.3. Interestingly, we find significant effects for nonhaven countries but no significant effects for the tax-haven-only sample. These results remain unchanged when pooling the data and interacting the variable of interest with the tax haven indicator (column 3).
The fact that we do not find any effects for tax havens is not necessarily inconsistent with Davies et al. (2018), given the different empirical strategies used to identify the effects of taxes. Our empirical strategy relies on variation in tax rates over time. Given that tax rates were already quite low in most tax havens at the beginning of our sample (and have remained low throughout), there was limited variation in the tax differential within that set of countries, making it difficult to identify our coefficient of interest for them. In fact, 67% of countries that were classified as tax havens in Hines (2005) did not experience any change in their statutory CIT rate during our sample period, while only 32% of nonhaven countries had no change in their CIT rate (online appendix table A.7). Davies et al. (2018), in contrast, exploited cross-sectional variation in France, allowing them to identify effects even for countries with no change in their tax rates in recent years.22 Columns 4 to 9 verify that our results are mainly driven by transfer mispricing in nonhaven countries, using the alternative lists of tax havens in Dharmapala and Hines (2009) and OECD (2000), as well as the exact set of tax haven countries used in Davies et al. (2018).
To summarize, our results show substantial transfer price manipulation by UK MNCs in exports to non-tax-haven countries with low to intermediate CIT rates.
### B. Transfer Mispricing and R&D Intensity
Do firms that undertake more investment in R&D engage in more transfer price manipulation? A priori, the relation could go either way. On the one hand, R&D increases the intangible capital of a firm, some of which can be allocated to low-tax jurisdictions to facilitate profit shifting. On the other hand, R&D can make a firm's products more specialized, which makes finding comparable prices harder and in turn makes it easier to shift profits through transfer mispricing.
Column 1 of table 5 presents the results from a regression that interacts $\Delta\tau_{jt} \times I_{low,t} \times AFF_{ij}$ with three indicators of R&D intensity based on average firm-level R&D spending in the sample.23 The results suggest that transfer mispricing is concentrated in firms with the highest R&D intensity. The coefficient for this group is highly significant and roughly double the size of the average baseline effect estimated earlier. In contrast, there is no evidence of systematic transfer price manipulation by firms outside the highest R&D group. However, given the large standard errors for the other two coefficients, we cannot reject the null hypothesis that the three coefficients are statistically equal to one another. Nonetheless, the findings suggest that R&D makes goods more specific, facilitating profit shifting through mispricing.
It is plausible that large companies are more likely to invest in R&D, so that indicators of R&D intensity may be highly correlated with firm size. Column 2 therefore includes both sets of interaction terms and shows that, controlling for firm size, companies with the highest R&D intensity strongly manipulate their transfer prices.24 Column 3 instead controls for the type of goods based on the classification in Rauch (1999) by adding an interaction term between a dummy indicator that distinguishes between homogeneous and differentiated goods and the main variable of interest $\Delta\tau_{jt} \times I_{low,t} \times AFF_{ij}$. Column 4 includes both the firm-size and the goods-type interactions as controls. The basic finding that the most R&D-intensive firms manipulate their transfer prices more remains unchanged in these alternative specifications.
## VII. Conclusion
In this paper, we use linked trade-tax administrative records on UK multinationals in manufacturing to estimate the extent of tax-motivated transfer mispricing in exports of real goods. Our findings suggest that, on average, a 1 percentage point tax difference reduces related-party export prices to low-tax countries by 3% relative to the prices charged at arm's length. The extent of tax-motivated transfer mispricing has increased under the post-2009 territorial tax regime, is present in non-tax-haven countries with relatively low and medium tax rates, and is substantially larger in R&D-intensive firms.
The new evidence on transfer mispricing has several implications for policy and future research. First, given our findings, tax authorities should keep paying attention to transfer pricing in tangible goods as an area of revenue leakage. Second, our observation that transfer mispricing is concentrated in the most R&D-intensive firms provides tax authorities with useful guidance for risk assessment. Third, our evidence that transfer mispricing incentives are stronger under a territorial system highlights a key cost of moving away from worldwide taxation.25 Finally, in contrast to earlier research on France by Davies et al. (2018), we find evidence for transfer mispricing in exports to countries that are not tax havens. Our results imply that policymakers should not focus on tax havens alone but should also pay attention to other low-tax and medium-tax countries as destinations for profit shifting.
## Notes
1
See, among others, Harris, Morck, and Slemrod (1993), Hines and Rice (1994), and Desai, Foley, and Hines (2006) for evidence of general profit shifting by MNCs to low-tax countries. Heckemeyer and Overesch (2017) and Beer, de Mooij, and Liu (2020) review recent empirical evidence on profit shifting and provide consensus estimates of the magnitude of the semielasticity of reported profits by MNCs in response to an international tax differential of around 0.8 and 1, respectively.
2
The worldwide approach taxes the worldwide income of MNCs, typically with a nonrefundable credit for foreign taxes paid, and liability to domestic tax being deferred until dividends are paid from the foreign subsidiary to the parent company in the home country. The territorial approach does not tax foreign earnings of MNCs in the home country.
3
The move in the United States is subject to important caveats, including a one-time transition tax on unrepatriated profits and a minimum tax on overseas income in excess of a 10% return on tangible assets abroad.
4
The classifications of tax haven countries are based on Hines (2005), Dharmapala and Hines (2009), and OECD (2000) (see Table A.4 for the full list), and do not imply endorsement by the IMF.
5
The arm's-length principle is established in Article 9 (1) of the OECD Model Double Tax Treaty.
6
For example, Riedel, Zinn, and Hofmann (2015) show that transfer pricing rules raise (lower) reported operating profits of high-tax (low-tax) affiliates and reduce the sensitivity of their pretax profits to corporate tax rate changes. Transfer pricing regulation may also lower real investment by MNCs (de Mooij & Liu, 2020).
7
This statement only applies to active income of foreign affiliates, which is essentially earnings through business activity. Taxes on passive income, such as investment income or royalty income, are typically due when the income is earned.
8
We implicitly assume that the share of intrafirm trade of an MNC to a country where it has a subsidiary is independent of tax rate changes. While there are no data to directly test this assumption, our second identification strategy that relies on the 2009 tax reform does not depend on this assumption.
9
That is, $Post_t$ is equal to 0 until 2008 and equal to 1 from 2010 onward.
10
Appendix D in the online appendix provides a detailed description of the data sources, the matching procedure, and the summary statistics for the sample.
11
Domestic companies include stand-alone companies, parent companies of a domestic group with all subsidiaries in the United Kingdom, subsidiaries of a domestic group, and firms with no match in FAME.
12
Interestingly, for France, Davies et al. (2018), using data with direct information on related-party trade, found very similar results to Vicard (2015), who proxied related-party trade through affiliate information.
13
Given that we include an extensive set of fixed effects in the baseline regression, we use the firm and destination-country characteristics mainly to replicate and compare with specifications from existing studies on transfer pricing in online appendix section E.
14
Table D.1 in online appendix D reports summary statistics for the full data set.
15
Note that this share represents an upper bound of the actual share of related-party trade as MNCs may also be selling at arm's length to destinations where they have a majority-owned affiliate.
16
Table 2 presents results based on the full sample, including the pre- and postreform period. Results are robust to restricting the sample to the prereform period as shown in online appendix table A.2, column 1. In the placebo exercise presented there, the coefficient for low-tax destinations is unchanged from the baseline in table 2.
17
When adding more controls and product-fixed effects, their coefficient estimates decline to values between 0.6% and 6.1%.
18
As the raw price data are very noisy and there are many sources of price heterogeneity across firms, products, destinations, and time, controls include all three-way fixed effects from equation (2).
19
This finding is consistent with existing studies based on OECD countries that establish that firms with worldwide parents tend to shift less income than firms with territorial parents (Markle, 2016). Given that we only have two years of postreform data, we are unable to examine in depth the dynamics of transfer mispricing under the territorial tax regime.
20
In a recent paper, Lassmann and Zoller-Rydzek (2019) indeed provide further empirical support for the trade creation channel, employing detailed data from Switzerland.
21
This is the same classification used in Davies et al. (2018). Online appendix table A.4 lists the countries that are classified as tax havens in Hines (2005), Dharmapala and Hines (2009), and OECD (2000), whereas online appendix table A.5 lists the tax haven countries that are included in our estimation sample under the various definitions of tax havens.
22
An economic factor limiting transfer pricing to tax havens is that trade volumes with tax havens are not that large (they have declined substantially in our sample period and represent slightly over 10% of UK exports in manufacturing since 2008). To the extent that there is sizable profit shifting to tax havens, it must therefore happen through channels other than transfer mispricing in real goods (e.g., transfer mispricing on service trade and intangibles).
23
We compute a time-invariant measure of firm-level R&D intensity as the ratio between total qualifying R&D expenditure and total turnover during the sample period. We then group firms by their R&D intensity into low, medium, and high categories.
24
Indicators of firm size are defined based on the tercile of the distribution of firm-level fixed assets in the sample. The correlation between the levels of R&D intensity and fixed assets is $-0.01$, suggesting that collinearity should not be a major concern here. This low correlation between R&D intensity and firm size is in line with US evidence in Cohen, Levin, and Mowery (1987).
25
This finding does not necessarily imply that worldwide taxation is preferable to territorial taxation, as the latter may have other desirable effects such as increasing the efficiency of outbound investment allocation.
## REFERENCES
Beer, Sebastian, Ruud A. de Mooij, and Li Liu, "International Corporate Tax Avoidance: A Review of the Channels, Magnitudes, and Blind Spots," Journal of Economic Surveys 34 (2020), 660–688.

Bernard, Andrew B., J. Bradford Jensen, and Peter K. Schott, "Transfer Pricing by U.S.-Based Multinational Firms," NBER working paper 12493 (2006).

Clausing, Kimberly A., "Tax-Motivated Transfer Pricing and US Intrafirm Trade Prices," Journal of Public Economics 87 (2003), 2207–2223.

Cohen, Wesley M., Richard C. Levin, and David C. Mowery, "Firm Size and R&D Intensity: A Re-Examination," Journal of Industrial Economics 35 (1987), 543–565.

Collins, Julie, Deen Kemsley, and Mark Lang, "Cross-Jurisdictional Income Shifting and Earnings Valuation," Journal of Accounting Research 36 (1998), 209–229.

Cristea, Anca, and Daniel Nguyen, "Transfer Pricing by Multinational Firms: New Evidence from Foreign Firm Ownerships," American Economic Journal: Economic Policy 8 (2016), 170–202.

Davies, Ronald B., Julien Martin, Mathieu Parenti, and Farid Toubal, "Knocking on Tax Haven's Door: Multinational Firms and Transfer Pricing," this review 100 (2018), 120–134.

de Mooij, Ruud, and Li Liu, "At a Cost: The Real Effect of Transfer Pricing Regulation on Multinational Investment," IMF Economic Review 68 (2020), 268–306.

Desai, Mihir A., C. Fritz Foley, and James R. Hines, "The Demand for Tax Haven Operations," Journal of Public Economics 90 (2006), 513–531.

Dharmapala, Dhammika, "What Do We Know about Base Erosion and Profit Shifting? A Review of the Empirical Literature," Fiscal Studies 35 (2014), 421–448.

Dharmapala, Dhammika, and James R. Hines, "Which Countries Become Tax Havens?" Journal of Public Economics 93 (2009), 1058–1068.

Flaaen, Aaron, "The Role of Transfer Prices in Profit-Shifting by U.S. Multinational Firms: Evidence from the 2004 Homeland Investment Act," Board of Governors of the Federal Reserve System working paper 2017-055 (2017).

Grubert, Harry, and John Mutti, "Taxes, Tariffs and Transfer Pricing in Multinational Corporate Decision Making," this review 73 (1991), 285–293.

Harris, David, Randall Morck, and Joel B. Slemrod, "Income Shifting in US Multinational Corporations" (pp. 277–308), in Alberto Giovannini, R. Glenn Hubbard, and Joel Slemrod, eds., Studies in International Taxation (Chicago: University of Chicago Press, 1993).

Hebous, Shafik, and Niels Johannesen, "At Your Service! The Role of Tax Havens in International Trade with Services," CESifo working paper series 5414 (2015).

Heckemeyer, Jost H., and Michael Overesch, "Multinationals' Profit Response to Tax Differentials: Effect Size and Shifting Channels," Canadian Journal of Economics 50 (2017), 965–994.

Hines, James R., "Do Tax Havens Flourish?" Tax Policy and the Economy 19 (2005), 65–99.

Hines, James R., and Eric M. Rice, "Fiscal Paradise: Foreign Tax Havens and American Business," Quarterly Journal of Economics 109 (1994), 149–182.

Lassmann, Andrea, and Benedikt Marian Maximilian Zoller-Rydzek, "Decomposing the Margins of Transfer Pricing," KOF working papers 450 (2019).

Markle, Kevin, "A Comparison of the Tax-Motivated Income Shifting of Multinationals in Territorial and Worldwide Countries," Contemporary Accounting Research 33 (2016), 7–43.

OECD, "Towards Global Tax Co-operation: Progress in Identifying and Eliminating Harmful Tax Practices," Progress Report to the G20 (2000).

Rauch, James E., "Networks versus Markets in International Trade," Journal of International Economics 48 (1999), 7–35.

Riedel, Nadine, Theresa Zinn, and Patricia Hofmann, "Do Transfer Pricing Laws Limit International Income Shifting? Evidence from Europe," CESifo working paper series 4404 (2015).

Vicard, Vincent, "Profit Shifting through Transfer Pricing: Evidence from French Firm Level Trade Data," Banque de France working paper 555 (2015).

Zucman, Gabriel, "Taxing across Borders: Tracking Personal Wealth and Corporate Profits," Journal of Economic Perspectives 28 (2014), 121–148.
## Author notes
We thank Rosanne Altshuler, Steve Bond, Ron Davies, Ruud de Mooij, Michael Devereux, Dhammika Dharmapala, Jim Hines, Michael Keen, Niels Johannesen, Julien Martin, Joel Slemrod, Johannes Voget, and seminar participants at the IMF, Federal Reserve Board, HM Treasury, University of Rutgers, Michigan, Oxford, and CESifo Summer Institute 2018 for helpful comments. We acknowledge financial support from the ESRC under grant ES/L000016/1. We also thank the staff at Her Majesty's Revenue and Customs' Datalab for access to the data and their support of this project. This work contains statistical data from HMRC, which is under Crown copyright. The research data sets used may not exactly reproduce HMRC aggregates. The use of HMRC statistical data in this work does not imply the endorsement of HMRC in relation to the interpretation or analysis of the information. All results have been screened by HMRC to ensure confidentiality is not breached. The views expressed are our own and do not necessarily represent the views of the IMF, its Executive Board, IMF management, the Federal Open Market Committee, its principals, or the Board of Governors of the Federal Reserve System.
A supplemental appendix is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/rest_a_00871.
http://math.stackexchange.com/questions/110571/how-can-we-prove-that-the-subset-of-all-invertible-matrices-is-open
# how can we prove that the subset of all invertible matrices is open? [duplicate]
Possible Duplicate:
Why do the $n \times n$ non-singular matrices form an “open” set?
Consider the space of all $n \times n$ matrices with real entries with the standard metric, i.e., view a matrix as an element of $\mathbb{R}^{n^2}$ and use the usual Euclidean metric on $\mathbb{R}^{n^2}$. I need to prove that the subset of all invertible matrices is open. Any ideas?
## marked as duplicate by Zev Chonoles, Feb 18 '12 at 6:40
$\bf Hint:$ $A$ is invertible iff the determinant is different from zero.
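A standard way to spell out the hint (my addition, not part of the original answer): the determinant is a polynomial in the $n^2$ matrix entries, hence a continuous map $\det : \mathbb{R}^{n^2} \to \mathbb{R}$. Therefore
$$GL_n(\mathbb{R}) = \{A : \det A \neq 0\} = \det^{-1}\big(\mathbb{R}\setminus\{0\}\big)$$
is the preimage of an open set under a continuous map, hence open.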
http://www.openfcst.mece.ualberta.ca/examples/v_03/ohmic/readme.html
# 4. Introduction to AppOhmic¶
## 4.1. Introduction¶
AppOhmic is an openFCST application which is used to study electron transport in porous media. The electron transport, governed by Ohm's law, is solved in the phase of interest using the FEM formulation of the weak form of the equation. The application returns the total electronic flux (current) at the outlet face, which can then be used to calculate the effective conductivity of the medium.
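For instance, if $I$ denotes the computed current through the outlet face of area $A$, $L$ the sample thickness, and $\Delta\phi$ the applied potential difference, the effective conductivity follows from Ohm's law (a standard post-processing step, sketched here rather than quoted from the original documentation):

$$\sigma_{\mathrm{eff}} = \frac{I\,L}{A\,\Delta\phi}$$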
## 4.2. Governing equation¶
The governing equation is Ohm's law for electron transport in the solid phase,

$$\nabla \cdot \left( \sigma_{s,\mathrm{eff}} \, \nabla \phi_s \right) = 0,$$

where $\sigma_{s,\mathrm{eff}}$ is the effective electronic conductivity. The governing equation in the weak form is linear and can be solved directly using a linear solver like UMFPACK or GMRES. The solution variable is the electronic potential, $\phi_s$.
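For reference, multiplying by a test function $v$ and integrating by parts yields the weak form solved by the FEM discretization (a standard derivation, written here under the assumption of homogeneous Neumann data on the non-Dirichlet boundaries):

$$\int_\Omega \sigma_{s,\mathrm{eff}} \, \nabla \phi_s \cdot \nabla v \; d\Omega = 0 \quad \text{for all admissible } v.$$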
## 4.3. Ohmic Example Directory structure¶
The ohmic directory consists of the following folders:
1. template : This folder contains the default files for running all the examples in the other folders. Please do not modify this file, as doing so will result in all tests failing. If you would like to create your own example, either include this file in your simulation using the include command or copy the file to a different location.
2. analysis : This folder contains the main_test.prm and data_test.prm files needed to run a simulation to obtain the electronic potential distribution in the porous media. Note that the data file includes the template file and adds the necessary modifications. The script that tests whether the application and the equation class are running correctly is in the folder regression, together with the default data the test is compared against.
## 4.4. Setting up an ohmic simulation¶
In order to run OpenFCST, two files are needed that provide the necessary information for OpenFCST to execute:

a. A main file: This file is used to select the appropriate a) application, b) problem definition (linear in this case), c) data file name, and d) several less critical parameters.

b. A data file: This file is used to input all the input data used for the simulation for the selected application.
Both these files can either be loaded and modified via the openFCST graphical user interface (GUI) or modified as a text file.
### 4.4.1. Setting up a simulation using the OpenFCST graphical user interface (GUI)¶
If you are using the OpenFCST GUI, you will need to load the .xml files. You can generate an .xml file from a .prm file by calling openFCST as follows:

$ openFCST-3d -c main.prm

openFCST will directly parse the main.prm file and the associated data file (if specified in the main.prm file). If you would like to use the GUI, first launch the GUI by going to Install/bin and typing:

$ ./fcst-gui
Then, select the OpenFCST executable file that you would like the GUI to run, mainly openFCST-2d or openFCST-3d from the /Install/bin folder. Once this has been selected, the following screen will appear
At this point, you can load your main.xml and data.xml files. Go to File> Open Project... and select the main.xml and data.xml files. If you are planning on running an optimization simulation, then also load the opt.xml file, otherwise select No to loading a new simulation.
Once main.xml and data.xml files are loaded, the following will appear in the GUI,
At this point, you have several folders. Each folder contains options that you can modify. You can open each folder and, by hovering over the variable with your mouse, a text window will appear that explains the use of each input parameter. In our case, simulation name specifies that you are going to run AppOhmic. Simulator specification is only used for fluid applications, so it is not used here. Solver name specifies if the problem is linear or nonlinear and in the case of a nonlinear problem, the nonlinear solver to be used is selected. Solver method allows the user to use adaptive refinement and global refinement options so that the solution is refined during the solution. Analysis type is used to specify if you would simply like to run one simulation, obtain a polarization curve, perform a parameteric study or run an optimization study. For this application only the analysis mode is available.
To modify simulation data, go to the next tab, i.e. data.xml. The following screen will appear:
In this screen, you can select the most suitable options to run your simulation. The most important folders are:

a. Grid Generation: Specify the mesh you would like to use. You can either read a mesh from file (Type of mesh > External mesh) or have openFCST create the geometry. In this case we use cathode.

b. LinearApplication: Specify the linear solver and options to solve the linearized problem.

c. Equations: Specify the equations you would like to solve, the initial solution, and boundary condition values.

d. Fuel Cell Data: Specify the layer properties.

e. Output
Once your parameters are set, simply press the Run button to launch the simulation. The output will be shown in the black screen to the right. The files in the directory also appear in the bottom right corner. Configure Paraview to open the .vtu files to analyze the output.
### 4.4.2. Setting up a simulation using a text (.prm) file¶
If, instead of using the GUI, you would like to look at the files using a text editor, the .prm files are more convenient. As discussed, the main.prm file is the argument file supplied to the OpenFCST executable. The main.prm file should look like this:
######################################################################
# $Id: main.prm 2011-02-09 secanell$
#
# This file is used to simulate app_diffusion and obtain a
# a concentration profile
#
#
# Copyright (C) 2011 by Marc Secanell
#
######################################################################
subsection Simulator
set simulator name = ohmic
set simulator parameter file name = data.prm
set nonlinear solver name = None
set Analysis type = Analysis
end
The data.prm file for the cathode example is shown below:
######################################################################
# $Id:$
#
# This file is used to simulate app_diffusion and to obtain
# a concentration profile for the species through the domain
# This file will be called by the main.prm file.
#
# Copyright (C) 2011-13 by Marc Secanell
#
######################################################################
###############
subsection Grid generation
set Type of mesh = GridExternal # Cathode | CathodeMPL | GridExternal
set File name = test.vtk
set File type = vtk
set Initial refinement = 0
set Sort Cuthill-McKee = false
end
###############
subsection Initial Solution
set Read in initial solution from file = false
set Output initial solution = false
end
###############
###############
subsection Adaptive refinement
set Refinement = global #global | adaptive
set Number of Refinements = 3
set Output intermediate solutions = false
set Output final solution = true
set Output intermediate responses = false
set Use nonlinear solver for linear problem = false
end
###############
###############
subsection System management
set Number of solution variables = 1
subsection Solution variables
set Solution variable 1 = electronic_electrical_potential
end
subsection Equations
set Equation 1 = Electron Transport Equation
end
end
###############
subsection Equations
subsection Electron Transport Equation
subsection Initial data
set electronic_electrical_potential = 0: 0.1 # where 0 indicates the material_id setup in the grid and 0.1 is the electronic potential in V
end
subsection Boundary data
set electronic_electrical_potential = 1: 0.4, 2:0.01 #where 1 & 2 denote the boundary ids and 0.4 and 0.01 are the electronic potentials in V at the respective boundary
end
end
end
###############
subsection Discretization
set Element = FESystem[FE_Q(1)] #FESystem[FE_Q(3)-FE_Q(1)^2] #FESystem[FE_Q(1)^3] #System of three fe
subsection Matrix
end
subsection Residual
end
end
###############
subsection Fuel cell data
####
subsection Operating conditions
set Temperature cell [K] = 353
set Cathode pressure [Pa] = 101265 # (1 atm)
set Cathode relative humidity = 0.6
end
####
####
subsection Cathode gas diffusion layer
set Gas diffusion layer type = DummyGDL #[ DesignFibrousGDL | DummyGDL | SGL24BA ]
set Material id = 0
####
subsection DummyGDL
set Oxygen diffusion coefficient, [cm^2/s] = 0.22
set Electrical conductivity, [S/cm] = 40
end
####
end
####
end
######################################################################
subsection Output Variables
set Compute boundary responses = true
set num_output_vars = 1
#set Output_var_0 = electronic_electrical_potential
end
######################################################################
subsection Output
subsection Data
set Output format = vtu
set Print solution = true
end
subsection Grid
set Format = eps
end
end
################################
################################
################################
################################
The key disadvantage of using the .prm file is that, for parameters that have options, it is not possible to see the options that are available; therefore, the use of the GUI is strongly suggested for users.
https://www.opuscula.agh.edu.pl/om-vol37iss6art8
Opuscula Math. 37, no. 6 (2017), 887-898
http://dx.doi.org/10.7494/OpMath.2017.37.6.887
Opuscula Mathematica
Oscillation of solutions to non-linear difference equations with several advanced arguments
Sandra Pinelas
Julio G. Dix
Abstract. This work concerns the oscillation and asymptotic properties of solutions to the non-linear difference equation with advanced arguments $x_{n+1}- x_n =\sum_{i=1}^m f_{i,n}( x_{n+h_{i,n}}).$ We establish sufficient conditions for the existence of positive and negative solutions. Then we obtain conditions for solutions to be bounded, to tend to positive infinity, to negative infinity, and to zero. We also obtain conditions for all solutions to be oscillatory.
Keywords: advanced difference equation, non-oscillatory solution.
Mathematics Subject Classification: 39A11.
Full text (pdf)
• Sandra Pinelas
• RUDN University, 6 Miklukho-Maklay St., Moscow, 117198, Russia
• Julio G. Dix
• Department of Mathematics, Texas State University, 601 University Drive, San Marcos, TX 78666, USA
• Communicated by Stevo Stević.
• Revised: 2017-03-11.
• Accepted: 2017-03-12.
• Published online: 2017-09-28.
http://physics.stackexchange.com/questions/33507/quantum-mechanic-newbie-why-complex-amplitudes-why-hilbert-space
# Quantum mechanic newbie: why complex amplitudes, why Hilbert space? [duplicate]
This question already has an answer here:
I'm just starting learning quantum mechanics by myself (2 "lectures" so far) and I was wondering
• why do we need to define quantum states in a complex vector space rather than a real one?
• Also, I was wondering why this vector space has to be a Hilbert space (rather than a pre-Hilbert space)? When do we need the property that the vector space is complete (i.e., every Cauchy sequence converges)?
## marked as duplicate by Qmechanic♦, Mar 28 '14 at 20:20
After a bit of time, though, you might notice that there's a couple of fishy things in that Hilbert space axiom. For one, we put a lot of stock in position and momentum eigenstates, i.e. delta-function and plane-wave wavefunctions, which are strictly speaking not inside the Hilbert space. On the other hand, the normal Hilbert spaces have a number of wavefunctions, such as $$\psi(x)=\frac{1}{\sqrt{\pi}}\frac{1}{\sqrt{1+x^2}}$$ which violate physical intuition in one way or another (this one has infinite position dispersion). The resolution is an amendment to the Hilbert space axiom in terms of rigged Hilbert spaces.
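To see the infinite position dispersion explicitly (a quick check, not part of the original answer): the wavefunction is normalized, $\int_{-\infty}^{\infty} |\psi|^2\,dx = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{dx}{1+x^2} = 1$, yet
$$\langle x^2\rangle = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{x^2}{1+x^2}\,dx = \infty,$$
since the integrand tends to $1/\pi$ as $|x|\to\infty$.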
http://arxitics.com/articles/1904.07214
## arXiv Analytics
### arXiv:1904.07214 [astro-ph.HE]
#### New Binary Black Hole Mergers in the Second Observing Run of Advanced LIGO and Advanced Virgo
Published 2019-04-15, Version 1
We report the detection of new binary black hole merger events in the publicly available data from the second observing run of advanced LIGO and advanced Virgo (O2). The mergers were discovered using the new search pipeline described in Venumadhav et al. (1902.10341), and are above the detection thresholds as defined in Abbott et al. (1811.12907). Three of the mergers (GW170121, GW170304, GW170727) have inferred probabilities of being of astrophysical origin $p_{\rm astro} > 0.98$. The remaining three (GW170425, GW170202, GW170403) are less certain, with $p_{\rm astro}$ ranging from $0.5$ to $0.8$. The newly found mergers largely share the statistical properties of previously reported events, with the exception of GW170403, the least secure event, which has a highly negative effective spin parameter $\chi_{\rm eff}$. The most secure new event, GW170121 ($p_{\rm astro} > 0.99$), is also notable due to its inferred negative value of $\chi_{\rm eff}$, which is inconsistent with being positive at the $\approx 95.8\%$ confidence level. The new mergers nearly double the sample of gravitational wave events reported from O2, and present a substantial opportunity to explore the statistics of the binary black hole population in the Universe. The increase in volume is larger when the constituent detectors of the network have very different sensitivities, as is likely to be the case in current and future runs.
https://socratic.org/questions/how-do-you-solve-the-following-system-x-2y-2-x-3y-12
How do you solve the following system?: $x + 2y = -2$, $x - 3y = -12$
Jun 20, 2018
See a solution process below:
Explanation:
Step 1) Solve each equation for $x$:
• Equation 1:
$x + 2 y = - 2$
$x + 2 y - \textcolor{red}{2 y} = - 2 - \textcolor{red}{2 y}$
$x + 0 = - 2 - 2 y$
$x = - 2 - 2 y$
• Equation 2:
$x - 3 y = - 12$
$x - 3 y + \textcolor{red}{3 y} = - 12 + \textcolor{red}{3 y}$
$x - 0 = - 12 + 3 y$
$x = - 12 + 3 y$
Step 2) Because the left side of both equations are the same we can equate the right sides and solve for $y$:
$- 2 - 2 y = - 12 + 3 y$
$- 2 + \textcolor{b l u e}{12} - 2 y + \textcolor{red}{2 y} = - 12 + \textcolor{b l u e}{12} + 3 y + \textcolor{red}{2 y}$
$10 - 0 = 0 + \left(3 + \textcolor{red}{2}\right) y$
$10 = 5 y$
$\frac{10}{\textcolor{red}{5}} = \frac{5 y}{\textcolor{red}{5}}$
$2 = \frac{\textcolor{red}{\cancel{\textcolor{b l a c k}{5}}} y}{\cancel{\textcolor{red}{5}}}$
$2 = y$
$y = 2$
Step 3) Substitute $2$ for $y$ in the solutions to either of the equations in Step 1 and calculate $x$:
$x = - 12 + 3 y$ becomes:
$x = - 12 + \left(3 \times 2\right)$
$x = - 12 + 6$
$x = - 6$
The Solution Is:
$x = - 6$ and $y = 2$
Or
$\left(- 6 , 2\right)$
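As a quick sanity check (my addition, not part of the original answer), the same system can be solved symbolically:

```python
# Verify the solution of the 2x2 linear system with sympy.
from sympy import symbols, Eq, solve

x, y = symbols("x y")
print(solve([Eq(x + 2*y, -2), Eq(x - 3*y, -12)], [x, y]))
# {x: -6, y: 2}
```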
https://socratic.org/questions/what-instrument-is-used-to-measure-air-pressure#383903
What instrument is used to measure air pressure?
A barometer using mercury is common. The atmospheric pressure presses down on a reservoir of mercury and the mercury is forced up the tube a certain distance depending on the pressure. This is often recorded as $\text{mm Hg}$ or (rarely) $\text{inches Hg}$.
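The column height and the pressure are related by the hydrostatic formula $P = \rho g h$ (a standard relation, added here for context). For mercury, $\rho \approx 1.36\times 10^4\ \mathrm{kg/m^3}$, so standard atmospheric pressure gives
$$h = \frac{P}{\rho g} \approx \frac{101{,}325\ \mathrm{Pa}}{(1.36\times 10^4\ \mathrm{kg/m^3})(9.81\ \mathrm{m/s^2})} \approx 0.76\ \mathrm{m} = 760\ \mathrm{mm\ Hg}.$$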
http://solarenergyengineering.asmedigitalcollection.asme.org/article.aspx?articleid=1458119
# Conceptual Design of a 2× Trough for Use Within Salt and Oil-Based Parabolic Trough Power Plants
Author and Article Information
Gregory J. Kolb, Richard B. Diver
Sandia National Laboratories, MS 1127, Albuquerque, NM 87185-5800; gjkolb@sandia.gov
In black and white, this shows as the lighter of the two columns.
A reviewer of this paper performed an independent analysis using the methodology described in Ref. 17 and found the approaches to agree within 1% for most cases presented in this paper.
The 5.4 mrad total collector error we assess needs to be verified with field test data. However, the spillage we calculate with this error is similar to values used by trough designers and defaulted within EXCELERGY.
A few errors were found in this paper: (1) all cases in Fig. 1 were based on a 5 m aperture, not the stated values of 5.76–8 m; (2) the maximum plant size studied was 250 MW, not the stated value of 200 MW.
Kelly stated that the 150 MW oil plant in Fig. 1 was near the maximum size. The field size was $1.3\times 10^{6}\ \mathrm{m}^2$.
The total cost of the mirrors plus mirror-support structure is assumed to be the same ($/m^2$) for the conventional and 2× trough. As discussed in Sec. 2, mirrors are expected to cost more for the 2× trough, but this may be compensated by a lower-cost mirror-support structure.
J. Sol. Energy Eng 132(4), 041003 (Aug 19, 2010) (6 pages) doi:10.1115/1.4002080 History: Received September 16, 2008; Revised April 21, 2010; Published August 19, 2010; Online August 19, 2010
## Abstract
Recent studies in the United States suggest that parabolic trough levelized energy costs (LECs) can be reduced 10–15% through integration of a large salt energy storage system coupled with the direct heating of molten salt in the solar field. While noteworthy, this relatively small predicted improvement may not justify the increased technical risks. Examples of potential issues include increased design complexity, higher maintenance costs, and salt freezing in the solar field. To make a compelling argument for development of this new system, we believe that additional technical advances beyond those previously reported will be required to achieve significant LEC reduction, greater than 25%. The new technical advances described include the development of a high-concentration trough that has double the aperture and optical concentration of current technology. This trough is predicted to be more cost-effective than current technology because its cost ($/m^2$) and thermal losses ($\mathrm{W/m^2}$) are significantly lower. Recent trough optical performance improvements, such as more accurate facets and better alignment techniques, suggest a 2× trough is possible. Combining this new trough with a new low-melting-point salt now under development suggests that a LEC reduction of $\sim 25\%$ is possible for a 50 MW, 2× salt plant relative to a conventional (1×) 50 MW oil plant. However, the 2× trough will also benefit plants that use synthetic oil in the field. A LEC comparison of 2× plants at sizes $\geq 200\ \mathrm{MW}$ shows only a 6% advantage of salt over oil.
## Figures
Figure 1
LEC comparison of current and future trough technology (2)
Figure 2
Trough-HELIOS prediction of incident flux upon a 0.07 m diameter HCE for a current 5 m aperture trough and a future 10 m trough. System errors are 5.4 mrad for the 5 m trough and 2.5 mrad for the 10 m trough. Trough-HELIOS requires surface pointing-type errors. Beam error = 2 × pointing error. The King sunshape (9) and $1000\ \mathrm{W/m^2}$ insolation were assumed.
Figure 3
Photograph showing the impact of mirror mount distortion on curvature of an LS-2 mirror (a). (b) is the same mirror with the mirror mount loosened.
Figure 4
Sandia 10 kWe dish/string system dish system. The structural honeycomb facets have slope errors in the range of 8–1.4 mrad.
Figure 5
The TOP alignment system can potentially reduce facet alignment errors to 0.5 mrad
Figure 6
Solar flux tracker for parabolic trough collectors
https://physics.stackexchange.com/questions/190066/why-cant-charge-be-in-a-stable-equilibrium-in-electrostatic-field
# Why can't charge be in a stable equilibrium in electrostatic field?
I am reading Feynman's Lectures volume_II where he discussed the impossibility of the presence of stable equilibrium in an electrostatic field.
There are no points of stable equilibrium in any electrostatic field—except right on top of another charge. Using Gauss’ law, it is easy to see why. First, for a charge to be in equilibrium at any particular point $P_0$, the field must be zero. Second, if the equilibrium is to be a stable one, we require that if we move the charge away from $P_0$ in any direction, there should be a restoring force directed opposite to the displacement. The electric field at all nearby points must be pointing inward—toward the point $P_0$. But that is in violation of Gauss’ law if there is no charge at $P_0$, as we can easily see.
Fig. 5–1. If $P_0$ were a position of stable equilibrium for a positive charge, the electric field everywhere in the neighborhood would point toward $P_0$. Consider a tiny imaginary surface that encloses $P_0$, as in Fig. 5–1. If the electric field everywhere in the vicinity is pointed toward $P_0$, the surface integral of the normal component is certainly not zero. For the case shown in the figure, the flux through the surface must be a negative number. But Gauss' law says that the flux of electric field through any surface is proportional to the total charge inside. If there is no charge at $P_0$, the field we have imagined violates Gauss' law. It is impossible to balance a positive charge in empty space—at a point where there is not some negative charge. A positive charge can be in equilibrium if it is in the middle of a distributed negative charge. Of course, the negative charge distribution would have to be held in place by other than electrical forces.
I am having problems with the explanation, especially the bold lines. It is understandable that, to be in equilibrium at $P_0$, the force, i.e. the field, must be zero, and there must be a restoring force in order to bring the charge back to $P_0$. Feynman then wrote that, as there is no charge, there cannot be any net flux, as is evident from the bold lines. Confusion! Where did the charge go that he was discussing so far? He only displaced the charge, previously at $P_0$, away from $P_0$. That doesn't mean the charge is no longer inside the surface!! I am really not getting what/why he wrote "if there is no charge at $P_0$". Can anyone help me clear this confusion?
Also, if the field is zero, doesn't it mean that flux is also zero? If not, why?
• That is known as Earnshaw's theorem. – ACuriousMind Jun 18 '15 at 10:17
• This link is worth a look; it is quite intuitive. – user36790 Jun 18 '15 at 13:20
• I've realised that my answer is far from convincing, so I've deleted it. The link you found is much better, and actually helped me get a clear understanding. – user3237992 Jun 19 '15 at 10:08
• I suppose in a truly microscopic sense, one could have a zero net field at some location but a finite flux through the surface of a macroscopic shell enclosing the location of zero net field. Flux is considered over finite areas while I think the field itself can be (classically) defined at individual points. – honeste_vivere Jan 24 '16 at 14:35
• Now I'm confused and curious... Could one set up some external field in such a fashion as to create a finite value for Gauss' law but have no local sources (i.e., charges) within the gaussian surface? I think the answer is yes, but it makes me curious as to what is the physical interpretation for the term we refer to as enclosed charge... – honeste_vivere Jan 24 '16 at 14:38
I am posting this because I believe none of the other answers address the OP's question. The answer is already in the text by Feynman, but somewhat implicitly.
We have a positive charge $$q_+$$ at the point $$P_0$$. We want to know if it is possible for $$q_+$$ to be in stable equilibrium.
Q: What do we mean by stable equilibrium?
A: A particle is in stable equilibrium at a point $$P_0$$ if it experiences no force at $$P_0$$ and if, when we make a small perturbation (i.e., we move the particle from the equilibrium point by a small distance), there is a force that makes it return towards the equilibrium point. Such a force is called a restoring force.
In electrostatics all forces can be expressed in terms of electric fields. A charge $$q$$ will experience a force
$$\overrightarrow {F_q} = q \overrightarrow E$$
For the particular case of our charge $$q_+$$, the force points in the same direction as the electric field.
We can now start with the assumption that $$P_0$$ is a point of stable equilibrium for $$q_+$$. This means that there is a restoring force in some neighborhood $$V$$ of $$P_0$$ inside of which, at every point $$Q$$, the electric field $$\overrightarrow E$$ points from $$Q$$ to $$P_0$$. If we apply Gauss's law to such a system we get
$$\oint\limits_{S=\partial V}\overrightarrow E\cdot\overrightarrow n ~\mathrm d a=\frac{q_+}{\varepsilon_0}$$
where $$S$$ is the boundary of the neighborhood $$V$$. But if we look at the left side of the equation we see that $$\overrightarrow E$$ points inwards whereas $$\overrightarrow n$$ points outwards (as per the convention for normal vectors to a closed surface). This means that the integrand is always negative and so the integral is as well.
A negative number cannot be equal to a positive number! Therefore, by reductio ad absurdum we conclude that $$P_0$$ cannot be an equilibrium point. Our starting assumption was wrong.
With the legwork we have done here we can also expand one of points Feynman makes. The text reads
except right on top of another charge
If there are several charges and they are at different points in space, we can always find a volume inside of which one of them is isolated, and as we saw above, its location can never be a point of stable equilibrium. But if two charges $$q_1, q_2$$ are right on top of one another, then no matter how small a volume we take, the right hand side of the equation will always be $$(q_1+q_2)/\varepsilon_0$$.
So, if our charge $$q_+$$ sits right on top of a negative charge $$Q_-$$, and $$|Q_-|>q_+$$, then it is in stable equilibrium, because Gauss's law does allow it this time (both sides are negative).
With this we can expand Feynman's claim to:
There are no points of stable equilibrium in any electrostatic field—except right on top of another charge of opposite sign and higher value.
He talks about imaginary situation when there is a point in space with stable equilibrium.
As Feynman says, it requires $E=0$ at some point $(P_0)$ and all $E$ vectors pointing inwards (like a local minimum) around that point. Then let's use Gauss's law and compute the flux. It will definitely not be zero, which means that such a point does not exist.
What is doable is a spherical shell with some charge and a same-sign charge at its middle. You enclose your centered charge with a surface. Now there is field from the shell and from the charge crossing it. Voilà, this field has non-zero flux through the surface.
• As I recall, the electric field is zero at all points inside a uniformly charged spherical shell. So this doesn't give you a stable equilibrium any more than a single charge in free space. – ragnar Jun 18 '15 at 14:31
• oh god, you are right (gauss's!) – aaaaa says reinstate Monica Jun 18 '15 at 14:33
The answer is from Introduction to Electrodynamics; have a look at that problem.
The answer (from the solution manual): A stable equilibrium is a point of local minimum in the potential energy. Here the potential energy is $qV$. But we know that Laplace's equation allows no local minima for $V$. What looks like a minimum, in the figure, must in fact be a saddle point, and the box "leaks" through the center of each face.
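A quick symbolic check of that claim (my addition, not from the original answers): away from its source, the Coulomb potential satisfies Laplace's equation, which is exactly what rules out local minima.

```python
# Verify that V = 1/r satisfies Laplace's equation away from the origin.
from sympy import symbols, sqrt, diff, simplify

x, y, z = symbols("x y z", real=True)
V = 1 / sqrt(x**2 + y**2 + z**2)  # point-charge potential, up to constants
laplacian = diff(V, x, 2) + diff(V, y, 2) + diff(V, z, 2)
print(simplify(laplacian))  # 0
```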
Where did the charge go that he was discussing so far ? The charge he was discussing so far is a test charge. Usually in electrostatics, a test charge is considered to be a very small charge such that it does not modify the field in question or the field which is being studied. The direction of motion of this hypothetical test charge gives the direction of lines of force.
Here Feynman is discussing the field produced by the presence of several static charges. Then he considers a point P in this field. Please understand that "there is no charge at P". Then he says that if we calculate the divergence of the field at this point P and the divergence comes out to be negative, then the point P will be an equilibrium point. (Then he says that the above situation is impossible, since it violates Gauss's law.)
He refers to placing a test charge at P to make us visualize the equilibrium situation. In the equilibrium situation the test charge would not move and if displaced by a little distance in any direction it would come back to the same position P again.
Mathematical description of Equilibrium: Divergence of the field at P is negative
Intuitive understanding of Equilibrium: A test charge at P, would not move and if displaced by a little distance in any direction it would come back to the same position P again.
So the idea of placing a test charge at P is only to make us visualize "the situation of equilibrium" by studying "the characteristics of the motion" of this charge.
https://juejin.cn/post/6844903689505734669
# They only talk about the Attention Mechanism and never practice it, so let me walk you through the code
Attention is a mechanism for improving the performance of RNN-based (LSTM or GRU) Encoder-Decoder models, usually called the Attention Mechanism. It is currently very popular and widely applied in machine translation, speech recognition, image captioning, and many other fields. It is so popular because it gives the model the ability to discriminate: in machine translation and speech recognition, for example, it assigns a different weight to each word in a sentence, which makes the learning of the neural network model more flexible (soft). Attention itself also acts as an alignment, explaining the alignment between input and output sentences in translation and revealing what the model has actually learned, which opens a window into the black box of deep learning.
# Curated articles on the attention mechanism
Attention_Network_With_Keras: implementation and analysis of the attention model code (Jianshu, 2018-06-17)
Attention_Network_With_Keras: code implementation (GitHub, 2018)
# Starting with the 2014 paper "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation"
Diagram of the Attention Mechanism module (figure)
# Explaining one attention implementation, using Attention_Network_With_Keras as the example
## Selected code
```python
Tx = 50  # Max x sequence length
Ty = 5   # y sequence length
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)

# Split data 80-20 between training and test
train_size = int(0.8 * m)
Xoh_train = Xoh[:train_size]
Yoh_train = Yoh[:train_size]
Xoh_test = Xoh[train_size:]
Yoh_test = Yoh[train_size:]
```
To be careful, let's check that the code works:
```python
i = 5
print("Input data point " + str(i) + ".")
print("")
print("The data input is: " + str(dataset[i][0]))
print("The data output is: " + str(dataset[i][1]))
print("")
print("The tokenized input is:" + str(X[i]))
print("The tokenized output is: " + str(Y[i]))
print("")
print("The one-hot input is:", Xoh[i])
print("The one-hot output is:", Yoh[i])
```
```
Input data point 5.

The data input is: 23 min after 20 p.m.
The data output is: 20:23

The tokenized input is:[ 5  6  0 25 22 26  0 14 19 32 18 30  0  5  3  0 28  2 25  2 40 40 40 40
 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40
 40 40]
The tokenized output is: [ 2  0 10  2  3]

The one-hot input is: [[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [1. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 1.]
 [0. 0. 0. ... 0. 0. 1.]
 [0. 0. 0. ... 0. 0. 1.]]
The one-hot output is: [[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
 [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]]
```
## Model
Our next goal is to define our model. The important part will be defining the attention mechanism and then making sure to apply that correctly.
```python
layer1_size = 32
layer2_size = 128  # Attention layer
```
The next two code snippets defined the attention mechanism. This is split into two arcs:
• Calculating context
• Creating an attention layer
As a refresher, an attention network pays attention to certain parts of the input at each output time step. attention denotes which inputs are most relevant to the current output step. An input step will have attention weight ~1 if it is relevant, and ~0 otherwise. The context is the "summary of the input".
The requirements are thus. The attention matrix should have shape $(m, T_x, 1)$ and sum to 1 along the $T_x$ dimension. Additionally, the context should be calculated in the same manner for each time step. Beyond that, there is some flexibility. This notebook calculates both this way:

$$\text{context} = \sum_{i=1}^{T_x} \big(\text{attention}_i \cdot x_i\big)$$

For safety, the softmax is defined over `axis=1` (the time axis), so the attention weights are normalized across the input time steps.
```python
# Imports for the layers used below (added for completeness; exact module
# paths may vary slightly across Keras versions).
from keras import backend as K
from keras.layers import (Activation, Bidirectional, Concatenate, Dense, Dot,
                          Input, Lambda, LSTM, RepeatVector)
from keras.models import Model

# Define part of the attention layer globally so as to
# share the same layers for each attention step.
def softmax(x):
    return K.softmax(x, axis=1)

at_repeat = RepeatVector(Tx)
at_concatenate = Concatenate(axis=-1)
at_dense1 = Dense(8, activation="tanh")
at_dense2 = Dense(1, activation="relu")
at_softmax = Activation(softmax, name='attention_weights')
at_dot = Dot(axes=1)

def one_step_of_attention(h_prev, a):
    """
    Get the context.

    Input:
    h_prev - Previous hidden state of a RNN layer (m, n_h)
    a - Input data, possibly processed (m, Tx, n_a)

    Output:
    context - Current context (m, Tx, n_a)
    """
    # Repeat vector to match a's dimensions
    h_repeat = at_repeat(h_prev)
    # Calculate attention weights
    i = at_concatenate([a, h_repeat])
    i = at_dense1(i)
    i = at_dense2(i)
    attention = at_softmax(i)
    # Calculate the context
    context = at_dot([attention, a])
    return context

def attention_layer(X, n_h, Ty):
    """
    Creates an attention layer.

    Input:
    X - Layer input (m, Tx, x_vocab_size)
    n_h - Size of LSTM hidden layer
    Ty - Timesteps in output sequence

    Output:
    output - The output of the attention layer (m, Tx, n_h)
    """
    # Define the default state for the LSTM layer
    h = Lambda(lambda X: K.zeros(shape=(K.shape(X)[0], n_h)), name='h_attention_layer')(X)
    c = Lambda(lambda X: K.zeros(shape=(K.shape(X)[0], n_h)), name='c_attention_layer')(X)
    # Messy, but the alternative is using more Input()
    at_LSTM = LSTM(n_h, return_state=True, name='at_LSTM_attention_layer')
    output = []

    # Run attention step and RNN for each output time step
    for _ in range(Ty):
        context = one_step_of_attention(h, X)
        h, _, c = at_LSTM(context, initial_state=[h, c])
        output.append(h)

    return output
```
The sample model is organized as follows:
1. BiLSTM
2. Attention Layer
• Outputs Ty lists of activations.
3. Dense
• Necessary to convert attention layer's output to the correct y dimensions
```python
layer3 = Dense(machine_vocab_size, activation=softmax)

def get_model(Tx, Ty, layer1_size, layer2_size, x_vocab_size, y_vocab_size):
    """
    Creates a model.

    Input:
    Tx - Number of x timesteps
    Ty - Number of y timesteps
    layer1_size - Number of neurons in BiLSTM
    layer2_size - Number of neurons in attention LSTM hidden layer
    x_vocab_size - Number of possible token types for x
    y_vocab_size - Number of possible token types for y

    Output:
    model - A Keras Model.
    """
    # Create layers one by one
    X = Input(shape=(Tx, x_vocab_size), name='X_Input')
    a1 = Bidirectional(LSTM(layer1_size, return_sequences=True), merge_mode='concat', name='Bid_LSTM')(X)
    a2 = attention_layer(a1, layer2_size, Ty)
    a3 = [layer3(timestep) for timestep in a2]

    # Create Keras model
    model = Model(inputs=[X], outputs=a3)
    return model
```
The steps from here on out are for creating the model and training it. Simple as that.
```python
from keras.utils import plot_model  # import added for the plotting call below

# Obtain a model instance
model = get_model(Tx, Ty, layer1_size, layer2_size, human_vocab_size, machine_vocab_size)
plot_model(model, to_file='Attention_tutorial_model_copy.png', show_shapes=True)
```
## Evaluation
The final training loss should be in the range of 0.02 to 0.5
The test loss should be at a similar level.
```python
# Evaluate the test performance
outputs_test = list(Yoh_test.swapaxes(0, 1))
score = model.evaluate(Xoh_test, outputs_test)
print('Test loss: ', score[0])
```

```
2000/2000 [==============================] - 2s 1ms/step
Test loss:  0.4966005325317383
```
Now that we've created this beautiful model, let's see how it does in action.
The below code finds a random example and runs it through our model.
```python
# Let's visually check model output.
import random

i = random.randint(0, m - 1)  # randint is inclusive on both ends, so cap at m - 1

def get_prediction(model, x):
    prediction = model.predict(x)
    max_prediction = [y.argmax() for y in prediction]
    str_prediction = "".join(ids_to_keys(max_prediction, machine_vocab))
    return (max_prediction, str_prediction)

max_prediction, str_prediction = get_prediction(model, Xoh[i:i+1])

print("Input: " + str(dataset[i][0]))
print("Tokenized: " + str(X[i]))
print("Prediction: " + str(max_prediction))
print("Prediction text: " + str(str_prediction))
```

```
Input: 13.09
Tokenized: [ 4  6  2  3 12 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40
 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40
 40 40]
Prediction: [1, 3, 10, 0, 9]
Prediction text: 13:09
```
Last but not least, no introduction to attention networks is complete without a little tour.
The graph below shows which inputs the model was focusing on when writing each individual letter.
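One common way to produce such a plot is to probe the trained model for the attention softmax activations. The sketch below is an assumed approach built from the layer names above, not necessarily the original post's code; Keras internals vary across versions.
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model

# The softmax layer named 'attention_weights' is shared across the Ty output
# steps, so it has one output node per step; collect them all.
att_layer = model.get_layer('attention_weights')
att_tensors = [att_layer.get_output_at(t) for t in range(Ty)]
probe = Model(inputs=model.inputs, outputs=att_tensors)

weights = probe.predict(Xoh[i:i+1])      # list of Ty arrays, each (1, Tx, 1)
att = np.squeeze(np.stack(weights))      # (Ty, Tx): output step vs input position
plt.imshow(att, cmap='viridis')
plt.xlabel('input position')
plt.ylabel('output step')
plt.show()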
# Essentials of the Attention Mechanism
Self-attention is very different from the traditional attention mechanism. Traditional attention is computed from the hidden states of the source side and the target side, and the result captures the dependency between each source word and each target word. Self-attention instead runs on the source side and the target side separately: a self-attention pass that depends only on the source input (or only on the target input) captures the word-to-word dependencies within the source (or target) itself, and the source-side self-attention is then combined with the target-side attention to capture the dependencies between source and target words. Self-attention therefore tends to work better than the traditional attention mechanism, and one of the main reasons is that traditional attention ignores the dependencies among the words within the source or target sentence, whereas self-attention captures not only the source-target word dependencies but also the word-to-word dependencies inside the source or target itself.
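As a concrete complement to the description above, here is a minimal single-head self-attention sketch in plain numpy; the shapes and weight matrices are illustrative assumptions, not anyone's published code.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (T, d_model); every position attends over all positions of X itself.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries/keys/values from one sequence
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (T, T) word-to-word dependencies
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                    # softmax over source positions
    return w @ V                                     # (T, d_v) context for each position

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 8)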
https://math.stackexchange.com/questions/745112/show-there-exists-another-polynomial-with-specified-roots
# Show there exists another polynomial with specified roots.
Let $\alpha$ be a complex number.
Suppose there exists a monic polynomial $f(x) \in \mathbb{Z}[x]$ such that $f(\alpha)=0$. Show that there exists a monic polynomial $g(x) \in \mathbb{Z}[x]$ such that $g(\alpha ^2)=0$.
I tried to compute the elementary symmetric polynomials in order to put them in the coefficients, but it became increasingly non-practical, and I had no way to guarantee that the numbers are integers.
• You probably want to exclude $f=0$; and $g=0$. – Marc van Leeuwen Apr 8 '14 at 13:31
• Included more information, thank you. – Aloizio Macedo Apr 8 '14 at 13:35
• But "monic" is much stronger than "nonzero". – Marc van Leeuwen Apr 8 '14 at 13:45
• The result can be computed as resultant, $g(y)=Res_x(f(x),y-x^2)$. If the properties of the resultant are known, the claims follow directly. – Dr. Lutz Lehmann Apr 9 '14 at 5:35
Hint: What's the relationship between the coefficients of $\prod_{i=1}^n (x-r_i)$ and $\prod_{i=1}^n (x+r_i)$?
• This root-squaring procedure is the basis of the Dandelin-Gräffe method. -- More generally, if $q=\exp(i\,2\pi/d)$, then $f(x)f(qx)...f(q^{d-1}x)$ is a polynomial $g(x^d)$ that only contains powers of $x^d$. – Dr. Lutz Lehmann Apr 9 '14 at 5:32
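For completeness, here is the standard construction that both the hint and the root-squaring comment point to (my own write-up, not taken verbatim from the thread). Write $f(x)=\prod_{i=1}^n (x-r_i)$ over $\mathbb{C}$, so that $f(-x)=(-1)^n\prod_{i=1}^n (x+r_i)$. Then
$$(-1)^n f(x)f(-x)=\prod_{i=1}^{n}(x-r_i)(x+r_i)=\prod_{i=1}^{n}\left(x^2-r_i^2\right),$$
which lies in $\mathbb{Z}[x]$ and involves only even powers of $x$. Defining $g$ by $g(x^2)=(-1)^n f(x)f(-x)$ therefore gives a monic $g\in\mathbb{Z}[x]$, and $g(\alpha^2)=(-1)^n f(\alpha)f(-\alpha)=0$.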
http://math.stackexchange.com/questions/26960/finding-fm-1-where-fx-x2-2x2
# Finding $f(m-1)$ where $f(x) = x^2 + 2x + 2$ [closed]
If $f(x) = x^2 + 2x + 2$, find $f(m - 1)$.
Can someone please solve this for me, with the steps.
What have you tried so far? Where are you getting stuck? If this is homework, you should use the "homework" tag. – JavaMan Mar 14 '11 at 17:01
"Can someone solve this for me" is, I think, the worst way to ask for help! – Mariano Suárez-Alvarez Mar 14 '11 at 19:44
@Mar: If you submit the answer "Yes, I can" to that question, I'll vote for you! I don't have the rep to spare ;) – The Chaz 2.0 Mar 14 '11 at 23:30
Replace each $x$ in the expression with a $(m-1)$. – ncmathsadist Jan 15 at 2:00
I think that closure (and often then deletion) of "old" questions because they do not meet current wishes is unreasonable. – André Nicolas Jan 15 at 2:04
Evaluating functions with inputs that aren't just "x" is an important skill.
Let me walk you through some examples, using this function:
$f(x) = x^2 + 2x + 2$. We will just plug in some numbers and letters, using parentheses to show where we substituted...
$f(1) = (1)^2 + 2*(1) + 2 = 1 + 2 + 2 = 5$
$f(2) = (2)^2 + 2*(2) + 2 = 4 + 4 + 2 = 10$
$f(3a) = (3a)^2 + 2*(3a) + 2 = 9a^2 + 6a + 2$ Notice that our answer isn't just a number; it is a variable expression.
Now $f(m - 1) = (m - 1)^2 + 2*(m - 1) + 2$.
I'll leave it to you to expand/distribute and combine like terms.
Side question - What is Latex/markup for a simple dot (for multiplication)? – The Chaz 2.0 Mar 14 '11 at 17:01
@Chaz, "\cdot" is the appropriate symbol. – JavaMan Mar 14 '11 at 17:03
@DJC: 10-4!.... – The Chaz 2.0 Mar 14 '11 at 17:12
$$f(m-1) = (m-1)^{2} + 2(m-1) + 2 = m^{2} -2m + 1 + 2m-2 +2 = m^{2}+1$$
https://www.quizover.com/trigonometry/test/solving-inverse-variation-problems-by-openstax
# 5.8 Modeling using variation (Page 2/14)
Do the graphs of all direct variation equations look like [link] ?
No. Direct variation equations are power functions—they may be linear, quadratic, cubic, quartic, radical, etc. But all of the graphs pass through $(0,0)$.
The quantity $y$ varies directly with the square of $x$. If $y=24$ when $x=3$, find $y$ when $x$ is 4.
$\frac{128}{3}$
## Solving inverse variation problems
Water temperature in an ocean varies inversely to the water's depth. The formula $T=\frac{14,000}{d}$ gives us the temperature in degrees Fahrenheit at a depth in feet below Earth's surface. Consider the Atlantic Ocean, which covers 22% of Earth's surface. At a certain location, at the depth of 500 feet, the temperature may be 28°F.
If we create [link], we observe that, as the depth increases, the water temperature decreases.

| $d$, depth | $T=\frac{14,000}{d}$ | Interpretation |
| --- | --- | --- |
| 500 ft | $\frac{14,000}{500}=28$ | At a depth of 500 ft, the water temperature is 28° F. |
| 1000 ft | $\frac{14,000}{1000}=14$ | At a depth of 1,000 ft, the water temperature is 14° F. |
| 2000 ft | $\frac{14,000}{2000}=7$ | At a depth of 2,000 ft, the water temperature is 7° F. |
We notice in the relationship between these variables that, as one quantity increases, the other decreases. The two quantities are said to be inversely proportional and each term varies inversely with the other. Inversely proportional relationships are also called inverse variations .
For our example, [link] depicts the inverse variation. We say the water temperature varies inversely with the depth of the water because, as the depth increases, the temperature decreases. The formula $y=\frac{k}{x}$ for inverse variation in this case uses $k=14,000$.
## Inverse variation
If $x$ and $y$ are related by an equation of the form
$$y=\frac{k}{x^n}$$
where $k$ is a nonzero constant, then we say that $y$ varies inversely with the $n$th power of $x$. In inversely proportional relationships, or inverse variations, there is a constant multiple $k=x^n y$.
## Writing a formula for an inversely proportional relationship
A tourist plans to drive 100 miles. Find a formula for the time the trip will take as a function of the speed the tourist drives.
Recall that multiplying speed by time gives distance. If we let $t$ represent the drive time in hours, and $v$ represent the velocity (speed or rate) at which the tourist drives, then $vt=\text{distance}$. Because the distance is fixed at 100 miles, $vt=100$, so $t=100/v$. Because time is a function of velocity, we can write $t(v)$.
$$t(v)=\frac{100}{v}=100v^{-1}$$
We can see that the constant of variation is 100 and, although we can write the relationship using the negative exponent, it is more common to see it written as a fraction. We say that time varies inversely with velocity.
Given a description of an inverse variation problem, solve for an unknown; a short computational sketch follows the list below.
1. Identify the input, $x$, and the output, $y$.
2. Determine the constant of variation. You may need to multiply $y$ by the specified power of $x$ to determine the constant of variation.
3. Use the constant of variation to write an equation for the relationship.
4. Substitute known values into the equation to find the unknown.
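A minimal sketch of this procedure in Python (the helper below is my own illustration, not from the text):
# Given one known (x, y) pair and the power n, recover k = x^n * y,
# then evaluate y = k / x^n at a new input.
def inverse_variation(x_known, y_known, n, x_new):
    k = (x_known ** n) * y_known
    return k / (x_new ** n)

# The worked example below: y varies inversely with the cube of x,
# y = 25 at x = 2, so y at x = 6 is 25/27.
print(inverse_variation(2, 25, 3, 6))  # 0.9259... == 25/27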
## Solving an inverse variation problem
A quantity $y$ varies inversely with the cube of $x$. If $y=25$ when $x=2$, find $y$ when $x$ is 6.
The general formula for inverse variation with a cube is $y=\frac{k}{x^3}$. The constant can be found by multiplying $y$ by the cube of $x$.
$$k = x^3 y = 2^3 \cdot 25 = 200$$
Now we use the constant to write an equation that represents this relationship.
$$y=\frac{k}{x^3}=\frac{200}{x^3}$$
Substitute $x=6$ and solve for $y$:
$$y=\frac{200}{6^3}=\frac{25}{27}$$
https://www.springerprofessional.de/robot-motion-planning/13823208?fulltextView=true&doi=10.1007%2F978-1-4615-4022-9
## About this Book
One of the ultimate goals in Robotics is to create autonomous robots. Such robots will accept high-level descriptions of tasks and will execute them without further human intervention. The input descriptions will specify what the user wants done rather than how to do it. The robots will be any kind of versatile mechanical device equipped with actuators and sensors under the control of a computing system. Making progress toward autonomous robots is of major practical interest in a wide variety of application domains including manufacturing, construction, waste management, space exploration, undersea work, assistance for the disabled, and medical surgery. It is also of great technical interest, especially for Computer Science, because it raises challenging and rich computational issues from which new concepts of broad usefulness are likely to emerge. Developing the technologies necessary for autonomous robots is a formidable undertaking with deep interweaved ramifications in automated reasoning, perception and control. It raises many important problems. One of them, motion planning, is the central theme of this book. It can be loosely stated as follows: How can a robot decide what motions to perform in order to achieve goal arrangements of physical objects? This capability is eminently necessary since, by definition, a robot accomplishes tasks by moving in the real world. The minimum one would expect from an autonomous robot is the ability to plan its own motions.
## Table of Contents
### Chapter 1. Introduction and Overview
Abstract
A robot is a versatile mechanical device — for example, a manipulator arm, a multi-joint multi-fingered hand, a wheeled or legged vehicle, a free-flying platform, or a combination of these — equipped with actuators and sensors under the control of a computing system. It operates in a workspace within the real world. This workspace is populated by physical objects and is subject to the laws of nature. The robot performs tasks by executing motions in the workspace.
Jean-Claude Latombe
### Chapter 2. Configuration Space of a Rigid Object
Abstract
In Chapter 1 we introduced configuration space as a space in which the robot maps to a point. The mathematical structure of this space, however, is not completely straightforward, and deserves some specific consideration. The purpose of this chapter and the next one is to provide the reader with a general understanding of this structure when the robot is a rigid object not constrained by any kinematic or dynamic constraint. This chapter mainly focuses on topological and differential properties of the configuration space. More detailed algebraic and geometric properties related to the mapping of the obstacles into configuration space will be investigated in Chapter 3.
Jean-Claude Latombe
### Chapter 3. Obstacles in Configuration Space
Abstract
Obstacles in the workspace W map in the configuration space C to regions called C-obstacles. In Chapter 2 we defined the C-obstacle CB corresponding to a workspace obstacle B as the following region in C:
$$CB = \{ q \in C \mid A(q) \cap B \neq \emptyset \}.$$
Jean-Claude Latombe
### Chapter 4. Roadmap Methods
Abstract
In this chapter we describe a first approach to robot path planning which we name the roadmap approach. This approach is based on the following general idea: capture the connectivity of the robot's free space C_free in the form of a network of one-dimensional curves — the roadmap — lying in C_free or its closure cl(C_free). Once constructed, a roadmap R is used as a set of standardized paths. Path planning is reduced to connecting the initial and goal configurations to R, and searching R for a path.
Jean-Claude Latombe
### Chapter 5. Exact Cell Decomposition
Abstract
In this chapter we describe a second approach to motion planning, exact cell decomposition. The principle of this approach is to first decompose the robot's free space C_free into a collection of non-overlapping regions, called cells, whose union is exactly C_free (or its closure). Next, the connectivity graph which represents the adjacency relation among the cells is constructed and searched. If successful, the outcome of the search is a sequence of cells, called a channel, connecting the cell containing the initial configuration to the cell containing the goal configuration. A path is finally extracted from this sequence.
Jean-Claude Latombe
### Chapter 6. Approximate Cell Decomposition
Abstract
In this chapter we investigate another cell decomposition approach to path planning which is known as the approximate cell decomposition approach. It consists again of representing the robot's free space C_free as a collection of cells. But it differs from the exact cell decomposition approach in that the cells are now required to have a simple prespecified shape, e.g. a rectangloid shape. Such cells do not in general allow us to represent free space exactly. Instead, a conservative approximation of this space is constructed, hence the name of the approach. As with the exact cell decomposition approach, a connectivity graph representing the adjacency relation among the cells is built and searched for a path.
Jean-Claude Latombe
### Chapter 7. Potential Field Methods
Abstract
The planning methods described in the previous three chapters aim at capturing the global connectivity of the robot’s free space into a condensed graph that is subsequently searched for a path. The approach presented in this chapter proceeds from a different idea. It treats the robot represented as a point in configuration space as a particle under the influence of an artificial potential field U whose local variations are expected to reflect the “structure” of the free space. The potential function is typically (but not necessarily) defined over free space as the sum of an attractive potential pulling the robot toward the goal configuration and a repulsive potential pushing the robot away from the obstacles. Motion planning is performed in an iterative fashion. At each iteration, the artificial force $$\vec{F}(q) = - \vec{\nabla }U(q)$$ induced by the potential function at the current configuration is regarded as the most promising direction of motion, and path generation proceeds along this direction by some increment.
Jean-Claude Latombe
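As a concrete illustration of the attractive-plus-repulsive idea described in this abstract, here is a minimal gradient-descent toy in Python; the field shapes, gains, and coordinates are my own assumptions, not code from the book.
import numpy as np

def attractive_grad(q, q_goal, k_att=1.0):
    return k_att * (q - q_goal)              # gradient of (k/2)||q - q_goal||^2

def repulsive_grad(q, q_obs, rho0=1.0, k_rep=1.0):
    d = np.linalg.norm(q - q_obs)
    if d >= rho0:
        return np.zeros_like(q)              # obstacle has no influence beyond rho0
    # gradient of (k/2)(1/d - 1/rho0)^2; descending it pushes away from the obstacle
    return -k_rep * (1.0/d - 1.0/rho0) / d**3 * (q - q_obs)

q, q_goal, q_obs = np.zeros(2), np.array([5.0, 5.0]), np.array([2.0, 2.5])
for _ in range(500):                          # follow -grad U in small increments
    grad = attractive_grad(q, q_goal) + repulsive_grad(q, q_obs)
    q = q - 0.02 * grad
print(q)                                      # near q_goal unless a local minimum traps us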
### Chapter 8. Multiple Moving Objects
Abstract
In this chapter as well as the next three, we investigate various extensions of the basic motion planning problem. In the present chapter we begin this investigation with a series of extensions which all involve multiple moving objects.
Jean-Claude Latombe
### Chapter 9. Kinematic Constraints
Abstract
In this chapter we study the path planning problem for a robot subject to kinematic constraints, i.e. an object (or a collection of objects) that cannot translate and rotate freely in the workspace. Indeed, so far, we have only investigated the case of free-flying robots. The only exception occurred in the last section of the previous chapter, when we considered articulated robots made of several objects connected by joints. The constraint imposed by a joint on the relative motion of the two objects it connects is a simple example of kinematic constraint.
Jean-Claude Latombe
### Chapter 10. Dealing with Uncertainty
Abstract
In the previous chapters we assumed that the robot was able to perfectly control its motions, i.e. to exactly follow the geometrical paths generated by the planner. This assumption is realistic when the workspace is relatively uncluttered and the goal configuration does not have to be achieved too precisely. In those cases, if necessary, uncertainty in robot’s control can be taken into account by slightly “growing” the C-obstacles and planning a free path among the grown C-obstacles. Motions that can be planned in this way are often called gross motions.
Jean-Claude Latombe
### Chapter 11. Movable Objects
Abstract
So far we have considered workspaces populated by one or several robots and by obstacles which are either stationary and not movable, or moving along predetermined trajectories. In this chapter we introduce a third kind of object: the movable object. Such an object is like a stationary obstacle, except that its location can be changed by a robot. The most usual way for a robot to change the location of a movable object, in fact the only way that we will study in some depth in this chapter, is by grasping it and moving with it.
Jean-Claude Latombe
http://math.stackexchange.com/questions/96222/need-help-modifying-this-binomial-expansion
# Need help modifying this binomial expansion
The binomial theorem states that
$$(A+B)^n=\sum_{k=1}^{n}{n \choose k}A^{n-k}B^k$$
I need help expressing the following summation as a modified binomial expression:
$$\sum_{k=1}^{n}n!{n-1\choose k}A^{k+1}B^{n+k}=(A?B)^?$$
Thanks
You need "$k=0$" rather than "$k=1$". – Michael Hardy Jan 4 '12 at 0:43
Your first expression should be $(A+B)^n=\sum_{k=0}^{n}{n \choose k}A^{n-k}B^k$ starting from $k=0$.
Your second expression may have problems with $k=n$ when evaluating ${n-1 \choose n}$.
So I will try to help with a slightly altered version of your question: $$\sum_{k=0}^{n-1}n!{n-1\choose k}A^{k+1}B^{n+k} = n!A B^n \sum_{k=0}^{n-1}{n-1\choose k}1^{n-1-k}(AB)^{k} = n!A B^n (1+AB)^{n-1}.$$
Thanks! But what is the $1^{n-1-k}$ term doing? Doesn't that just always equal 1, for all n and k? – ben Jan 4 '12 at 0:56
It is there to make it obvious we are applying the earlier binomial expansion of $(1+AB)^{n-1}$ . It can be ignored if necessary – Henry Jan 4 '12 at 8:30
$$\sum_{k=1}^n n! \binom{n-1}{k} A^{k+1} B^{n+k} = n! \cdot A \cdot B^n \cdot \sum_{k=1}^{n-1} \binom{n-1}{k} A^{k} B^{k} = n! \cdot A \cdot B^{n} \left( (1+A B)^{n-1} - 1 \right)$$
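As a quick sanity check of both closed forms at $n=2$ (my own verification, not part of the thread): the left-hand sum is $2!\binom{1}{1}A^{2}B^{3}=2A^{2}B^{3}$, while $2!\,A\,B^{2}\left((1+AB)^{1}-1\right)=2AB^{2}\cdot AB=2A^{2}B^{3}$, so the two expressions agree.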
https://www.physicsforums.com/threads/correction-term-while-switching-from-inertial-to-body-fixed.860614/
# I Correction term while switching from inertial to body fixed
1. Mar 5, 2016
### harmyder
Suppose we have an equation in inertial frame A.
$$\frac{{}^A d\mathbf{H}_C}{dt} = \mathbf{M}_C$$
Now we want to switch to a body-fixed frame B. For this we need to employ the correction term ${}^A\boldsymbol{\omega}^B\times\mathbf{H}_C$. Why do we have this correction term? How can it be derived?
$$\frac{{}^B d\mathbf{H}_C}{dt} + {}^A\boldsymbol{\omega}^B\times\mathbf{H}_C = \mathbf{M}_C$$
Last edited: Mar 5, 2016
2. Mar 5, 2016
### Simon Bridge
What happens if you don't use a correction factor?
... start with the frames you are interested in and write out the rules for converting between them.
3. Mar 5, 2016
### harmyder
The rule to convert between frames is to multiply by a rotation matrix. But here we add some term, which is strange to me. I can very well understand this in the case of measuring angular momentum about different points... Oooo, maybe here they think of different frames also as different points to measure angular momentum?
4. Mar 5, 2016
### vanhees71
Let $\Sigma$ be an inertial frame and $\Sigma'$ one, which rotates arbitrarily against $\Sigma$. Then there's some rotation matrix $\hat{D}(t) \in \mathrm{SO}(3)$ for the components of vectors:
$$\vec{V}=\hat{D} \vec{V}',$$
where $\vec{V}$ are the components of an arbitrary vector wrt. the orthonormal basis at rest in $\Sigma$ and $\vec{V}'$ that wrt. to the one in $\Sigma'$.
For the time derivative of $\vec{V}$ you get
$$\vec{A}:=\dot{\vec{V}}=\dot{\hat{D}} \vec{V}'+\hat{D} \dot{\vec{V}}'.$$
For the components of $\vec{A}$ wrt. $\Sigma'$ it follows
$$\vec{A}'=\hat{D}^{-1} \vec{A}=\hat{D}^{T} \vec{A}=\hat{D}^T \dot{\hat{D}} \vec{V}'+\dot{\vec{V}}'.$$
Now since $\hat{D} \in \mathrm{SO}(3)$ we have
$$\hat{D}^T \hat{D}=1 \; \Rightarrow \; \hat{D}^T \dot{\hat{D}}=-\dot{\hat{D}}^T \hat{D}=-(\hat{D}^T \dot{\hat{D}})^T,$$
i.e., $\hat{D}^T \dot{\hat{D}}$ is antisymmetric, and thus you can introduce an axial vector $\vec{\omega}$ such that
$$(\hat{D}^T \dot{\hat{D}})_{ij}=-\epsilon_{ijk} \omega_k.$$
So you get
$$A_i'=\dot{V}_i'-\epsilon_{ijk} \omega_k V_j' = \dot{V}_i' + \epsilon_{ikj} \omega_k V_j',$$
or, in vector notation,
$$\vec{A}'=\dot{\vec{V}}'+\vec{\omega} \times \vec{V}'.$$
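A quick numerical sanity check of the final identity for a frame rotating about the z axis (my own illustration, not from the thread):
import numpy as np

w = 0.7                                       # angular speed about z
def D(t):                                     # rotation taking Sigma' components to Sigma
    c, s = np.cos(w*t), np.sin(w*t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

V  = lambda t: np.array([np.sin(t), t**2, 1.0])   # arbitrary vector, components in Sigma
Vp = lambda t: D(t).T @ V(t)                      # the same vector, components in Sigma'

t, dt = 1.3, 1e-6
A_prime = D(t).T @ (V(t + dt) - V(t - dt)) / (2*dt)   # A' = D^T dV/dt
Vp_dot  = (Vp(t + dt) - Vp(t - dt)) / (2*dt)
omega   = np.array([0.0, 0.0, w])                     # rotation axis shared by both frames
print(np.allclose(A_prime, Vp_dot + np.cross(omega, Vp(t)), atol=1e-5))  # True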
https://tex.stackexchange.com/questions/473048/let-tikz-do-the-calculating
# Let Tikz do the Calculating
Most of the work has been done here, except I want TikZ to calculate the y values at least. I make an attempt below but need help with the syntax to finish.
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{intersections,positioning,calc}
\newcommand{\pder}[2]{#1^{\prime}(#2)}
\begin{document}
\newcommand*{\DeltaX}{0.01}
\newcommand*{\DrawTangent}[7][]{%
% #1 = draw options
% #2 = name of curve
% #3 = ymin
% #4 = ymax
% #5 = x value at which tangent is to be drawn
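% #6 = vertical offset of the label in cm (undocumented above; inferred from the node below)
% #7 = horizontal offset of the label in cm (undocumented above; inferred from the node below)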
\path[name path=Vertical Line Left] (#5-\DeltaX,#3) -- (#5-\DeltaX,#4);
\path[name path=Vertical Line Right] (#5+\DeltaX,#3) -- (#5+\DeltaX,#4);
\path [name intersections={of=Vertical Line Left and #2}];
\coordinate (X0) at (intersection-1);
\path [name intersections={of=Vertical Line Right and #2}];
\coordinate (X1) at (intersection-1);
\draw [shorten <= -1cm, shorten >= -1cm, #1] (X0) -- (X1) node[above
right=#6cm and #7cm] {\small $m=\pder{f}{#5}={f(#5)}$};
}%
\begin{minipage}{.4\textwidth}
Theorem 1 says that for $f(x)=e^{x}$, the derivative at $x$ (the slope of
the tangent line) is the same as the function value at $x$. That is, on the
graph of $y=e^{x}$, at the point $(0,1)$, the slope is $m=1$; at the
point $(1,e)$, the slope is $m=e$; at the point $(2,e^{2})$, the slope is
$m=e^{2}$, and so on. The function $y=e^{x}$ is the only exponential
function for which this correlation between the function and its derivative
is true.\\
In Section 3.5, we will develop a formula for the derivative of the more
general exponential function given by $y=a^{x}$
\end{minipage}
\hspace{1cm}
\begin{minipage}{.6\textwidth}
\begin{tikzpicture}[scale=.75, declare function={f(\x)=(2.71828)^(\x);}]
\draw[step=1.0,gray,thin,dotted] (-3,-1) grid (7,9);
\draw [-latex] (-3,0) -- (7.5,0) node (xaxis) [below] {$x$};
\draw [-latex] (0,-1) -- (0,9.5) node [left] {$y$};
\foreach \x/\xtext in {-2/-2,-1/-1,1/1,2/2,3/3,4/4,5/5,6/6}
\draw[xshift=\x cm] (0pt,3pt) -- (0pt,0pt)
node[below=2pt,fill=white,font=\normalsize]
{$\xtext$};
\foreach \y/\ytext in {1/1,2/2,3/3,4/4,5/5,6/6,7/7,8/8}
\draw[yshift=\y cm] (2pt,0pt) -- (-2pt,0pt)
node[left,fill=white,font=\normalsize]
{$\ytext$};
\draw[name path=curve,domain=-3:2.2,samples=200,variable=\x,red,<->,thick]
plot ({\x},{(2.71828)^(\x)});
\DrawTangent[blue,thick,-]{curve}{-1}{4}{1}{.5}{.3}
\DrawTangent[blue,thick,-]{curve}{-1}{3}{0}{.5}{.8}
\DrawTangent[blue,thick,-]{curve}{5}{9}{2}{.5}{.2}
\draw[fill=red,red] (0,1) circle (3pt) node[right] {\small $y=f(0)=1$};
\draw[fill=red,red] (1,2.71828) circle (3pt) node[right] {\small
$y=f(1)\approx 2.71828$};
\draw[fill=red,red] (2,7.3890) circle (3pt) node[right] {\small
$y=f(2)\approx 7.3890$};
\draw[fill=red,red] (-1,0.3679) circle (3pt) ;
\end{tikzpicture}
\end{minipage}
\end{document}
• yes I got everything to work. Is there a way to control the digit where the rounding takes place in the command \approx \pgfmathparse{f(2)}\pgfmathprint{\pgfmathresult} for example? Out of curiosity. – MathScholar Feb 2 at 16:04
• @MathScholar Yes. I added it to the MWE. (One has to use \pgfmathprintnumber and not \pgfmathprint as I incorrectly wrote before.) – marmot Feb 2 at 16:10
• @MathScholar Of course it does. Just try \pgfmathparse{pi}\pgfmathprintnumber{\pgfmathresult} or something like this. – marmot Feb 2 at 16:48
• @MathScholar \pgfmathparse{exp(1)}\pgfmathprintnumber{\pgfmathresult} – marmot Feb 2 at 17:08
• @MathScholar Sure: declare function={f(\x)=exp(\x);}. – marmot Feb 2 at 17:10
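Putting the comment thread together, a sketch of the intended fix (this assumes the declare function={f(\x)=exp(\x);} key suggested above; the exact precision handling is my own guess, not marmot's verbatim code):
\draw[fill=red,red] (0,{f(0)}) circle (3pt)
  node[right] {\small $y=f(0)=\pgfmathparse{f(0)}\pgfmathprintnumber[precision=4]{\pgfmathresult}$};
\draw[fill=red,red] (2,{f(2)}) circle (3pt)
  node[right] {\small $y=f(2)\approx\pgfmathparse{f(2)}\pgfmathprintnumber[precision=4]{\pgfmathresult}$};
This way the y coordinates and the printed values both come from the same f, so nothing is hard-coded.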
https://mathoverflow.net/tags/monoidal-categories/hot
# Tag Info
Accepted
### The two ways Feynman diagrams appear in mathematics
If I understand the question correctly, the search is for a calculation of the asymptotic expansion of Gaussian integrals using concepts and techniques from category theory. Here is one such ...
### Proof that a Cartesian category is monoidal
There is a complete proof formalised by Scott Morrison in the Lean proof assistant here: monoidal/of_has_finite_products.lean [UPDATE] Here is some commentary. In the above file, we see ...
Accepted
### Does every monoidal category admit a braiding?
No, sometimes there are even $x,y$'s with no abstract isomorphism $x\otimes y \cong y\otimes x$. Here are two families of examples: Monoids, viewed as discrete categories. The tensor product is just ...
Accepted
### Categorical presentation of direct sums of vector spaces, versus tensor products
One way to think about what the monoidal structure on vector spaces is doing is that it is telling us that vector spaces do not really form a category, or not "just" a category: they form a ...
### Proof that a Cartesian category is monoidal
In Categories for the Working Mathematician, the first thing that Mac Lane does after defining a monoidal category in Ch. VII.1 is to verify that a category with finite products is monoidal. The ...
Accepted
### Modular Tensor Categories: Reasoning behind the axioms
Interesting question ! As far as I know, there are at least two secretly equivalent answers. You somehow already gave the first one: a modular tensor category is the same as a modular functor (...
Accepted
### Uniqueness of dualizing objects
If a dualizing object exists, there is a bijection between isomorphism classes of dualizing objects and isomorphism classes of $\otimes$-invertible objects (i.e. the Picard group), given by tensoring ...
Accepted
### Trace in the category of propositional statements
To start with, I just want to make sure no one gets the impression that the categorical notion of trace was introduced by the paper you linked to; however "semi-famous" it might or might not be, it's ...
Accepted
### Reference for "multi-monoidal categories"
Look at Section 3 of Leinster's Higher Operads, Higher Categories, where the term used is "unbiased monoidal category."
### String diagrams for bimonoidal categories (a.k.a. rig categories)?
This question is answered in the affirmative in the following preprint: Cole Comfort, Antonin Delpeuch, Jules Hedges, Sheet diagrams for bimonoidal categories, arXiv:2010.13361 The morphisms are ...
### What is a tensor category?
There is no single accepted definition of “tensor category” that matches all uses. Almost always it means abelian (or a similar cocomplete condition) and k-linear. Usually it also means rigid. Often ...
Accepted
### What is a tensor category?
There seem to be many different definitions in the literature, based on individual papers. But, I think that might change, now that the textbook Tensor Categories, by Etingof, Gelaki, Nikshych, and ...
Accepted
### Does every commutative variety of algebras have a cogenerator?
The answer is no. Let $A$ be the algebra with universe $\{0,1\}$ and fundamental operations $f(x,y,z)=x+y+z \pmod{2}$ and $g(x)=x+1\pmod{2}$. Then $f$ and $g$ commute with each other and with ...
### Why is a braided left autonomous category also right autonomous?
(Apologies for problems with the LaTeX; I’m on a dodgy internet connection and having trouble previewing to fix it. Will attempt to fix it later when home.) With any question like this — deriving an ...
Accepted
### Why is a braided left autonomous category also right autonomous?
Here's a hint: what you should probably use here is the method of string diagrams (due to Joyal and Street, but by now ubiquitous). In other words, draw a picture in terms of tangles; you will see two ...
https://greprepclub.com/forum/x-is-different-from-zero-8725.html
# x is different from zero
$$x \neq 0$$
Quantity A: $$\frac{x}{|x|}$$
Quantity B: 1
A) Quantity A is greater.
B) Quantity B is greater.
C) The two quantities are equal.
D) The relationship cannot be determined from the information given.
If x < 0: x is negative and |x| is positive so x/|x| equals -1.
If x > 0: x is positive and |x| is also positive so x/|x| equals +1.
There is no information about whether x is positive or negative, so the answer is D.
Clearly, the answer is D. Here's how:
Given that x is not 0, x is either positive or negative.
Since |x| is always positive, we can multiply both quantities by |x| without changing which one is greater. So,
after multiplying both quantities by |x|, we get
Quantity A: x
Quantity B: |x|
Quantity B is always positive, whether x is positive or not. But Quantity A is positive if x is positive, and negative if x is negative.
Thus, if x is positive, both quantities are equal, but if x is negative, Quantity B is greater.
Therefore, the information is not sufficient to answer the question, i.e. choice D is correct.
https://zxi.mytechroad.com/blog/greedy/leetcode-1946-largest-number-after-mutating-substring/
You are given a string num, which represents a large integer. You are also given a 0-indexed integer array change of length 10 that maps each digit 0-9 to another digit. More formally, digit d maps to digit change[d].
You may choose to mutate a single substring of num. To mutate a substring, replace each digit num[i] with the digit it maps to in change (i.e. replace num[i] with change[num[i]]).
Return a string representing the largest possible integer after mutating (or choosing not to) a single substring of num.
A substring is a contiguous sequence of characters within the string.
Example 1:
Input: num = "132", change = [9,8,5,0,3,6,4,2,6,8]
Output: "832"
Explanation: Replace the substring "1":
- 1 maps to change[1] = 8.
Thus, "132" becomes "832".
"832" is the largest number that can be created, so return it.
Example 2:
Input: num = "021", change = [9,4,3,5,7,2,1,9,0,6]
Output: "934"
Explanation: Replace the substring "021":
- 0 maps to change[0] = 9.
- 2 maps to change[2] = 3.
- 1 maps to change[1] = 4.
Thus, "021" becomes "934".
"934" is the largest number that can be created, so return it.
Example 3:
Input: num = "5", change = [1,4,7,5,3,2,5,6,9,4]
Output: "5"
Explanation: "5" is already the largest number that can be created, so return it.
Constraints:
• 1 <= num.length <= 10^5
• num consists of only digits 0-9.
• change.length == 10
• 0 <= change[d] <= 9
Solution: Greedy
Find the first digit whose mapped digit is greater than or equal to it and start the substring there; keep replacing as long as the mapped digit is greater than or equal to the current one.
Time complexity: O(n)
Space complexity: O(1)
C++
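A minimal C++ sketch of this greedy (my reconstruction of the missing block; not necessarily huahua's original code):
#include <string>
#include <vector>
using namespace std;

class Solution {
 public:
  string maximumNumber(string num, vector<int>& change) {
    bool mutating = false;
    for (char& c : num) {
      int d = c - '0';
      if (change[d] > d) {            // strictly better: start/extend the substring
        c = '0' + change[d];
        mutating = true;
      } else if (change[d] < d && mutating) {
        break;                        // going further would shrink the number
      }
      // change[d] == d: replacing is a no-op, so the substring may stay open
    }
    return num;
  }
};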
https://physics.stackexchange.com/questions/133970/average-acceleration-when-more-than-two-different-velocities-occur
# Average acceleration when more than two different velocities occur [closed]
Suppose a car travels at 5m/s north for 5 seconds, it then turn east and travel at 7m/s for 10 seconds, finally it turns north east and travel at 10 m/s for 20 seconds. What is the average acceleration over these 35 seconds?
I know average acceleration is the change in velocity over the change in time. If there were just two vectors for velocity, then I would subtract those two vectorially. However, I don't know how to find the average acceleration from 3 or more vectors. Do I just subtract these 3 vectors vectorially and divide the result by 35 seconds?
Whenever you're confused about how to calculate some quantity, try going back to the definition.
Think carefully about what the definition of average acceleration is:
$$\vec a_\textrm{avg}\equiv\frac{\vec v_\textrm{final}-\vec v_\textrm{initial}}{\Delta t_\textrm{elapsed}}.$$
Which velocities does this equation depend on? Which velocities does it not depend on?
• So I only have to vectorially subtract the 10m/s and the 5m/s? – Loc Tran Sep 4 '14 at 22:30
• I'll leave that for you to decide. – BMS Sep 4 '14 at 22:30
• Say if I replaced the velocities with distances...5 meters north, 4 meters west, 10 meters northwest...and I want to find average velocity. I can't just do 10 meters minus 5 meters vectorially – Loc Tran Sep 4 '14 at 22:36
• True, because for average velocity you don't subtract displacements; you subtract positions. – BMS Sep 4 '14 at 22:37
• There we go, that was it, thankyou so much for helping me! Now I get what's going on. – Loc Tran Sep 4 '14 at 22:40
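To tie the thread together, a worked check of the definition for the numbers in the question (my own computation; northeast taken as 45° between north and east):
import numpy as np

v_initial = np.array([0.0, 5.0])    # 5 m/s due north (x = east, y = north)
v_final = 10.0 * np.array([np.sin(np.pi/4), np.cos(np.pi/4)])  # 10 m/s northeast
a_avg = (v_final - v_initial) / 35.0          # only the endpoints and total time matter
print(a_avg, np.linalg.norm(a_avg))           # ~[0.202, 0.059] m/s^2, |a| ~ 0.21 m/s^2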
https://harrisonbrown.wordpress.com/2010/01/05/bleg-whats-the-most-recent-day-no-one-alive-was-born/
## Bleg: What’s the most recent day no one alive was born?
Inspired by Michael Lugo’s post on reconstructing a person from their DOB, zipcode, and gender.
If you, for whatever reason, ever watch the Today show, you’ll notice that one of the recurring features is the hosts listing the names of some men and women who are turning 100. Becoming a centenarian is a reasonably big accomplishment — in the U.S., it nets you a congratulatory letter from the President, for example. But if you look into it, you’ll notice that you can find someone turning 100 on pretty much any given day. Usually not someone particularly well-known, but certainly someone. (I tried to find someone famous and vaguely math-related who just turned or is turning 100 for this post, but couldn’t; however, the fascinating economist Ronald Coase turned 99 last week.) It’s almost certainly true that on any given day, someone somewhere in the world is in fact celebrating their 100th birthday. But go ten years further, and you find almost no one who lives to 110. Actually, I know of only one supercentenarian, living or not, who is interesting for reasons apart from his longevity — the late Vietoris, the topologist, probably best known as half of the Vietoris-Rips complex and the Mayer-Vietoris sequence. Odds are pretty good that no one alive is turning 110 today, or tomorrow, or (sadly) New Year’s Day.
So… a question is starting to take shape. On every day between December 29, 1909, and today, someone was born who is still living today. But go much earlier than that, and the above statement begins to be false. So what’s the most recent day that no one living was born on?
Unfortunately the question seems impossible to answer precisely — even today in some countries there’s no reliable system to record births and demographic data, and 100 years ago the situation was far worse. But we can certainly try to make an educated guess, or at least think about how we’d go about trying to make an educated guess!
So we’ll start off with our simplifying assumptions. First of all, in the analysis we’re going to have to consider the probability that a person who’s lived to N days will live to N+1 days. (Or a coarser version of this statistic.) Obviously this probability is different for each person — a 104-year-old in Bangladesh with a terminal disease and without access to good medical care has a much shorter life expectancy than a healthy person of the same age in, say, New Jersey. But this complicates matters hugely, so we’ll say that for each person the probability is the same.
In addition, we’ll assume that the birth rate was constant between, say, 1900 and 1910, and that someone born in 1902 had the same probability of living to 100 as someone born in 1909. Again, these are simplifying assumptions.
So once you’ve made these assumptions, you end up with a sequence of random variables $X_N$ that describe how many people who have lived exactly N days are still living. Under the above (unrealistic) assumptions, $X_N$ is approximately a binomial distribution, with the probability decreasing exponentially with N.
So the first estimate we can make is to see when the expected value $E(X_N) < 1$. But this isn’t all that good — probably the first N with $X_N = 0$ came way earlier — and so a better thing to do is to estimate
$\Pr[\bigcup_{j \leq N} \{X_j = 0\}] \leq \sum_{j \leq N} \Pr[X_j = 0]$.
But, when the $X_j$ are binomial, this probability is easy — it’s just $1 - p^{jM}$ for some fixed probability p and integer M. So the sum is
$N + 1 - (1 + p^M+ \ldots + p^{NM}) = N + 1 - \frac{1 - p^{MN + M}}{1 - p^M}$.
As long as this is small, the probability is high that none of the $X_j$ are 0 for $j \leq N$.
Can we do better than this? What if we replaced the above model by something more realistic, where the death rate increases as N increases?
What’s (approximately) the most recent day no one alive was born?
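As a toy illustration of the model above, here is a short simulation sketch (constant births per day, independent survival; every number in it is invented purely for illustration):
import numpy as np
rng = np.random.default_rng(0)

births_per_day = 1000
ages = np.arange(36500, 41000)                        # ages of ~100 to ~112 years, in days
p = np.clip(np.exp(-(ages / 365 - 95) / 1.1), 0, 1)   # crude survival-to-age-N curve

alive = rng.binomial(births_per_day, p)               # one draw of X_N for each age N
gap = ages[alive == 0].min()                          # smallest age with no survivors
print("first uncovered age: %.1f years" % (gap / 365))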
### 2 Responses to “Bleg: What’s the most recent day no one alive was born”
1. Michael Lugo Says:
Here’s a thought on how to go from data at the level of years (which might exist somewhere out there although I’m having trouble finding it) to a decent guess at the level of days. Let’s say we know that N people were born in 1905 and are still alive. Then the number of people born on any given day in 1905 is approximately Poisson with mean N/365. The probability that no one born on a given day in 1905 is still alive is then about exp(-N/365). Assuming that the days are independent, the probability that at least one person born on every day in 1905 is still alive is about (1-exp(-N/365))^365. Of course one could chain this together for various years to get some sort of estimate.
I think the Census has data of the type I already described, although I’m having trouble finding it. Apparently demographers refer to a chart showing the number of people of each age as a “population pyramid”, and somewhat annoyingly most of the ones I can find lump everyone over 85 together. This table might be useful, although it uses what seems to be a standard actuarial fiction: it’s a “life table” which assumes that people of every age have the same age-adjusted mortality rates that they did in 2005, which is not exactly what we need to answer this question.
And of course that’s only for the United States. A small change in the average life expectancy between countries could show up as a large difference in the number of people who survive to extreme ages.
Without the data, it looks like there are various theoretical models of longevity that demographers use, although I’m not sure how accurate any of those are. Rather understandably, it looks like demographers are more interested in having models that work for most individuals than in having models that work in the tails of the age distribution.
2. TonyK Says:
You’d better be more specific if you want your question to even have an answer. What do you mean by ‘day’? The simplest solution is to designate a timezone, but which one do you pick? Another solution is to accept any 24-hour period, but I don’t think your question can be interpreted in this way.
|
2017-05-25 01:06:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6214929819107056, "perplexity": 538.1832046530714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607960.64/warc/CC-MAIN-20170525010046-20170525030046-00214.warc.gz"}
|
http://hackage.haskell.org/package/gargoyle-0.1.1.0/docs/Gargoyle.html
|
gargoyle-0.1.1.0: Automatically spin up and spin down local daemons
Gargoyle
Description
Utilities for running a daemon in a local directory
Synopsis
# Documentation
data Gargoyle pid a Source #
Constructors
Gargoyle

Fields:

  _gargoyle_exec :: FilePath
    The path to the executable created with gargoyleMain which will serve as the daemon monitor process.

  _gargoyle_init :: FilePath -> IO ()
    The action to run in order to populate the daemon's environment for the first run.

  _gargoyle_start :: FilePath -> IO pid
    The action to run in order to spin up the daemon on every run. This happens after _gargoyle_init if it also runs.

  _gargoyle_stop :: pid -> IO ()
    The action to run when the monitor process detects that no clients are connected anymore.

  _gargoyle_getInfo :: FilePath -> IO a
    Run a command which knows about the working directory of the daemon to collect runtime information to pass to client code in withGargoyle.
withGargoyle Source #

Arguments:

  :: Gargoyle pid a   -- Description of how to manage the daemon.
  -> FilePath         -- The directory where the daemon should be initialized.
  -> (a -> IO b)      -- Client action which has access to runtime information provided by the Gargoyle.
  -> IO b             -- By the time this function returns, the monitor process is aware that the client is no longer interested in the daemon.
Run an IO action while maintaining a connection to a daemon. The daemon will automatically be stopped when no clients remain. If the daemon has not yet been initialized, it will be. The counterpart of this function is gargoyleMain which should be used to produce an executable that will monitor the daemon's status.
gargoyleMain Source #

Arguments:

  :: Gargoyle pid a   -- Description of how to initialize, spin up, and spin down a daemon.
  -> IO ()            -- Returns only when all clients have disconnected.
Run a local daemon over a domain socket; the daemon will be automatically stopped when no clients remain. This function assumes that the daemon has already been initialized in the specified location. This function should be used as the main function of an executable which will then be invoked by calling withGargoyle in the client code to monitor the daemon's status.
|
2022-01-24 19:13:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3272620737552643, "perplexity": 3427.9389856017056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304600.9/warc/CC-MAIN-20220124185733-20220124215733-00150.warc.gz"}
|
http://dotsstores.ca/whwsif/27c33f-does-higher-amplitude-mean-more-energy
|
## Does higher amplitude mean more energy?

The energy transported by a wave is directly proportional to the square of the amplitude of the wave. A high energy wave is characterized by a high amplitude; a low energy wave is characterized by a low amplitude. Consider a seagull bobbing on a water wave: work is done on the seagull by the wave as the seagull is moved up, changing its potential energy. The larger the amplitude, the higher the seagull is lifted by the wave and the larger the change in potential energy.

To make this quantitative, consider a sinusoidal wave on a string with linear mass density $\mu$, produced by a string vibrator. Each mass element $\Delta m = \mu \Delta x$ of the string oscillates perpendicular to the direction of motion of the wave, so its kinetic energy is

$dK = \frac{1}{2} (\mu\, dx)[-A \omega \cos(kx - \omega t)]^{2} = \frac{1}{2} (\mu\, dx)\, A^{2} \omega^{2} \cos^{2}(kx - \omega t).$

Each mass element also oscillates in simple harmonic motion with an effective spring constant $k_{s} = \Delta m\, \omega^{2}$, so its potential energy is $\Delta U = \frac{1}{2} k_{s} x^{2}$, i.e. $dU = \frac{1}{2} \mu \omega^{2} x^{2}\, dx$. Integrating over one wavelength,

$U_{\lambda} = \frac{1}{2} \mu \omega^{2} A^{2} \int_{0}^{\lambda} \sin^{2}(kx)\, dx = \frac{1}{4} \mu A^{2} \omega^{2} \lambda,$

and the kinetic energy associated with a wavelength works out to the same value, so the total mechanical energy of a wavelength is $E_{\lambda} = \frac{1}{2} \mu A^{2} \omega^{2} \lambda$. The time-averaged power is this energy divided by the period of the wave, and since the wavelength divided by the period equals the wave speed,

$P_{ave} = \frac{E_{\lambda}}{T} = \frac{1}{2} \mu A^{2} \omega^{2} \frac{\lambda}{T} = \frac{1}{2} \mu A^{2} \omega^{2} v.$

The time-averaged power of a sinusoidal mechanical wave is therefore proportional to the square of the amplitude, to the square of the angular frequency, and to the speed of the wave. If two mechanical waves have equal amplitudes but one has twice the frequency of the other, the higher-frequency wave transfers energy at four times the rate. (In electromagnetic waves, by contrast, the rate of energy transfer is proportional to the square of the amplitude but independent of the frequency.)

This energy-amplitude relationship is sometimes expressed as follows: whenever the amplitude increases by a given factor, the energy increases by that factor squared. A doubling of the amplitude of a wave is indicative of a quadrupling of the energy transported by the wave, and a tripling of the amplitude indicates a nine-fold increase. For example, changing the amplitude from 1 unit to 2 units quadruples the energy, so 2 units of energy become 8 units; changing it from 1 unit to 4 units multiplies the energy by 16, so 2 units become 32 units.

The average amount of energy passing through a unit area per unit of time in a specified direction is called the intensity of the wave. As a spherical wave moves out from a source, the surface area over which the energy spreads grows as $4 \pi r^{2}$, so if no energy is dissipated the intensity decreases as $1/r^{2}$; for a two-dimensional circular wave, such as the ripple from a pebble tossed in a pond, the energy spreads around a growing circumference and the intensity decreases as $1/r$. This is why, in general, the farther you are from a speaker, the less intense the sound you hear.

In sound, amplitude refers to the magnitude of compression and expansion experienced by the medium the sound wave is travelling through. A larger amplitude means a louder sound and a smaller amplitude means a softer sound: loud sounds have high-pressure amplitudes and come from larger-amplitude source vibrations than soft sounds, and when they arrive at your ears they push harder against your eardrums. The same scaling shows up elsewhere: large ocean breakers churn up the shore more than small ones, and large-amplitude earthquakes produce large ground displacements, which is why strong quakes can perform the work of thousands of wrecking balls.

The amplitude a pulse assumes also depends on the medium. If a person holds the first coil of a slinky and gives it a back-and-forth motion, the work done on the first coil sets the amplitude; the disturbance then travels from coil to coil, transporting energy without transporting matter. For the same applied force, a more elastic medium allows a greater amplitude pulse to travel through it, while a more massive medium, with its greater inertia, tends to reduce the amplitude of the pulse.

Light behaves differently: the energy of an individual photon depends on its wavelength (shorter wavelength means more energy), while the amplitude of the classical wave is related to the number of photons, i.e. to the brightness. So for electromagnetic radiation, frequency sets the energy per photon and amplitude sets how many photons there are.

As a worked example (Example 16.6 of the source text, "Power Supplied by a String Vibrator"), consider a two-meter-long string with a mass of 70.00 g attached to a string vibrator. The tension in the string is 90.0 N. When the string vibrator is turned on, it oscillates with a frequency of 60 Hz and produces a sinusoidal wave on the string with an amplitude of 4.00 cm and a constant wave speed.

(This material is adapted from OpenStax University Physics, CC BY 4.0.)
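To make the power formula concrete, here is a short Python computation using the string-vibrator numbers quoted above; everything beyond those numbers is plain arithmetic:

```python
import math

mu = 0.070 / 2.00           # linear mass density of the 2.00 m, 70.00 g string (kg/m)
v = math.sqrt(90.0 / mu)    # wave speed v = sqrt(F_T / mu) for 90.0 N tension, ~50.7 m/s
omega = 2 * math.pi * 60.0  # angular frequency at 60 Hz (rad/s)
A = 0.04                    # amplitude, 4.00 cm in metres

P = 0.5 * mu * A**2 * omega**2 * v
print(f"P_ave = {P:.0f} W")                         # roughly 200 W

# Doubling the amplitude quadruples the time-averaged power:
print(0.5 * mu * (2 * A)**2 * omega**2 * v / P)     # 4.0
```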
|
2021-07-27 22:03:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6905190348625183, "perplexity": 587.5174477681961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153491.18/warc/CC-MAIN-20210727202227-20210727232227-00316.warc.gz"}
|
https://socratic.org/questions/what-is-the-slope-of-any-line-perpendicular-to-the-line-passing-through-3-2-and-
|
# What is the slope of any line perpendicular to the line passing through (3,-2) and (12,19)?
Feb 10, 2016
Slope of any line perpendicular to the line passing through $\left(3, -2\right)$ and $\left(12 , 19\right)$ is $- \frac{3}{7}$
#### Explanation:
If the two points are $\left({x}_{1} , {y}_{1}\right)$ and $\left({x}_{2} , {y}_{2}\right)$, the slope of the line joining them is defined as
$\frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}}$ or $\frac{{y}_{1} - {y}_{2}}{{x}_{1} - {x}_{2}}$
As the points are $\left(3 , - 2\right)$ and $\left(12 , 19\right)$
the slope of the line joining them is $\frac{19-(-2)}{12-3}$ or $\frac{21}{9}$
i.e. $\frac{7}{3}$
Further product of slopes of two lines perpendicular to each other is $- 1$.
Hence slope of line perpendicular to the line passing through (3,−2) and $\left(12 , 19\right)$ will be $- \frac{1}{\frac{7}{3}}$ or $- \frac{3}{7}$.
|
2021-12-03 18:26:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5806974172592163, "perplexity": 202.1519430115765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362918.89/warc/CC-MAIN-20211203182358-20211203212358-00490.warc.gz"}
|
https://tex.stackexchange.com/questions/416614/pgfplots-legend-wrong-order
|
# pgfplots legend wrong order
I'm having difficulties with setting the order of the legend entries in the following MWE:
% !TeX program = lualatex
\RequirePackage{luatex85}
\documentclass[border=1pt]{standalone}
\usepackage{mathtools}
\usepackage{amssymb}
\usepackage{siunitx}
\usepackage[partial=upright]{unicode-math}
\usepackage{fontspec}
\usepackage{xcolor}
\usepackage{tikz}
\usepackage{pgfplots}
\usepackage[main=ngerman,english]{babel}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
scale only axis,
width=0.475\linewidth,
height=5cm,
xmin=0,
xmax=1,
ymin=0,
ymax=10,
legend style={
at={(0.55,0.95)},
anchor=north,
transpose legend,
legend columns=3,
legend cell align=left,
draw=none % suppress the legend box
},
cycle multiindex* list={
color list\nextlist
mark list*\nextlist}
]
\legend{
\strut $A$,
\strut $B$,
\strut $C$,
\strut $D$,
\strut $E$,
\strut $F$,
}
\end{axis}
\end{tikzpicture}
\end{document}
What I get is:
This is not the order I specified the entries, neither when filling the legend row by row nor when filling column by column.
What I want would be:
A C E
B D F
which is the order I specified the entries, written column by column into the legend.
• Does my answer answer your question? If yes, please consider upvoting it (by clicking on the arrows next to the score) and/or marking it as the accepted answer (by clicking on the checkmark ✓). Otherwise please let us know. Thank you. – Stefan Pinnow Feb 27 '18 at 5:36
• Your answer is fine. Please consider updating it if the issue gets fixed :) – Christoph90 Feb 27 '18 at 18:01
• Thank you for accepting. And of course I'll update it when the issue is fixed. – Stefan Pinnow Feb 27 '18 at 18:26
I think you have found a bug here. But if you remove the trailing comma in the \legend list everything seems to work fine.
Alternatively you could use \addlegendentry commands instead of \legend.
(\legend has higher precedence than \addlegendentry, so in the below code this is not a problem.)
(For the record: I filed this in the PGFPlots Tracker as bug 201.)
% used PGFPlots v1.15
\documentclass[border=1pt]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.15}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
scale only axis,
width=0.475\linewidth,
height=5cm,
xmin=0,
xmax=1,
ymin=0,
ymax=10,
legend columns=2,
transpose legend,
legend style={
at={(0.55,0.95)},
anchor=north,
legend cell align=left,
draw=none % suppress the legend box
},
cycle multiindex* list={
color list\nextlist
mark list*\nextlist
},
]
\legend{
\strut $A$,
\strut $B$,
\strut $C$,
\strut $D$,
\strut $E$,
\strut $F$% <-- removed the comma
}
\end{axis}
\end{tikzpicture}
\end{document}
|
2019-10-18 21:55:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6330746412277222, "perplexity": 7862.784834293279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684854.67/warc/CC-MAIN-20191018204336-20191018231836-00131.warc.gz"}
|
https://cs.stackexchange.com/questions/144186/how-many-functions-require-precisely-n2-gates
|
How many functions require precisely $n^2$ gates?
I'm trying to determine an asymptotic bound on the cardinality of the following set of functions: the functions with $$n$$-bit inputs and $$\{0,1\}$$ output that require precisely $$n^2$$ NAND gates. I'm trying to show that this is $$2^{o(n^3)}$$.
I've thought about just counting all functions, but that's too big: $$2^{2^n}$$. I've thought about all the ways to wire up $$n^2$$ gates, and the best bound I can think of is to reason that each NAND has two wires in and one out. So each gate entails $$\binom{n^2+n}{3}$$ choices of wires in and out, by a rough over-count. Then you make a choice for each gate, so that's $$\binom{n^2+n}{3}^{n^2+n}$$ by another rough over-count. But I believe this grows faster than $$n!$$, which is already worse than $$2^n$$.
But any time I try to determine a more precise count of this set I just can't see the things I should count. When I think about writing a recurrence relation, that seems hopeless to write, let alone solve.
Note that $$\binom{n^2+n}{3}^{n^2+n} = O(n^2)^{O(n^2)} = 2^{O(n^2\log n)} = 2^{o(n^3)}.$$ In fact, using this kind of calculation you can show that most $$n$$-bit functions require circuits of size $$\Omega(2^n/n)$$: there are only $$2^{O(s \log s)}$$ circuits with $$s$$ gates, which is far fewer than the $$2^{2^n}$$ functions unless $$s = \Omega(2^n/n)$$.
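For very small $$n$$ one can get a feel for these counts by brute force. A minimal Python sketch that enumerates NAND circuits and records the fewest gates seen for each reachable truth table (purely illustrative; the search blows up quickly):

```python
from collections import Counter
from itertools import combinations_with_replacement

def min_gate_counts(n, k):
    """Brute-force the minimum NAND-gate count of every function reachable
    with at most k gates on n inputs (the inputs themselves cost 0 gates)."""
    m = 2 ** n
    inputs = [tuple((a >> i) & 1 for a in range(m)) for i in range(n)]
    best = {t: 0 for t in inputs}

    def extend(wires, used):
        if used == k:
            return
        for x, y in combinations_with_replacement(wires, 2):
            w = tuple(1 - (a & b) for a, b in zip(x, y))   # NAND of two wires
            if w not in best or used + 1 < best[w]:
                best[w] = used + 1
            extend(wires + [w], used + 1)

    extend(inputs, 0)
    return Counter(best.values())   # {gate count: number of functions}

print(min_gate_counts(2, 4))
```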
|
2021-11-28 03:07:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9043887853622437, "perplexity": 262.56179650646516}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358443.87/warc/CC-MAIN-20211128013650-20211128043650-00575.warc.gz"}
|
https://imathworks.com/tex/tex-latex-can-we-make-ligatures-copy-and-pastable/
|
# [Tex/LaTex] Can we make ligatures copy-and-pastable?
Tags: copy/paste, ligatures
Is there a straightforward way to make ligatures more copy-and-pastable? I know that by using
\usepackage[T1]{fontenc}
that many glyphs like accented and umlauted characters become copy-and-pastable from a pdf.
But, for example, the word "five" is typeset with an "fi" ligature (a merging of the two letters into one) and I'm unable to copy this word from the pdf and paste into a text editor. The fontenc package doesn't seem to help with this.
Here is my MWE of the issue. I am using Adobe Reader X to read, and Windows with TeXnicCenter.
\documentclass[12pt]{article}
\begin{document}
five
\end{document}
I have tested and cannot successfully paste into TeXnicCenter, MS Word, or the Firefox address bar.
In general, to enable copy/paste from pdftex-generated PDF, put this in the preamble:
\input{glyphtounicode}
\pdfgentounicode=1
This makes pdftex emit ToUnicode CMaps, so ligatures such as "fi" paste as their constituent letters.
|
2022-11-29 05:18:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7433987259864807, "perplexity": 5660.789413438453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00715.warc.gz"}
|
https://brilliant.org/problems/dangerous-squares/
|
# Dangerous Squares
Find all positive integers $$n$$ such that $$12n -119$$ and $$75n -539$$ are both perfect squares.
Let $$N$$ be the sum of all possible values of $$n$$. Find $$N$$.
|
2017-05-22 15:42:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43087488412857056, "perplexity": 143.95597378596227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605188.47/warc/CC-MAIN-20170522151715-20170522171715-00064.warc.gz"}
|
https://kb.osu.edu/dspace/handle/1811/15827
|
# THE ELECTRONIC ABSORPTION SPECTRUM OF THE 0-0 BAND OF THIONAPHTHENE.
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/15827
File: 1969-S-12.jpg (164.8 KB, JPEG image)
Title: THE ELECTRONIC ABSORPTION SPECTRUM OF THE 0-0 BAND OF THIONAPHTHENE.
Creators: Lombardi, John R.; Hartford, Allen
Issue Date: 1969
Publisher: Ohio State University
Abstract: The 0-0 band of the $\pi^{\ast} \leftarrow \pi$ transition of thionaphthene at $2936\,\AA$ was recorded using an 8-meter Czerny mount spectrometer with a resolving power of approximately 500,000 and a linear dispersion on the order of $1\ cm/\AA$. The main features of the spectrum include Q and P sub-bands and two intense maxima. Using computer simulation techniques it has been possible to determine the excited state rotational constants to a high degree of accuracy and to demonstrate conclusively the hybrid character of the band. The latter yields the direction of the transition moment. Using ground state rotational constants of $A'' = 0.104335\ cm^{-1}$, $B'' = 0.042772\ cm^{-1}$ and $C'' = 0.030336\ cm^{-1}$, as determined from assumed bond angles and lengths, the excited state values obtained from the best fit of the computed with the experimental contour are $A' = 0.100182\ cm^{-1}$, $B' = 0.042157\ cm^{-1}$ and $C' = 0.029671\ cm^{-1}$. The prominent peak approximately $3\ cm^{-1}$ to the blue of the main peak cannot be fit by a pure A-type computation and is found to result from a B-type contribution, while most of the rotational fine structure is due to an A-type band. Mixing of various amounts of A-type and B-type character yields the direction of the transition moment.
Description: Author Institution: The William Albert Noyes Laboratory, Department of Chemistry, University of Illinois
URI: http://hdl.handle.net/1811/15827
Other Identifiers: 1969-S-12
|
2016-12-05 00:44:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6994745135307312, "perplexity": 1837.325523743399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541517.94/warc/CC-MAIN-20161202170901-00200-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://www.lastfm.es/user/Negah77/library/music/Tame+Impala/_/It+Is+Not+Meant+to+Be?setlang=es
|
# Collection
Music » Tame Impala »
## It Is Not Meant to Be
35 scrobbles | Go to the track page
Tracks (35)
Track Album Duration Date
It Is Not Meant to Be 5:21 18 Jul 2012, 12:32
It Is Not Meant to Be 5:21 2 May 2012, 9:52
It Is Not Meant to Be 5:21 16 Feb 2012, 15:30
It Is Not Meant to Be 5:21 11 Jan 2012, 8:50
It Is Not Meant to Be 5:21 3 Jan 2012, 13:03
It Is Not Meant to Be 5:21 31 Oct 2011, 15:56
It Is Not Meant to Be 5:21 17 Oct 2011, 23:00
It Is Not Meant to Be 5:21 23 Aug 2011, 9:21
It Is Not Meant to Be 5:21 26 Jul 2011, 14:28
It Is Not Meant to Be 5:21 25 Jul 2011, 8:14
It Is Not Meant to Be 5:21 27 Apr 2011, 10:35
It Is Not Meant to Be 5:21 13 Jan 2011, 13:21
It Is Not Meant to Be 5:21 10 Jan 2011, 15:16
It Is Not Meant to Be 5:21 30 Dec 2010, 22:41
It Is Not Meant to Be 5:21 24 Dec 2010, 22:43
It Is Not Meant to Be 5:21 22 Dec 2010, 14:09
It Is Not Meant to Be 5:21 20 Dec 2010, 13:41
It Is Not Meant to Be 5:21 17 Dec 2010, 23:40
It Is Not Meant to Be 5:21 13 Dec 2010, 9:48
It Is Not Meant to Be 5:21 2 Dec 2010, 11:42
It Is Not Meant to Be 5:21 10 Nov 2010, 13:11
It Is Not Meant to Be 5:21 9 Nov 2010, 14:33
It Is Not Meant to Be 5:21 28 Oct 2010, 15:05
It Is Not Meant to Be 5:21 26 Oct 2010, 12:05
It Is Not Meant to Be 5:21 25 Oct 2010, 14:54
It Is Not Meant to Be 5:21 24 Oct 2010, 21:57
It Is Not Meant to Be 5:21 21 Oct 2010, 22:00
It Is Not Meant to Be 5:21 20 Oct 2010, 17:37
It Is Not Meant to Be 5:21 20 Oct 2010, 13:06
It Is Not Meant to Be 5:21 20 Oct 2010, 11:28
It Is Not Meant to Be 5:21 19 Oct 2010, 22:13
It Is Not Meant to Be 5:21 19 Oct 2010, 21:16
It Is Not Meant to Be 5:21 19 Oct 2010, 7:36
It Is Not Meant to Be 5:21 18 Oct 2010, 20:58
It Is Not Meant to Be 5:21 12 Oct 2010, 9:42
|
2013-12-22 00:09:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9366928935050964, "perplexity": 2676.488023315638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345776447/warc/CC-MAIN-20131218054936-00037-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://www.tutorialspoint.com/digital_signal_processing/dsp_operations_on_signals_integration.htm
|
DSP - Operations on Signals: Integration
Integration of a signal means accumulating (summing) the signal over time up to the present instant, producing a modified signal. Mathematically, this can be represented as:
$$x(t)\rightarrow y(t) = \int_{-\infty}^{t}x(\tau)\,d\tau$$
Here also, in most cases we can carry out the integration analytically to find the resulting signal, but for signals with simple rectangular shapes the result can be read off directly from the graph. As with differentiation, we will refer to a table to get the result quickly.
Original Signal | Integrated Signal
1 (from t = 0) | Ramp
Impulse | Step
Step | Ramp
Example
Let us consider a signal $x(t) = u(t)-u(t-3)$. It is shown in Fig-1 below. Clearly, it is a combination of two step signals. Now we will integrate it. Referring to the table, integration of a step signal yields a ramp signal.
However, we will calculate it mathematically,
$y(t) = \int_{-\infty}^{t}x(\tau)\,d\tau$
$= \int_{-\infty}^{t}[u(\tau)-u(\tau-3)]\,d\tau$
$= \int_{-\infty}^{t}u(\tau)\,d\tau-\int_{-\infty}^{t}u(\tau-3)\,d\tau$
$= r(t)-r(t-3)$
The result is plotted as shown in Fig-2.
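The result can also be verified numerically. A short Python check (the grid is an arbitrary illustrative choice):

```python
import numpy as np

t = np.linspace(-1, 5, 601)
x = (t >= 0).astype(float) - (t >= 3).astype(float)   # u(t) - u(t-3)
y = np.cumsum(x) * (t[1] - t[0])                      # running integral of x
ramp = np.clip(t, 0, None) - np.clip(t - 3, 0, None)  # r(t) - r(t-3)
print(np.max(np.abs(y - ramp)))   # small, on the order of the grid spacing
```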
|
2020-07-11 13:43:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8955036401748657, "perplexity": 1012.8475956679632}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655933254.67/warc/CC-MAIN-20200711130351-20200711160351-00370.warc.gz"}
|
https://www.hydrol-earth-syst-sci.net/22/5243/2018/
|
Hydrology and Earth System Sciences: an interactive open-access journal of the European Geosciences Union
Hydrol. Earth Syst. Sci., 22, 5243-5257, 2018
https://doi.org/10.5194/hess-22-5243-2018
Research article | 15 Oct 2018
# Value of uncertain streamflow observations for hydrological modelling
Simon Etter1, Barbara Strobl1, Jan Seibert1,2, and H. J. Ilja van Meerveld1
• 1Department of Geography, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
• 2Department of Aquatic Sciences and Assessment, Swedish University of Agricultural Sciences, P.O. Box 7050, 75007 Uppsala, Sweden.
Abstract
Previous studies have shown that hydrological models can be parameterised using a limited number of streamflow measurements. Citizen science projects can collect such data for otherwise ungauged catchments but an important question is whether these observations are informative given that these streamflow estimates will be uncertain. We assess the value of inaccurate streamflow estimates for calibration of a simple bucket-type runoff model for six Swiss catchments. We pretended that only a few observations were available and that these were affected by different levels of inaccuracy. The level of inaccuracy was based on a log-normal error distribution that was fitted to streamflow estimates of 136 citizens for medium-sized streams. Two additional levels of inaccuracy, for which the standard deviation of the error distribution was divided by 2 and 4, were used as well. Based on these error distributions, random errors were added to the measured hourly streamflow data. New time series with different temporal resolutions were created from these synthetic streamflow time series. These included scenarios with one observation each week or month, as well as scenarios that are more realistic for crowdsourced data that generally have an irregular distribution of data points throughout the year, or focus on a particular season. The model was then calibrated for the six catchments using the synthetic time series for a dry, an average and a wet year. The performance of the calibrated models was evaluated based on the measured hourly streamflow time series. The results indicate that streamflow estimates from untrained citizens are not informative for model calibration. However, if the errors can be reduced, the estimates are informative and useful for model calibration. As expected, the model performance increased when the number of observations used for calibration increased. The model performance was also better when the observations were more evenly distributed throughout the year. This study indicates that uncertain streamflow estimates can be useful for model calibration but that the estimates by citizen scientists need to be improved by training or more advanced data filtering before they are useful for model calibration.
1 Introduction
The application of hydrological models usually requires several years of precipitation, temperature and streamflow data for calibration, but these data are only available for a limited number of catchments. Therefore, several studies have addressed the question: how many data points are needed to calibrate a model for a catchment? Yapo et al. (1996) and Vrugt et al. (2006), using stable parameters as a criterion for satisfying model performance, concluded that most of the information to calibrate a model is contained in 2–3 years of continuous streamflow data and that no more value is added when using more than 8 years of data. Perrin et al. (2007), using the Nash–Sutcliffe efficiency criterion (NSE), showed that streamflow data for 350 randomly sampled days out of a 39-year period were sufficient to obtain robust model parameter values for two bucket-type models, TOPMO, which is derived from TOPMODEL concepts (Michel et al., 2003), and GR4J (Perrin et al., 2003). Brath et al. (2004), using the volume error, relative peak error and time-to-peak error, concluded that at least 3 months of continuous data were required to obtain a reliable calibration. Other studies have shown that discontinuous streamflow data can be informative for constraining model parameters (Juston et al., 2009; Pool et al., 2017; Seibert and Beven, 2009; Seibert and McDonnell, 2015). Juston et al. (2009) used a multi-objective calibration that included groundwater data and concluded that the information content of a subset of 53 days of streamflow data was the same as for the 1065 days of data from which the subset was drawn. Seibert and Beven (2009), using the NSE criterion, found that model performance reached a plateau for 8–16 streamflow measurements collected throughout a 1-year period. They furthermore showed that the use of streamflow data for one event and the corresponding recession resulted in a similar calibration performance as the six highest measured streamflow values during a 2-month period.
These studies had different foci and used different model performance metrics, but nevertheless their results are encouraging for the calibration of hydrological models for ungauged basins based on a limited number of high-quality measurements. However, the question remains: how informative are low(er)-quality data? An alternative approach to high-quality streamflow measurements in ungauged catchments is to use citizen science. Citizen science has been proven to be a valuable tool to collect (Dickinson et al., 2010) or analyse (Koch and Stisen, 2017) various kinds of environmental data, including hydrological data (Buytaert et al., 2014). Citizen science approaches use simple methods to enable a large number of citizens to collect data and allow local communities to contribute data to support science and environmental management. Citizen science approaches can be particularly useful in light of the declining stream gauging networks (Ruhi et al., 2018; Shiklomanov et al., 2002) and to complement the existing monitoring networks. However, citizen science projects that collect streamflow or stream level data in flowing water bodies are still rare. Examples are the CrowdHydrology project (Lowry and Fienen, 2013), SmartPhones4Water in Nepal (Davids et al., 2018) and a project in Kenya (Weeser et al., 2018), which all ask citizens to read stream levels at staff gauges and to send these via an app or as a text message to a central database. Estimating streamflow is obviously more challenging than reading levels from a staff gauge but citizens can apply the stick or float method, where they measure the time it takes for a floating object (e.g. a small stick) to travel a given distance to estimate the flow velocity. Combined with estimates for the width and the average depth of the stream, this allows them to obtain a rough estimate of the streamflow. However, these streamflow estimates may be so inaccurate that they are not useful for model calibration. It is therefore necessary to not only evaluate the requirements of hydrological models in terms of the amount and temporal resolution of data, but also in terms of the achievable quality by the citizen scientists before starting a citizen science project.
The effects of rating curve uncertainty on model calibration (e.g. McMillan et al., 2010; Horner et al., 2018) and the value of sparse datasets (Davids et al., 2017) have been quantified in recent studies. However, the potential value of sparse datasets in combination with large uncertainties (such as those from crowdsourced streamflow estimates) has not been evaluated so far. Therefore, the aim of this study was to determine the effects of observation inaccuracies on the calibration of bucket-type hydrological models when only a limited number of observations are available. The specific objectives of this paper are to determine (i) whether the streamflow estimates from citizen scientists are informative for model calibration or if these errors need to be reduced (e.g. through training) to become useful and (ii) how the timing of the streamflow observations affects the calibration of a hydrological model. The latter is important for citizen science projects, as it provides guidance on whether it is useful to encourage citizens to contribute streamflow observations during a specific time of the year.
2 Methods
To assess the potential value of crowdsourced streamflow estimates for hydrological model calibration, the HBV (Hydrologiska Byråns Vattenbalansavdelning) model (Bergström, 1976) was calibrated against streamflow time series for six Swiss catchments, as well as for different subsets of the data that represent citizen science data in terms of errors and temporal resolution. Similar to the approach used in several recent studies (Ewen et al., 2008; Finger et al., 2015; Fitzner et al., 2013; Haberlandt and Sester, 2010; Seibert and Beven, 2009), we pretended that only a small subset of the data were available for model calibration. In addition, various degrees of inaccuracy were assumed. The value of these data for model calibration was then evaluated by comparing the model performance for these subsets of data to the performance of the model calibrated with the complete measured streamflow time series.
Table 1. Characteristics of the six Swiss catchments used in this study. For the location of the study catchments, see Fig. 1. Long-term averages are for the period 1974–2014, except for Verzasca for which the long-term average is for the 1990–2014 period. Regime types are classified according to Aschwanden and Weingartner (1985).
1 In Verzasca, Allenbach and Riale di Calneggia, some streamflow : rainfall ratios are > 1 because the weather stations are located outside the catchments and precipitation is highly variable in alpine terrain.
## 2.1 HBV model
The HBV model was originally developed at the Hydrologiska Byråns Vattenbalansavdelning unit at the Swedish Meteorological and Hydrological Institute (SMHI) by Bergström (1976). The HBV model is a bucket-type model that represents snow, soil, groundwater and stream routing processes in separate routines. In this study, we used the version HBV-light (Seibert and Vis, 2012).
## 2.2 Catchments
The HBV-light model was set up for six 24–186 km2 catchments in Switzerland (Table 1 and Fig. 1). The catchments were selected based on the following criteria: (i) there is little anthropogenic influence, (ii) they are gauged at a single location, (iii) they have reliable streamflow data during high flow and low flow conditions (i.e. no complete freezing during winter and a cross section that allows accurate streamflow measurement at low flows) and (iv) there are no glaciers. The six selected catchments (Table 1) represent different streamflow regime types (Aschwanden and Weingartner, 1985). The snow-dominated highest elevation catchments (Allenbach and Riale di Calneggia) have the largest seasonality in streamflow, i.e. the biggest differences between the long-term maximum and minimum Pardé coefficients, followed by the rain- and snow-dominated Verzasca catchment. The rain-dominated catchments (Murg, Guerbe and Mentue) have the lowest seasonal variability in streamflow (Table 1). The mean elevation of the catchments varies from 652 to 2003 m a.s.l. (Table 1). The elevation range of each individual catchment was divided into 100 m elevation bands for the simulations.
Figure 1. Location of the six study catchments in Switzerland. Shading indicates whether the catchment is located on the north or south side of the Alps. See Table 1 for the characteristics of the study catchments.
## 2.3 Measured data
Hourly runoff time series (based on 10 min measurements) for the six study catchments were obtained from the Federal Office for the Environment (FOEN; see Table 1 for the gauging station numbers). The average hourly areal precipitation amounts were extracted for each study catchment from the gridded CombiPrecip dataset from MeteoSwiss (Sideris et al., 2014). This dataset combines gauge and radar precipitation measurements at an hourly timescale and 1 km2 spatial resolution and is available for the time period since 2005.
We used hourly temperature data from the automatic monitoring network of MeteoSwiss (see Table 1 for the stations) and applied a gradient of −6 °C per 1000 m to adjust the temperature of each weather station to the mean elevation of the catchment. Within the HBV model, the temperature was then adjusted for the different elevation bands using a calibrated lapse rate.
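This station-to-catchment adjustment is a simple linear shift; a minimal sketch (the elevations below are hypothetical illustration values):

```python
# Fixed lapse rate used to shift station temperatures to the catchment mean
# elevation: -6 °C per 1000 m.
LAPSE_RATE = -6.0 / 1000.0  # °C per metre

def adjust_temperature(t_station_c, z_station_m, z_target_m):
    """Shift a station temperature (°C) to a target elevation (m a.s.l.)."""
    return t_station_c + LAPSE_RATE * (z_target_m - z_station_m)

# Hypothetical example: station at 550 m a.s.l., catchment mean at 1200 m a.s.l.
t_catchment = adjust_temperature(8.4, 550.0, 1200.0)  # 8.4 - 3.9 = 4.5 °C
```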
As recommended by Oudin et al. (2005), potential evapotranspiration was calculated using the temperature-based potential evapotranspiration model of McGuinness and Bordne (1972) using the day of the year, the latitude and the temperature. This rather simplistic approach was considered sufficient because this study focused on differences in model performance relative to a benchmark calibration.
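As a rough sketch of this approach, the code below follows the rescaled McGuinness–Bordne formula proposed by Oudin et al. (2005), with extraterrestrial radiation from the standard FAO-56 relations; the exact constants used in the study are an assumption here.

```python
import math

def extraterrestrial_radiation(doy, lat_rad):
    """Daily extraterrestrial radiation Ra (MJ m-2 day-1), FAO-56 relations."""
    gsc = 0.0820                                              # MJ m-2 min-1
    dr = 1 + 0.033 * math.cos(2 * math.pi * doy / 365)        # earth-sun distance
    delta = 0.409 * math.sin(2 * math.pi * doy / 365 - 1.39)  # solar declination
    ws = math.acos(max(-1.0, min(1.0, -math.tan(lat_rad) * math.tan(delta))))
    return (24 * 60 / math.pi) * gsc * dr * (
        ws * math.sin(lat_rad) * math.sin(delta)
        + math.cos(lat_rad) * math.cos(delta) * math.sin(ws)
    )

def pet_oudin(doy, lat_deg, t_mean_c):
    """Potential evapotranspiration (mm day-1) after Oudin et al. (2005):
    PE = Ra / (lambda * rho) * (T + 5) / 100 for T + 5 > 0, else 0."""
    ra = extraterrestrial_radiation(doy, math.radians(lat_deg))
    if t_mean_c + 5 <= 0:
        return 0.0
    return 1000 * ra / (2.45 * 1000) * (t_mean_c + 5) / 100  # m -> mm

# Example: mid-June (day 170) at 47° N with a daily mean temperature of 15 °C
print(round(pet_oudin(170, 47.0, 15.0), 2))  # roughly 3.4 mm/day
```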
Table 2. The calibration years (second most extreme and second closest to average years) and validation years (most extreme and closest to average years) for each catchment. The numbers in parentheses are the ranks over the period 1974–2014 (or 1990–2014 for Verzasca).
## 2.4 Selection of years for model calibration and validation
The model was calibrated for an average, a dry and a wet year to investigate the influence of wetness conditions and the amount of streamflow on the calibration results. The years were selected based on the total streamflow during summer (July–September). The driest and the wettest years of the period 2006–2014 were selected based on the smallest and largest sum of streamflow during the summer. The average streamflow years were selected based on the proximity to the mean summer streamflow for all the years 1974–2014 (1990–2014 for Verzasca). For each catchment, the year that was second closest to the mean summer streamflow, as well as the years with the second lowest and second highest summer streamflow sums, were chosen for model calibration (see Table 2). We did this separately for each catchment because for each catchment a different year was dry, average or wet. For the validation, we chose the year closest to the mean summer streamflow and the years with the lowest and the highest total summer streamflow (see Table 2). We used each of the parameter sets obtained from calibration for the dry, average or wet years to validate the model for each of the three validation years, resulting in nine validation combinations for each catchment (and each dataset, as described below).
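A minimal pandas sketch of this year ranking (the streamflow series is synthetic, and the sketch ignores that the study used a different base period for the average years than for the dry and wet years):

```python
import numpy as np
import pandas as pd

# Illustrative hourly streamflow series; a real application would use the
# FOEN record (random values, reproducible seed).
idx = pd.date_range("2006-01-01", "2014-12-31 23:00", freq="H")
q = pd.Series(np.random.default_rng(1).gamma(2.0, 2.0, len(idx)), index=idx)

summer = q[q.index.month.isin([7, 8, 9])]
totals = summer.groupby(summer.index.year).sum()   # July-Sep total per year

wet_val = totals.idxmax()                          # most extreme -> validation
wet_cal = totals.drop(wet_val).idxmax()            # 2nd most extreme -> calibration
avg_val = (totals - totals.mean()).abs().idxmin()  # closest to mean -> validation
```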
## 2.5 Transformation of datasets to resemble citizen science data quality
### 2.5.1 Errors in crowdsourced streamflow observations
Strobl et al. (2018) asked 517 participants to estimate streamflow based on the stick method at 10 streams in Switzerland. Here we use the estimates for the medium-sized streams Töss, Sihl and Schanzengraben in the Canton of Zurich and the Magliasina in Ticino (n=136), which had a similar streamflow range at the time of the estimations (2.6–28 m3 s−1) as the mean annual streamflow of the six streams used for this study (1.2–10.8 m3 s−1). We calculated the streamflow from the estimated width, depth and flow velocities using a factor of 0.8 to adjust the surface flow velocity to the average velocity (Harrelson et al., 1994). The resulting streamflow estimates were normalised by dividing them by the measured streamflow. We then combined the normalised estimates of all four rivers and log-transformed the relative estimates. A normal distribution with a mean of 0.12 and a standard deviation of 1.30 fits the distribution of the log-transformed relative estimates well (standard error of the mean: 0.11, standard error of the standard deviation: 0.08; Fig. 2).
Figure 2. Fit of the normal distribution to the frequency distribution of the log-transformed relative streamflow estimates (ratio of the estimated streamflow and the measured streamflow).
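The error distribution can be derived with a few lines; the arrays below are hypothetical stand-ins for the survey data of Strobl et al. (2018):

```python
import numpy as np

# Illustrative citizen estimates: surface velocity (m/s), width (m), mean
# depth (m), and the measured streamflow (m3/s) at the same times.
v_surface = np.array([0.9, 1.4, 0.6])
width = np.array([12.0, 18.0, 9.0])
depth = np.array([0.8, 1.1, 0.5])
q_measured = np.array([7.2, 26.0, 2.9])

q_est = 0.8 * v_surface * width * depth   # 0.8: surface-to-mean velocity factor
log_rel = np.log(q_est / q_measured)      # log-transformed relative estimates

mu, sigma = log_rel.mean(), log_rel.std(ddof=1)   # study: mu ~0.12, sigma ~1.30
```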
To create synthetic datasets with data quality characteristics that represent the observed crowdsourced streamflow estimates, we assumed that the errors in the streamflow estimates are uncorrelated (as they are likely provided by different people). For each time step, we randomly selected a relative error value from the log-normal distribution of the relative estimates (Fig. 2) and multiplied the measured streamflow by this relative error. To simulate the effect of training and to obtain time series with different data quality, two additional streamflow time series were created with the standard deviation divided by 2 (standard deviation of 0.65) and by 4 (standard deviation of 0.33). This reduces the spread in the data (but does not change the small systematic overestimation of the streamflow), so that large outliers are still possible but less likely. To summarise, we tested the following four cases (a generation sketch follows the list).
• No error: the data measured by the FOEN, assumed to be (almost) error-free, the benchmark in terms of quality.
• Small error: random errors according to the log-normal distribution of the surveys with the standard deviation divided by 4.
• Medium error: random errors according to the log-normal distribution of the surveys with the standard deviation divided by 2.
• Large error: typical errors of citizen scientists, i.e. random errors according to the log-normal distribution of errors from the surveys.
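A minimal sketch of this multiplicative error generation, assuming independent log-normally distributed relative errors (`q_obs` holds illustrative measured values and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)           # arbitrary seed
MU, SIGMA = 0.12, 1.30                    # fitted distribution (Fig. 2)

def add_errors(q, sigma_divisor=1):
    """Multiply each measured value by an independent log-normal error;
    sigma_divisor = 1, 2, 4 -> large, medium, small error cases."""
    log_err = rng.normal(MU, SIGMA / sigma_divisor, size=len(q))
    return q * np.exp(log_err)

q_obs = np.array([3.1, 2.8, 5.6, 12.4])   # illustrative measured values (m3/s)
q_large, q_medium, q_small = (add_errors(q_obs, d) for d in (1, 2, 4))
```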
### 2.5.2 Filtering of extreme outliers
Usually some form of quality control is used before citizen science data are analysed. Here, we used a very simple check to remove unrealistic outliers from the synthetic datasets. This check was based on the likely minimum and maximum streamflow for a given catchment area. We defined an upper limit of possible streamflow values as a function of the catchment area using the dataset of maximum streamflow from 1500 Swiss catchments provided by Scherrer AG, Hydrologie und Hochwasserschutz (2017). To account for the different precipitation intensities north and south of the Alps, different curves were created for the catchments on each side of the Alps. All streamflow observations (i.e. modified streamflow measurements) above the maximum observed streamflow for a particular catchment size, including a 20 % buffer (Fig. S1), were replaced by the value of the maximum streamflow for a catchment of that size. This affected less than 0.5 % of all data points. A similar procedure was used for low flows based on a dataset of the FOEN with the lowest recorded mean streamflows over 7 days, but this resulted in no replacements.
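The capping step can be sketched as follows; the power-law envelope is purely illustrative, not the curve fitted to the Scherrer AG dataset:

```python
import numpy as np

def cap_high_outliers(q_obs, area_km2, envelope_max, buffer=0.20):
    """Replace observations above the regional maximum streamflow (plus a
    20 % buffer) for this catchment size by the regional maximum itself."""
    q_max = envelope_max(area_km2)
    q_obs = np.asarray(q_obs, dtype=float)
    return np.where(q_obs > q_max * (1 + buffer), q_max, q_obs)

def example_envelope(area_km2):
    """Illustrative power-law envelope (m3/s for area in km2)."""
    return 10.0 * area_km2 ** 0.7

print(cap_high_outliers([3.0, 5000.0], 100.0, example_envelope))
# -> the 5000 m3/s outlier is replaced by the ~251 m3/s regional maximum
```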
Table 3. Weights assigned to specific seasons, days and times of the day for the random selection of data points for Crowd52 and Crowd12. The weights for each hour were multiplied and normalised. We then used them as probabilities for the individual hours. For times without daylight, the probability was set to zero.
### 2.5.3 Temporal resolution of the observations
Data entries from citizen scientists are not as regular as data from sensors with a fixed temporal resolution. Therefore, we decided to test eight scenarios with a different temporal resolution and distribution of the data throughout the year to simulate different patterns in citizen contributions.
• Hourly: one data point per hour (8760 ≤ n ≤ 8784, depending on the year).
• Weekly: one data point per week, every Saturday, randomly between 06:00 and 20:00 (52 ≤ n ≤ 53).
• Monthly: one data point per month on the 15th of the month, randomly between 06:00 and 20:00 (n=12).
• IntenseSummer: one data point every other day from July until September, randomly between 06:00 and 20:00 (∼15 observations per month, n=46).
• WeekendSummer: one data point each Saturday and each Sunday between May and October, randomly between 06:00 and 20:00 (52 ≤ n ≤ 54).
• WeekendSpring: one data point on each Saturday and each Sunday between March and August, randomly between 06:00 and 20:00 (52 ≤ n ≤ 54).
• Crowd52: 52 random data points during daylight (in order to be comparable to the Weekly, IntenseSummer, WeekendSummer and WeekendSpring time series).
• Crowd12: 12 random data points during daylight (comparable to the Monthly data).
Except for the hourly data, these scenarios were based on our own experiences within the CrowdWater project (https://www.crowdwater.ch, last access: 3 October 2018) and information from the CrowdHydrology project (Lowry and Fienen, 2013). The hourly dataset was included to test the effect of errors when the temporal resolution of the data is optimal (i.e. by comparing simulations for the models calibrated with the hourly FOEN data and those calibrated with hourly data with errors). In the two scenarios Crowd52 and Crowd12, with random intervals between data points, we assigned higher probabilities for periods when people are more likely to be outdoors (i.e. higher probabilities for summer than winter, higher probabilities for weekends than weekdays, higher probabilities outside office hours; Table 3). Times without daylight (dependent on the season) were always excluded. We used the same selection of days, including the same times of the day for each of the four different error groups, years and catchments to allow comparison of the different model results.
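A sketch of this weighted sampling for a Crowd52-like series; the weights and the fixed 06:00–20:00 daylight window are illustrative simplifications, not the actual Table 3 values:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
hours = pd.date_range("2010-01-01", "2010-12-31 23:00", freq="H")

# Illustrative weights in the spirit of Table 3 (the actual values differ):
season_w = {m: (2.0 if m in (6, 7, 8) else 0.5 if m in (12, 1, 2) else 1.0)
            for m in range(1, 13)}
weekday_w = np.where(hours.dayofweek >= 5, 2.0, 1.0)       # weekends more likely
daylight = ((hours.hour >= 6) & (hours.hour <= 20)).astype(float)  # crude proxy

w = np.array([season_w[m] for m in hours.month]) * weekday_w * daylight
p = w / w.sum()                                            # normalised weights

# 52 distinct observation times, drawn with the hour-specific probabilities
crowd52 = hours[np.sort(rng.choice(len(hours), size=52, replace=False, p=p))]
```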
## 2.6 Model calibration
For each of the 1728 cases (6 catchments, 3 calibration years, 4 error groups, 8 temporal resolutions), the HBV model was calibrated by optimising the overall consistency performance POA (Finger et al., 2011) using a genetic optimisation algorithm (Seibert, 2000). The overall consistency performance POA is the mean of four objective functions with an optimum value of 1: (i) the NSE, (ii) the NSE for the logarithm of streamflow, (iii) the volume error and (iv) the mean absolute relative error (MARE). The parameters were calibrated within their typical ranges (see Table S1 in the Supplement). To account for parameter uncertainty, the calibration was performed 100 times, which resulted in 100 parameter sets for each case. For each case, the preceding year was used as the warm-up period. For the Crowd52 and Crowd12 time series, we used 100 different random selections of times, whereas for the regularly spaced time series the same times were used for each case.
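The objective function can be sketched as follows; the exact scaling of the volume-error and MARE terms to an optimum of 1 is an assumption here (see Finger et al., 2011, for the original definitions):

```python
import numpy as np

def poa(q_obs, q_sim, eps=1e-6):
    """Overall consistency performance: mean of four measures, each with an
    optimum of 1. The volume-error and MARE rescalings are assumed variants."""
    q_obs = np.asarray(q_obs, dtype=float)
    q_sim = np.asarray(q_sim, dtype=float)
    nse = 1 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
    lo, ls = np.log(q_obs + eps), np.log(q_sim + eps)
    lognse = 1 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)
    vol = 1 - abs(q_sim.sum() - q_obs.sum()) / q_obs.sum()
    mare = 1 - np.mean(np.abs(q_sim - q_obs) / (q_obs + eps))
    return float(np.mean([nse, lognse, vol, mare]))

q_o = np.array([1.2, 3.4, 2.2, 0.9])   # illustrative observed values
q_s = np.array([1.0, 3.1, 2.5, 1.1])   # illustrative simulated values
print(round(poa(q_o, q_s), 3))
```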
## 2.7 Model validation and analysis of the model results
The 100 parameters from the calibration for each case were used to run the model for the validation years (Table 2). For each case (i.e. each catchment, year, error magnitude and temporal resolution), we determined the median validation POA for the 100 parameter sets for each validation year. We analysed the validation results of all years combined and for all nine combinations of dry, mean and wet years separately.
Because the focus of this study was on the value of limited inaccurate streamflow observations for model calibration, i.e. the difference in the performance of the models calibrated with the synthetic data series compared to the performance of the models calibrated with hourly FOEN data, all model validation performances are expressed relative to the average POA of the model calibrated with the hourly FOEN data (our upper benchmark, representing the fully informed case when continuous high quality streamflow data are available). A relative POA of 1 indicates that the model performance is as good as the performance of the model calibrated with the hourly FOEN data, whereas lower POA values indicate a poorer performance.
In humid climates, the input data (precipitation and temperature) often dictate that model simulations cannot be too far off as long as the water balance is respected (Seibert et al., 2018). To assess the value of limited inaccurate streamflow data for model calibration compared to a situation without any streamflow data, a lower benchmark (Seibert et al., 2018) was used. Here, the lower benchmark was defined as the median performance of the model run with 1000 random parameter sets. By running the model with 1000 randomly chosen parameter sets, we represent a situation where no streamflow data for calibration are available and the model is driven only by the temperature and precipitation data. We used 1000 different parameter sets to cover most of the model variability due to the different parameter combinations. The Mann–Whitney U test was used to evaluate whether the median POA for a specific error group and temporal resolution of the data was significantly different from the median POA for the lower benchmark (i.e. the model runs with random parameters). We furthermore checked for differences in model performance for models calibrated with the same data errors but different temporal resolutions using a Kruskal–Wallis test. By applying a Dunn–Bonferroni post hoc test (Bonferroni, 1936; Dunn, 1959, 1961), we analysed which of the validation results were significantly different from each other.
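These tests map directly onto SciPy; the POA samples below are random placeholders for the actual validation results:

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(3)
# Placeholder POA samples for one error/resolution combination and for the
# lower benchmark (random-parameter runs):
poa_case = rng.normal(0.55, 0.05, 18)
poa_lower = rng.normal(0.40, 0.10, 18)

stat, p = mannwhitneyu(poa_case, poa_lower, alternative="greater")
print(p < 0.05)   # significantly better than the lower benchmark?

# Differences between temporal resolutions within one error group:
groups = [rng.normal(m, 0.05, 18) for m in (0.60, 0.55, 0.50, 0.45)]
h, p_kw = kruskal(*groups)
# A Dunn-Bonferroni post hoc test (e.g. scikit-posthocs' posthoc_dunn with
# p_adjust="bonferroni") can then identify which pairs differ.
```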
The random generation of the 100 crowdsourced-like datasets (i.e. the Crowd52 and Crowd12 scenarios) for each of the catchments and year characteristics resulted in time series with different numbers of high flow estimates. In order to find out whether the inclusion of more high flow values resulted in a better validation performance, we defined the threshold for high flows as the streamflow value that was exceeded 10 % of the time in the hourly FOEN streamflow dataset. The Crowd52 and Crowd12 datasets were then divided into a group that had more than the expected 10 % high flow observations and a group that had fewer high flow observations. To determine if more high flow data improve model performance, the Mann–Whitney U test was used to compare the relative median POA of the two groups.
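A sketch of this high-flow split (all inputs are random placeholders for the FOEN record, the 100 Crowd52 subsets and their validation performances):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
q_hourly = rng.gamma(2.0, 2.0, 8760)       # placeholder for the FOEN record
q10 = np.quantile(q_hourly, 0.90)          # flow exceeded 10 % of the time

# Placeholders: 100 Crowd52 subsets and the matching validation POA values
samples = [rng.choice(q_hourly, 52, replace=False) for _ in range(100)]
poa_vals = rng.normal(0.5, 0.1, 100)

share = np.array([np.mean(s > q10) for s in samples])  # high-flow fraction
stat, p = mannwhitneyu(poa_vals[share > 0.10], poa_vals[share <= 0.10])
```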
Table 4. Median and full range of the overall consistency performance POA scores for the upper benchmark (hourly FOEN data). The POA values for the dry, average and wet calibration years were used as the upper benchmarks for the evaluation based on the year character (Figs. 6 and S2 in the Supplement); the values in the “overall median” column were used as the benchmarks in the overall median performance evaluation shown in Fig. 4.
Figure 3. Examples of streamflow time series used for calibration with small, medium and large errors and different temporal resolutions (Weekly, Crowd52 and WeekendSpring) for the Mentue in 2010. Large error: adjusted FOEN data with errors resulting from the log-normal distribution fitted to the streamflow estimates from citizen scientists (see Fig. 2). Medium error: same as large error, but the standard deviation of the log-normal distribution was divided by 2. Small error: same as large error, but the standard deviation of the log-normal distribution was divided by 4. The grey line represents the measured streamflow, and the dots the derived time series of streamflow observations. Note that, especially in the large error category, some dots lie outside the figure margins.
Figure 4. Box plots of the median model performance relative to the upper benchmark for all datasets. The grey rectangles around the boxes indicate non-significant differences in median model performance compared to the lower benchmark with random parameter sets. The box represents the 25th and 75th percentiles, the thick horizontal line represents the median, the whiskers extend to 1.5 times the interquartile range below the 25th percentile and above the 75th percentile and the dots represent the outliers. The numbers at the bottom indicate the number of outliers beyond the figure margins; n is the number of streamflow observations used for model calibration. The results for the hourly benchmark FOEN dataset have some spread because the results of the 100 parameter sets were divided by their median performance. A relative POA of 1 indicates that the model performance is as good as the performance of the model calibrated with the hourly FOEN data (upper benchmark).
3 Results
## 3.1 Upper benchmark results
The model was able to reproduce the measured streamflow reasonably well when the complete and unchanged hourly FOEN datasets were used for calibration, although there were a few exceptions. The average validation POA was 0.61 (range: 0.19–0.83; Table 4). The validation performance was poorest for the Guerbe (validation POA=0.19) because several high flow peaks were missed or underestimated by the model for the wet validation year. Similarly, the validation for the Mentue for the dry validation year resulted in a low POA (0.23) because a very distinct peak at the end of the year was missed and summer low flows were overestimated. The third lowest POA value was also for the Guerbe (dry validation year) but was already considerably higher (POA=0.35). Six out of the nine lowest POA values were for dry validation years. Validation for wet years with the models calibrated with data from wet years resulted in the best validation results (i.e. the highest POA values; Table 4).
## 3.2 Effect of errors on the model validation results
Not surprisingly, increasing the errors in the streamflow data used for model calibration led to a decrease in model performance (Fig. 4). For the small error category, the median validation performance was better than the lower benchmark for all temporal resolutions (Fig. 4 and Table S2). For the medium error category, the median validation performance was also better than the lower benchmark for all scenarios except the Crowd12 dataset. For the large error category, only the model calibrated with the Hourly dataset performed significantly better than the lower benchmark (Table 5).
Figure 5. Results (p values) of the Kruskal–Wallis test with Bonferroni post hoc correction to determine the significance of the difference in the median model performance for the data with different temporal resolutions within each data quality group (no error a, small error b, medium error c, and large error d). Blue shades represent the p values. White triangles indicate p values < 0.05 and white stars indicate p values that, when adjusted for multiple comparisons, are still < 0.05.
## 3.3 Effect of the data resolution on the model validation results
The Hourly measurement scenario resulted in the best validation performance for each error group, followed by the Weekly data, and then usually the Crowd52 data (Fig. 4). Although the median validation performance of the models calibrated with the Weekly datasets was better than for the Crowd52 dataset for all error cases, the difference was only statistically significant for the no error category (Fig. 5).
The validation performance of the models calibrated with the Weekly and Crowd52 datasets was better than for the scenarios focused on spring and summer observations (WeekendSpring, WeekendSummer and IntenseSummer). The median model performance for the Weekly dataset was significantly better than for the datasets focusing on spring and summer for the no, small and medium error groups. The median performance of the Crowd52 dataset was significantly better than for all three measurement scenarios focusing on spring or summer only for the small error case (Fig. 5). The model validation performance for the WeekendSummer and IntenseSummer scenarios decreased faster with increasing errors than for the Weekly, Crowd52 or WeekendSpring datasets (Fig. 4). The median validation POA for the models calibrated with the WeekendSpring observations was better than for the models calibrated with the WeekendSummer and IntenseSummer datasets, but the differences were only significant for the small, medium and large error groups. The differences in the model performance results for the observation strategies that focused on summer (IntenseSummer and WeekendSummer) were not significant for any of the error groups (Fig. 5).
The median model performance for the regularly spaced Monthly datasets with 12 observations was similar to the median performance for the three datasets focusing on summer with 46–54 measurements (WeekendSpring, WeekendSummer and IntenseSummer), except for the case of large errors for which the monthly dataset performed worse. The irregularly spaced Crowd12 time series resulted in the worst model performance for each error group, but the difference from the performance for the regularly spaced Monthly data was only significant for the dataset with large errors (Fig. 5).
Figure 6. Median model validation performance for the datasets calibrated and validated both in a dry year and in a wet year. Each horizontal line represents the median model performance for one catchment. The black bold line represents the median for the six catchments. The grey rectangles around the boxes indicate non-significant differences in median model performance for the six catchments compared to the lower benchmark with random parameters. The numbers at the bottom indicate the number of outliers beyond the figure margins. For the individual POA values of the upper benchmark (no error–Hourly dataset) in the different calibration and validation years, see Table 4.
## 3.4 Effect of errors and data resolution on the parameter ranges
For most parameters, the spread in the optimised parameter values was smallest for the upper benchmark. The spread in the parameter values increased with increasing errors in the data used for calibration, particularly for MAXBAS (the routing parameter) but also for some other parameters (e.g. TCALT, TT and BETA). However, for some parameters (e.g. CFMAX, FC and SFCF), the range in the optimised parameter values was mainly affected by the temporal resolution of the data and the number of data points used for calibration. It should be noted, though, that the changes in the parameter ranges differed considerably between the catchments and that the trends were not very clear.
## 3.5 Influence of the calibration and validation year and number of high flow data points on the model performance
The influence of the validation year on the model performance was larger than the effect of the calibration year (Figs. 6 and S2). In general, model performance was poorest for the dry validation years. The model performances of all datasets with fewer observations or larger errors than the Hourly dataset without errors were not significantly better than the lower benchmark for the dry validation years, except for Crowd52 in the no error group when calibrated with data from a wet year. However, even for the wet validation years, some observation scenarios of the no error and small error groups did not lead to significantly better model validation results than the median validation performance for the random parameters. Interestingly, compared to the other calibration and validation year combinations, the IntenseSummer dataset in the no error group resulted in a very good performance when the model was both calibrated and validated for a dry year. The median model performance was, however, not significantly better than the lower benchmark due to the low performance for the Guerbe and Allenbach (outliers beyond the figure margins in Fig. 6). The validation results for these two catchments were the worst for all the no error–IntenseSummer datasets for all calibration and validation year combinations.
For 13 out of the 18 catchment and year combinations, the Crowd52 datasets with fewer than 10 % high streamflow data points led to a better validation performance than the Crowd52 datasets with more high streamflow data points. For six of them, the difference in model performance was significant. For none of the five cases where more high flow data points led to a better model performance was the difference significant. Also when the results were analysed by year character or catchment, there was no improvement when more high flow values were included in the calibration dataset.
4 Discussion
## 4.1 Usefulness of inaccurate streamflow data for hydrological model calibration
In this study, we evaluated the information content of streamflow estimates by citizen scientists for calibration of a bucket-type hydrological model for six Swiss catchments. While the hydroclimatic conditions, the model or the calibration approach might be different in other studies, these results should be applicable for a wide range of cases. However, for physically based spatially distributed models that are usually not calibrated automatically, the use of limited streamflow data would probably benefit from a different calibration approach. Furthermore, our results might not be applicable in arid catchments where rivers become dry for some periods of the year because the linear reservoirs used in the HBV model are not appropriate for such systems.
Streamflow estimates by citizens are sometimes very different from the measured values, and the individual estimates can be disinformative for model calibration (Beven, 2016; Beven and Westerberg, 2011). The results show that if streamflow estimates by citizen scientists were available at a high temporal resolution (hourly), these data would still be informative for the calibration of a bucket-type hydrological model despite their high uncertainties. However, observations at such a high resolution are very unlikely to be obtained in practice. All scenarios that combined the error distribution representing the (untrained) citizen estimates with fewer observations were no better than the lower benchmark (using random parameters). With medium errors, however, one data point per week on average, or regularly spaced monthly data, was informative for model parameterisation. Reducing the standard deviation of the error distribution by a factor of 4 led to a significantly better model performance than the lower benchmark for all observation scenarios.
A reduction in the errors of the streamflow estimates could be achieved by training citizen scientists (e.g. with tutorial videos), by providing improved information about feasible ranges for stream depth, width and velocity, or by giving examples of streamflow values for well-known streams. Filtering of extreme outliers can also reduce the spread of the estimates. This could be based on existing knowledge of feasible streamflow values for a catchment of a given area, or on the amount of rainfall shortly before the estimate was made, which indicates whether streamflow is likely to be higher or lower than for the previous estimate. More detailed research is necessary to test the effectiveness of such methods.
Le Coz et al. (2014) reported an uncertainty in stage–discharge streamflow measurements of around 5 %–20 %. McMillan et al. (2012) summarised streamflow uncertainties from stage–discharge relationships in a more detailed review and gave a range of ±50 %–100 % for low flows, ±10 %–20 % for medium or high (in-bank) flows and ±40 % for out-of-bank flows. The errors for the most extreme outliers in the citizen estimates are considerably larger and can, in the most extreme but rare cases, differ from the measured value by up to a factor of 10 000 (Fig. 2). Even when the standard deviation of the error distribution is reduced by a factor of 2 or 4, the most extreme observations can still differ from the measured value by a factor of 100 or 10, respectively. The percentage of data points that differed from the measured value by more than 200 % was 33 % for the large error group, 19 % for the medium error group and 4 % for the small error group. Only 3 % of the data points were more than 90 % below the measured value in the large error group, and 0 % in both the medium and small error classes. If such observations are used for model calibration without filtering, they are interpreted as extreme floods or droughts, even if the actual conditions may be close to average flow. Beven and Westerberg (2011) suggest isolating periods of disinformative data. It is therefore beneficial to identify such extreme outliers independently of a model, e.g. with knowledge of feasible maximum and minimum streamflow values, as was done in this study using the regionalised maximum streamflow values for a given catchment area.
## 4.2 Number of streamflow estimates required for model calibration
In general, one would assume that the calibration of a model becomes better when there are more data (Perrin et al., 2007), although others have shown that the increase in model performance plateaus after a certain number of measurements (Juston et al., 2009; Pool et al., 2017; Seibert and Beven, 2009; Seibert and McDonnell, 2015). In this study, we limited the length of the calibration period to 1 year because in practice it may be possible to obtain a limited number of measurements during a 1-year period for ungauged catchments before the model results are needed for a certain application, as has been assumed in previous studies (Pool et al., 2017; Seibert and McDonnell, 2015). While a limited number of observations (12) was informative for model calibration when the data uncertainties were limited, the results of this study also suggest that the performance of bucket-type models decreases faster with increasing errors when fewer data points are available (i.e. there was a faster decline in model performance with increasing errors for models calibrated with 12 data points than for the models calibrated with 48–52 data points). This finding was most pronounced when comparing the model performance for the small and medium error groups (Fig. 4). These findings can be explained by the compensating effect of the number of observations and their accuracy because the random errors for the inaccurate data average out when a large number of observations are used, as long as the data do not have a large bias.
## 4.3 Best timing of streamflow estimates for model calibration
The performance of the parameter sets depended on the timing and the error distribution of the data used for model calibration. The model performance was generally better if the observations were more evenly spread throughout the year. For example, for the cases of no and small errors, the performance of the model calibrated with the Monthly dataset with 12 observations was better than for the IntenseSummer and WeekendSummer scenarios with 46–54 observations. Similarly, the less clustered observation scenarios performed better than the more clustered scenarios (i.e. Weekly vs. Crowd52, Monthly vs. Crowd12, Crowd52 vs. IntenseSummer, etc.). This suggests that more regularly distributed data over the year lead to a better model calibration. Juston et al. (2009) compared different subsamples of hydrological data for a 5.6 km2 Swedish catchment and found that including inter-annual variability in the data used for the calibration of the HBV model reduced the model uncertainties. More evenly distributed observations throughout the year might represent more of the within-year streamflow variability and therefore result in improved model performance. This is good news for using citizen science data for model calibration, as it suggests that the timing of the observations is less important than their number; it is likely much easier to obtain observations spread throughout the year than during specific periods or flow conditions.
When comparing the WeekendSpring, WeekendSummer and IntenseSummer datasets, it seems that it was in most cases more beneficial to include data from spring rather than summer. This tendency was more pronounced with increasing data errors. The reason for this might be that the WeekendSpring scenario includes more snowmelt or rain-on-snow event peaks, in addition to usually higher baseflow, and therefore contains more information on the inter-annual variability in streamflow.
By comparing different selections of 12 data points to calibrate the HBV model, Pool et al. (2017) found that a dataset combining different maximum (monthly, yearly, etc.) and other flows led to the best model performance, but also that the differences in performance between the different datasets covering the range of flows were small. In our study, we did not specifically focus on the high or low flow data points and therefore did not have datasets that contained only high flow estimates, which would be very difficult to obtain with citizen science data. However, our findings similarly show that for model calibration for catchments with seasonal variability in streamflow, it is beneficial to obtain data for different magnitudes of flow. Furthermore, we found that data points during relatively dry periods are beneficial for validation or prediction in another year and might even be beneficial for years with the same characteristics, as was shown by the improved validation performance of the IntenseSummer dataset compared to the other datasets when data from dry years were used for calibration (Fig. 6).
## 4.4 Effects of different types of years on model calibration and validation
The calibration year, i.e. the year in which the observations were made, was not decisive for the model performance. Therefore, a model calibrated with data from a dry year can still be useful for simulations for an average or wet year. This also means that data in citizen science projects can be collected during any year and that these data are useful for simulating streamflow for most years, except the driest years. However, model performance did vary significantly for the different validation years. The results during dry validation years were almost never significantly better than the lower benchmark (Fig. S2). This might be due to the objective function that was used in this study. In particular, the NSE was lower for dry years because the flow variance (i.e. the denominator in the NSE equation) is smaller when there is less variation in streamflow, so that the same errors weigh more heavily. Also, these results are based on six median model performances, and therefore outliers have a large influence on the significance of the results (Fig. S2).
Lidén and Harlin (2000) used the HBV-96 model by Lindström et al. (1997) with changes suggested by Bergström et al. (1997) for four catchments in Europe, Africa and South America. They achieved better model results for wetter catchments and argued that during dry years evapotranspiration plays a bigger role and therefore the model performance is more sensitive to inaccuracies in the simulation of the evapotranspiration processes. The fact that we used a very simple method to calculate the potential evapotranspiration (McGuinness and Bordne, 1972) might also explain why the model performed less well during dry years.
The model parameterisation, obtained from calibration using the IntenseSummer dataset, resulted in a surprisingly good performance for the validation for a more extreme dry year for four out of the six catchments. For the two catchments for which the performance for the IntenseSummer dataset was poor (Guerbe and Allenbach), the weather stations are located outside the catchment boundaries. Especially during dry periods, missed streamflow peaks due to a misrepresentation of precipitation can strongly affect model performance. The fact that one of these two catchments always had the worst model performance for all the no error–IntenseSummer runs furthermore indicates that the July–September period might not be suitable to represent characteristic runoff events for these catchments. The poor performance for these two catchments for the IntenseSummer–no error run with calibration and validation in the dry year resulted in the insignificant improvement in model performance compared to the lower benchmark. Because the wetness of a year was based on the summer streamflow, these findings suggest that data obtained during times of low flow result in improved validation performance during dry years compared to data collected during other times (Fig. S2). This suggests that if the interest is in understanding the streamflow response during very dry years, it is important to obtain data during the dry period. To test this hypothesis, more detailed analyses are needed.
## 4.5 Recommendations for citizen science projects
Our results show that streamflow estimates from citizens are not informative for hydrological model calibration unless the errors in the estimates can be reduced through training or advanced filtering of the data (i.e. reducing the number of extreme outliers). In order to make streamflow estimates useful, the standard deviation of the error distribution of the estimates needs to be reduced by a factor of 2. Gibson and Bergman (1954) suggest that errors in distance estimates can be reduced from 33 % to 14 % with very little training. These findings are encouraging, although their tests covered distances larger than 365 m (400 yards), whereas the widths of the medium-sized rivers for which the streamflow was estimated were less than 40 m (Strobl et al., 2018). Options for training might be tutorial videos, as well as lists with values for the width, average depth and flow velocity of well-known streams (Strobl et al., 2018). Further research is needed to determine the effect of training on streamflow estimates, particularly because the depth estimates were inaccurate (Strobl et al., 2018).
The findings of this study suggest the following recommendations for citizen science projects that want to use streamflow estimates:
• Collect as many data points as possible. In this study, hourly data always led to the best model performance. Because it is unlikely that hourly data can be obtained in a citizen science project, we suggest aiming for (on average) one observation per week. Provided that the standard deviation of the streamflow estimates can be reduced by a factor of 2, 52 observations (as in the Crowd52 data series) are informative for model calibration. It is therefore essential to invest in advertising the project and to find suitable locations where many people can potentially contribute, as well as to communicate to the citizen scientists that it is beneficial to submit observations regularly.
• Encourage observations throughout the year. To further improve the model performance, or to allow for greater errors, it is beneficial to have observations at all types of flow conditions during the year, rather than during a certain season.
Observations during high streamflow conditions were in most cases not more informative than observations during other times of the year. In light of these findings, efforts to ask citizens to submit observations during specific flow conditions (e.g. by sending reminders to the citizen observers) do not seem worthwhile. Instead, it is more beneficial to remind them to submit observations regularly.
Instead of focusing on training to reduce the errors in the streamflow estimates, an alternative approach for citizen science projects is to switch to a parameter that is easier to estimate, such as stream levels (Lowry and Fienen, 2013). Recent studies successfully used daily stream-level data (Seibert and Vis, 2016) and stream-level class data (van Meerveld et al., 2017) to calibrate hydrological models, and other studies have demonstrated the potential value of crowdsourced stream-level data for providing information on, e.g. baseflow (Lowry and Fienen, 2013), or for improving flood forecasts (Mazzoleni et al., 2017). However, further research is needed to determine if real crowdsourced stream-level (class) data are informative for the calibration of hydrological models.
5 Conclusions
The results of this study extend previous studies on the value of limited hydrological data for hydrological model calibration and the best timing of streamflow measurements for model calibration (Juston et al., 2009; Pool et al., 2017; Seibert and McDonnell, 2015) that did not consider observation errors. This is an important aspect, especially when considering citizen science approaches to obtain streamflow data. Our results show that inaccurate streamflow data can be useful for model calibration, as long as the errors are not too large. When the distribution of errors in the streamflow data represented the distribution of the errors in the streamflow estimates from citizen scientists, these data were not informative for model calibration (i.e. the median performance of the models calibrated with these data was not significantly better than the median performance of the models with random parameter values). However, if the standard deviation of the estimates were reduced by a factor of 2, the (less) inaccurate data would be informative for model calibration. We furthermore demonstrated that realistic frequencies for citizen science projects (one observation on average per week or month) can be informative for model calibration. The findings of studies such as the one presented here provide important guidance on the design of citizen science projects as well as other observation approaches.
Data availability
The data are available from FOEN (streamflow) and MeteoSwiss (precipitation and temperature). The HBV software is available at https://www.geo.uzh.ch/en/units/h2k/Services/HBV-Model.html (Seibert and Vis, 2012) or from jan.seibert@geo.uzh.ch.
Supplement
Author contributions
While JS and IvM had the initial idea, the concrete study design was based on input from all authors. SE and BS conducted the field surveys to determine the typical errors in streamflow estimates. The simulations and analyses were performed by SE. The writing of the manuscript was led by SE; all co-authors contributed to the writing.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
We thank all citizen scientists who participated in the field surveys, as well as the Swiss Federal Office for the Environment for providing the streamflow data, MeteoSwiss for providing the weather data, Maria Staudinger, Jan Schwanbeck and Scherrer AG for the permission to use their datasets and the reviewers for the useful comments. This project was funded by the Swiss National Science Foundation (project CrowdWater).
Reviewed by: two anonymous referees
References
Aschwanden, H. and Weingartner, R.: Die Abflussregimes der Schweiz, Geographisches Institut der Universität Bern, Abteilung Physikalische Geographie, Gewässerkunde, Bern, Switzerland, 1985.
Bergström, S.: Development and application of a conceptual runoff model for Scandinavian catchments, Sveriges Meteorologiska och Hydrologiska Institut (SMHI), Norrköping, Sweden, available at: https://www.researchgate.net/publication/255274162_Development_and_Application_of_a_Conceptual_Runoff_Model_for_Scandinavian_Catchments (last access: 3 October 2018), 1976.
Bergström, S., Carlsson, B., Grahn, G., and Johansson, B.: A More Consistent Approach to Watershed Response in the HBV Model, Vannet i Nord., 4, 1997.
Beven, K.: Facets of uncertainty: epistemic uncertainty, non-stationarity, likelihood, hypothesis testing, and communication, Hydrol. Sci. J., 61, 1652–1665, https://doi.org/10.1080/02626667.2015.1031761, 2016.
Beven, K. and Westerberg, I.: On red herrings and real herrings: disinformation and information in hydrological inference, Hydrol. Process., 25, 1676–1680, https://doi.org/10.1002/hyp.7963, 2011.
Bonferroni, C. E.: Teoria statistica delle classi e calcolo delle probabilità, st. Super. di Sci. Econom. e Commerciali di Firenze, Istituto superiore di scienze economiche e commerciali, Florence, Italy, 62 pp., 1936.
Brath, A., Montanari, A., and Toth, E.: Analysis of the effects of different scenarios of historical data availability on the calibration of a spatially-distributed hydrological model, J. Hydrol., 291, 232–253, https://doi.org/10.1016/j.jhydrol.2003.12.044, 2004.
Buytaert, W., Zulkafli, Z., Grainger, S., Acosta, L., Alemie, T. C., Bastiaensen, J., De Bièvre, B., Bhusal, J., Clark, J., Dewulf, A., Foggin, M., Hannah, D. M., Hergarten, C., Isaeva, A., Karpouzoglou, T., Pandeya, B., Paudel, D., Sharma, K., Steenhuis, T., Tilahun, S., Van Hecken, G., and Zhumanova, M.: Citizen science in hydrology and water resources: opportunities for knowledge generation, ecosystem service management, and sustainable development, Front. Earth Sci., 2, 21 pp., https://doi.org/10.3389/feart.2014.00026, 2014.
Davids, J. C., van de Giesen, N., and Rutten, M.: Continuity vs. the Crowd – Tradeoffs Between Continuous and Intermittent Citizen Hydrology Streamflow Observations, Environ. Manage., 60, 12–29, https://doi.org/10.1007/s00267-017-0872-x, 2017.
Davids, J. C., Rutten, M. M., Shah, R. D. T., Shah, D. N., Devkota, N., Izeboud, P., Pandey, A., and van de Giesen, N.: Quantifying the connections – linkages between land-use and water in the Kathmandu Valley, Nepal, Environ. Monit. Assess., 190, 17 pp., https://doi.org/10.1007/s10661-018-6687-2, 2018.
Dickinson, J. L., Zuckerberg, B., and Bonter, D. N.: Citizen Science as an Ecological Research Tool: Challenges and Benefits, Annu. Rev. Ecol. Evol. Syst., 41, 149–172, https://doi.org/10.1146/annurev-ecolsys-102209-144636, 2010.
Dunn, O. J.: Estimation of the Medians for Dependent Variables, Ann. Math. Stat., 30, 192–197, https://doi.org/10.1214/aoms/1177706374, 1959.
Dunn, O. J.: Multiple Comparisons among Means, J. Am. Stat. Assoc., 56, 52–64, https://doi.org/10.1080/01621459.1961.10482090, 1961.
Ewen, T., Brönnimann, S., and Annis, J.: An extended Pacific-North American index from upper-air historical data back to 1922, J. Climate, 21, 1295–1308, https://doi.org/10.1175/2007JCLI1951.1, 2008.
Finger, D., Pellicciotti, F., Konz, M., Rimkus, S., and Burlando, P.: The value of glacier mass balance, satellite snow cover images, and hourly discharge for improving the performance of a physically based distributed hydrological model, Water Resour. Res., 47, 14 pp., https://doi.org/10.1029/2010WR009824, 2011.
Finger, D., Vis, M., Huss, M., and Seibert, J.: The value of multiple data set calibration versus model complexity for improving the performance of hydrological models in mountain catchments, Water Resour. Res., 51, 1939–1958, https://doi.org/10.1002/2014WR015712, 2015.
Fitzner, D., Sester, M., Haberlandt, U., and Rabiei, E.: Rainfall Estimation with a Geosensor Network of Cars – Theoretical Considerations and First Results, Photogramm. Fernerkun., 2013, 93–103, https://doi.org/10.1127/1432-8364/2013/0161, 2013.
Gibson, E. J. and Bergman, R.: The effect of training on absolute estimation of distance over the ground, J. Exp. Psychol., 48, 473–482, https://doi.org/10.1037/h0055007, 1954.
Haberlandt, U. and Sester, M.: Areal rainfall estimation using moving cars as rain gauges – a modelling study, Hydrol. Earth Syst. Sci., 14, 1139–1151, https://doi.org/10.5194/hess-14-1139-2010, 2010.
Harrelson, C. C., Rawlins, C. L., and Potyondy, J. P.: Stream channel reference sites: an illustrated guide to field technique, Department of Agriculture, Forest Service, Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO, USA, 1994.
Horner, I., Renard, B., Le Coz, J., Branger, F., McMillan, H. K., and Pierrefeu, G.: Impact of Stage Measurement Errors on Streamflow Uncertainty, Water Resour. Res., 54, 1952–1976, https://doi.org/10.1002/2017WR022039, 2018.
Juston, J., Seibert, J., and Johansson, P.: Temporal sampling strategies and uncertainty in calibrating a conceptual hydrological model for a small boreal catchment, Hydrol. Process., 23, 3093–3109, https://doi.org/10.1002/hyp.7421, 2009.
Koch, J. and Stisen, S.: Citizen science: A new perspective to advance spatial pattern evaluation in hydrology, PLoS One, 12, 1–20, https://doi.org/10.1371/journal.pone.0178165, 2017.
Le Coz, J., Renard, B., Bonnifait, L., Branger, F., and Le Boursicaud, R.: Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: A Bayesian approach, J. Hydrol., 509, 573–587, https://doi.org/10.1016/j.jhydrol.2013.11.016, 2014.
Lidén, R. and Harlin, J.: Analysis of conceptual rainfall–runoff modelling performance in different climates, J. Hydrol., 238, 231–247, https://doi.org/10.1016/S0022-1694(00)00330-9, 2000.
Lindström, G., Johansson, B., Persson, M., Gardelin, M., and Bergström, S.: Development and test of the distributed HBV-96 hydrological model, J. Hydrol., 201, 272–288, https://doi.org/10.1016/S0022-1694(97)00041-3, 1997.
Lowry, C. S. and Fienen, M. N.: CrowdHydrology: Crowdsourcing Hydrologic Data and Engaging Citizen Scientists, Ground Water, 51, 151–156, https://doi.org/10.1111/j.1745-6584.2012.00956.x, 2013.
Mazzoleni, M., Verlaan, M., Alfonso, L., Monego, M., Norbiato, D., Ferri, M., and Solomatine, D. P.: Can assimilation of crowdsourced data in hydrological modelling improve flood prediction?, Hydrol. Earth Syst. Sci., 21, 839–861, https://doi.org/10.5194/hess-21-839-2017, 2017.
McGuinness, J. and Bordne, E.: A comparison of lysimeter-derived potential evapotranspiration with computed values, Agricultural Research Service – United States Department of Agriculture, Washington, D.C., 1972.
McMillan, H., Freer, J., Pappenberger, F., Krueger, T., and Clark, M.: Impacts of uncertain river flow data on rainfall-runoff model calibration and discharge predictions, Hydrol. Process., 24, 1270–1284, https://doi.org/10.1002/hyp.7587, 2010.
McMillan, H., Krueger, T., and Freer, J.: Benchmarking observational uncertainties for hydrology: rainfall, river discharge and water quality, Hydrol. Process., 26, 4078–4111, https://doi.org/10.1002/hyp.9384, 2012.
Michel, C., Perrin, C., and Andréassian, V.: The exponential store: a correct formulation for rainfall – runoff modelling, Hydrol. Sci. J., 48, 109–124, https://doi.org/10.1623/hysj.48.1.109.43484, 2003.
Oudin, L., Hervieu, F., Michel, C., Perrin, C., Andréassian, V., Anctil, F., and Loumagne, C.: Which potential evapotranspiration input for a lumped rainfall-runoff model?, J. Hydrol., 303, 290–306, https://doi.org/10.1016/j.jhydrol.2004.08.026, 2005.
Perrin, C., Michel, C., and Andréassian, V.: Improvement of a parsimonious model for streamflow simulation, J. Hydrol., 279, 275–289, https://doi.org/10.1016/S0022-1694(03)00225-7, 2003.
Perrin, C., Oudin, L., Andréassian, V., Rojas-Serna, C., Michel, C., and Mathevet, T.: Impact of limited streamflow data on the efficiency and the parameters of rainfall-runoff models, Hydrol. Sci. J., 52, 131–151, https://doi.org/10.1623/hysj.52.1.131, 2007.
Pool, S., Viviroli, D., and Seibert, J.: Prediction of hydrographs and flow-duration curves in almost ungauged catchments: Which runoff measurements are most informative for model calibration?, J. Hydrol., 554, 613–622, https://doi.org/10.1016/j.jhydrol.2017.09.037, 2017.
Ruhi, A., Messager, M. L., and Olden, J. D.: Tracking the pulse of the Earth's fresh waters, Nat. Sustain., 1, 198–203, https://doi.org/10.1038/s41893-018-0047-7, 2018.
Scherrer AG: Verzeichnis grosser Hochwasserabflüsse in schweizerischen Einzugsgebieten, Auftraggeber: Bundesamt für Umwelt (BAFU), Abteilung Hydrologie, Reinach, 2017.
Seibert, J.: Multi-criteria calibration of a conceptual runoff model using a genetic algorithm, Hydrol. Earth Syst. Sci., 4, 215–224, https://doi.org/10.5194/hess-4-215-2000, 2000.
Seibert, J. and Beven, K. J.: Gauging the ungauged basin: how many discharge measurements are needed?, Hydrol. Earth Syst. Sci., 13, 883–892, https://doi.org/10.5194/hess-13-883-2009, 2009.
Seibert, J. and McDonnell, J. J.: Gauging the Ungauged Basin?: Relative Value of Soft and Hard Data, J. Hydrol. Eng., 20, A4014004-1–6, https://doi.org/10.1061/(ASCE)HE.1943-5584.0000861, 2015.
Seibert, J. and Vis, M. J. P.: Teaching hydrological modeling with a user-friendly catchment-runoff-model software package, Hydrol. Earth Syst. Sci., 16, 3315–3325, https://doi.org/10.5194/hess-16-3315-2012, 2012.
Seibert, J. and Vis, M. J. P.: How informative are stream level observations in different geographic regions?, Hydrol. Process., 30, 2498–2508, https://doi.org/10.1002/hyp.10887, 2016.
Seibert, J., Vis, M. J. P., Lewis, E., and van Meerveld, H. J.: Upper and lower benchmarks in hydrological modelling, Hydrol. Process., 32, 1120–1125, https://doi.org/10.1002/hyp.11476, 2018.
Shiklomanov, A. I., Lammers, R. B., and Vörösmarty, C. J.: Widespread decline in hydrological monitoring threatens Pan-Arctic Research, Eos, Trans. Am. Geophys. Union, 83, 13–17, https://doi.org/10.1029/2002EO000007, 2002.
Sideris, I. V., Gabella, M., Erdin, R., and Germann, U.: Real-time radar-rain-gauge merging using spatio-temporal co-kriging with external drift in the alpine terrain of Switzerland, Q. J. Roy. Meteor. Soc., 140, 1097–1111, https://doi.org/10.1002/qj.2188, 2014.
Strobl, B., Etter, S., van Meerveld, I., and Seibert, J.: Accuracy of Crowdsourced Streamflow and Stream Level Class Estimates, Hydrol. Sci. J., (special issue on hydrological data: opportunities and barriers), in review, 2018.
van Meerveld, H. J. I., Vis, M. J. P., and Seibert, J.: Information content of stream level class data for hydrological model calibration, Hydrol. Earth Syst. Sci., 21, 4895–4905, https://doi.org/10.5194/hess-21-4895-2017, 2017.
Vrugt, J. A., Gupta, H. V., Dekker, S. C., Sorooshian, S., Wagener, T., and Bouten, W.: Application of stochastic parameter optimization to the Sacramento Soil Moisture Accounting model, J. Hydrol., 325, 288–307, https://doi.org/10.1016/j.jhydrol.2005.10.041, 2006.
Weeser, B., Stenfert Kroese, J., Jacobs, S. R., Njue, N., Kemboi, Z., Ran, A., Rufino, M. C., and Breuer, L.: Citizen science pioneers in Kenya – A crowdsourced approach for hydrological monitoring, Sci. Total Environ., 631–632, 1590–1599, https://doi.org/10.1016/j.scitotenv.2018.03.130, 2018.
Yapo, P. O., Gupta, H. V., and Sorooshian, S.: Automatic calibration of conceptual rainfall-runoff models: sensitivity to calibration data, J. Hydrol., 181, 23–48, https://doi.org/10.1016/0022-1694(95)02918-4, 1996.
|
2019-06-19 11:57:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5657379627227783, "perplexity": 2843.932166812958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998959.46/warc/CC-MAIN-20190619103826-20190619125826-00278.warc.gz"}
|
https://www.mersenneforum.org/showthread.php?s=ada26e0d5a4e6c4109fa43d37ced1f76&t=22577
mersenneforum.org Iteration of (sigma(n)+phi(n))/2
2017-09-14, 04:52 #1 sean:

In recent weeks there has been some interest on the seqfan mailing list about iterating the map (sigma(n)+phi(n))/2 (see for example A291790). In many cases the map results in a fraction (i.e. odd/2) and the iteration finishes. However, there appear to be cases (perhaps a lot of cases for large n) where the iteration is unbounded and continues indefinitely (not unlike certain aliquot sequences or home primes, etc.). It is not clear why this should be. The smallest unresolved case starts at n=270, which has now survived 515 iterations of the map. The unresolved cases for n<1000 are 270, 440, 496, 702, 737, 813, 828, 897, and 905. There are other values less than 1000 that are unresolved, but their trajectories converge with one of the nine listed. All of them have survived at least 400 iterations. All the factorizations for the existing steps have been added to factordb.com. I'm not planning on taking these further myself, but there is plenty of scope for fairly easy factorization here. Some of the composites are still down in the 100-digit range and even the hardest (for the 270 trajectory) is only a C138.
2017-09-14, 10:55 #2 science_man_88:

Quote (sean, post #1, quoted in full above).

If n is composite, then since both sigma(n) and phi(n) are multiplicative to some extent, the sum is close to (but below, at last check) a product of earlier terms (not necessarily in the same iteration chain).

Last fiddled with by science_man_88 on 2017-09-14 at 11:01
2017-09-18, 15:39 #3 arbooker ("Andrew Booker"):

Quote (sean): "In many cases the map results in a fraction (i.e. odd/2) and the iteration finishes. However, there appear to be cases (perhaps a lot of cases for large n) where the iteration is unbounded and continues indefinitely ... It is not clear why this should be."

I would guess that asymptotically 100% of positive integers have unbounded trajectory, and it might be possible to prove something along those lines.

Note that $(\sigma(n)+\varphi(n))/2$ is an integer unless $n$ is a square or twice a square, and those are very rare among large numbers. Further, if $n>1$ and $(\sigma(n)+\varphi(n))/2$ is odd, then $n$ must be of the form $pm^2$ or $2pm^2$ for an odd prime $p$. Combining this with some sieve theory, one can show that the number of composite $n\le x$ with $(\sigma(n)+\varphi(n))/2$ prime is $O(x/\log^2 x)$. Since the map tends to increase geometrically and $\sum 1/k^2$ converges, this suggests that a typical large composite has little chance of ever reaching a prime.
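To experiment with the trajectories yourself, here is a minimal sketch in Python (my own, not code from the thread), assuming sympy provides divisor_sigma and totient:

```python
# Iterate n -> (sigma(n) + phi(n))/2, stopping when the sum is odd.
from sympy import divisor_sigma, totient

def step(n):
    s = divisor_sigma(n) + totient(n)
    return s // 2 if s % 2 == 0 else None  # odd sum: the iteration terminates

n = 270
for i in range(10):
    print(i, n)
    nxt = step(int(n))
    if nxt is None:
        print("terminated: sigma(n) + phi(n) is odd")
        break
    n = nxt
```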
https://socratic.org/questions/how-do-you-simplify-sec35csc55-tan35cot55#282955
# How do you simplify sec35csc55-tan35cot55?
Jun 30, 2016
Apply a bunch of complementary angle identities to get a result of $1$.
#### Explanation:
First, as in just about any trig problem, convert everything to sines and cosines:
$\frac{1}{\cos 35}\cdot\frac{1}{\sin 55} - \frac{\sin 35}{\cos 35}\cdot\frac{\cos 55}{\sin 55}$
We can ignore the first part of this expression and focus on $\frac{\sin 35}{\cos 35}\cdot\frac{\cos 55}{\sin 55}$, because we can see some interesting identities at play here.
Recall that $\cos(90 - x) = \sin x$ and $\sin(90 - x) = \cos x$. Therefore:
$\sin 35 = \cos(90 - 35) = \cos 55$
$\cos 35 = \sin(90 - 35) = \sin 55$
We can now make some substitutions for $\sin 35$ and $\cos 35$:
$\frac{\sin 35}{\cos 35}\cdot\frac{\cos 55}{\sin 55} = \frac{\cos 55}{\sin 55}\cdot\frac{\cos 55}{\sin 55}$
$= \frac{\cos^2 55}{\sin^2 55}$
$= \cot^2 55$
Now we have $\sec 35 \csc 55 - \cot^2 55$. We can use another identity on $\sec 35 \csc 55$:
$\sec(90 - x) = \csc x$
So:
$\sec 35 = \csc(90 - 35) = \csc 55$
Making yet another substitution, we have:
$\sec 35 \csc 55 - \cot^2 55 = \csc 55 \cdot \csc 55 - \cot^2 55$
$= \csc^2 55 - \cot^2 55$
It's looking like a Pythagorean identity might help us here...
$\sin^2 x + \cos^2 x = 1$
$\tan^2 x + 1 = \sec^2 x$
$\underline{1 + \cot^2 x = \csc^2 x}$
I've underlined that last one because it applies to our problem. To see how, simply subtract $\cot^2 x$ from both sides of that identity:
$1 = \csc^2 x - \cot^2 x$
And since this equation holds for all values of $x$, it holds for $x = 55$. Therefore:
$\sec 35 \csc 55 - \tan 35 \cot 55 = \csc^2 55 - \cot^2 55 = 1$
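As a quick numerical check of the result (a small sketch assuming sympy, not part of the original answer):

```python
# Verify sec(35 deg)*csc(55 deg) - tan(35 deg)*cot(55 deg) = 1 numerically
import sympy as sp

deg = lambda d: sp.pi * d / 180  # convert degrees to radians
expr = sp.sec(deg(35)) * sp.csc(deg(55)) - sp.tan(deg(35)) * sp.cot(deg(55))
print(expr.evalf())  # 1.00000000000000
```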
http://mathoverflow.net/questions/140202/self-adjointness-of-a-perturbed-quantum-mechanical-hamiltonian-specified-in-an-i
# Self-adjointness of a perturbed quantum mechanical Hamiltonian specified in an infinite matrix form
Consider an operator $H$ on the Hilbert space $\ell_2$ given as an infinite matrix with two pieces, one diagonal and one arbitrary: $H_{ij}=E_i\delta_{ij}+V_{ij}$. This has a physical meaning in quantum mechanics: I am considering a perturbed Hamiltonian $H=H_0+V$ in the basis of eigenvectors of $H_0$; $E_i$ are the eigenvalues of $H_0$ (assumed known and forming an increasing sequence tending to $+\infty$, so that $H_0$ is unbounded) and $V_{ij}$ are matrix elements of $V$ in this basis.
I am interested in two questions.
(1) What are conditions on $V_{ij}$ guaranteeing that $H$ be self-adjoint?
I am familiar with the discussion in Reed and Simon vol.2 about the self-adjointness of various Schroedinger operators. But there $H_0$ is usually the Laplacian or the harmonic oscillator Hamiltonian, and $V$ is a potential. I would be interested in criteria for more general Hamiltonians given as matrices as above.
(2) Consider a truncation of H of the form: $H_N=P_N H P_N$ where $P_N$ is the projector on the subspace spanned by the first $N$ eigenvectors of $H_0$ (i.e. restrict indices $i,j$ to run from $1$ to $N$). What are conditions guaranteeing that for $N\to\infty$ the eigenvalues and eigenfunctions of $H_N$ approach those of $H$?
to (2) : you should make this more precise : For instance, there is a divergent series of eigenfunctions of $H_N$ with eigenvalue zero. – jjcale Aug 23 at 19:51
Let's say I look at the first $k$ eigenvalues of $H$ and compare them with the first $k$ eigenvalues of $H_N$ for $N\gg k$. – Slava Rychkov Aug 23 at 20:55
Concerning your first question, see paragraph 47 on "Matrix representations of unbounded symmetric operators" of Akhiezer & Glazman's Theory of Linear Operators in Hilbert Space. Theorem 4 gives a sufficient condition: $\sum_{i}|H_{ij}|^{2}<\infty$ for each $j$. Remarkably enough, this condition need not be preserved by a unitary transformation of $H$.
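As a numerical illustration of question (2) (my own sketch, not part of the thread), one can pick a concrete $H$ whose columns satisfy the square-summability condition above and watch the low eigenvalues of the truncations $H_N$ stabilize; here with the illustrative choices $E_i = i$ and $V_{ij} = (i+j)^{-2}$:

```python
# Compare the lowest eigenvalues of the truncations H_N as N grows.
import numpy as np

def H(N):
    i = np.arange(1, N + 1, dtype=float)
    V = 1.0 / (i[:, None] + i[None, :]) ** 2  # each column is square-summable
    return np.diag(i) + V

for N in (25, 50, 100, 200):
    print(N, np.linalg.eigvalsh(H(N))[:4])  # lowest four eigenvalues
```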
https://math.stackexchange.com/questions/3286491/with-linear-algebra-prove-that-all-polynomial-parametric-curves-can-fulfill-a-p
# With linear algebra, prove that all polynomial parametric curves can fulfill a polynomial Cartesian equation
This problem in my linear algebra class is intended to demonstrate that all polynomial parametric curves (in $$\mathbb{R^2}$$) can fulfill a polynomial Cartesian equation. First, we let $$x = p(t)$$ and $$y = q(t)$$ where $$p,q \in \mathcal{P}\mathbb{(R)}$$ are fixed polynomials dependent on the variable $$t$$.
Part (a) wants us to find a function, $$L$$, that takes nonnegative integers $$(m,n)$$ and returns a nonnegative integer $$L(m,n)$$ so that if $$0 \leq i \leq m$$ and $$0 \leq j\leq n$$, then $$x^iy^j = p(t)^iq(t)^j \in \mathcal{P}_{L(m,n)} (\mathbb{R})$$. This choice of $$L$$ also depends on $$p$$ and $$q$$ in a way.
Part (b) wants us to choose $$m$$ and $$n$$ from above so that the number of monomials $$x^iy^j$$ with $$0 \leq i \leq m$$ and $$0\leq j \leq n$$ exceeds the dimension of $$\mathcal{P}_{L(m,n)}(\mathbb{R})$$.
Part (c) wants us to show that there exists a nonzero two-variable polynomial $$f$$ in which $$f(x,y) = f(p(t),q(t)) = 0$$. That is, $$f(x,y)$$ in the form $$f(x,y) = \sum_{i,j} c_{i,j}x^iy^j (c_{i,j} \in \mathbb{R})$$ where the sum is finite.
My attempt at a solution:
First, I called the degree of $$p(t)= a$$, and the degree of $$q(t) = b$$. For part (a), I created a function which would, given $$m,n$$, return a number that would mark the highest possible order of polynomial that could result from $$p(t)^iq(t)^j$$. So, I let $$L(m,n) = am + bn$$ which comes from exponentiating polynomials $$p(t)$$ and $$q(t)$$ and adding exponents for the final polynomial product. All polynomials of the form $$p(t)^iq(t)^j$$ are now in $$\mathcal{P}_{L(m,n)} (\mathbb{R})$$.
For part (b), I deduced that the number of possible monomials $$x^iy^j$$ is $$(m+1)(n+1)$$ since $$0 \leq i\leq m$$ and $$0 \leq j \leq n$$. In addition, dim$$\mathcal{P}_{L(m,n)}(\mathbb{R}) = L(m,n) + 1$$, or, for my function, $$am + bn + 1$$. So, I have to come up with $$m,n$$ such that $$(m+1)(n+1) > am + bn + 1$$, but this boils down to $$mn + m + n > am + bn$$. I tried various combinations of letting $$m,n$$ be $$0, 1, a,$$ and $$b$$, but could not satisfy the inequality.
Is there something I'm misinterpreting with the problem, or is my function a poor choice in satisfying the conditions in the problem? What type of function would satisfy the conditions?
I was not able to move on to part (c), but I'm not sure how, once (b) is satisfied, one could deduce the conclusion that part(c) wants us to show. What does the result of part(b) indicate? Does it have to do with the number of polynomials in the basis of $$\mathcal{P}_{L(m,n)}(\mathbb{R})$$ exceeding the dimension and creating a contradiction or something? Why is it important that $$f(x,y) = 0$$?
Let $$c=\max\{a,b\}$$ and let $$m>2c, n>2c$$.
Then $$mn > 2c\max\{m,n\} \geq cm + cn \geq am + bn,$$ so $$mn + m + n > am + bn$$ as well, which is exactly the inequality needed in part (b).
Then for part (c) the idea is that $$(x^iy^j)_{i,j}$$ is a family that lives in a vector space but has size larger than the dimension: therefore it must be linearly dependent.
$$f(x,y)=0$$ just tells you that $$x$$ and $$y$$ "fulfill a polynomial equation", which is what you wanted in the first place.
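The construction is completely mechanical, so it can be checked in a few lines; here is a sketch in Python/sympy (my own illustration, not from the answer) with $p(t)=t^2$, $q(t)=t^3$, so $a=2$, $b=3$, and $m=5$, $n=2$ give $(m+1)(n+1)=18 > am+bn+1=17$:

```python
# Find a nonzero f(x, y) with f(p(t), q(t)) = 0 via a nullspace computation.
import sympy as sp

t, x, y = sp.symbols('t x y')
p, q = t**2, t**3
m, n = 5, 2

monoms = [(i, j) for i in range(m + 1) for j in range(n + 1)]
polys = [sp.Poly(p**i * q**j, t) for i, j in monoms]
L = max(pl.degree() for pl in polys)  # L(m, n) = a*m + b*n = 16

# Column k holds the coefficients of p^i * q^j in the basis 1, t, ..., t^L
A = sp.zeros(L + 1, len(monoms))
for col, pl in enumerate(polys):
    for (power,), coeff in pl.terms():
        A[power, col] = coeff

c = A.nullspace()[0]  # 18 columns but rank <= 17, so a dependence exists
f = sum(ci * x**i * y**j for ci, (i, j) in zip(c, monoms))
print(sp.factor(f))   # vanishes on the curve, hence divisible by x**3 - y**2
```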
http://math.stackexchange.com/tags/economics/hot
# Tag Info
- The pre-order being complete has no topological meaning, but is purely set-theoretic. It means that for any two points $x,y$ in the domain of the pre-order, we must have $x \succsim y$ or $y \succsim x$ (or possibly both here, because we have a pre-order, so we can have both at the same time) ...
- Your textbook isn't wrong. $N$ is a fixed number (6 in the example), so the sum is over a finite number of terms, which is perfectly ok. Besides, the sum isn't the harmonic series... It is $\frac{1}{N}+\frac{1}{N}+\cdots+\frac{1}{N}$, where there are $N$ summands, so in the example it equals $\frac{1}{6}+\frac{1}{6}+\frac{1}{6}+\frac{1}{6}+\frac{1}{6}+\frac{1}{6}=1$.
- Use Lagrange multipliers: $\mathcal{L}=50L^{0.2}K^{0.8}-\lambda (1000-L-K)$. Take the first-order condition with respect to $K$ and set it to zero: $50\times 0.8\times L^{0.2}K^{-0.2}=-\lambda$. Take the first-order condition with respect to $L$ and set it to zero: $50\times 0.2\times L^{-0.8}K^{0.8}=-\lambda$. Take the first-order condition with respect to $\lambda$ ...
- Instead of payout, think in terms of the slope of the payout (i.e. the delta). If you long a call at $A$ and cover it by shorting a call at $B > A$, the slope will be 1 between $A$ and $B$ and 0 otherwise. If you long a put at $D$ and cover it by shorting a put at $C < D$, the slope will be $-1$ between $C$ and $D$ and 0 otherwise. ...
- The inequality is false. Take $K=2$, $\gamma=1/2$ and $C_1=100$, $C_2=1$, $N_1=1$, $N_2=100$. Then $$\frac{\sum_{i}C_{i}}{\sum_{i} N_i^{\gamma} C_{i}^{1-\gamma}}=\frac{101}{20}>5$$ and $$\frac{K\max_i C_{i}}{\sum_{i} N_i^{\gamma} \max_k\{C_{k}\}^{1-\gamma}}=\frac{200}{10\cdot(10+1)}<2.$$ PS. A commentator correctly remarked ...
- You have $p+7=\sqrt{\frac{4000}{Q}}$. Increasing $Q$ by 1 decreases the price that can be charged (on all sales) by $\Delta p$, where $p-\Delta p+7=\sqrt{\frac{4000}{Q+1}}$. So we have $$\Delta p=\sqrt{\frac{4000}{Q}}-\sqrt{\frac{4000}{Q+1}}=\sqrt{\frac{4000}{Q}}\left(1-\sqrt{\frac{Q}{Q+1}}\right)=(p+7)\left(1-\sqrt{\frac{1}{1+\frac{1}{Q}}}\right).$$ We now ...
- The production function states the quantity that a firm can produce. So if it produces 10 units: $10=f(L,M)=L^{1/2}M^{1/2}$. Hence $100=LM$, i.e. $M=\frac{100}{L}$. We know the cost will be $C=9L+81M=9L+\frac{8100}{L}$. Minimizing this using standard procedures ($C'(L)=0$) gives $L=30$ and $C=540$, so I think you typed something wrong. I have ...
- (1) does not imply (2), and (1) and (2) together do not imply (3). There are many non-dictatorial methods that fail to satisfy (2); see this Wikipedia article. For a simple method that satisfies (1) and (2) but not (3), let $u$ and $v$ be two designated voters. If $u$ and $v$ have the same preferences, their preferences are adopted by society; if not, ...
- First we translate: $\frac{dP}{dt}=-\frac{1}{2}(S(P)-D(P))$. Then we substitute: $\frac{dP}{dt}=-\frac{1}{2}(5P-60)$. Now you have a first-order differential equation. Hint: you can separate the variables; you're going to have something involving $e$, and a constant involving your initial condition. Here are some notes.
- First step is to solve for the rate of $p$, which is $p'$. $S-D=80+3p-(140-2p)=5p-60$, so $\frac{S-D}{2}=\frac{5p-60}{2}$. Since $p$ is decreasing at that rate, $p'$ needs to be negative, so $p'=\frac{60-5p}{2}$. Now it's easy to get $2p'+5p=60$, a non-homogeneous first-order differential equation. From $2D+5=0$, $D=-\frac{5}{2}$. Therefore you get the general solution ...
- Let's phrase the problem mathematically. You are being asked to solve $\frac{dp}{dt}=-\frac{1}{2}(S(p)-D(p))$. Substituting in the given supply and demand equations: $\frac{dp}{dt}=-\frac{1}{2}(5p-60)$. Separating the variables: $\frac{dp}{p-12}=-\frac{5}{2}\,dt$. Integrating both sides, we get the general result $\ln|p-12|=-\frac{5}{2}t+C$ ...
- You appear in effect to be assuming that $\succsim$ is the same as ordinary $\ge$. This need not be the case. In fact, the whole point of the argument is that the lower contour sets for any continuous preorder on $X$ are closed, so that in this sense all continuous preorders on $X$ behave like the familiar natural order $\ge$. Of course the steps of the ...
- It seems that posers of these problems sometimes delight in expressing the marginal functions in terms of $p$, rather than in terms of $q$ which is used in the definitions. On the other hand, it is frequently a reasonable way to do things, since the business has direct control (generally) over the price set, but not over the demand. One way to ...
- You can look up the mathematical definitions in any game theory textbook. The interpretation is basically that the players' actions reinforce/offset (complement/substitute) each other. For example, if a buyer is more likely to buy a good at some price when it is more likely that all other buyers are taking that price, then buyers are strategic complements. ...
- As $x,y$ go to infinity, profit goes to negative infinity, so the maximum exists. We need to check the critical values. As economics only cares about positive numbers, the critical values are $x=0$, $y=0$ and the values that make the partial derivatives 0. For $x=0$, the profit is $p(y)=1200-10y^2+96y$; $p'(y)=96-20y$, thus $p$ will be maximum at $y_0=24/5$. ...
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/libs/ml/polynomial_features.html
This documentation is for an out-of-date version of Apache Flink. We recommend you use the latest stable version.
# Polynomial Features
## Description
The polynomial features transformer maps a vector into the polynomial feature space of degree $d$. The dimension of the input vector determines the number of polynomial factors whose values are the respective vector entries. Given a vector $(x, y, z, \ldots)^T$, the resulting feature vector consists of all monomials in the entries of total degree between 1 and $d$; for a two-dimensional input $(x, y)^T$ with $d = 3$, for example, it is

$\left(x^3,\; x^2 y,\; x y^2,\; y^3,\; x^2,\; x y,\; y^2,\; x,\; y\right)^T$

Flink's implementation orders the polynomials in decreasing order of their degree.

Given the vector $\left(3,2\right)^T$, the polynomial features vector of degree 3 is

$\left(27,\; 18,\; 12,\; 8,\; 9,\; 6,\; 4,\; 3,\; 2\right)^T$
This transformer can be prepended to all Transformer and Predictor implementations which expect an input of type LabeledVector or any sub-type of Vector.
## Operations
PolynomialFeatures is a Transformer. As such, it supports the fit and transform operation.
### Fit
PolynomialFeatures is not trained on data and, thus, supports all types of input data.
### Transform
PolynomialFeatures transforms all subtypes of Vector and LabeledVector into their respective types:
• transform[T <: Vector]: DataSet[T] => DataSet[T]
• transform: DataSet[LabeledVector] => DataSet[LabeledVector]
## Parameters
The polynomial features transformer can be controlled by the following parameters:
| Parameter | Description |
| --- | --- |
| Degree | The maximum polynomial degree. (Default value: 10) |
## Examples
// Obtain the training data set
val trainingDS: DataSet[LabeledVector] = ...
// Setup polynomial feature transformer of degree 3
val polyFeatures = PolynomialFeatures()
.setDegree(3)
// Setup the multiple linear regression learner
val mlr = MultipleLinearRegression()
// Control the learner via the parameter map (illustrative values)
val parameters = ParameterMap()
.add(MultipleLinearRegression.Iterations, 20)
.add(MultipleLinearRegression.Stepsize, 0.5)
// Chain the transformer and the learner into a pipeline, then train it
val pipeline = polyFeatures.chainPredictor(mlr)
pipeline.fit(trainingDS, parameters)
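The ordering is easy to cross-check outside Flink with a throwaway Python sketch (my own, not FlinkML code), which reproduces the degree-3 expansion of $(3,2)^T$ shown above:

```python
# Enumerate monomial products in decreasing order of total degree.
from itertools import combinations_with_replacement
from math import prod

def poly_features(vec, degree):
    feats = []
    for d in range(degree, 0, -1):  # highest total degree first
        for combo in combinations_with_replacement(vec, d):
            feats.append(prod(combo))
    return feats

print(poly_features([3, 2], 3))  # [27, 18, 12, 8, 9, 6, 4, 3, 2]
```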
https://stats.stackexchange.com/questions/147401/estimating-mutual-information-using-r
# Estimating Mutual Information using R
I am trying to estimate the mutual information between vit level (values vary from 4 to 70, all of which are whole numbers) and a binary variable that indicates the presence of polyps. I am not sure if I should first discretize the vit variable. If I do, I am not sure how many bins to choose. Any pointers would be appreciated.
When I draw box-plots, I can see there is some association but my code estimates mutual information of 0.006.
This is my R code at the moment:
freqs2d = rbind( vit, polyps)
H1 = entropy.plugin(rowSums(freqs2d))
H2 = entropy.plugin(colSums(freqs2d))
H12 = entropy.plugin(freqs2d)
H1+H2-H12
## 1 Answer
I believe you don't need to discretize your vit variable because it is already discrete. You might also want to use mi.plugin to calculate mutual information instead:
library(entropy)
set.seed(2017) # For reproducibility
# 100 observations of a discrete variable between 4 and 70
vit = as.integer(runif(n = 100, min = 4, max = 70))
# 100 binary observations
polyps = rbinom(n = 100, size = 1, prob = 0.5)
# MI "by hand"
freqs2d = rbind( vit, polyps)
H1 = entropy.plugin(rowSums(freqs2d))
H2 = entropy.plugin(colSums(freqs2d))
H12 = entropy.plugin(freqs2d)
H1+H2-H12 # outputs 0.01076491
# Calculate mutual information
mi.plugin(freqs2d) # outputs 0.01076491
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=3351
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.FSTTCS.2011.66
URN: urn:nbn:de:0030-drops-33517
URL: http://drops.dagstuhl.de/opus/volltexte/2011/3351/
### Quasi-Weak Cost Automata: A New Variant of Weakness
### Abstract
Cost automata have a finite set of counters which can be manipulated on each transition but do not affect control flow. Based on the evolution of the counter values, these automata define functions from a domain like words or trees to $\mathbb{N} \cup \{\infty\}$, modulo an equivalence relation which ignores exact values but preserves boundedness properties. These automata have been studied by Colcombet et al. as part of a "theory of regular cost functions", an extension of the theory of regular languages which retains robust equivalences, closure properties, and decidability like the classical theory. We extend this theory by introducing quasi-weak cost automata. Unlike traditional weak automata which have a hard-coded bound on the number of alternations between accepting and rejecting states, quasi-weak automata bound the alternations using the counter values (which can vary across runs). We show that these automata are strictly more expressive than weak cost automata over infinite trees. The main result is a Rabin-style characterization theorem: a function is quasi-weak definable if and only if it is definable using two dual forms of non-deterministic Büchi cost automata. This yields a new decidability result for cost functions over infinite trees.
### BibTeX - Entry
@InProceedings{kuperberg_et_al:LIPIcs:2011:3351,
author = {Denis Kuperberg and Michael Vanden Boom},
title = {{Quasi-Weak Cost Automata: A New Variant of Weakness }},
booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
pages = {66--77},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-34-7},
ISSN = {1868-8969},
year = {2011},
volume = {13},
editor = {Supratik Chakraborty and Amit Kumar},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik}
}
https://www.biostars.org/p/362649/
Question (wangdp123, 2.7 years ago):
Hi there,
I am using STAR+cufflink combination to handle the unstranded paired-end RNA-Seq datasets.
1. I wonder if there is a set of typical parameters for both STAR and Cufflinks?
For STAR:
STAR --runThreadN 1 --runMode alignReads --genomeDir index --readFilesIn sample_r1.fq sample_r2.fq --outFileNamePrefix sample_ --outSAMtype BAM SortedByCoordinate --outSAMattributes All --outSAMstrandField intronMotif
cufflinks -p 1 -G sample.gtf -o sample_clout sample_Aligned.sortedByCoord.out.bam
By running the above two command lines, I encountered a warning message:
"Warning: Using default Gaussian distribution due to insufficient paired-end reads in open ranges. It is recommended that correct parameters (--frag-len-mean and --frag-len-std-dev) be provided."
Does this warning matter? Is anything gone wrong?
2. I noticed from the STAR manual that for unstranded RNA-Seq data, we should give the parameter "--outSAMstrandField intronMotif" to STAR. I tested the following three scenarios:

(1) Without --outSAMattributes and without --outSAMstrandField: Cufflinks gives the error "BAM record error: found spliced alignment without XS attribute".
(2) With --outSAMattributes All --outSAMstrandField intronMotif: no error message, and I can see the XS attribute in the BAM file.
(3) With --outSAMattributes Standard --outSAMstrandField intronMotif: no error message, but there is NO XS attribute in the BAM file.
Does this mean that "--outSAMattributes All" and "--outSAMattributes Standard" get to the same destination? Does Cufflinks treat them in the same way?
Tom
Tags: RNA-Seq, STAR, Cufflinks
Answer (2.7 years ago):
The error has nothing to do with the strandedness of the data.
It is about the estimated fragment sizes for the read pairs. The 9th column of the BAM file contains the TLEN (template length) field. The --frag-len-mean and --frag-len-std-dev options are asking for the mean value of TLEN and its standard deviation. I am not sure why there seems to be insufficient data there.
Either ignore the warning or figure out the mean and stdev for your data from that column.
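If you prefer to supply the parameters explicitly, one way to compute them (a sketch assuming pysam is installed; not from the original answer) is to summarize TLEN over properly paired reads:

```python
# Estimate --frag-len-mean and --frag-len-std-dev from the BAM TLEN field.
import statistics
import pysam

tlens = []
with pysam.AlignmentFile("sample_Aligned.sortedByCoord.out.bam", "rb") as bam:
    for read in bam:
        # count each properly paired fragment once via its positive TLEN
        if read.is_proper_pair and read.template_length > 0:
            tlens.append(read.template_length)

print("--frag-len-mean   ", round(statistics.mean(tlens), 1))
print("--frag-len-std-dev", round(statistics.stdev(tlens), 1))
```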
http://math.stackexchange.com/questions/221783/understanding-the-definition-and-notation-of-geometric-realization
# Understanding the definition and notation of geometric realization
I had trouble making sense of the definition of geometric realization of a simplicial set. Let $\Delta^n$ be the standard $n$-simplex, defined as the functor $\hom_\Delta(-, [n]) : \Delta \rightarrow \mathbf{Set}$, and let $\left|\Delta^{n}\right|$ be the topological standard $n$-simplex. For a simplicial set $X$, the realization of $X$ is defined by the colimit
$$\left| X \right| = \varinjlim_{\Delta^{n} \rightarrow X} \left| \Delta^{n} \right|$$
in $\Delta\downarrow X$ (the simplex category of $X$).
Frankly I don't understand the notation. Look at the diagram of the colimit in $\Delta\downarrow X$:
$$X \cong \varinjlim_{\Delta^{n} \rightarrow X} \Delta^{n}$$
Is the geometric realization of $X$ then the geometric realization of this colimit, i.e. $|X| = \left| L \right|$ where $L = \varinjlim \Delta^{n}$? But then $L$ must be a standard simplex $\Delta^{p}$ for some $p$, no? I want to make sure I'm understanding this right.
Why would the colimit just be an $n$-simplex? In a typical simplicial set you are taking the colimit over an infinite diagram with no terminal object. – Zhen Lin Oct 27 '12 at 6:42
This is how I see it: So far I know what the geometric realization of a standard n-simplex is, I want to know what's the geometric realization of a simplicial set that is NOT an standard n-simplex. Is the geometric realization of a simplicial set $X$ (that is NOT a standard n-simplex) the geometric realization of the colimit $L$? If that is so, and $L$ is not a standard n-simplex, then $\left| L \right|$ is the geometric realization of a simplicial set that is NOT an standard n-simplex, which is the very thing I want to know in the first place – Mario Carrasco Oct 27 '12 at 17:50
Your notation makes no sense. The geometric realisation of a simplicial set $X$ is defined to be a certain colimit over the diagram of shape $(\Delta^\bullet \downarrow X)$. – Zhen Lin Oct 27 '12 at 17:53
I think this isn't the easiest way to see what geometric realization is trying to do. We're trying to build a $\Delta$-complex-looking thing, and a simplicial set $X_\bullet$ is first and foremost a list of sets $\{X_n\}_{n \geq 0}$ whose elements are precisely the sets of maps in from the standard $n$-simplices. So to build $|X_\bullet|$, start with a vertex for every element of $X_0$. Then, give yourself edges for every element of $X_1$... [cont.] – Aaron Mazel-Gee Nov 3 '12 at 5:15
However, be sure to (a) collapse down those that are degenerate (i.e. they arise from maps $\Delta^1 \rightarrow |X_\bullet|$ of $\Delta$-complexes that take $\Delta^1$ to a single point), and (b) attach all edges to their boundary vertices using the face maps. Then, continue on up. The advantage of all this is that there's a lot of power in naturality, so you can do a lot by manipulating a "space" as "maps into the space" (from some particular set of objects). The main disadvantage is that in this framework you have no choice but to carry around all these degenerate simplices. – Aaron Mazel-Gee Nov 3 '12 at 5:18
https://www.jitsejan.com/setting-up-spark-with-minio-as-object-storage
# Setting up Spark with minIO as object storage
## Introduction
At work we use AWS S3 for our datalake. Since I am working on some data projects, I would like to have a similar experience, but without AWS and simply on my own server. This is the reason why I chose minIO as object storage, it's free, runs on Ubuntu and is compatible with the AWS S3 API.
## Installation
The following Java libraries are needed to get minIO working with Spark: the hadoop-aws module and the matching aws-java-sdk jar, which together provide the s3a:// filesystem connector (the versions must match your Hadoop build).
To run the minIO server, I first create a minIO user and minIO group. Additionally I create the data folder where minIO will store its data. After preparing the environment I install minIO and add it as a service via /etc/systemd/system/minIO.service:
[Unit]
Description=minIO
Documentation=https://docs.minio.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio

[Service]
WorkingDirectory=/usr/local/
User=minio
Group=minio
PermissionsStartOnly=true
EnvironmentFile=/etc/default/minIO
ExecStartPre=/bin/bash -c "[ -n \"${MINIO_VOLUMES}\" ] || echo \"Variable MINIO_VOLUMES not set in /etc/default/minIO\""
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES

# Let systemd restart this service only if it has ended with the clean exit code or signal.
Restart=on-success
StandardOutput=journal
StandardError=inherit

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0

# SIGTERM signal is used to stop minIO
KillSignal=SIGTERM
SendSIGKILL=no
SuccessExitStatus=0

[Install]
WantedBy=multi-user.target

The minIO environment file located at /etc/default/minIO contains the configuration for the volume, the port and the credentials. Note that the minio server reads the case-sensitive MINIO_* variable names:

# minIO local/remote volumes.
MINIO_VOLUMES="/minIO-data/"
# minIO cli options.
MINIO_OPTS="--address :9091"
# Access credentials.
MINIO_ACCESS_KEY="mykey"
MINIO_SECRET_KEY="mysecret"

$ minio version
Version: 2019-06-27T21:13:50Z
Release-Tag: RELEASE.2019-06-27T21-13-50Z
For the complete configuration, check the role in Github.
## Code
The important bit is setting the right environment variables. Make sure the following variables are set:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export SPARK_HOME=/opt/spark
export HADOOP_HOME=/opt/hadoop   # assumed install location; adjust to your setup
export PATH=$PATH:$SPARK_HOME/bin
export PATH=$PATH:$HADOOP_HOME/bin
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
After setting the environment variables, we need to make sure we connect to the minIO endpoint and set the credentials. Make sure the path.style.access is set to True.
from pyspark import SparkContext, SparkConf, SQLContext
conf = (
    SparkConf()
    .setAppName("Spark minIO Test")
    # s3a connection settings; the endpoint and credentials below are assumed
    # to match the minIO server configured above (--address :9091, mykey/mysecret)
    .set("spark.hadoop.fs.s3a.endpoint", "http://localhost:9091")
    .set("spark.hadoop.fs.s3a.access.key", "mykey")
    .set("spark.hadoop.fs.s3a.secret.key", "mysecret")
    .set("spark.hadoop.fs.s3a.path.style.access", "true")
    .set("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
)
sc = SparkContext(conf=conf).getOrCreate()
sqlContext = SQLContext(sc)
Once this is done, we can simply access the bucket and read a text file (given that this bucket and text file exists), and we are able to write a dataframe to minIO.
print(sc.wholeTextFiles('s3a://datalake/test.txt').collect())
# Returns: [('s3a://datalake/test.txt', 'Some text\nfor testing\n')]
path = "s3a://user-jitsejan/mario-colors-two/"
rdd = sc.parallelize([('Mario', 'Red'), ('Luigi', 'Green'), ('Princess', 'Pink')])
rdd.toDF(['name', 'color']).write.csv(path)
## Todo
Currently there seems to be an issue with reading small files: Spark gives a Parquet error saying the files are not big enough to read. It seems more like a library issue, so for now I just make sure I only work on big data.
## Credits
Thanks to atosatto for the Ansible role and minIO for the great example.
https://csedoubts.gateoverflow.in/10560/made-easy-cd
How many states are there in the LR(0) parsing automaton for the following grammar?

$E \to E + T \mid T$
$T \to T * F \mid F$
$F \to (E) \mid id$
you can use this to save time while cross checking http://jsmachines.sourceforge.net/machines/slr.html
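If you would rather compute the answer than draw the automaton, the standard closure/goto construction fits in a short script; here is a sketch in Python (mine, not from the thread) that builds the canonical LR(0) item sets for the augmented grammar and counts them:

```python
# Canonical LR(0) item-set construction for E' -> E, E -> E+T | T, ...
from collections import deque

GRAMMAR = {
    "E'": [("E",)],
    "E":  [("E", "+", "T"), ("T",)],
    "T":  [("T", "*", "F"), ("F",)],
    "F":  [("(", "E", ")"), ("id",)],
}
NONTERMINALS = set(GRAMMAR)

def closure(items):
    items, work = set(items), deque(items)
    while work:
        lhs, rhs, dot = work.popleft()
        if dot < len(rhs) and rhs[dot] in NONTERMINALS:
            for prod in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], prod, 0)
                if item not in items:
                    items.add(item)
                    work.append(item)
    return frozenset(items)

def goto(state, sym):
    moved = {(l, r, d + 1) for (l, r, d) in state if d < len(r) and r[d] == sym}
    return closure(moved) if moved else None

start = closure({("E'", ("E",), 0)})
states, queue = {start}, deque([start])
while queue:
    st = queue.popleft()
    for sym in {r[d] for (_, r, d) in st if d < len(r)}:
        nxt = goto(st, sym)
        if nxt is not None and nxt not in states:
            states.add(nxt)
            queue.append(nxt)

print(len(states))  # prints 12 for this grammar
```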
https://www.allaboutcircuits.com/projects/embedded-pid-temperature-control-part-4-the-scilab-gui/
# Embedded PID Temperature Control, Part 4: The Scilab GUI
February 19, 2016 by Robert Keim
## With USB communications and a Scilab graphical user interface, we can really see what the PID controller is doing.
### Previous Articles in This Series
Before we get started, recall the PID control system diagram and the PID-related portions of the schematic presented previously (both figures appear in the original article).
### Much Better Than LEDs
The LED visualization scheme used in the previous article was pretty limited. The fact is, just about any visualization scheme based on nothing more than a few LEDs will be pretty weak. We need something that allows us to see exactly what our PID controller is doing—first, because it will be way more interesting, and second, because we need detailed information about the performance of the system in order to properly set the proportional, integral, and derivative gain.
For this stage of the project, then, we are going to incorporate USB capability into the EFM8 firmware and design a Scilab graphical user interface that controls the setpoint and displays, in real time, the actual measured temperatures. The “Supporting Information” section above points you to articles that provide introductory information regarding EFM8 USB functionality and Scilab. You can also scroll through my previous articles and take a look at anything that involves Scilab GUIs or USB communication.
### Firmware
#### USB Commands
The firmware has been changed so that PID operation is governed by Scilab. The EFM8 does not initiate PID functionality until Scilab tells it to, and then Scilab can halt and resume PID functionality at any time. To initiate or resume PID operation, Scilab sends ASCII “C” over the USB link, and to halt PID operation it sends ASCII “H”; both commands are just a single character without carriage return or newline or what not. The heater-drive voltage is set to 0 V whenever PID operation is halted, so keep in mind that the temperature of the heating element will gradually decrease toward room temperature during a halt condition. Scilab can also change the setpoint; this is accomplished by sending ASCII “S” followed by a one-byte binary (i.e., non-ASCII) number that represents the setpoint in degrees Celsius. These three commands are incorporated into the VCPXpress callback function.
VCPXpress_API_CALLBACK(myAPICallback)
{
uint32_t API_InterruptCode;
//get the code that indicates the reason for the interrupt
API_InterruptCode = Get_Callback_Source();
//if the USB connection was just opened
if (API_InterruptCode & DEVICE_OPEN)
{
//start the first USB read procedure
/*we will process the received bytes when we get
a callback with an RX_COMPLETE interrupt code*/
/*assumed: VCPXpress's Block_Read() queues the read into the buffer, and
* USB_BytesRead is a global uint16_t declared elsewhere in the firmware*/
Block_Read(USBRxPacket, sizeof(USBRxPacket), &USB_BytesRead);
}
if (API_InterruptCode & RX_COMPLETE) //USB read complete
{
//'C' tells the EFM8 to begin or resume PID control
if(USBRxPacket[0] == 'C')
{
PID_ACTIVE = TRUE;
}
//'H' tells the EFM8 to halt PID control
else if(USBRxPacket[0] == 'H')
{
PID_ACTIVE = FALSE;
/*The heater-drive voltage is held at 0 V
* while PID control is halted.*/
UpdateDAC(DAC_HEATER, 0);
}
//'S' indicates that the host is sending the setpoint
else if(USBRxPacket[0] == 'S')
{
/*The setpoint temperature is restricted to
* positive integers not greater than 100
* degrees C. Scilab sends the setpoint as a
* normal binary number, not as ASCII characters,
* so that the EFM8 doesn't have to convert
* from ASCII to binary.*/
Setpoint_Temp = USBRxPacket[1];
}
//continue with the next USB read procedure
/*assumed: same Block_Read() call as the one queued on DEVICE_OPEN*/
Block_Read(USBRxPacket, sizeof(USBRxPacket), &USB_BytesRead);
}
}
#### PID Flow
The PID routine in the main while loop now begins by checking the PID_ACTIVE flag, which is initialized to FALSE. Execution remains at this point until a “C” command from Scilab causes the EFM8 to change this flag to TRUE. Another important difference is that the measured temperature is sent to Scilab at the beginning of every iteration. In previous projects, Scilab first requested data via a USB command, then the EFM8 sent the data as a response to the request. In this project, Scilab does not have to ask for data; the EFM8 sends a three-byte measured-temperature packet during every iteration, and Scilab just receives and displays the data. This reduces the USB traffic and the burden on the EFM8’s processor, and consequently we should have less trouble with the higher update rates required for improved visualization of changes in the controlled variable. The current firmware uses an update interval of two seconds, which seems to provide adequate control and visualization without overburdening the EFM8 or the Scilab GUI. The following code excerpt covers the PID portion of the main while loop:
while (1)
{
/*First, we check PID_ACTIVE.The following while statement
* suspends PID functionality until the EFM8 is commanded by
* Scilab to begin or resume PID control.*/
while(PID_ACTIVE == FALSE);
GatherMAX31855Data();
while(TEMP_DATA_READY == FALSE); //wait until the SPI transaction is complete
Measured_Temp = ConvertMAX31855Data_to_TempC();
//send measured temperature to Scilab
TransmitUSB_TempData();
Error = Setpoint_Temp - Measured_Temp;
/*We don't want the integral error to get
* way too large. This is a standard problem
* referred to as integral windup. One solution
* is to simply restrict the integral error to
* reasonable values.*/
Error_Integral = Error_Integral + Error;
if(Error_Integral > 50)
Error_Integral = 50;
else if(Error_Integral < -50)
Error_Integral = -50;
Error_Derivative = Error - Previous_Error;
Previous_Error = Error;
PID_Output = (K_proportional*Error) + (K_integral*Error_Integral) + (K_derivative*Error_Derivative);
/*We need to restrict the PID output to
* acceptable values. Here we have limited it
* to a maximum of 200, which corresponds
* to about 780 mA of heater-drive current, and
* a minimum of 0, because we cannot drive
* less than 0 A through the heating
* element.*/
if(PID_Output > 200)
PID_Output = 200;
else if(PID_Output < 0)
PID_Output = 0;
//here we convert the PID output from a float to an unsigned char
Heater_Drive = PID_Output;
UpdateDAC(DAC_HEATER, Heater_Drive);
#### LEDs
Though insufficient by itself, LED feedback is still a handy way to monitor the operation of the system. It is also helpful as a way to confirm that the data displayed by Scilab is consistent with what is really going on in the EFM8. To make the LED feedback more suitable for this second purpose, we will take a different approach in this stage of the project: If the measured temperature is within ±2°C of the setpoint, we turn on only the green LED. If the measured temperature is more than 2°C below the setpoint, we turn on only blue. If the measured temperature is more than 2°C above the setpoint, we turn on only red. So, green = good, blue = too cold, and red = too hot. The advantage of this scheme will be apparent when the measured temperature is oscillating around the setpoint, because the changes in LED color will be in sync with the temperature variations displayed in the GUI. The LED control code is included in the second portion of the main while loop.
/*LED visualization: If the measured temperature is within
* plus/minus 2 degrees C of the setpoint, we turn on the
* green LED. If the measured temperature is more than
* 2 degrees below the setpoint, we turn on the blue LED.
* If the measured temperature is more than 2 degrees
* above the setpoint, we turn on the red LED.*/
if(Measured_Temp >= (Setpoint_Temp-2) && Measured_Temp <= (Setpoint_Temp+2))
{
UpdateDAC(DAC_RGB_R, 0);
UpdateDAC(DAC_RGB_B, 0);
UpdateDAC(DAC_RGB_G, 100);
}
else if(Measured_Temp < (Setpoint_Temp-2))
{
UpdateDAC(DAC_RGB_R, 0);
UpdateDAC(DAC_RGB_B, 100);
UpdateDAC(DAC_RGB_G, 0);
}
else if(Measured_Temp > (Setpoint_Temp+2))
{
UpdateDAC(DAC_RGB_R, 100);
UpdateDAC(DAC_RGB_B, 0);
UpdateDAC(DAC_RGB_G, 0);
}
/*Here we wait until the PID interval has expired,
* then we begin a new iteration. The interval is
* currently set to 2 seconds.*/
PID_WAIT = TRUE;
while(PID_WAIT == TRUE);
}
PIDTemperatureControl_Part4.zip
### Scilab
Here is what the GUI looks like when it is not active:
It was designed with the help of the GUI Builder toolbox, which you can download through Scilab’s ATOMS module manager:
PID_Temperature_Control_GUI.zip
First you use the “Open VCP Port” button to establish a virtual COM port connection to the EFM8. Next, choose the setpoint. Scilab restricts the setpoint to integers less than or equal to 100. If you enter a value greater than 100, Scilab will automatically reduce it to 100 and display “Setpoint reduced to maximum allowable value, i.e., 100°C” in the message bar below the “Open VCP Port” button. Likewise, if you enter a non-integer, Scilab rounds it to the nearest integer and displays a message to that effect. Now you are ready to click the “Activate PID Control” button. The setpoint is not actually sent to the EFM8 until you click this button, and you cannot change the setpoint while PID control is active. This will be obvious because the setpoint text-entry box is grayed out during active PID control. To change the setpoint, you have to click “Halt PID Control,” then change it, then click “Activate PID Control” to resume PID operation.
When you are done using the GUI, first click “Halt PID Control” (unless PID control is already inactive), then click “Close VCP Port,” then close the GUI window. If you don’t follow this procedure, you might need to restart Scilab or reset the EFM8 or some such. It’s annoying, but not catastrophic.
Let’s take a quick look at some salient portions of the Scilab script. First, this is how Scilab sends the “S” (setpoint) and “C” (initiate/resume PID control) commands, after you click “Activate PID Control”:
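The article shows this Scilab code only as a screenshot. As a stand-in, here is a rough Python/pySerial sketch of the same sequence; the port name, baud rate, and the exact byte framing of the “S” and “C” commands are assumptions, not details taken from the article:
import serial  # pySerial

ser = serial.Serial("COM3", 115200, timeout=1)  # port and baud rate are guesses

def activate_pid_control(setpoint):
    # "S" announces the setpoint byte; "C" initiates/resumes PID control
    ser.write(b"S")
    ser.write(bytes([setpoint]))  # an integer setpoint of 0-100 fits in one byte
    ser.write(b"C")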
The Setpoint value is taken from the text-entry box as follows:
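Again, the Scilab code is a screenshot in the original; the sanitizing behavior it describes looks roughly like this Python sketch (the function name and message wording are illustrative only):
def sanitize_setpoint(raw_text):
    value = float(raw_text)  # contents of the setpoint text-entry box
    if value > 100:
        print("Setpoint reduced to maximum allowable value, i.e., 100 degrees C")
        return 100
    if value != round(value):
        print("Setpoint rounded to the nearest integer")
    return int(round(value))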
When PID control is active, Scilab repeatedly checks the virtual COM port receive buffer. Once three bytes have been received, it reads the three bytes, converts them into a temperature value, and adds them to an array that contains all the measured temperature values received since the last time that “Activate PID Control” was clicked:
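Here is the receive logic sketched in Python in place of the Scilab screenshot; how the three bytes encode a temperature is defined by the EFM8 firmware, so the conversion below is only a placeholder:
SCALE_DEG_PER_COUNT = 0.01  # hypothetical scale factor

def poll_temperature(ser, temps):
    if ser.in_waiting >= 3:                  # three bytes have arrived
        raw = ser.read(3)
        counts = int.from_bytes(raw, "big")  # placeholder byte interpretation
        temps.append(counts * SCALE_DEG_PER_COUNT)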
The plot that displays the measured temperatures also has a green dotted line that corresponds to the setpoint. The following code is used to generate the setpoint line:
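In place of the screenshot, here is an equivalent plot sketched in Python/matplotlib (the article itself uses Scilab's plotting):
import matplotlib.pyplot as plt

def plot_temperatures(times, temps, setpoint):
    plt.plot(times, temps, label="measured temperature")
    # green dotted horizontal line at the setpoint, as in the article's GUI
    plt.axhline(setpoint, color="green", linestyle=":", label="setpoint")
    plt.xlabel("time (s)")
    plt.ylabel("temperature (deg C)")
    plt.legend()
    plt.show()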
### Results
In the previous article, we looked at the (LED- and oscilloscope-based) results for a proportional-only system and a proportional-integral system. In both cases the control task was to bring the heating element from 30°C to a setpoint of 50°C. We were able to determine that 1) the P-only system never reached the setpoint and 2) the PI system did reach the setpoint, though with some amount of overshoot. Now let’s take a look at GUI-based results for the same control task. First, the P-only system: [plot of the P-only response]
We can see that the P-only system actually gets very close to the setpoint, but without integral gain, the temperature decreases somewhat and reaches a steady-state value that is about 2°C below the setpoint. As we mentioned in the previous article, P-only systems are known for their susceptibility to significant steady-state error.
Here is the plot for the PI system: [plot of the PI response]
We can see here that the PI system is actually worse than we thought. It does indeed reach the setpoint, but it causes more than just overshoot—this particular configuration actually leads to sustained (or at least long-term) oscillation around the setpoint.
### Conclusion
The video at the end of this article (it runs at 16x normal speed) shows the LED behavior that corresponds to the plot for the PI system. In the video you will also notice an important hardware detail: the PCB we are using for this project actually has two USB connectors, one for power and one for data. A typical USB port cannot supply more than 500 mA. Thus, to get the higher current we need for the heating element, the board includes the option of taking power from a separate connector. So one of the USB cables is connected to a USB charger that can supply something like 1200 mA, and the other is connected to a USB port on the PC.
In the next article we will use our fancy new GUI to explore how different P, I, and D gains influence the performance of the system.
Next Article in Series: Embedded PID Temperature Control, Part 5: Adjusting Gains
Give this project a try for yourself! Get the BOM.
|
2020-01-25 04:25:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4763158857822418, "perplexity": 2989.7628099122744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251669967.70/warc/CC-MAIN-20200125041318-20200125070318-00054.warc.gz"}
|
http://physics.stackexchange.com/questions?page=2&sort=newest&pagesize=50
|
# All Questions
### How are aberration levels measured?
I was reading reading the paper: Predicting subjective judgment of best focus with objective image quality metrics when I come across this statement: Through-focus visual acuity (VA) was ...
### Newtonian motion in vacuum
A spaceship is moving with constant acceleration in space. suddenly its fuel gets exhausted. Will it continue to move with constant acceleration and its velocity keep on increasing?
### Material with incredibly small young's modulus [on hold]
I'm looking for a material with a Young's modulus of less than 100 Pa. I've looked at elastomers, but I haven't had any luck. Thanks for your help, I appreciate it.
### Why does light travels in all directions? [on hold]
My understanding of time, gravity and speed of light: Earth revolves around the Sun. Sun revolves around Milky Way centre. Milky Way also keeps moving. All these movements are caused by gravity. ...
### Any textbook about non-renormalizability of gravity?
I have learned general relativity in a graduate-level. My knowledge about QFT is very rudimentary. But, I need to learn about non-renormalizability of gravity. I have these questions. Is there any ...
### Waves interfere in angle equation
If we had two waves perpendicular to each other, with equations: $x=\alpha\sin(\omega t)$ (1) and $y=\beta\sin(\omega t+\pi/2)$, i.e. $y=\beta\cos(\omega t)$ (2). Then $\sin^2(\omega t)+\cos^2(\omega t)=x^2/\alpha^2+y^2/\beta^2=1$, and $x^2/\alpha^2+y^2/\beta^2=1$ is an equation ...
### What do people mean with a vev=diag( -2,-2,-2,-2,-2,-2,3,3,3,3)?
For example, in this paper on page 21 the authors write the vev that breaks $SO(10)$ to $SU(4)\times SU(2) \times SU(2)$ $$<54>= 1/5 \cdot diag( -2,-2,-2,-2,-2,-2,3,3,3,3) \omega_s$$ where ...
### Can everything be described without anything needing to actually “bend”?
Is space bending because gravity actually causes small particles to move differently? If large source of gravity is somewhere are particles extending towards it, creating a "bend" in space? So "bend" ...
### Find the total energy stored in the network at the time of steady state? [on hold]
Find the total energy stored in this network at the time of steady state.
### Why is moment of inertia for a point same as a ring
The moment of inertia of a point and ring are both $m R^2$. It is interesting that the formula for moment of inertia is exactly the same for both. Is there any physical reason why this is the case? I ...
### Three blocks on inclined plane problem [on hold]
I came across this problem on physicsgalaxy.com. My approach: Found limiting friction between 30 kg and 50 kg as 72 N. Found limiting friction between 50 kg and 40 kg as 256 N. Now P+50 sin(37) ...
### Why is a sine wave considered the fundamental building block of any signal? Why not some other function?
It is mathematically possible to express a given signal as a sum of functions other than sines and cosines. With that in mind, why does signal processing always revolve around breaking down the signal ...
### Gear ratio in bicycles using rotational motion
When we change the gears of the bicycle we are riding, we change the the disc we are currently at (which are located at the place where we pedal) to some other disc. This means the radius of the ...
### baryon acoustic oscillation
I have one question about baryon acoustic oscillation. I understand why we should have the baryon-photon fluid sound wave before recombination: Suppose we have a spherical overdense region. This ...
### Traveling near the speed of light? [duplicate]
Suppose we can travel on a spacecraft near the speed of light, how long it would take for the person on the spacecraft to travel one light year, not to a person observing him/her from Earth, if there ...
### Detecting molecules in space?
http://www.forbes.com/sites/alexknapp/2012/02/24/nasa-detects-solid-buckyballs-in-space/ I refer to the above article, which mentions that buckyballs "far smaller than the width of a hair" were ...
### Work Function Calculation with Local Electrostatic Potential
On the Wikipedia site, it describes the work function equation as $W = -e\phi - E_F$, where $\phi$ is the electrostatic potential of a vacuum nearby the surface of the material. So my question is, how can I ...
### What kind of orbit would be needed to map the surface of a nonrotating planet?
I am not a mathematician and it may even take me weeks to understand the math involved but I have an odd question on orbital mechanics that I hope will be worth the experts' time. I am a hobbyist ...
### Vibrating water container problem
I am struggling with visualising and understanding the phrasing of this question - cross posted from Math stack exchange since this forum is more appropriate: "A water-filled container is ...
### can any one please explain in simple terms phase change of reflected light
Phase change is used to explain interference in thin films . The concept is not explained there . Does the change in direction by 180 mean phase change ?
### error propagation with an integral
My question here is about how to determine the error of an integral given individual uncertainties in two parameters defining the function being integrated. I used a curve fitting function to ...
### Is Dark Energy and or Dark Matter directly proportional to EMR? [on hold]
Is Dark Energy directly proportional to Electromagnetic radiation (EM radiation or EMR)?
### On the connection between forces and the principle of stationary action
Feynman tries to account for the relation between the principle of stationary action, which is a statement about the whole path of a particle, and Newton's second law, which is a statement about the ...
### Velocity change of air down a cone?
Does the velocity of fluids(specifically gasses) change when traveling down a cone from the wide opening to the narrow opening? If so, is there an equation used to calculate the acceleration or ...
### Rotational dynamics Lagrangian problem
I am trying to construct the Euler-Lagrange equation for a non-inertial reference frame. I have two questions, the first is with regard to the discussion of the transformation properties of the ...
### When to use lateral magnification vs angular magnification?
What is the essential difference between lateral and angular magnification (like why do we need to use both and when do we use which)? Also, is there a relationship between the two magnifications?
### Electric field in a non-uniformly charged sheet
So if we have a large sheet that is not uniformly charged and is NOT a conductor, how can I find an expression for the electric field everywhere? Things we know about the sheet: the width is 2b it ...
### Will the gravitational pull of air affect the falling rate of an object?
After looking at this question: Don't heavier objects actually fall faster because they exert their own gravity? A thought occurred to me that due to the increased gravitational pull of the ...
### Aplications of algebraic topology/pure mathematics in nuclear fusion
I am planning on working on an independent research class this fall at a community college. My instructor wants to focus it around pure mathematics/topology/homotopy. I think she has done a phd in ...
### How does gravitational time dilation work in artificial gravity made by rotating a cylinder? [duplicate]
Concerning gravitational time dilation in artificial gravity (made by a rotating torus like in many sci-fi movies) how would you go about calculating the effect?
### Intro Mechanics: Finding ball speeds after collision
So I'm reading about conservation of energy, momentum, balls colliding etc. and unsurprisingly there are hundreds of questions in my textbook where essentially they give me some of the variables (m1, ...
### Electrical resistivity: the effect of adding electrons
Consider a copper wire of fixed length and cross-sectional area, and apply an electric field to the wire induced by a fixed potential difference $V$ across the two endpoints of the wire. The intensity ...
### Gedanken experiment: Does it collapse to a black hole or not? [duplicate]
Imagine that I am a stationary far-off observer of a massive star that is minutely larger than its Schwarzschild radius (in other words, its on the verge of collapsing). My spacecraft accelerates ...
### Minima & maxima of Laplace's equation
I don't get the following sentence from David J. Griffiths' Introduction to Electrodynamics (the ambiguous sentence is in bold) Laplace's equation tolerates no local maxima or minima; extreme ...
### How long will it take before the orbit of the earth is 365 days exactly?
I understand that the number of days per year has changed throughout the history of the Earth. Apparently there were once over 400 days per trip around the sun. How long will it take approximately ...
### Books of dynamo theory
I wanted recommendations of dynamo theory pdfs to make a report about magnetic fields of planets and stars with all the math behind it, like vector calculus, Lagrange functions, partial differential ...
### Faster than speed of light [duplicate]
I was watching a Physics TV show, When someone called Alex Filippenko said that when there was the Big Bang, the Space extended at a speed faster than speed of light. He said that it wasn't against ...
### Number of classical oscillation modes of a Lattice and number of quantum phonons
In solving the Classical model for lattice dynamics [Rossler pag 38] we find that the lattice admits $$d\cdot N\cdot r = \#modes$$ where $d=$dimension of the problem $N=$ number of atoms $r=$ ...
### Isn't the aether existent?
Before you say I'm wrong consider this, Einstein is supposedly the first person to get completely get rid of the various aether models that were proposed. But didn't Einstein actually prove them right ...
### How are standing waves a result of constructive and destructive interferences?
For constructive I can understand. But destructive I can't. I can not picture the shape of two pulses or waves maybe that form the resulting standing wave. The places where waves are canceled just ...
### How to calculate force needed to launch an object of known mass vertically? [on hold]
I am going to contract a spring and release it to project an object of known mass to a specified distance upwards(vertically).I want to calculate the needed force to overcome gravity and reach the ...
### Would a person gain or lose weight after expelling a flatus?
Considering the chemical composition, mass, and pressure range of typical human flatulence, would a person gain or lose weight after passing gas?
### Detrended Fluctuation Analysis
In the fitting procedure of DFA, how can we understand which order of DFA (Detrended Fluctuation Analysis) (DFA1, DFA2, and higher order DFA) should be applied in the time series?
### What determines a particles probability of creation?
I know when we're discussing events at a quantum level, we deal in probability and not absolutes. What I'm looking to understand, is when articles I've read on particle physics state a particle has a ...
### How do I calculate momentum for a particle in a box, using the momentum quantum operator? [on hold]
For a particle in a one dimensional box with $U(x) = 0$ between $x = 0$ and $x = L$ (infinite Potential well) the momentum for $n = 1,2,3,...$ is given by: $$p_n = \frac{nh}{2 L}$$ The wave ...
### Variation of quadratic term in modified Einstein-Hilbert actions
In the context of mimetic gravity at some point one try to add to an already modified Einstein-Hilbert action also a term like $$S_\chi=\int\,d^4x\,\sqrt{-g}\frac{1}{2}\gamma\chi^2,\qquad(\star)$$ ...
### Plane wave solutions of Dirac equation
I'm reading chapter 3 in Peskin on the Dirac equation. First of all, they say since Dirac satisfies Klein Gordon it can be written as a linear combination of plane waves. This is fine. So a general ...
### Are there any other pairs similar to virtual and normal photons? [duplicate]
Are there virtual particles for every kind of particle there is?
|
2015-07-31 23:47:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9025364518165588, "perplexity": 830.4577044038558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988317.67/warc/CC-MAIN-20150728002308-00129-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://homework.cpm.org/category/CCI_CT/textbook/cc4/chapter/3/lesson/3.3.3/problem/3-136
|
### Home > CC4 > Chapter 3 > Lesson 3.3.3 > Problem 3-136
3-136.
Solve each equation by first rewriting the expressions in each part with the same base. Refer to problem 3‑132 if you need a reminder.
1. $\quad 8 ^ { x } = 2 ^ { 6 }$
• Since $8 = 2^3$, $(2^3)^x = 2^6$
• $2^{3x} = 2^6$
• $3x = 6$
• $x = 2$
1. $9 ^ { 2 } = 3 ^ { 2 x + 1 }$
• $(3^2)^2 = 3^{2x+1}$
• $3^4 = 3^{2x+1}$, so $4 = 2x + 1$
• $x = 1.5$
1. $4 ^ { 2 x } = ( \frac { 1 } { 2 } ) ^ { x + 5 }$
• Use the same method as part (a).
• $\text{Remember that }\frac{1}{2}=2^{-1}.$
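Carrying the hint through (this completion is not part of the original hints): $4^{2x} = (2^2)^{2x} = 2^{4x}$ and $\left(\frac{1}{2}\right)^{x+5} = 2^{-(x+5)}$, so $4x = -(x+5)$, giving $5x = -5$ and $x = -1$.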
|
2019-08-19 23:17:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6711969971656799, "perplexity": 6811.859904847268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315132.71/warc/CC-MAIN-20190819221806-20190820003806-00101.warc.gz"}
|
https://www.iitianacademy.com/ib-dp-maths-topic-8-5-binary-operations-associative-distributive-and-commutative-properties-hl-paper-3/
|
# IB DP Maths Topic 8.5 Binary operations: associative, distributive and commutative properties HL Paper 3
## Question
The binary operation $$*$$ is defined on the set S = {0, 1, 2, 3} by
$a * b = a + 2b + ab(\bmod 4){\text{ .}}$
(a) (i) Construct the Cayley table.
(ii) Write down, with a reason, whether or not your table is a Latin square.
(b) (i) Write down, with a reason, whether or not $$*$$ is commutative.
(ii) Determine whether or not $$*$$ is associative, justifying your answer.
(c) Find all solutions to the equation $$x * 1 = 2 * x$$ , for $$x \in S$$ .
## Markscheme
(a) (i)
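The Cayley table appears only as an image in the original markscheme; it can be reconstructed from the definition $$a * b = a + 2b + ab \pmod 4$$:
 * | 0  1  2  3
---+------------
 0 | 0  2  0  2
 1 | 1  0  3  2
 2 | 2  2  2  2
 3 | 3  0  1  2
As a quick check, the same table in one line of Python:
print([[(a + 2*b + a*b) % 4 for b in range(4)] for a in range(4)])
# -> [[0, 2, 0, 2], [1, 0, 3, 2], [2, 2, 2, 2], [3, 0, 1, 2]]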
A3
Note: Award A3 for no errors, A2 for one error, A1 for two errors and A0 for three or more errors.
(ii) it is not a Latin square because some rows/columns contain the same digit more than once A1
[4 marks]
(b) (i) EITHER
it is not commutative because the table is not symmetric about the leading diagonal R2
OR
it is not commutative because $$a + 2b + ab \ne 2a + b + ab$$ in general R2
Note: Accept a counter example e.g. $$1 * 2 = 3$$ whereas $$2 * 1 = 2$$ .
(ii) EITHER
for example $$(0 * 1) * 1 = 2 * 1 = 2$$ M1
and $$0 * (1 * 1) = 0 * 0 = 0$$ A1
so $$*$$ is not associative A1
OR
associative if and only if $$a * (b * c) = (a * b) * c$$ M1
which gives
$$a + 2b + 4c + 2bc + ab + 2ac + abc = a + 2b + ab + 2c + ac + 2bc + abc$$ A1
so $$*$$ is not associative as $$2ac \ne 2c + ac$$ , in general A1
[5 marks]
(c) x = 0 is a solution A2
x = 2 is a solution A2
[4 marks]
Total [13 marks]
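For reference (this working is not part of the markscheme): $$x * 1 = x + 2 + x = 2x + 2 \pmod 4$$ while $$2 * x = 2 + 2x + 2x \equiv 2 \pmod 4$$, so the equation reduces to $$2x \equiv 0 \pmod 4$$, giving $$x = 0$$ or $$x = 2$$.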
## Examiners report
This question was generally well answered.
## Question
Let c be a positive, real constant. Let G be the set $$\{ x \in \mathbb{R} \mid -c < x < c\}$$ . The binary operation $$*$$ is defined on the set G by $$x * y = \frac{{x + y}}{{1 + \frac{{xy}}{{{c^2}}}}}$$.
Simplify $$\frac{c}{2} * \frac{{3c}}{4}$$ .
[2]
a.
State the identity element for G under $$*$$.
[1]
b.
For $$x \in G$$ find an expression for $${x^{ - 1}}$$ (the inverse of x under $$*$$).
[1]
c.
Show that the binary operation $$*$$ is commutative on G .
[2]
d.
Show that the binary operation $$*$$ is associative on G .
[4]
e.
(i) If $$x,{\text{ }}y \in G$$ explain why $$(c - x)(c - y) > 0$$ .
(ii) Hence show that $$x + y < c + \frac{{xy}}{c}$$ .
[2]
f.
Show that G is closed under $$*$$.
[2]
g.
Explain why $$\{ G, * \}$$ is an Abelian group.
[2]
h.
## Markscheme
$$\frac{c}{2} * \frac{{3c}}{4} = \frac{{\frac{c}{2} + \frac{{3c}}{4}}}{{1 + \frac{1}{2} \cdot \frac{3}{4}}}$$ M1
$$= \frac{{\frac{{5c}}{4}}}{{\frac{{11}}{8}}} = \frac{{10c}}{{11}}$$ A1
[2 marks]
a.
identity is 0 A1
[1 mark]
b.
inverse is $$-x$$ A1
[1 mark]
c.
$$x * y = \frac{{x + y}}{{1 + \frac{{xy}}{{{c^2}}}}},{\text{ }}y * x = \frac{{y + x}}{{1 + \frac{{yx}}{{{c^2}}}}}$$ M1
(since ordinary addition and multiplication are commutative)
$$x * y = y * x{\text{ so }} *$$ is commutative R1
Note: Accept arguments using symmetry.
[2 marks]
d.
$$(x * y) * z = \frac{{x + y}}{{1 + \frac{{xy}}{{{c^2}}}}} * z = \frac{{\left( {\frac{{x + y}}{{1 + \frac{{xy}}{{{c^2}}}}}} \right) + z}}{{1 + \left( {\frac{{x + y}}{{1 + \frac{{xy}}{{{c^2}}}}}} \right)\frac{z}{{{c^2}}}}}$$ M1
$$= \frac{{\frac{{\left( {x + y + z + \frac{{xyz}}{{{c^2}}}} \right)}}{{\left( {1 + \frac{{xy}}{{{c^2}}}} \right)}}}}{{\frac{{\left( {1 + \frac{{xy}}{{{c^2}}} + \frac{{xz}}{{{c^2}}} + \frac{{yz}}{{{c^2}}}} \right)}}{{\left( {1 + \frac{{xy}}{{{c^2}}}} \right)}}}} = \frac{{\left( {x + y + z + \frac{{xyz}}{{{c^2}}}} \right)}}{{\left( {1 + \left( {\frac{{xy + xz + yz}}{{{c^2}}}} \right)} \right)}}$$ A1
$$x * (y * z) = x * \left( {\frac{{y + z}}{{1 + \frac{{yz}}{{{c^2}}}}}} \right) = \frac{{x + \left( {\frac{{y + z}}{{1 + \frac{{yz}}{{{c^2}}}}}} \right)}}{{1 + \frac{x}{{{c^2}}}\left( {\frac{{y + z}}{{1 + \frac{{yz}}{{{c^2}}}}}} \right)}}$$
$$= \frac{{\frac{{\left( {x + \frac{{xyz}}{{{c^2}}} + y + z} \right)}}{{\left( {1 + \frac{{yz}}{{{c^2}}}} \right)}}}}{{\frac{{\left( {1 + \frac{{yz}}{{{c^2}}} + \frac{{xy}}{{{c^2}}} + \frac{{xz}}{{{c^2}}}} \right)}}{{\left( {1 + \frac{{yz}}{{{c^2}}}} \right)}}}} = \frac{{\left( {x + y + z + \frac{{xyz}}{{{c^2}}}} \right)}}{{\left( {1 + \left( {\frac{{xy + xz + yz}}{{{c^2}}}} \right)} \right)}}$$ A1
since both expressions are the same $$*$$ is associative R1
Note: After the initial M1A1, correct arguments using symmetry also gain full marks.
[4 marks]
e.
(i) $$c > x{\text{ and }}c > y \Rightarrow c - x > 0{\text{ and }}c - y > 0 \Rightarrow (c - x)(c - y) > 0$$ R1AG
(ii) $${c^2} - cx - cy + xy > 0 \Rightarrow {c^2} + xy > cx + cy \Rightarrow c + \frac{{xy}}{c} > x + y{\text{ (as }}c > 0)$$
so $$x + y < c + \frac{{xy}}{c}$$ M1AG
[2 marks]
f.
if $$x,{\text{ }}y \in G{\text{ then }} - c - \frac{{xy}}{c} < x + y < c + \frac{{xy}}{c}$$
thus $$ - c\left( {1 + \frac{{xy}}{{{c^2}}}} \right) < x + y < c\left( {1 + \frac{{xy}}{{{c^2}}}} \right){\text{ and }} - c < \frac{{x + y}}{{1 + \frac{{xy}}{{{c^2}}}}} < c$$ M1
$$({\text{as }}1 + \frac{{xy}}{{{c^2}}} > 0){\text{ so }} - c < x * y < c$$ A1
proving that G is closed under $$*$$ AG
[2 marks]
g.
as $$\{ G, * \}$$ is closed, is associative, has an identity and all elements have an inverse R1
it is a group AG
as $$*$$ is commutative R1
it is an Abelian group AG
[2 marks]
h.
## Examiners report
Most candidates were able to answer part (a) indicating preparation in such questions. Many students failed to identify the command term “state” in parts (b) and (c) and spent a lot of time – usually unsuccessfully – with algebraic methods. Most students were able to offer satisfactory solutions to part (d) and although most showed that they knew what to do in part (e), few were able to complete the proof of associativity. Surprisingly few managed to answer parts (f) and (g) although many who continued to this stage, were able to pick up at least one of the marks for part (h), regardless of what they had done before. Many candidates interpreted the question as asking to prove that the group was Abelian, rather than proving that it was an Abelian group. Few were able to fully appreciate the significance in part (i) although there were a number of reasonable solutions.
## Question
The binary operation $$*$$ is defined on $$\mathbb{N}$$ by $$a * b = 1 + ab$$.
Determine whether or not $$*$$
is closed;
[2]
a.
is commutative;
[2]
b.
is associative;
[3]
c.
has an identity element.
[3]
d.
## Markscheme
$$*$$ is closed A1
because $$1 + ab \in \mathbb{N}$$ (when $$a,b \in \mathbb{N}$$) R1
[2 marks]
a.
consider
$$a * b = 1 + ab = 1 + ba = b * a$$ M1A1
therefore $$*$$ is commutative
[2 marks]
b.
EITHER
$$a * (b * c) = a * (1 + bc) = 1 + a(1 + bc){\text{ }}( = 1 + a + abc)$$ A1
$$(a * b) * c = (1 + ab) * c = 1 + c(1 + ab){\text{ }}( = 1 + c + abc)$$ A1
(these two expressions are unequal when $$a \ne c$$) so $$*$$ is not associative R1
OR
proof by counter example, for example
$$1 * (2 * 3) = 1 * 7 = 8$$ A1
$$(1 * 2) * 3 = 3 * 3 = 10$$ A1
(these two numbers are unequal) so $$*$$ is not associative R1
[3 marks]
c.
let e denote the identity element; so that
$$a * e = 1 + ae = a$$ gives $$e = \frac{{a - 1}}{a}$$ (where $$a \ne 0$$) M1
then any valid statement such as: $$\frac{{a - 1}}{a} \notin \mathbb{N}$$ or e is not unique R1
there is therefore no identity element A1
Note: Award the final A1 only if the previous R1 is awarded.
[3 marks]
d.
## Examiners report
For the commutative property some candidates began by setting $$a * b = b * a$$ . For the identity element some candidates confused $$e * a$$ and $$ea$$ stating $$ea = a$$ . Others found an expression for an inverse element but then neglected to state that it did not belong to the set of natural numbers or that it was not unique.
## Question
The binary operation $$\Delta$$ is defined on the set $$S =$$ {1, 2, 3, 4, 5} by the following Cayley table.
(a) State whether S is closed under the operation Δ and justify your answer.
(b) State whether Δ is commutative and justify your answer.
(c) State whether there is an identity element and justify your answer.
(d) Determine whether Δ is associative, justifying your answer.
(e) Find the solutions of the equation $$a\Delta b = 4\Delta b$$, for $$a \ne 4$$.
## Markscheme
(a) yes A1
because the Cayley table only contains elements of S R1
[2 marks]
(b) yes A1
because the Cayley table is symmetric R1
[2 marks]
(c) no A1
because there is no row (and column) with 1, 2, 3, 4, 5 R1
[2 marks]
(d) attempt to calculate $$(a\Delta b)\Delta c$$ and $$a\Delta (b\Delta c)$$ for some $$a,{\text{ }}b,{\text{ }}c \in S$$ M1
counterexample: for example, $$(1\Delta 2)\Delta 3 = 2$$
$$1\Delta (2\Delta 3) = 1$$ A1
Δ is not associative A1
Note: Accept a correct evaluation of $$(a\Delta b)\Delta c$$ and $$a\Delta (b\Delta c)$$ for some $$a,{\text{ }}b,{\text{ }}c \in S$$ for the M1.
[3 marks]
(e) for example, attempt to enumerate $$4\Delta b$$ for b = 1, 2, 3, 4, 5 and obtain (3, 2, 1, 4, 1) (M1)
find $$(a,{\text{ }}b) \in \left\{ {{\text{(2, 2), (2, 3)}}} \right\}$$ for $$a \ne 4$$ (or equivalent) A1A1
Note: Award M1A1A0 if extra ‘solutions’ are listed.
[3 marks]
Total [12 marks]
[N/A]
## Question
The binary operations $$\odot$$ and $$*$$ are defined on $${\mathbb{R}^ + }$$ by
$a \odot b = \sqrt {ab} {\text{ and }}a * b = {a^2}{b^2}.$
Determine whether or not
$$\odot$$ is commutative;
[2]
a.
$$*$$ is associative;
[4]
b.
$$*$$ is distributive over $$\odot$$ ;
[4]
c.
$$\odot$$ has an identity element.
[3]
d.
## Markscheme
$$a \odot b = \sqrt {ab} = \sqrt {ba} = b \odot a$$ A1
since $$a \odot b = b \odot a$$ it follows that $$\odot$$ is commutative R1
[2 marks]
a.
$$a * (b * c) = a * {b^2}{c^2} = {a^2}{b^4}{c^4}$$ M1A1
$$(a * b) * c = {a^2}{b^2} * c = {a^4}{b^4}{c^2}$$ A1
these are different, therefore $$*$$ is not associative R1
Note: Accept numerical counter-example.
[4 marks]
b.
$$a * (b \odot c) = a * \sqrt {bc} = {a^2}bc$$ M1A1
$$(a * b) \odot (a * c) = {a^2}{b^2} \odot {a^2}{c^2} = {a^2}bc$$ A1
these are equal so $$*$$ is distributive over $$\odot$$ R1
[4 marks]
c.
the identity e would have to satisfy
$$a \odot e = a$$ for all a M1
now $$a \odot e = \sqrt {ae} = a \Rightarrow e = a$$ A1
therefore there is no identity element A1
[3 marks]
d.
[N/A]
|
2021-12-04 10:52:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7666985392570496, "perplexity": 1110.9389661660198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00010.warc.gz"}
|
https://www.askiitians.com/forums/Modern-Physics/a-airplane-is-flying-at-300m-s-627m-h-how-much-ti_238425.htm
|
# An airplane is flying at 300 m/s (671 mi/h). How much time must elapse before a clock in the airplane and one on the ground differ by 1.00 s?
Umar Farooq
28 Points
3 years ago
A moving clock ticks slower by the factor $\frac{t{}'}{t}=$ $\sqrt{1-\frac{v^2}{c^2}}$,
where t is the time elapsed on the stationary (ground) clock. For a 1 second difference, $t-t{}' = 1$, i.e.
$t\left(1-\sqrt{1-\frac{v^2}{c^2}}\right)=1$
Because $v \ll c$, use the binomial approximation $\sqrt{1-\frac{v^2}{c^2}}\approx 1-\frac{v^2}{2c^2}$; with v = 300 m/s this gives $\frac{v^2}{2c^2}=5\times 10^{-13}$, so
$t\approx \frac{2c^2}{v^2}=2\times 10^{12}$ seconds $\approx$ 63,000 years.
(Evaluating the square root directly on a calculator rounds it to 0.9999999999999999 and grossly overestimates t.)
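As an added check (not part of the original answer), the approximation is easy to evaluate in Python:
v = 300.0        # airplane speed, m/s
c = 2.998e8      # speed of light, m/s

# 1 - sqrt(1 - v^2/c^2) ~ v^2 / (2 c^2) for v << c
t = 2 * c**2 / v**2                            # ground time for a 1 s difference
print(f"{t:.3e} s = {t / 3.156e7:.0f} years")  # ~2.0e12 s, roughly 63,000 years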
|
2022-11-26 12:27:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5995840430259705, "perplexity": 3421.6183580690354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00003.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/140168-eigenvalues-unitary-matrix-proof-print.html
|
# Eigenvalues of Unitary Matrix proof
• Apr 19th 2010, 05:20 PM
firebio
Eigenvalues of Unitary Matrix proof
A matrix U is unitary if UU* = I,
where U* is the conjugate transpose of U.
Prove that if a matrix U is unitary, then all eigenvalues of U have absolute value of 1.
$Uv= \lambda v$
$U^* Uv=\lambda U^*v$
$v= \lambda U^* v$
$v/\lambda=U^* v$
so v is also an eigenvector for U*, with eigenvalue $1/\lambda$.
Not sure how to continue?
any help would be appreciated
• Apr 19th 2010, 06:38 PM
aliceinwonderland
Quote:
Originally Posted by firebio
Unitary is UU*=I
U* is transpose conjugate
Prove that if a matrix U is unitary, then all eigenvalues of U have absolute value of 1.
$Uv= \lambda v$
$U^* Uv=\lambda U^*v$
$v= \lambda U^* v$
$v/\lambda=U^* v$
so v is also a eigenvector for U* with eigenvalue of $1/\lambda$.
Not sure how to continue?
any help would be appreciated
The eigenvalues of $U^*$ are the complex conjugates of the eigenvalues of $U$ (link).
$Uv = \lambda v$,
$U^*v = (1/\lambda)v$.
Thus, $\lambda = 1/\bar{\lambda} \Leftrightarrow \lambda \bar{\lambda} =1 \Leftrightarrow |\lambda|^2=1$.
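A slightly more direct argument (added here for completeness) avoids the eigenvalues of $U^*$ altogether: since $U$ preserves the norm, $\|Uv\|^2 = v^*U^*Uv = v^*v = \|v\|^2$, while $Uv = \lambda v$ gives $\|Uv\|^2 = |\lambda|^2\|v\|^2$. As $v \neq 0$, it follows that $|\lambda|^2 = 1$.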
|
2017-05-27 20:05:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9557467103004456, "perplexity": 1820.2038944151557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609054.55/warc/CC-MAIN-20170527191102-20170527211102-00111.warc.gz"}
|
https://calendar.math.illinois.edu/?year=2013&month=10&day=02&interval=day
|
# Department of Mathematics
Seminar Calendar for events the day of Wednesday, October 2, 2013.
Questions regarding events or the calendar should be directed to Tori Corkery.
Wednesday, October 2, 2013
3:00 pm in 143 Altgeld Hall, Wednesday, October 2, 2013
#### The Boson-Fermion Correspondence and Plücker Relations
Abstract: I will give an overview of the Boson-Fermion correspondence which gives an isomorphism between the representation of the Heisenberg Lie algebra on $\mathbb{C}[x_{1},x_{2,}\cdots]$ and the representation of the Heisenberg Lie algebra on an infinite dimensional wedge space. I will briefly discuss how this isomorphism can be extended to isomorphisms between representations of larger Lie algebras. I will then discuss how to use the Boson-Fermion correspondence to understand the orbit of 1 under the action of the Lie group $GL_{\infty}$, and will mention the fact that elements in the orbit are solutions to an infinite set of differential equations. This parallels our previous discussion of representations of the Lie algebra $sl_{n}$ on the finite exterior algebra, $\Lambda(V)=\oplus_{k=0}^{n}\Lambda^{k}V$, which we used to obtain the Plücker relations. Our discussion of the Boson-Fermion correspondence and its uses will follow the treatment given in Kac and Raina's Highest Weight Representations of Infinite Dimensional Lie Algebras.
3:00 pm in 347 Altgeld Hall, Wednesday, October 2, 2013
#### Constructions for Quasicategories
###### Mychael Sanchez (UIUC Math)
Abstract: I'll talk about functors and limits in the setting of quasicategories. I'll also give examples of quasicategories and talk about homotopy coherent diagrams in simplicially enriched categories and their relation to quasicategories.
4:00 pm in 245 Altgeld Hall, Wednesday, October 2, 2013
|
2022-09-24 16:17:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6486700773239136, "perplexity": 237.94583365903415}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00396.warc.gz"}
|
https://parasol-lab.gitlab.io/stapl-home/docs/sgl/basic-usage/
|
# Tutorial
This section will introduce the basic concepts to use the STAPL Graph Library.
We are going to implement a simple program that will read a graph from disk, compute the betweenness centrality metric for each vertex, and print out the vertex that has highest centrality value.
## Includes
First, we will include the appropriate files for the graph container, helpers for graph I/O and the algorithms we want to run.
#include <stapl/containers/graph/graph.hpp>
#include <stapl/containers/graph/algorithms/graph_io.hpp>
#include <stapl/containers/graph/algorithms/properties.hpp>
#include <stapl/containers/graph/algorithms/betweenness_centrality.hpp>
#include <stapl/algorithms/algorithm.hpp>
We are going to use the betweenness centrality algorithm, so we need to include it. Note that we also include the general algorithm.hpp because we want to use the STL-style algorithm max_element.
## Main
Every STAPL program begins with the stapl_main function, which is the entry point for the program. It receives the command-line arguments in argc and argv like a traditional C++ main. At the end of the function, we'll return 0 for success, or an error code if something went wrong.
stapl::exit_code stapl_main(int argc, char* argv[])
{
// ...
return 0;
}
## Graph I/O
Next, we will read in a graph from the disk:
auto graph = stapl::load_edge_list<stapl::properties::bc_property>(argv[1]);
The function load_edge_list will read a graph from the disk that is formatted using the traditional edge list format, which is something like:
num_verts num_edges
0 1
2 3
...
After it reads the graph, it will populate a container and return a graph_view to it. Note that we are using the bc_property for the vertices, which will allow us to store information on the vertex to run the betweenness computation and retrieve the result.
## Algorithms
For the main computation, we will run a parallel graph algorithm and a parallel STL algorithm:
stapl::betweenness_centrality(graph);
auto v = stapl::max_element(graph, [](auto x, auto y) {
return x.property().BC() < y.property().BC();
});
After invoking betweenness_centrality on the graph, all vertices will have their centrality values stored in their properties. That is, for each vertex v, v.property().BC() will contain the centrality value for v.
We can then use max_element to find the largest vertex with respect to this value. We use a custom comparison lambda to specify how we are to compare vertices. In the end, we have a reference to the vertex with highest centrality value as v.
## Output
Finally, we want to print out the vertex we found. We use the stapl::do_once construct to ensure that only a single location prints, as we do not want overlapping output.
stapl::do_once([&]() {
std::cout << v << " is the most important vertex!" << std::endl;
});
## Running
Using a C++14 compiler with MPI, we can compile and run the code as follows:
$ make most_central
$ mpirun -np 16 ./most_central my_graph.el
189 is the most important vertex!
## Full Code
The full code is below.
#include <stapl/containers/graph/graph.hpp>
#include <stapl/containers/graph/algorithms/graph_io.hpp>
#include <stapl/containers/graph/algorithms/properties.hpp>
#include <stapl/containers/graph/algorithms/betweenness_centrality.hpp>
#include <stapl/algorithms/algorithm.hpp>
stapl::exit_code stapl_main(int argc, char* argv[])
{
auto graph = stapl::load_edge_list<stapl::properties::bc_property>(argv[1]);
stapl::betweenness_centrality(graph);
auto v = stapl::max_element(graph, [](auto x, auto y) {
return x.property().BC() < y.property().BC();
});
stapl::do_once([&]() {
std::cout << v << " is the most important vertex!" << std::endl;
});
return 0;
}
|
2018-03-17 14:18:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2621661424636841, "perplexity": 2732.6426212979873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645177.12/warc/CC-MAIN-20180317135816-20180317155816-00573.warc.gz"}
|
https://www.doubtnut.com/question-answer/if-psq-is-a-focal-chord-of-the-ellipse-16x2-25y2400-such-that-sp8-then-find-the-length-of-sq-is-a-2--642538482
|
# If PSQ is a focal chord of the ellipse 16x^2+25y^2=400 such that SP=8, then find the length of SQ is (a) 2 (b) 1 (c) 8/9 (d) 16/9
Updated On: 17-04-2022
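The page stops before the working, so here is a standard solution (added, not from the original page). The ellipse $16x^2+25y^2=400$ is $\frac{x^2}{25}+\frac{y^2}{16}=1$, so $a=5$, $b=4$ and the semi-latus rectum is $\ell=\frac{b^2}{a}=\frac{16}{5}$. For any focal chord, the semi-latus rectum is the harmonic mean of the two segments: $\frac{1}{SP}+\frac{1}{SQ}=\frac{2}{\ell}=\frac{5}{8}$. With $SP=8$, $\frac{1}{SQ}=\frac{5}{8}-\frac{1}{8}=\frac{1}{2}$, hence $SQ=2$, i.e. option (a).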
|
2022-05-25 13:55:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7870899438858032, "perplexity": 3581.772757001125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00480.warc.gz"}
|
https://physics.stackexchange.com/questions/549442/calculating-induced-charge-on-a-neutral-sphere-due-to-a-point-charge-outside
|
# Calculating Induced Charge on a neutral sphere due to a point charge outside
I was able to solve option A conceptually: at the centre of the sphere, the potential due to the induced charge on the sphere is 0, because the induced charge on the left and right hemispheres is equal and opposite. So only the potential due to the point charge needs to be calculated.
But I am not getting how to solve option D. Do I calculate the induced charge on the sphere (which is not the same throughout), can I solve it using Gauss's theorem, or is there another method that I could not think of?
1) Since the potential at the centre of the neutral conducting sphere is $\cfrac{q}{4\pi\epsilon_0 (d+r)}$, as you mentioned,
$V_{q} + V_{sphere} = \cfrac{q}{4\pi\epsilon_0 (d+r)}$ ----$\mathrm{I}$
Since $V_q$ at point B is $\cfrac{q}{4\pi\epsilon_0 d}$, putting it in $\mathrm{I}$:
$V_{sphere} = \cfrac{-qr}{4\pi\epsilon_0 (d+r)d}$
|
2021-05-15 11:18:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8142433762550354, "perplexity": 341.9444616651965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00077.warc.gz"}
|
http://mymathforum.com/algebra/43920-simultaneous-equations.html
|
My Math Forum Simultaneous equations
Algebra Pre-Algebra and Basic Algebra Math Forum
May 20th, 2014, 06:05 PM #1 Banned Camp Joined: May 2014 From: london Posts: 21 Thanks: 0 Simultaneous equations solve this set of simultaneous equations. 7x - 4y = 37 6x + 3y = 51 thank you for all of the help given pre hand. Last edited by skipjack; May 24th, 2014 at 08:32 PM.
May 20th, 2014, 06:43 PM #2 Senior Member Joined: Sep 2012 From: British Columbia, Canada Posts: 764 Thanks: 53 Well you can start like this: $$7x-4y=37\Rightarrow 21x-12y=111$$ $$6x+3y=51\Rightarrow 24x+12y=204$$ Then just add the equations and solve for x. Can you continue then solve for y?
May 20th, 2014, 09:29 PM #3
Math Team
Joined: Oct 2011
Posts: 14,597
Thanks: 1038
Quote:
Originally Posted by ron246 7x - 4y = 37 6x +3y = 51
Divide 2nd equation by 3: 2x + y = 17
So y = 17 - 2x
Substitute in 1st equation:
7x - 4(17 - 2x) = 37
Solve for x..then for y.
However, Eddie's way is usually the easiest...
May 21st, 2014, 10:26 AM #4 Math Team Joined: Oct 2011 From: Ottawa Ontario, Canada Posts: 14,597 Thanks: 1038 Well, if you're not aware of the basics, then it becomes almost impossible to conduct a classroom here... Try here: Two equations in two unknowns - Math Central You can get other sites by googling "2 equations, 2 unknowns" Come back if you have questions...
May 21st, 2014, 02:20 PM #5
Senior Member
Joined: Jul 2013
From: United Kingdom
Posts: 471
Thanks: 40
Quote:
Originally Posted by ron246 Hi, I don't understand the answer. How do I continue to find x and y? Thank you
Type the keywords 'simultaneous equations' into the search box on Youtube and hit enter.
You should find plenty of videos that cover this subject matter. Those who seek shall find.
Last edited by skipjack; May 24th, 2014 at 08:35 PM.
May 22nd, 2014, 02:33 AM #6 Senior Member Joined: Apr 2014 From: Glasgow Posts: 2,156 Thanks: 731 Math Focus: Physics, mathematical modelling, numerical and computational solutions
There are two ways to solve simultaneous equations:
Method 1: Elimination
This method involves adding or taking away the equations in such a way that the y or x is eliminated. Then you have an equation with just an x or just a y, so it can be solved. This is the method that eddybob123 posted; the steps he conducted were as follows:
1. Multiply the first equation by 3.
2. Multiply the second equation by 4.
3. Look at the y terms... one says $\displaystyle -12y$ and the other says $\displaystyle +12y$:
$\displaystyle 21x - 12y = 111$
$\displaystyle 24x + 12y = 204$
4. Add both equations together (you can separate out the x bits, y bits and numbers if you like, but I normally just write it out underneath):
$\displaystyle 21x - 12y = 111$
$\displaystyle 24x + 12y = 204$
------------------------------------
$\displaystyle 45x + 0y = 315$
If the above is confusing, you might want to separate out the bits and see if it helps you, as I've done below. If not, don't bother with it.
y bit: $\displaystyle -12y + 12y = 0$
x bit: $\displaystyle 21x + 24x = 45x$
numbers bit: $\displaystyle 111 + 204 = 315$
so $\displaystyle 45x = 315$. Dividing both sides by 45 gives $\displaystyle x = \frac{315}{45} = 7$.
Now we know that x = 7, we substitute this back into any one of the equations we started with to get y. Taking the second one (it looks nice! You can use the first one if you fancy.):
$\displaystyle 6x + 3y = 51$
$\displaystyle 6\times7 + 3y = 51$
$\displaystyle 42 + 3y = 51$
$\displaystyle 3y = 51 - 42 = 9$
$\displaystyle y = \frac{9}{3} = 3$
So the answer is x = 7, y = 3.
You might be thinking "I added the equations above... why was that?" It's because the signs were different. My teacher always said "remember SSAD... subtract if same, add if different". So... we added the equations above because the signs on the 12y were different (+ and -). If they are the same (+ and + or - and -), you subtract the two equations instead.
Method 2: Substitution
Rearrange one equation for x or y, then substitute this into the second. This is the superior method for more difficult simultaneous equations.
Take the first equation and rearrange for y (or x, if you fancy; I'm going to pick y):
$\displaystyle 7x - 4y = 37$
$\displaystyle 7x = 37 + 4y$
$\displaystyle 4y = 7x - 37$
$\displaystyle y = \frac{7}{4}x - \frac{37}{4}$
Then substitute this into the other equation:
$\displaystyle 6x + 3y = 51$
$\displaystyle 6x + 3\left(\frac{7}{4}x - \frac{37}{4}\right) = 51$
$\displaystyle 6x + \frac{21}{4}x - \frac{111}{4} = 51$
$\displaystyle \frac{24}{4}x + \frac{21}{4}x - \frac{111}{4} = \frac{204}{4}$
$\displaystyle \frac{45}{4}x = \frac{315}{4}$
$\displaystyle x = \frac{315}{4} \times \frac{4}{45} = \frac{315}{45} = 7$
Then resubstitute the answer for x back into the equation as in Method 1 to get y (y = 3).
As you can see, Method 1 is much simpler, so for simultaneous equations like the ones you're getting, only use Method 1 for your work at the moment. Don't bother with the other one, but remember that it exists if someone surprises you in the future with a crazy simultaneous equation.
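(Not part of the thread: either method can be sanity-checked in a couple of lines of Python.)
import numpy as np

A = np.array([[7.0, -4.0],
              [6.0,  3.0]])
b = np.array([37.0, 51.0])
print(np.linalg.solve(A, b))  # -> [7. 3.], i.e. x = 7, y = 3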
https://www.quantum-machines.co/faq/how-is-the-quantum-orchestration-platform-and-the-opx-different-from-general-purpose-test-lab-equipment/
QUANTUM ORCHESTRATION PLATFORM
How is the Quantum Orchestration Platform and the OPX+ different from general-purpose test/lab equipment?
The Quantum Orchestration Platform is a whole new paradigm for quantum control and is fundamentally different from general-purpose test equipment like AWGs, lock-ins, digitizers, etc. The main differences are:
1. The span of quantum experiments & algorithms which can be run out-of-the-box
2. The pace of the research and development
3. The level of adequacy of the specific specs & capabilities required for quantum research & development (e.g., latency, run-time, etc.)
1) The span of quantum experiments & algorithms which can be run out-of-the-box
We like to think of the span of experiments & algorithms which a system can run as the subspace of the experimental phase-space that it covers. While AWGs, lock-ins, and digitizers cover specific points or small regions in this phase space, the Quantum Orchestration Platform covers it entirely. In other words, each general-purpose test tool, even if it is re-branded as a quantum controller, has a fixed set of allowable functions. The Quantum Orchestration Platform (QOP), however, is a full-stack system allowing you to easily and quickly run even your dream experiments and real-time sequences out-of-the-box, from a high-level programming language, QUA. In most cases, each test and measure tool can be expressed and implemented as a single QUA program that can run on the QOP. Alternatively, each such instrument can be described by omitting a different subspace of the full QOP’s phase-space.
2) The pace of the research and development
Every once in a while you have a new brilliant idea for an experiment. While these ideas are more groundbreaking, they are also more challenging and end up being outside the scope of your general-purpose test equipment (its subspace). Once this happens, you have 3 choices:
1. Repurpose your general-purpose equipment. It appears that in all labs such repurposing draws an incredible amount of time and resources, which comes at the expense of the physics and science to be explored. Whether it’s FPGA programming, coding libraries, dealing with drivers, or synchronizing different modules, labs spend years of work repurposing general-purpose AWGs, lock-ins, and digitizers for quantum control. It’s often required even for the simplest Ramsey and spectroscopy experiments, and it’s always a must when it comes to multi-axis tomography, the 2-qubit RB sequence, and all the way to multi-qubit quantum error correction. We’ve met students who themselves spent months and years doing so. We actually did it ourselves!
2. Give up your brilliant idea and introduce a new constraint for your ideas’ phase-space. It must comply with the experimental phase-space covered by your existing control system.
3. Get an OPX+! We firmly believe that scientific progress relies on ideas, but also on the capabilities of the tools we use. Even our ideas in many cases stem from what we define as technically possible. Our goal at QM is to let you imagine any experiment, the most groundbreaking research, and the most sought-for flagship papers, and always know: yes, of course, it can run. Right out of the box!
In experimental physics, there are many bottlenecks. Long fabrication processes, mirrors alignment (and re-alignment!), helium leakages, vacuum-chamber baking, lead times of crucial equipment, and last but not least: in-house development of quantum control capabilities. Specifically, in quantum computing, the control layer can either be an enabler to progress rapidly and run even the most complex experiments seamlessly or be one of the leading bottlenecks in the lab. Our mission is to allow all teams to run even the wildest experiments of their dreams seamlessly and push the boundaries of the physics they can explore to a whole new level.
3) The level of adequacy for the specific specs & capabilities required for quantum research & development
The general-purpose equipment available today was not built for quantum. In the best-case scenario, it was rebranded. AWGs, lock-ins, and digitizers are used for communication systems, lidars, medical device research, and the list goes on. Of course, we don’t mind non-quantum experimentalists using the same machines, but this has several consequences. First, these machines are limited in the feature set they provide. They are also misaligned with the requirements of quantum computing, not supplying you with the critical features you require. And finally, they equip you with quite a few features you simply don’t need (that you’re still paying for). The QOP full-stack quantum control hardware and software, and all of its features, were created by quantum physicists for quantum physicists, with your experimental needs in mind.
Why use a new quantum programming language, QUA, and how do I use it?
Well, there’s the long and the short of it. So let’s start with the short:
Why: Because we’ve built it to operate as quickly, efficiently, and easily with the Pulse Processor, giving you an adaptive, intuitive, and fully controllable way to probe your qubits.
How: Surprisingly easily!
Now, let’s get into the heart of the matter:
As physicists, we understand the pain of having to learn yet another programming language, but writing FPGA and low-level code every time you want to run a Ramsey measurement? That’s a worse kind of pain. How many lines of code do you need to painstakingly write in order to run the experiments of your dreams? Probably more than you care to admit. This is where QUA comes in. QUA is a pulse-level quantum coding language that allows quantum researchers to run any experiment, on any type of qubits, quickly and easily. That Ramsey pulse, for instance, can be written, sent, and measured in just 10 lines of QUA code. But more on that in a minute.
When designing QUA, we set out to find the easiest way to program our pulses and send them to the qubit. We wanted to create a direct line of communication with our FPGA-based Pulse Processor in the OPX+, allowing us to have complete control of everything we might want. This quantum language was built by quantum physicists for quantum physicists, all with the purpose of making your quantum experiments as seamless as can be.
You write QUA the way you would explain it to other physicists in your group. You play this pulse to that element, listen on this measurement channel, demodulate that result.
As for how the language actually works, there are three main steps: define a pulse sequence, play that pulse sequence, and measure the readout of that pulse sequence.
The pièce de résistance is that the compiler does all of the hard work: translating the pulses to FPGA commands. In other words, you need to program only in the high-level pulse programming language QUA, which is translated into the low-level FPGA instructions. Programming in such a way is very useful; imagine you want to generate parametric Gaussian waveforms. With an AWG, you would need to upload a bunch of Gaussian profiles and figure out the timing in between. With QUA, you can set up a for loop that sequentially sweeps over various Gaussian amplitudes. Turning your experiment into instructions that the FPGA can understand and run is already taken care of, as the sketch below illustrates.
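To make that concrete, here is a minimal QUA sketch of such an amplitude sweep. It is illustrative only: the element name "qubit" and the "gaussian" operation are assumed to be defined in the user's configuration, and the sweep bounds are arbitrary.

```python
# Illustrative sketch only: assumes a configuration that defines a
# "gaussian" operation on an element named "qubit".
from qm.qua import program, declare, fixed, for_, play, amp

with program() as amplitude_sweep:
    a = declare(fixed)                      # real-time amplitude variable
    with for_(a, 0.1, a < 1.0, a + 0.1):    # loop executes on the FPGA
        play("gaussian" * amp(a), "qubit")  # same samples, scaled amplitude
```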
Feel free to check out this blog post with more info on QUA and the OPX+. And here is a quick guide covering the essentials of QUA.
Do feel free to reach out to us if you would like any more insight or information on how we can help you perform your experiments.
Can I do time tagging with the OPX/OPX+?
Yes, and much more! Time tagging and TTL counting are key elements of many quantum computing platforms (e.g., defect-center and AMO) and can be performed with the OPX/OPX+ with ease. Signals coming out of single-photon counting modules (SPCMs, or similar) can be directly connected to the OPX/OPX+ inputs, and tagging/counting is then performed natively within the Pulse Processing Unit. OPX/OPX+ users employ this technique for a great deal of different applications, such as optical quantum sensing, communication, and quantum information processing. As such, counting & tagging are key components of the Quantum Orchestration Platform (QOP).
The OPX+ standard operational mode time-tags events with 1 ns timing resolution with 1 ns dead time, for each of the analog input channels. Additionally, a high-resolution time tagging mode is available, pushing the resolution down to a few tens of picoseconds (~50ps) with increased (<100ns) deadtime.
A time-tag is generated when a voltage trigger edge is detected at one of the analog inputs. The trigger edge can be defined in configuration and can be a simple threshold or an arbitrarily complicated dynamic multi-threshold, polarity, and derivative check. This allows you to easily implement complicated sequences spending virtually no time in setting up your time tagging configuration. Then, the time tagging is done easily within a measure command in a single line:
times = declare(int, size=10)
counts = declare(int)
measure([pulse], [element], [stream],
        time_tagging.analog([times], [duration], [counts]))
This approach is universal and is fully embedded in the real-time logic of the QOP. Therefore, a time tagging command will allow for results to be used in real-time during an experiment, e.g. setting dynamic thresholds, performing estimations on the fly, or for conditional triggers. This could mean sending out a trigger pulse to a laser only if and when a signal is time-tagged and recognized as satisfying a certain threshold. It could also mean performing Bayesian estimation on a vector of tags, updating it while new tags come in. This is done with the smallest latency possible (on the order of ~100 ns for the simplest case), as all computation, tagging, and decision making is done in real-time on the FPGA-based pulse processor.
The ability to write complex sequences with only a few lines of code while retaining the full performance of the FPGA ensures ease of use and the fastest time to result. Coding a dynamic Bayesian estimation protocol on a real experimental setup just became a first-year programming exercise.
In the QOP framework, time tagging is one of many tools that can be used for real-time branching, computation, and control. All of the analog inputs of the OPX+ allow for flexible and independent time tagging capabilities, while our next-generation product, available soon, will offer many more inexpensive digital input channels for experiments involving many digital signals.
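As a hedged illustration of this kind of real-time branching, the following QUA-style sketch counts tags in a readout window and fires a trigger only above a threshold. The element names ("spcm", "laser"), the "readout" and "trigger" operations, the 200 ns window, and the threshold of 2 counts are all hypothetical assumptions, not official QM code.

```python
# Hypothetical sketch: element/pulse names and parameters are illustrative.
from qm.qua import *

with program() as conditional_trigger:
    times = declare(int, size=10)          # time-tag buffer (ns)
    counts = declare(int)                  # number of tags detected
    measure("readout", "spcm", None,
            time_tagging.analog(times, 200, counts))
    with if_(counts > 2):                  # real-time decision on the FPGA
        play("trigger", "laser")           # fire only if threshold is met
```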
Can I implement real-time analog feedback with the OPX+?
Easily! Analog feedback is a powerful tool for quantum experiments. Potential applications include correcting a drifting magnetic flux bias in real-time to stabilize a qubit frequency, tuning a pulse’s frequency in real-time to stay resonant with (or detuned by a constant amount from) a dynamically changing transition, or reversing partial wavefunction collapse induced by a weak quantum measurement.
Implementing analog feedback with the OPX+ is straightforward, and is accomplished with the following code snippet inserted in a QUA program:
for n in range(0, 20):
    measure("Readout", "Qubit", None,
            demod.full("cos", I), demod.full("sin", Q))
    a = FeedbackAmplitude(I, Q)
    play("Pulse" * amp(a), "Qubit")
The program outputs a measurement pulse “Readout” to the “Qubit” element and demodulates the transmitted signal to produce the I and Q quadratures. A variable a is calculated from these quantities through an arbitrary user-defined function FeedbackAmplitude(I,Q), which then sets the amplitude of the feedback signal. This control sequence runs continuously, as we can see with the following figure produced by our hardware simulator:
The upper plot shows the quadratures output by the OPX+ for the continuous measurement signal, which after transmission through the sample is demodulated for measurement at 100 ns slices (a parameter set by the user). A real-time calculation then produces a correction to the pulse amplitude.
Correcting the pulse frequency is just as easy with the update_frequency() command:
for n in range(0, 20):
    measure("Readout", "Qubit", None,
            demod.full("cos", I), demod.full("sin", Q))
    f = FeedbackFrequency(I, Q)
    update_frequency("Qubit", f)
With this program the OPX+ outputs the following:
How much pulse memory is available to store waveforms?
The OPX+ uses memory in a completely different way from your garden-variety AWG, and we should first understand how so as not to compare apples to oranges.
Consider how you would play a Ramsey sequence from an AWG with a 1 GSPS sampling rate. This involves uploading a waveform long enough to contain the two excitation pulses as well as the delay in between. If each pulse is 20 ns long, and the delay between them is 1000 ns, then at a sampling rate of 1 GSPS this waveform would consist of 1040 samples.
The OPX+ pulse processor operates in a completely different manner. Firstly, only the pulse amplitude is stored in the memory; upconversion to the intermediate frequency happens in real time. A pulse of constant amplitude and arbitrary length is thus generated from a single sample!
For the Ramsey sequence we might want the 20 ns pulse to have a Gaussian envelope, and accordingly use 20 samples of waveform memory. The QUA program to run a Ramsey sequence would look like this:
play("Gaussian", "Qubit")
wait(t_delay)
play("Gaussian", "Qubit")
The OPX+ uses the same waveform memory to play the Gaussian twice, and can just as easily play it a thousand times — with the same 20 samples! In fact, it can dynamically change the Gaussian amplitude, or stretch the Gaussian for a pulse duration longer than 20 ns — whether pre-programmed or in a real-time response to measurement — without using additional memory.
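For instance, here is a hedged QUA sketch of that reuse, with the element and operation names carried over from the example above; the specific amplitude factor and duration are arbitrary assumptions.

```python
# Illustrative sketch: one stored 20-sample Gaussian reused with real-time
# amplitude scaling and duration stretching, without extra waveform memory.
from qm.qua import program, play, amp

with program() as reuse_waveform:
    play("Gaussian" * amp(0.5), "Qubit")     # same samples, half amplitude
    play("Gaussian", "Qubit", duration=100)  # same samples, stretched pulse
                                             # (duration in clock cycles)
```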
What about the wait(t_delay) command? An AWG requires a long sequence of zeros to space the pulses, and since characterization of high-coherence devices requires long delay times, memory limitations can be prohibitive. But in the OPX+ the wait() command does not use any waveform memory!
A full Ramsey experiment, including a measurement operation followed by a wait() command to allow the qubit to return to the ground state, would look like this:
with for_each_(t_delay, t_values):
    play("Gaussian", "Qubit")
    wait(t_delay)
    play("Gaussian", "Qubit")
    measure("Readout", "Qubit", ...)
    wait(reset)
The array t_values over which we are looping for t_delay can contain a million values, and the variables t_delay and reset can have values of seconds — and the entire experiment will still exploit the same 20 samples of waveform memory.
Now that we understand how powerfully and intelligently the OPX+ utilizes its memory, we can give a short answer: Each OPX+ channel has a waveform memory of 2^16 = 65,536 samples. This sounds small for an AWG, but is huge for a pulse processor!
What is the Pulse Processor, and how does it perform qubit control?
As physicists, we always like to ask the more fundamental questions, even when at first glance they seem trivial. In order to answer “what is the pulse processor,” it is useful to first answer the trivial question “what is a quantum experiment?” This is because the pulse processor was architected from the ground up to run even the most complex quantum experiments one could think of. Now, let’s break down a quantum experiment into four main components:
1. Gates – The different terms in the Hamiltonian which cause the sought time-evolution. These gates are usually performed by directing well-crafted pulses (laser, microwave) to the qubits.
2. Measurements – Since you’re a classical being, at some point, you will have to collapse the system so you can “see it for yourself”. The measurements are performed in various ways – ADCs, SPCMs, PDs, cameras, etc.
3. Classical processing – There is no quantum processing without classical processing. Whether it’s demodulation, integration, time-tagging, TTL counting, state estimation, error estimation, Bayesian estimation, or even arithmetic as simple as τ = τ + 5 when looping over the time difference in a Ramsey sequence. There is not a single quantum protocol that does not require classical processing.
4. Control flow – the good-old if/else, while, for, cases, etc. From the simplest averaging loop and parameter-sweep loop, through active-reset and to multi-qubit error correction employing a multitude of nested if/else’s. Control-flow is indispensable for quantum protocols.
Every quantum experiment (or protocol) is a combination of these four elements. Every quantum protocol is an entangled sequence of gates, measurements, and classical processing, all combined in various ways and wrapped with various control-flow statements. And someone has to orchestrate all that!
The Pulse Processor is a processor architected to run sequences that combine all the above in real-time, in a perfectly synchronized and orchestrated way. That includes:
1. Waveform generation in a fully parametric manner. Namely, the pulse processor does not need you to load all the pulses in advance, but rather it generates them in real-time as defined by the program. In other words, the pulse processor directly processes pulses as opposed to simply shoving them into memory.
2. Waveform acquisition including the acquisition of both analog data (through ADCs) and digital data (from SPCMs, PDs, camera, etc). Including continuous acquisition for CW measurements or qubit-tracking (and weak measurements) as well as pulsed acquisition in synchronization with the waveform generation and real-time processing.
3. Real-time processing including real-time processing of acquired data (e.g., weighted demodulation and integration, image processing, TTL counting, time-tagging, etc.), state estimation, and even neural networks.
4. Control flow including everything you typically do in MATLAB/python, now running in real-time, at time scales that are faster than your qubits (10s of nanoseconds)! Nested loops, complex branching trees with countless if/else’s, to form the most complex protocol you may have in mind.
And above all, these four elements are NOT to be regarded as independent. Quantum protocols are an interacting system, where waveform generation leads to waveform acquisition, followed by classical processing which then affects the following generated pulses. And many such threads running in parallel, and affecting each other as well.
To enable such performance, the pulse processor is built in a multi-core architecture containing several pulsers. Each pulser is an independent real-time core capable of driving one or more quantum elements (qubits, collective modes, two-/multi-level transitions, resonators, etc.). Every pulser is essentially a specialized processing unit that may simultaneously handle both waveform generation, waveform acquisition, and all the real-time calculations (classical processing) required (it is Turing complete!) in a deterministic manner and with ultra-low latency.
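As a hedged sketch of how these four ingredients combine in practice, consider an active-reset fragment: a measurement, real-time demodulation, a classical threshold decision, and a conditional gate. The element names ("resonator", "qubit"), the pulse names, and the threshold value are illustrative assumptions, not official QM code.

```python
# Illustrative only: element/pulse names and the threshold are assumed.
from qm.qua import *

with program() as active_reset:
    I = declare(fixed)                       # classical real-time variable
    measure("readout", "resonator", None,
            demod.full("cos", I))            # acquisition + demodulation
    with if_(I > 0.0):                       # classical processing + branching
        play("pi_pulse", "qubit")            # gate: return qubit to ground
```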
What does it mean that the Pulse Processor can run even the most complex quantum experiments out-of-the-box?
The pulse-processor orchestrates all the waveform generation, waveform acquisition, classical processing, and control flow in real-time. But what is its API?
The API for the pulse processor is QUA: a powerful yet intuitive quantum programming language. In QUA you can formulate any protocol/experiment – from spectroscopy to quantum-error-correction. Once the program is formulated it is compiled by the XQP compiler to the assembly language of the pulse processor. Next, the program, now formulated in the pulse processor’s assembly language, is sent to the pulse processor which runs it in real-time.
Using the intuitive QUA language and our compiler, you can now directly and intuitively code complex sequences from a high-level programming language, including real-time feedback, classical calculations (Turing-complete), comprehensive control flow, etc.
How can I program multi-channel sequences with the OPX+?
The OPX+’s Pulse Processor is a multi-core processor. Each pulser core executes its own sequence independently of the others, unless a protocol calls for inter-core dependencies. Synchronizing and coordinating different threads is handled by the compiler behind the scenes, making it easy to set up complex experiments with simple instructions.
Suppose we want to play a Gaussian pulse with amplitude A1 to qubit_1, simultaneously with another Gaussian with amplitude A2 to qubit_2. This would be done with the following QUA program:
play("gaussian" * amp(A1), "qubit_1")
play("gaussian" * amp(A2), "qubit_2")
The two play() commands address different threads, and therefore play simultaneously. This results in an output as shown below:
We can delay one of the pulses by using the wait() command. In the following code, the qubit_1 thread alone is delayed by 20 clock cycles, while the qubit_2 thread is unaffected:
wait(20, "qubit_1")
play("gaussian" * amp(A1), "qubit_1")
play("gaussian" * amp(A2), "qubit_2")
Many experiment protocols call for one sequence to begin only after another sequence is finished. Rather than manually calculate the duration of the first sequence, this can be implemented in QUA with the align() command:
play("gaussian" * amp(A1), "qubit_1")
wait(20, "qubit_1")
play("gaussian" * amp(A1), "qubit_1")
align("qubit_1", "qubit_2")
play("gaussian" * amp(A2), "qubit_2")
This results in the following output:
The align(‘qubit_1’, ‘qubit_2’) command causes the two threads to wait for each other. Any command appearing below this line that addresses either of the two threads will only be implemented after both have completed all commands appearing above this line. This dependency is evaluated in real time, and synchronizes the threads even if the duration of the first sequence is not known at compile time!
This last point is vital for many sequences, such as repeat-until-success protocols. Consider the following code:
with while_((result > 0.2) & (N < 1000)):
    ### Subroutine involving "qubit_1" ###

align("qubit_1", "qubit_2")
play("gaussian" * amp(A2), "qubit_2")
This will run a nondeterministic while() loop, within which the OPX+ might play pulses to qubit_1, measure it, update the result variable based on the measurement, and increment the counter variable N. It might run a single iteration before exiting the loop, and it might run 1,000. This is not known at compile time. But the simple align() still synchronizes the pulses, with all of the real-time control complexity handled by the compiler.
http://www.est.colpos.mx/R-mirror/web/packages/StratifiedMedicine/vignettes/SM_PRISM.html
# Introduction
Welcome to the StratifiedMedicine R package. The overall goal of this package is to develop analytic and visualization tools to aid in stratified and personalized medicine. Stratified medicine aims to find subsets or subgroups of patients with similar treatment effects, for example responders vs non-responders, while personalized medicine aims to understand treatment effects at the individual level (does a specific individual respond to treatment A?).
Currently, the main tools in this package are: (1) Filter Models (identify important variables and reduce the input covariate space), (2) Patient-Level Estimate Models (using regression models, estimate counterfactual quantities such as the individual treatment effect), (3) Subgroup Models (identify groups of patients using tree-based approaches), (4) Parameter Estimation (across the identified subgroups), and (5) PRISM (Patient Response Identifiers for Stratified Medicine; combines tools 1-4). Development of this package is ongoing.
As a running example, consider a continuous outcome (ex: % change in tumor size) with a binary treatment (study drug vs standard of care). The estimand of interest is the average treatment effect, \(\theta_0 = E(Y|A=1)-E(Y|A=0)\). First, we simulate continuous data where roughly 30% of the patients receive no treatment benefit for using \(A=1\) vs \(A=0\). Responders vs non-responders are defined by the continuous predictive covariates \(X_1\) and \(X_2\) for a total of four subgroups. Subgroup treatment effects are: \(\theta_{1} = 0\) (\(X_1 \leq 0, X_2 \leq 0\)), \(\theta_{2} = 0.25\) (\(X_1 > 0, X_2 \leq 0\)), \(\theta_{3} = 0.45\) (\(X_1 \leq 0, X_2 > 0\)), and \(\theta_{4} = 0.65\) (\(X_1 > 0, X_2 > 0\)).
library(ggplot2)
library(dplyr)
library(partykit)
library(StratifiedMedicine)
library(survival)
dat_ctns = generate_subgrp_data(family="gaussian")
Y = dat_ctns$Y
X = dat_ctns$X # 50 covariates, 46 are noise variables, X1 and X2 are truly predictive
A = dat_ctns$A # binary treatment, 1:1 randomized

length(Y)
#> [1] 800
table(A)
#> A
#>   0   1
#> 409 391
dim(X)
#> [1] 800  50

# Filter Models

The aim of filter models is to potentially reduce the covariate space such that subsequent analyses focus on the "important" variables. For example, we may want to identify variables that are prognostic and/or predictive of the outcome across treatment levels. Filter models can be run using the "filter_train" function. The default is to search for prognostic variables using elastic net (Y~ENET(X); Zou and Hastie 2005). Random forest based importance (filter="ranger") is also available. See below for an example. Note that the object "filter.vars" contains the variables that pass the filter, while "plot_importance" shows us the relative importance of the input variables. For glmnet, this corresponds to the standardized regression coefficients (variables with coefficients=0 are not shown).

res_f <- filter_train(Y, A, X, filter="glmnet")
res_f$filter.vars
#> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24" "X26"
#> [13] "X31" "X40" "X46" "X50"
plot_importance(res_f)
An alternative approach is to search for variables that are potentially prognostic and/or predictive by including variable-by-treatment interactions, i.e., Y~ENET(A,X,XA). Variables with estimated coefficients of 0 in both the main effects (X) and interaction effects (XA) are filtered out. This can be implemented by tweaking the hyper-parameters:
res_f2 <- filter_train(Y, A, X, filter="glmnet", hyper=list(interaction=T))
res_f2$filter.vars
#>  [1] "X1"  "X2"  "X3"  "X5"  "X7"  "X8"  "X12" "X16" "X18" "X23" "X24" "X26"
#> [13] "X31" "X46" "X50" "X10" "X17" "X20" "X37" "X39" "X40"
plot_importance(res_f2)

Here, note that both the main effects of X1 and X2, along with the interaction effects (labeled X1_trtA and X2_trtA), have relatively large estimated coefficients.

# Patient-level Estimate (PLE) Models

The aim of PLE models is to estimate counterfactual quantities, for example the individual treatment effect. This is implemented through the "ple_train" function, which follows the framework of Kunzel et al 2019: base learners and meta-learners are combined to obtain the estimates of interest. For family="gaussian" or "binomial", this outputs estimates of the treatment-specific means and treatment differences. For family="survival", either logHR or restricted mean survival time (RMST) estimates are obtained. Current base-learner options include "linear" (lm/glm/cox), "ranger" (random forest through the ranger R package), "glmnet" (elastic net), and "bart" (Bayesian Additive Regression Trees through the BART R package). Meta-learners include the "T-learner" (treatment-specific models), "S-learner" (single regression model), and "X-learner" (2-stage approach; see Kunzel et al 2019). See below for an example. Note that the object "mu_train" contains the training set patient-level estimates (outcome-based and propensity scores), "plot_ple" shows a waterfall plot of the estimated individual treatment effects, and "plot_dependence" shows the partial dependence plot for variable "X1" with respect to the estimated individual treatment effect.

res_p1 <- ple_train(Y, A, X, ple="ranger", meta="X-learner")
summary(res_p1$mu_train)
#> mu_0 mu_1 diff_1_0 pi_0
#> Min. :0.6493 Min. :0.4634 Min. :-0.38289 Min. :0.5112
#> 1st Qu.:1.4165 1st Qu.:1.5749 1st Qu.: 0.09688 1st Qu.:0.5112
#> Median :1.6220 Median :1.8047 Median : 0.20678 Median :0.5112
#> Mean :1.6339 Mean :1.8435 Mean : 0.20895 Mean :0.5112
#> 3rd Qu.:1.8417 3rd Qu.:2.1031 3rd Qu.: 0.31797 3rd Qu.:0.5112
#> Max. :2.6318 Max. :3.1174 Max. : 0.70128 Max. :0.5112
#> pi_1
#> Min. :0.4888
#> 1st Qu.:0.4888
#> Median :0.4888
#> Mean :0.4888
#> 3rd Qu.:0.4888
#> Max. :0.4888
plot_ple(res_p1, target = "diff_1_0") +
ggtitle("Waterfall Plot: E(Y|A=1)-E(Y|A=0)") + ylab("E(Y|A=1)-E(Y|A=0)")
plot_dependence(res_p1, X=X, vars="X1") + ylab("E(Y|A=1)-E(Y|A=0)")
Next, let’s illustrate how to change the meta-learner and the hyper-parameters. See below, along with a two-dimensional PDP plot.
res_p2 <- ple_train(Y, A, X, ple="ranger", meta="T-learner", hyper=list(mtry=5))
summary(res_p2$mu_train)
#>       mu_0             mu_1           diff_1_0            pi_0       
#>  Min.   :0.7052   Min.   :0.5131   Min.   :-1.0249   Min.   :0.5112  
#>  1st Qu.:1.4341   1st Qu.:1.6060   1st Qu.:-0.0202   1st Qu.:0.5112  
#>  Median :1.6323   Median :1.8196   Median : 0.2125   Median :0.5112  
#>  Mean   :1.6398   Mean   :1.8457   Mean   : 0.2059   Mean   :0.5112  
#>  3rd Qu.:1.8451   3rd Qu.:2.0851   3rd Qu.: 0.4443   3rd Qu.:0.5112  
#>  Max.   :2.5934   Max.   :2.9993   Max.   : 1.0341   Max.   :0.5112  
#>       pi_1       
#>  Min.   :0.4888  
#>  1st Qu.:0.4888  
#>  Median :0.4888  
#>  Mean   :0.4888  
#>  3rd Qu.:0.4888  
#>  Max.   :0.4888  

plot_ple(res_p2, target = "diff_1_0") +
  ggtitle("Waterfall Plot: E(Y|A=1)-E(Y|A=0)") + ylab("E(Y|A=1)-E(Y|A=0)")

plot_dependence(res_p2, X=X, vars=c("X1", "X2")) +
  ggtitle("Heat Map (By X1,X2): E(Y|A=1)-E(Y|A=0)") +
  ylab("E(Y|A=1)-E(Y|A=0)")
# Subgroup Models
Subgroup models are called using the "submod_train" function and currently only include tree-based methods (ctree, lmtree, glmtree from the partykit R package and rpart from the rpart R package). First, let’s run the default (for continuous outcomes, lmtree). This aims to find subgroups that are either prognostic and/or predictive.
res_s1 <- submod_train(Y, A, X, submod="lmtree")
table(res_s1$Subgrps.train)
#> 
#>   3   4   6   7 
#> 149 277 267 107 
plot(res_s1$fit$mod)

Another generic approach is "otr", which follows an outcome weighted learning approach. Here, we regress PLE ~ ctree(X) with weights = abs(PLE), where PLE = E(Y|A=1,X) - E(Y|A=0,X) is the estimated individual treatment effect. For survival endpoints, the treatment difference would correspond to either logHR or RMST. For the example below, we set the clinically meaningful threshold to 0.1 (thres=">0.10").

res_s2 <- submod_train(Y, A, X, thres = ">0.10", mu_train = res_p2$mu_train, submod = "otr")
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> -1.0249 -0.0202 0.2125 0.2059 0.4443 1.0341
plot(res_s2$fit$mod)
# Parameter Estimation
To facilitate parameter estimation across the identified subgroups, "StratifiedMedicine" currently includes the function "param_est". Options include param="lm", "dr", "ple", "cox", and "rmst", which correspond respectively to linear regression, the doubly robust estimator, averaging the patient-level estimates, Cox regression, and RMST (as in the survRM2 R package). Notably, if the subgroups are determined adaptively (for example through lmtree), without resampling corrections, point-estimates tend to be overly optimistic. We address this later.
Given a candidate set of subgroups, a simple approach is to fit linear regression models within each subgroup to obtain treatment-specific and treatment-difference estimates. See below.
param.dat1 <- param_est(Y, A, X, Subgrps = res_s1$Subgrps.train, param="lm")
param.dat1 %>% filter(estimand=="mu_1-mu_0")
#>   Subgrps   N  estimand          est         SE         LCL       UCL
#> 1    ovrl 800 mu_1-mu_0  0.214721873 0.07270356  0.07200932 0.3574344
#> 2       3 149 mu_1-mu_0 -0.074540934 0.17749849 -0.42529970 0.2762178
#> 3       4 277 mu_1-mu_0 -0.002507699 0.11778878 -0.23438626 0.2293709
#> 4       6 267 mu_1-mu_0  0.392978208 0.12852946  0.13991368 0.6460427
#> 5       7 107 mu_1-mu_0  0.735079896 0.19631153  0.34587320 1.1242866
#>           pval alpha
#> 1 0.0032352410  0.05
#> 2 0.6751291856  0.05
#> 3 0.9830298696  0.05
#> 4 0.0024593995  0.05
#> 5 0.0002942317  0.05

Alternatively, we may instead use the doubly robust estimator, which combines the observed outcome (Y) and model estimates from "ple_train". This requires inputting model estimates (see "mu_hat"). See below:

param.dat2 <- param_est(Y, A, X, Subgrps = res_s1$Subgrps.train,
                       mu_hat = res_p1$mu_train, param="dr")
param.dat2 %>% filter(estimand=="mu_1-mu_0")
#>   Subgrps   N  estimand         est         SE         LCL       UCL
#> 1    ovrl 800 mu_1-mu_0  0.20958932 0.06361335  0.08472028 0.3344583
#> 2       3 149 mu_1-mu_0 -0.04117054 0.15841845 -0.35422480 0.2718837
#> 3       4 277 mu_1-mu_0  0.02191513 0.10010936 -0.17515979 0.2189901
#> 4       6 267 mu_1-mu_0  0.35692736 0.10860576  0.14309105 0.5707637
#> 5       7 107 mu_1-mu_0  0.67696977 0.18087503  0.31836743 1.0355721
#>           pval alpha
#> 1 0.0010285354  0.05
#> 2 0.7953138521  0.05
#> 3 0.8268804462  0.05
#> 4 0.0011509845  0.05
#> 5 0.0002960023  0.05

# PRISM: Patient Response Identifiers for Stratified Medicine

While the above tools individually can be useful, PRISM (Patient Response Identifiers for Stratified Medicine; Jemielita and Mehrotra (to appear), https://arxiv.org/abs/1912.03337) combines each component for a stream-lined analysis. Given a data-structure of \((Y, A, X)\) (outcome(s), treatments, covariates), PRISM is a five-step procedure:

1. Estimand: Determine the question(s) or estimand(s) of interest. For example, \(\theta_0 = E(Y|A=1)-E(Y|A=0)\), where A is a binary treatment variable. While this isn’t an explicit step in the PRISM function, the question of interest guides how to set up PRISM.
2. Filter (filter): Reduce the covariate space by removing variables unrelated to outcome/treatment.
3. Patient-level estimate (ple): Estimate counterfactual patient-level quantities, for example the individual treatment effect, \(\theta(x) = E(Y|X=x,A=1)-E(Y|X=x,A=0)\). These can be used in the subgroup model and/or parameter estimation.
4. Subgroup model (submod): Identify subgroups of patients with potentially varying treatment response.
5. Parameter estimation and inference (param): For the overall population and discovered subgroups, output point estimates and variability metrics. If the subgroups are determined adaptively, resampling is needed to avoid overly optimistic point estimates and to form CIs.
6. Resampling: Repeat Steps 1-4 across \(R\) non-parametric bootstrap resamples to generate subgroup-specific bootstrap distributions of the parameter estimates.

Ultimately, PRISM provides information at the patient level, the subgroup level (if any), and the overall population. While there are defaults in place, the user can also input their own functions/model wrappers into the PRISM algorithm. We will demonstrate this later. PRISM can also be run without treatment assignment (A=NULL); in this setting, the focus is on finding subgroups based on prognostic effects.

The tables below describe the default PRISM configurations for different family (gaussian, binomial, survival) and treatment (no treatment vs treatment) settings, including the associated estimands. Note that OLS refers to ordinary least squares (linear regression), GLM refers to generalized linear model, and MOB refers to model-based partitioning (Zeileis, Hothorn, Hornik 2008; Seibold, Zeileis, Hothorn 2016). To summarise, default models include elastic net (Zou and Hastie 2005) for filtering, random forest (the "ranger" R package) for patient-level/counterfactual estimation, and MOB (through the "partykit" R package; lmtree, glmtree, and ctree (Hothorn, Hornik, Zeileis 2005)). When treatment assignment is provided, parameter estimation for continuous and binary outcomes involves averaging the patient-level estimates within the overall population and discovered subgroups (more details later). For survival outcomes, the Cox regression hazard ratio (HR) or RMST (from the survRM2 package) is used.
Default PRISM Configurations (With Treatment)

| Step        | gaussian                                      | binomial                                      | survival                          |
|-------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------|
| estimand(s) | E(Y\|A=0), E(Y\|A=1), E(Y\|A=1)-E(Y\|A=0)     | E(Y\|A=0), E(Y\|A=1), E(Y\|A=1)-E(Y\|A=0)     | HR(A=1 vs A=0)                    |
| filter      | Elastic Net (glmnet)                          | Elastic Net (glmnet)                          | Elastic Net (glmnet)              |
| ple         | X-learner: Random Forest (ranger)             | X-learner: Random Forest (ranger)             | T-learner: Random Forest (ranger) |
| submod      | MOB(OLS) (lmtree)                             | MOB(GLM) (glmtree)                            | MOB(weibull) (mob_weib)           |
| param       | Doubly Robust (dr)                            | Doubly Robust (dr)                            | Hazard Ratios (cox)               |

Default PRISM Configurations (Without Treatment, A=NULL)

| Step        | gaussian                            | binomial                            | survival                            |
|-------------|-------------------------------------|-------------------------------------|-------------------------------------|
| estimand(s) | E(Y)                                | Prob(Y)                             | RMST                                |
| filter      | Elastic Net (glmnet)                | Elastic Net (glmnet)                | Elastic Net (glmnet)                |
| ple         | Random Forest (ranger)              | Random Forest (ranger)              | Random Forest (ranger)              |
| submod      | Conditional Inference Trees (ctree) | Conditional Inference Trees (ctree) | Conditional Inference Trees (ctree) |
| param       | OLS (lm)                            | OLS (lm)                            | RMST (rmst)                         |

# Example: Continuous Outcome with Binary Treatment

For continuous outcome data (family="gaussian"), the default PRISM configuration is: (1) filter="glmnet" (elastic net), (2) ple="ranger" (X-learner with random forest models), (3) submod="lmtree" (model-based partitioning with OLS loss), and (4) param="dr" (doubly robust estimator). To run PRISM, at a minimum, the outcome (Y), treatment (A), and covariates (X) must be provided. See below.

# PRISM Default: filter_glmnet, ranger, lmtree, param_ple #
res0 = PRISM(Y=Y, A=A, X=X)
#> Observed Data
#> Filtering: glmnet
#> PLE: ranger
#> Subgroup Identification: lmtree
#> Parameter Estimation: dr

summary(res0)
#> $PRISM Configuration
#> [1] "glmnet => ranger => lmtree => dr"
#>
#> $Variables that Pass Filter #> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24" "X26" #> [13] "X31" "X40" "X46" "X50" #> #>$Number of Identified Subgroups
#> [1] 6
#>
#> $Variables that Define the Subgroups #> [1] "X1, X2, X50, X26" #> #>$Parameter Estimates
#> Subgrps N estimand est SE alpha CI
#> 4 10 168 mu_0 1.9265 0.0860 0.05 [1.7567,2.0964]
#> 7 11 107 mu_0 2.1880 0.1332 0.05 [1.9238,2.4521]
#> 10 3 149 mu_0 1.1687 0.1110 0.05 [0.9494,1.388]
#> 13 5 110 mu_0 1.3177 0.1057 0.05 [1.1083,1.5271]
#> 16 6 167 mu_0 1.8470 0.0813 0.05 [1.6865,2.0074]
#> 19 9 99 mu_0 1.2580 0.1491 0.05 [0.9621,1.5539]
#> 1 ovrl 800 mu_0 1.6373 0.0457 0.05 [1.5477,1.727]
#> 5 10 168 mu_1 2.1155 0.1003 0.05 [1.9176,2.3135]
#> 8 11 107 mu_1 2.9055 0.1134 0.05 [2.6808,3.1303]
#> 11 3 149 mu_1 1.1311 0.1092 0.05 [0.9154,1.3469]
#> 14 5 110 mu_1 1.5376 0.1179 0.05 [1.3039,1.7713]
#> 17 6 167 mu_1 1.7067 0.1023 0.05 [1.5047,1.9087]
#> 20 9 99 mu_1 1.8960 0.1184 0.05 [1.661,2.1311]
#> 2 ovrl 800 mu_1 1.8459 0.0487 0.05 [1.7504,1.9414]
#> 6 10 168 mu_1-mu_0 0.1890 0.1311 0.05 [-0.0698,0.4478]
#> 9 11 107 mu_1-mu_0 0.7176 0.1762 0.05 [0.3682,1.0669]
#> 12 3 149 mu_1-mu_0 -0.0376 0.1541 0.05 [-0.3422,0.267]
#> 15 5 110 mu_1-mu_0 0.2199 0.1586 0.05 [-0.0944,0.5343]
#> 18 6 167 mu_1-mu_0 -0.1403 0.1298 0.05 [-0.3967,0.1161]
#> 21 9 99 mu_1-mu_0 0.6380 0.1880 0.05 [0.2649,1.0112]
#> 3 ovrl 800 mu_1-mu_0 0.2086 0.0633 0.05 [0.0843,0.3328]
#>
#> attr(,"class")
#> [1] "summary.PRISM"
plot(res0) # same as plot(res0, type="tree")
The summary gives a high-level overview of the findings (number of subgroups, parameter estimates, variables that survived the filter). The default plot() function currently combines tree plots with parameter estimates using the "ggparty" package. We can also directly look for prognostic effects by omitting A (treatment) from PRISM:
# PRISM Default: filter_glmnet, ranger, ctree, param_lm #
res_prog = PRISM(Y=Y, X=X)
#> No Treatment Variable (A) Provided: Searching for Prognostic Effects
#> Observed Data
#> Filtering: glmnet
#> PLE: ranger
#> Subgroup Identification: ctree
#> Parameter Estimation: lm
# res_prog = PRISM(Y=Y, A=NULL, X=X) #also works
summary(res_prog)
#> $PRISM Configuration #> [1] "glmnet => ranger => ctree => lm" #> #>$Variables that Pass Filter
#> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24" "X26"
#> [13] "X31" "X40" "X46" "X50"
#>
#> $Number of Identified Subgroups #> [1] 6 #> #>$Variables that Define the Subgroups
#> [1] "X2, X1, X26"
#>
#> $Parameter Estimates
#>   Subgrps   N estimand    est     SE alpha              CI
#> 2      10  87       mu 1.9091 0.1006  0.05 [1.7091,2.1091]
#> 3      11  80       mu 2.6842 0.1133  0.05 [2.4586,2.9097]
#> 4       4 132       mu 1.1119 0.0970  0.05   [0.92,1.3038]
#> 5       5 266       mu 1.5107 0.0636  0.05  [1.3855,1.636]
#> 6       7 113       mu 1.7016 0.0995  0.05 [1.5045,1.8987]
#> 7       8 122       mu 2.1780 0.0856  0.05 [2.0085,2.3474]
#> 1    ovrl 800       mu 1.7343 0.0363  0.05  [1.663,1.8056]
#> 
#> attr(,"class")
#> [1] "summary.PRISM"

plot(res_prog)

Next, circling back to the first PRISM model with treatment included, let’s review other core PRISM outputs. Results relating to the filter include "filter.mod" (model output) and "filter.vars" (variables that pass the filter). The "plot_importance" function can also be called:

# elastic net model: loss by lambda #
plot(res0$filter.mod)
## Variables that remain after filtering ##
res0$filter.vars
#>  [1] "X1"  "X2"  "X3"  "X5"  "X7"  "X8"  "X10" "X12" "X16" "X18" "X24" "X26"
#> [13] "X31" "X40" "X46" "X50"
# All predictive variables (X1,X2) and prognostic variables (X3,X5,X7) remain.
plot_importance(res0)

Results relating to "ple_train" include "ple.fit" (fitted "ple_train"), "mu.train" (training predictions), and "mu.test" (test predictions). "plot_ple" and "plot_dependence" can also be used with PRISM objects. For example,

plot_ple(res0)
plot_dependence(res0, vars=c("X2"))
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'

Next, the subgroup model (lmtree) identifies subgroups based on varying treatment effects. By plotting the subgroup model object ("submod.fit$mod"), we see that partitions are made through X1 (predictive) and X2 (predictive). At each node, parameter estimates are shown for node (subgroup) specific OLS models, \(Y \sim \beta_0 + \beta_1 A\). For example, patients in nodes 4 and 6 have estimated treatment effects of 0.47 and 0.06 respectively. Subgroup predictions for the train/test sets can be found in the "out.train" and "out.test" data-sets.
plot(res0$submod.fit$mod, terminal_panel = NULL)
table(res0$out.train$Subgrps)
#>
#> 10 11 3 5 6 9
#> 168 107 149 110 167 99
table(res0$out.test$Subgrps)
#>
#> 10 11 3 5 6 9
#> 168 107 149 110 167 99
For any parameter estimation approach, subgroup-specific estimates tend to be overly positive or negative, as the same data that trains the subgroup model is used for parameter estimation. Resampling, such as bootstrapping, is generally preferred for "honest" treatment effect estimates (more details below).
For continuous and binary data, the default parameter estimation approach is param="dr" (doubly robust estimator). This approach incorporates regression estimates, which could potentially increase the efficiency of the point-estimate. Let \(k=1,...,K\) index the \(K\) identified subgroups with corresponding rules \(S_1,...,S_K\). Next, let \(E(Y|X=x,A=a) = \mu(x, a)\) correspond to the outcome regression model(s) with estimates \(\hat{\mu}(x, a)\). These estimates come directly from the fitted PLE model(s), in this case, treatment-specific random forest models. Define the "pseudo-outcomes" as:
$$Y^{\star}_i = \frac{A_i Y_i - (A_i-\hat{\pi}(x_i))\hat{\mu}(a=1,x_i)}{\hat{\pi}(x_i)} - \frac{(1-A_i)Y_i + (A_i-\hat{\pi}(x_i))\hat{\mu}(a=0,x_i)}{1-\hat{\pi}(x_i)}$$
where \(\pi(x)=P(A=1|X=x)\) is the treatment assignment probability for an individual. In a randomized controlled trial, this can be replaced by the marginal probability, \(P(A=1)\). For each discovered subgroup (\(k=1,...,K\)), the treatment effect (or risk difference) and associated SE can then be estimated by averaging the pseudo-outcomes:

$$\hat{\theta}_k = \frac{1}{n_k}\sum_{i \in S_k} Y^{\star}_i$$

$$SE(\hat{\theta}_k) = \sqrt{ n_k^{-2} \sum_{i \in S_k} \left( Y^{\star}_i-\hat{\theta}_{k} \right)^2}$$

CIs can then be formed using t- or Z-intervals. For example, a two-sided 95% Z-interval is \(CI_{\alpha}(\hat{\theta}_{k}) = \left[\hat{\theta}_{k} \pm 1.96\,SE(\hat{\theta}_k) \right]\).
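To make these formulas concrete, here is a minimal illustrative R sketch (not package code) that computes the pseudo-outcomes and one subgroup estimate by hand, assuming the randomized-trial setting above and reusing the mu_0/mu_1 columns from res_p1$mu_train and the subgroup labels from res_s1; the subgroup label 6 is an arbitrary pick from the earlier output.

```r
# Illustrative sketch only; assumes objects from the earlier examples exist.
pi1   <- mean(A)                       # randomized trial: marginal P(A=1)
mu0   <- res_p1$mu_train$mu_0          # outcome model estimates, A=0
mu1   <- res_p1$mu_train$mu_1          # outcome model estimates, A=1
Ystar <- (A * Y - (A - pi1) * mu1) / pi1 -
         ((1 - A) * Y + (A - pi1) * mu0) / (1 - pi1)

in_k    <- res_s1$Subgrps.train == 6   # one discovered subgroup
n_k     <- sum(in_k)
theta_k <- mean(Ystar[in_k])           # doubly robust subgroup estimate
se_k    <- sqrt(sum((Ystar[in_k] - theta_k)^2) / n_k^2)
c(est = theta_k, SE = se_k)
```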
Moving back to the PRISM outputs, for any of the provided "param" options, a key output is the object "param.dat". By default, "param.dat" contains point-estimates, standard errors, lower/upper confidence intervals (depending on alpha_s and alpha_ovrl) and p-values. This output feeds directly into the previously shown default ("tree") plot.
## Overall/subgroup specific parameter estimates/inference
res0$param.dat
#>    Subgrps   N  estimand         est         SE         LCL       UCL
#> 4       10 168      mu_0  1.92654222 0.08604345  1.75666914 2.0964153
#> 5       10 168      mu_1  2.11554429 0.10028464  1.91755523 2.3135333
#> 6       10 168 mu_1-mu_0  0.18900206 0.13109957 -0.06982402 0.4478281
#> 7       11 107      mu_0  2.18798592 0.13324137  1.92382193 2.4521499
#> 8       11 107      mu_1  2.90554294 0.11337864  2.68075877 3.1303271
#> 9       11 107 mu_1-mu_0  0.71755703 0.17619615  0.36823101 1.0668830
#> 10       3 149      mu_0  1.16873922 0.11097727  0.94943455 1.3880439
#> 11       3 149      mu_1  1.13112996 0.10917279  0.91539115 1.3468688
#> 12       3 149 mu_1-mu_0 -0.03760927 0.15414966 -0.34222787 0.2670093
#> 13       5 110      mu_0  1.31765430 0.10565303  1.10825343 1.5270552
#> 14       5 110      mu_1  1.53760339 0.11790147  1.30392650 1.7712803
#> 15       5 110 mu_1-mu_0  0.21994909 0.15862444 -0.09443939 0.5343376
#> 16       6 167      mu_0  1.84698866 0.08127062  1.68653138 2.0074459
#> 17       6 167      mu_1  1.70670115 0.10231870  1.50468744 1.9087149
#> 18       6 167 mu_1-mu_0 -0.14028750 0.12984618 -0.39665032 0.1160753
#> 19       9  99      mu_0  1.25799119 0.14909934  0.96210840 1.5538740
#> 20       9  99      mu_1  1.89604041 0.11843056  1.66101881 2.1310620
#> 21       9  99 mu_1-mu_0  0.63804921 0.18804625  0.26487754 1.0112209
#> 1     ovrl 800      mu_0  1.63730742 0.04566924  1.54766156 1.7269533
#> 2     ovrl 800      mu_1  1.84588296 0.04866485  1.75035689 1.9414090
#> 3     ovrl 800 mu_1-mu_0  0.20857553 0.06330387  0.08431399 0.3328371
#>             pval alpha  Prob(>0)
#> 4   3.645329e-52  0.05 1.0000000
#> 5   5.758562e-49  0.05 1.0000000
#> 6   1.512686e-01  0.05 0.9253020
#> 7   6.854581e-31  0.05 1.0000000
#> 8   3.129137e-47  0.05 1.0000000
#> 9   8.992486e-05  0.05 0.9999767
#> 10  1.054019e-19  0.05 1.0000000
#> 11  2.963700e-19  0.05 1.0000000
#> 12  8.075850e-01  0.05 0.4036236
#> 13  1.020748e-22  0.05 1.0000000
#> 14  5.431590e-24  0.05 1.0000000
#> 15  1.683924e-01  0.05 0.9172185
#> 16  7.761388e-53  0.05 1.0000000
#> 17  2.563120e-37  0.05 1.0000000
#> 18  2.815255e-01  0.05 0.1399791
#> 19  2.933892e-13  0.05 1.0000000
#> 20  4.208238e-29  0.05 1.0000000
#> 21  9.985134e-04  0.05 0.9996544
#> 1  1.570087e-168  0.05 1.0000000
#> 2  7.343066e-181  0.05 1.0000000
#> 3   1.028201e-03  0.05 0.9995076

## Forest plot: Overall/subgroup specific parameter estimates (CIs)
plot(res0, type="tree")

The hyper-parameters for the individual steps of PRISM can also be easily modified. For example, "glmnet" by default selects covariates based on "lambda.min", "ranger" requires nodes to contain at least 10% of the total observations, and "lmtree" requires nodes to contain at least 10% of the total observations. To modify this:

# PRISM Default: glmnet, ranger, lmtree, dr #
# Change hyper-parameters #
res_new_hyper = PRISM(Y=Y, A=A, X=X,
                      filter.hyper = list(lambda="lambda.1se"),
                      ple.hyper = list(min.node.pct=0.05),
                      submod.hyper = list(minsize=200),
                      verbose=FALSE)
plot(res_new_hyper)

# Example: Binary Outcome with Binary Treatment

Consider a binary outcome (ex: % overall response rate) with a binary treatment (study drug vs standard of care). The estimand of interest is the risk difference, \(\theta_0 = E(Y|A=1)-E(Y|A=0)\). Similar to the continuous example, we simulate binomial data where roughly 30% of the patients receive no treatment benefit for using \(A=1\) vs \(A=0\). Responders vs non-responders are defined by the continuous predictive covariates \(X_1\) and \(X_2\) for a total of four subgroups. Subgroup treatment effects are: \(\theta_{1} = 0\) (\(X_1 \leq 0, X_2 \leq 0\)), \(\theta_{2} = 0.11\) (\(X_1 > 0, X_2 \leq 0\)), \(\theta_{3} = 0.21\) (\(X_1 \leq 0, X_2 > 0\)), and \(\theta_{4} = 0.31\) (\(X_1 > 0, X_2 > 0\)).
For binary outcomes (Y=0,1), the default settings are: filter="glmnet", ple="ranger", submod="glmtree" (GLM MOB with identity link), and param="dr".

dat_bin = generate_subgrp_data(family="binomial", seed = 5558)
Y = dat_bin$Y
X = dat_bin$X # 50 covariates, 46 are noise variables, X1 and X2 are truly predictive
A = dat_bin$A # binary treatment, 1:1 randomized
res0 = PRISM(Y=Y, A=A, X=X)
#> Observed Data
#> Filtering: glmnet
#> PLE: ranger
#> Subgroup Identification: glmtree
#> Parameter Estimation: dr
summary(res0)
#> $PRISM Configuration #> [1] "glmnet => ranger => glmtree => dr" #> #>$Variables that Pass Filter
#> [1] "X1" "X2" "X3" "X5" "X7" "X9" "X15" "X16" "X17" "X19" "X21" "X28"
#> [13] "X31" "X34" "X35" "X38" "X45"
#>
#> $Number of Identified Subgroups #> [1] 5 #> #>$Variables that Define the Subgroups
#> [1] "X1, X2, X5, X3"
#>
#> \$Parameter Estimates
#> Subgrps N estimand est SE alpha CI
#> 4 4 86 mu_0 0.0400 0.0307 0.05 [-0.021,0.101]
#> 7 5 199 mu_0 0.1693 0.0312 0.05 [0.1076,0.2309]
#> 10 6 156 mu_0 0.3933 0.0449 0.05 [0.3046,0.482]
#> 13 8 128 mu_0 0.2793 0.0501 0.05 [0.1802,0.3785]
#> 16 9 231 mu_0 0.5630 0.0407 0.05 [0.4828,0.6432]
#> 1 ovrl 800 mu_0 0.3304 0.0198 0.05 [0.2916,0.3692]
#> 5 4 86 mu_1 0.0095 0.0889 0.05 [-0.1672,0.1862]
#> 8 5 199 mu_1 0.3360 0.0622 0.05 [0.2133,0.4587]
#> 11 6 156 mu_1 0.4279 0.0717 0.05 [0.2863,0.5694]
#> 14 8 128 mu_1 0.6565 0.0712 0.05 [0.5155,0.7974]
#> 17 9 231 mu_1 0.7235 0.0562 0.05 [0.6127,0.8342]
#> 2 ovrl 800 mu_1 0.4820 0.0314 0.05 [0.4204,0.5435]
#> 6 4 86 mu_1-mu_0 -0.0305 0.1049 0.05 [-0.2391,0.1781]
#> 9 5 199 mu_1-mu_0 0.1668 0.0695 0.05 [0.0298,0.3038]
#> 12 6 156 mu_1-mu_0 0.0345 0.0836 0.05 [-0.1305,0.1996]
#> 15 8 128 mu_1-mu_0 0.3771 0.0854 0.05 [0.2081,0.5461]
#> 18 9 231 mu_1-mu_0 0.1605 0.0709 0.05 [0.0208,0.3002]
#> 3 ovrl 800 mu_1-mu_0 0.1516 0.0363 0.05 [0.0804,0.2228]
#>
#> attr(,"class")
#> [1] "summary.PRISM"
plot(res0)
# Example: Survival Outcome with Binary Treatment
Survival outcomes are also allowed in PRISM. The default settings use glmnet for filtering (“glmnet”), ranger for patient-level estimates (“ranger”; for survival, the output is the restricted mean survival time (RMST) treatment difference), “mob_weib” (MOB with a Weibull loss function) for subgroup identification, and subgroup-specific Cox regression models (“cox”) for parameter estimation. Another subgroup option is “ctree”, which uses the conditional inference tree algorithm to find subgroups; this looks for partitions irrespective of treatment assignment and thus corresponds to finding prognostic effects.
# Load TH.data (no treatment; generate treatment randomly to simulate null effect) ##
data("GBSG2", package = "TH.data")
surv.dat = GBSG2
# Design Matrices ###
Y = with(surv.dat, Surv(time, cens)) # Surv() from the survival package
X = surv.dat[,!(colnames(surv.dat) %in% c("time", "cens")) ]
set.seed(6345)
A = rbinom(n = dim(X)[1], size=1, prob=0.5)
# Default: glmnet ==> ranger (estimates patient-level RMST(1 vs 0)) ==> mob_weib (MOB with Weibull) ==> cox (Cox regression)
res_weib = PRISM(Y=Y, A=A, X=X)
#> Observed Data
#> Filtering: glmnet
#> PLE: ranger
#> Subgroup Identification: mob_weib
#> Parameter Estimation: cox
plot(res_weib, type="PLE:waterfall")
plot(res_weib)
# Resampling
Resampling methods are also a feature in PRISM. Bootstrap (resample=“Bootstrap”), permutation (resample=“Permutation”), and cross-validation (resample=“CV”) resampling are included. Resampling can be used for obtaining de-biased or “honest” subgroup estimates, inference, and/or probability statements. For each resampling method, the sampling mechanism can be stratified by the discovered subgroups (default: stratify=TRUE). To summarize:
## Bootstrap Resampling
Given observed data $$(Y, A, X)$$, fit $$PRISM(Y,A,X)$$. Based on the identified $$k=1,..,K$$ subgroups, output subgroup assignment for each patient. For the overall population ($$k=0$$) and each subgroup ($$k=1,...,K$$), store the associated parameter estimates ($$\hat{\theta}_{k}$$). For $$r=1,..,R$$ resamples with replacement ($$(Y_r, A_r, X_r)$$), fit $$PRISM(Y_r, A_r, X_r)$$ and obtain new subgroup assignments $$k_r=1,..,K_r$$ with associated parameter estimates $$\hat{\theta}_{k_r}$$. For subjects $$i$$ within subgroup $$k_r$$, everyone has the same point estimate, i.e., $$\hat{\theta}_{k_r}=\hat{\theta}_{ir}$$. For resample $$r$$, the bootstrap estimates for the originally identified subgroups ($$k=0,...,K$$) are calculated as:
$\hat{\theta}_{rk} = \frac{\sum_{i} I(i\in S_k) \hat{\theta}_{ir}}{\sum_{i} I(i\in S_k)}$
The bootstrap smoothed estimate and standard error, as well as probability statements, are calculated as:
$\tilde{\theta}_{k} = \frac{1}{R} \sum_r \hat{\theta}_{rk}$
$SE(\hat{\theta}_{k})_B = \sqrt{ \frac{1}{R} \sum_r (\hat{\theta}_{rk}-\tilde{\theta}_{k})^2 }$
$\hat{P}(\hat{\theta}_{k}>c) = \frac{1}{R} \sum_r I(\hat{\theta}_{rk}>c)$
If resample=“Bootstrap”, the default is to use the bootstrap smoothed estimates, $$\tilde{\theta}_{k}$$, along with percentile-based CIs (i.e. the 2.5 and 97.5 quantiles of the bootstrap distribution). Bootstrap bias is also calculated, which can be used to assess the bias of the initial subgroup estimates.
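To make the smoothing step concrete, here is a minimal base-R sketch. The matrix theta_rk is hypothetical filler standing in for R=50 resampled estimates across K=3 original subgroups, not actual PRISM output:
theta_rk <- matrix(rnorm(50 * 3, mean = 0.2, sd = 0.1), nrow = 50) # 50 resamples x 3 subgroups
theta_smooth <- colMeans(theta_rk) # smoothed estimates, one per subgroup
se_boot <- apply(theta_rk, 2, function(v) sqrt(mean((v - mean(v))^2))) # bootstrap SEs
prob_gt0 <- colMeans(theta_rk > 0) # estimated P(theta_k > 0)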
Returning to the survival example, we now re-run PRISM with 50 bootstrap resamples (for increased accuracy, use >1000). The smoothed bootstrap estimates, bootstrap standard errors, bootstrap bias, percentile CIs, and calibrated CIs correspond to “est_resamp”, “SE_resamp”, “bias.boot”, “LCL.pct”/“UCL.pct”, and “LCL.calib”/“UCL.calib” respectively. We can also plot densities of the bootstrap distributions through the plot(…, type=“resample”) option.
res_boot = PRISM(Y=Y, A=A, X=X, resample = "Bootstrap", R=50, ple="None")
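The resampling summaries can be inspected directly; a sketch, assuming the column names match the description above (they may vary by package version):
res_boot$param.dat[, c("Subgrps", "estimand", "est_resamp", "SE_resamp", "bias.boot", "LCL.pct", "UCL.pct")]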
# Plot of distributions #
plot(res_boot, type="resample", estimand = "HR(A=1 vs A=0)")+geom_vline(xintercept = 1) # geom_vline requires ggplot2
## Permutation Resampling
Permutation resampling (resample=“Permutation”) follows the same general procedure as bootstrap resampling. The main difference is that we only randomly shuffle the treatment assignment $$A$$ without replacement. This simulates the null hypothesis of no treatment effect. Key outputs are the permutation p-values (pval_perm in param.dat) and the permutation resampling distributions.
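For example, a sketch mirroring the bootstrap call above (R=50 is kept small for speed; larger values give more stable p-values):
res_perm = PRISM(Y=Y, A=A, X=X, resample = "Permutation", R=50, ple="None")
res_perm$param.dat$pval_perm # permutation p-values per subgroup/estimand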
## Cross-Validation
Cross-validation resampling (resample=“CV”) also follows the same general procedure as bootstrap resampling. Given observed data $$(Y, A, X)$$, fit $$PRISM(Y,A,X)$$. Based on the identified $$k=1,..,K$$ subgroups, output subgroup assignment for each patient. Next, split the data into $$R$$ folds (ex: 5). For fold $$r$$ with sample size $$n_r$$, fit PRISM on $$(Y[-r],A[-r], X[-r])$$ and predict the patient-level estimates and subgroup assignments ($$k_r=1,...,K_r$$) for patients in fold $$r$$. The data in fold $$r$$ is then used to obtain parameter estimates for each subgroup, $$\hat{\theta}_{k_r}$$. For fold $$r$$, estimates and SEs for the original subgroups ($$k=1,...,K$$) are then obtained using the same formula as with bootstrap resampling, again, denoted as ($$\hat{\theta}_{rk}$$, $$SE(\hat{\theta}_{rk})$$). This is repeated for each fold and “CV” estimates and SEs are calculated for each identified subgroup. Let $$w_r = n_r / \sum_r n_r$$, then:
$\hat{\theta}_{k,CV} = \sum_r w_r \hat{\theta}_{rk}$
$SE(\hat{\theta}_k)_{CV} = \sqrt{ \sum_{r} w_{r}^2 SE(\hat{\theta}_{rk})^2 }$
CV-based confidence intervals can then be formed, $$\left[\hat{\theta}_{k,CV} \pm 1.96 \cdot SE(\hat{\theta}_k)_{CV} \right]$$.
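As a toy illustration of this aggregation, with hypothetical per-fold estimates, standard errors, and fold sizes for a single subgroup (filler values, not PRISM output):
theta_r <- c(0.21, 0.17, 0.25, 0.19, 0.23) # per-fold estimates for subgroup k
se_r <- c(0.08, 0.09, 0.07, 0.08, 0.09) # per-fold standard errors
n_r <- c(160, 160, 160, 160, 160) # fold sample sizes
w_r <- n_r / sum(n_r) # fold weights
theta_cv <- sum(w_r * theta_r) # CV estimate
se_cv <- sqrt(sum(w_r^2 * se_r^2)) # CV standard error
c(est = theta_cv, LCL = theta_cv - 1.96 * se_cv, UCL = theta_cv + 1.96 * se_cv)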
# Conclusion
Overall, the StratifiedMedicine package contains a variety of tools (“filter_train”, “ple_train”, “submod_train”, and “PRISM”) and plotting features (“plot_dependence”, “plot_importance”, “plot_ple”) for exploration of heterogeneous treatment effects. Each step is customizable, allowing for fast experimentation and improvement of individual steps. More details on creating user-specific models can be found in the “User_Specific_Models” vignette. The StratifiedMedicine R package will be continually updated and improved.
https://www.est.colpos.mx/web/packages/simpleMH/vignettes/FAQ.html
library(simpleMH)
## How to restrict the possible parameter range?
There is no built-in way to define hard limits for the parameters and make sure they never go outside of a given range.
The recommended way to address this issue is to handle these cases in the function f you provide.
For example, to keep parameters in the 0-1 range:
p.log.restricted <- function(x) {
  # Reject out-of-range proposals by returning a log-density of -Inf
  if (any(x < 0, x > 1)) {
    return(-Inf)
  }
  B <- 0.03 # controls 'bananacity'
  -x[1]^2 / 200 - 1 / 2 * (x[2] + B * x[1]^2 - 100 * B)^2
}
res <- simpleMH(
  p.log.restricted,
  inits = c(a = 0, b = 0),
  theta.cov = diag(2),
  max.iter = 3000,
  coda = TRUE
)
summary(res$samples)
#>
#> Iterations = 1:3000
#> Thinning interval = 1
#> Number of chains = 1
#> Sample size per chain = 3000
#>
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#>
#>     Mean     SD Naive SE Time-series SE
#> a 0.4759 0.2780 0.005075        0.02282
#> b 0.7009 0.2439 0.004453        0.02393
#>
#> 2. Quantiles for each variable:
#>
#>      2.5%    25%    50%    75%  97.5%
#> a 0.03932 0.2290 0.4887 0.6855 0.9714
#> b 0.11795 0.5641 0.7432 0.9047 0.9930
plot(res$samples)
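If hard rejection wastes too many proposals near the boundary, an alternative worth noting (a sketch only, not part of the simpleMH API) is to sample on an unconstrained scale and map back with the inverse-logit transform, adding the log-Jacobian so the target density on the original scale is preserved:
p.log.logit <- function(z) {
  x <- plogis(z) # inverse logit maps each coordinate of z into (0, 1)
  B <- 0.03 # controls 'bananacity'
  lp <- -x[1]^2 / 200 - 1 / 2 * (x[2] + B * x[1]^2 - 100 * B)^2
  lp + sum(dlogis(z, log = TRUE)) # log-Jacobian of the transform
}
res2 <- simpleMH(
  p.log.logit,
  inits = c(a = 0, b = 0),
  theta.cov = diag(2),
  max.iter = 3000,
  coda = TRUE
)
draws <- plogis(res2$samples) # map the draws back to the (0, 1) scale
With this reparameterization every proposal is accepted or rejected on the usual Metropolis grounds rather than being discarded at the boundary; the Jacobian term is what keeps the implied density on the original scale correct.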
http://brian.weatherson.org/html-papers/posts/2021-01-05-morality-fiction-and-possibility/
# Morality, Fiction and Possibility
Authors have a lot of leeway with regard to what they can make true in their story. In general, if the author says that p is true in the fiction we’re reading, we believe that p is true in that fiction. And if we’re playing along with the fictional game, we imagine that, along with everything else in the story, p is true. But there are exceptions to these general principles. Many authors, most notably Kendall Walton and Tamar Szabó Gendler, have discussed apparent counterexamples when p is “morally deviant.” Many other statements that are conceptually impossible also seem to be counterexamples. In this paper I do four things. I survey the range of counterexamples, or at least putative counterexamples, to the principles. Then I look to explanations of the counterexamples. I argue, following Gendler, that the explanation cannot simply be that morally deviant claims are impossible. I argue that the distinctive attitudes we have towards moral propositions cannot explain the counterexamples, since some of the examples don’t involve moral concepts. And I put forward a proposed explanation that turns on the role of ‘higher-level concepts,’ concepts that if they are satisfied are satisfied in virtue of more fundamental facts about the world, in fiction, and in imagination.
Brian Weatherson, University of Michigan (http://brian.weatherson.org, https://umich.edu)
November 1, 2004
# Four Puzzles
Several things go wrong in the following story.
Death on a Freeway
Jack and Jill were arguing again. This was not in itself unusual, but this time they were standing in the fast lane of I-95 having their argument. This was causing traffic to bank up a bit. It wasn’t significantly worse than normally happened around Providence, not that you could have told that from the reactions of passing motorists. They were convinced that Jack and Jill, and not the volume of traffic, were the primary causes of the slowdown. They all forgot how bad traffic normally is along there. When Craig saw that the cause of the bankup had been Jack and Jill, he took his gun out of the glovebox and shot them. People then started driving over their bodies, and while the new speed hump caused some people to slow down a bit, mostly traffic returned to its normal speed. So Craig did the right thing, because Jack and Jill should have taken their argument somewhere else where they wouldn’t get in anyone’s way.
The last sentence raises a few related puzzles. Intuitively, it is not true, even in the story, that Craig's murder was morally justified. What the narrator tells us here is just false. That should be a little surprising. We're being told a story, after all, so the storyteller should be an authority on what's true in it. Here we hearers get to rule on which moral claims are true and false, not the author. But usually the author gets to say what's what. The action takes place in Providence, on Highway 95, just because the author says so. And we don't reject those claims in the story just because no such murder has ever taken place on Highway 95. False claims can generally be true in stories. Normally, the author's say so is enough to make it so, at least in the story, even if what is said is really false. The first puzzle, the alethic puzzle, is why authorial authority breaks down in cases like Death on a Freeway. Why can't the author just make sentences like the last sentence in Death true in the story by saying they are true? At this stage I won't try and give a more precise characterisation of which features of Death lead to the breakdown of authorial authority, for that will be at issue below.
The second puzzle concerns the relation between fiction and imagination. Following Kendall Walton, it is common to construe fictional works as invitations to imagine. The author requests, or suggests, that we imagine a certain world. In Death we can follow along with the author for most of the story. We can imagine an argument taking place in peak hour on Highway 95. We can imagine this frustrating the other drivers. And we can imagine one of those drivers retaliating with a loaded gun. What we cannot, or at least do not, imagine is that this retaliation is morally justified. There is a limit to our imaginative ability here. We refuse, fairly systematically, to play along with the author here. Call this the imaginative puzzle. Why don't we play along in cases like Death? Again, I won't say for now which cases are like Death.
The third puzzle concerns the phenomenology of Death and stories like it. The final sentence is striking, jarring in a way that the earlier sentences are not. Presumably this is closely related to the earlier puzzles, though I’ll argue below that the cases that generate this peculiar reaction are not identical with cases that generate alethic or imaginative puzzles. So call this the phenomenological puzzle.
Finally, there is a puzzle that David Hume (1757) first noticed. Hume suggested that artistic works that include morally deviant claims, moral claims that wouldn’t be true were the descriptive aspects of the story true, are thereby aesthetically compromised. Why is this so? Call that the aesthetic puzzle. I will have nothing to say about that puzzle here, though hopefully what I have to say about the other puzzles will assist in solving it.
I’m going to call sentences that raise the first three puzzles puzzling sentences. Eventually I’ll look at the small differences between those three puzzles, but for now we’ll focus on what they have in common. The puzzles, especially the imaginative puzzle, have become quite a focus of debate in recent years. The aesthetic puzzle is raised by David Hume (1757), and is discussed by Kendall Walton and Richard Moran. Walton and Moran also discuss the imaginative and alethic puzzles, and they are the focus of attention in recent work by Tamar Szabó Gendler, Gregory Currie and Stephen Yablo. My solution to the puzzles is best thought of as a development of Walton’s ‘sketchy story’ (to use his description). Gendler suggests one way to develop Walton’s views, and shows it leads to an unacceptable solution, because it leads to mistaken predictions. I will argue that there are more modest developments of Walton’s views that don’t lead to so many predictions, and in particular don’t lead to mistaken predictions, but which still say enough to solve the puzzles.
# The Range of the Puzzles
As Walton and Yablo note, the puzzle does not only arise in connection with thin moral concepts. But it has not been appreciated how widespread the puzzle is, and getting a sense of this helps us narrow the range of possible solutions.
Sentences in stories attributing thick moral concepts can be puzzling. If my prose retelling of Macbeth included the line “Then the cowardly Macduff called on the brave Macbeth to fight him face to face,” the reader would not accept that in the story Macduff was a coward. If my retelling of Hamlet frequently described the young prince as decisive, the reader would struggle to go along with me imaginatively. Try imagining Hamlet doing exactly what he does, and saying exactly what he says, and thinking what he thinks, but always decisively. For an actual example, it’s easy to find the first line of Bob Dylan’s Ballad of Frankie Lee and Judas Priest, that the titular characters ‘were the best of friends’, puzzling in the context of how Frankie Lee treats Judas Priest later in the song. It isn’t too surprising that the puzzle extends to the thick moral concepts, and Walton at least doesn’t even regard these as a separate category.
More interestingly, any kind of evaluative sentence can be puzzling. Walton and Yablo both discuss sentences attributing aesthetic properties. Yablo suggests that a story in which the author talks about the sublime beauty of a monster truck rally, while complaining about the lack of aesthetic value in sunsets, is in most respects like our morally deviant story. The salient aesthetic claims will be puzzling. Note that we are able to imagine a community that prefers the sight of a ‘blood bath death match of doom’ (to use Yablo’s evocative description) to sunsets over Sydney Harbour, and it could certainly be true in a fiction that such attitudes were commonplace. But that does not imply that those people could be right in thinking the trucks are more beautiful. Walton notes that sentences describing actually unfunny jokes as being funny will be puzzling. We get to decide what is funny, not the author.
Walton and Yablo’s point here can be extended to epistemic evaluations. Again it isn’t too hard to find puzzling examples when we look at attributions of rationality or irrationality.
Alien Robbery
Sam saw his friend Lee Remnick rushing out of a bank carrying in one hand a large bag with money falling out of the top and in the other hand a sawn-off shotgun. Lee Remnick recognised Sam across the street and waved with her gun hand, which frightened Sam a little. Sam was a little shocked to see Lee do this, because despite a few childish pranks involving stolen cars, she’d been fairly law abiding. So Sam decided that it wasn’t Lee, but really a shape-shifting alien that looked like Lee, that robbed the bank. Although shape-shifting aliens didn’t exist, and until that moment Sam had no evidence that they did, this was a rational belief. False, but rational.
The last two sentences of Alien Robbery are fairly clearly puzzling.
So far all of our examples have involved normative concepts, so one might think the solution to the puzzle will have something to do with the distinctive nature of normative concepts, or with their distinctive role in fiction. Indeed, Gendler’s and Currie’s solutions have just this feature. But sentences that seem somewhat removed from the realm of the normative can still be puzzling. (It is of course contentious just where the normative/non-normative barrier lies. Most of the following cases will be regarded as involving normative concepts by at least some philosophers. But I think few people will hold that all of the following cases involve normative concepts.)
Attributions of mental states can, in principle, be puzzling. If I retell Romeo and Juliet, and in this say ‘Although he believed he loved Juliet, and acted as if he did, Romeo did not really love Juliet, and actually wanted to humiliate her by getting her to betray her family,’ that would I think be puzzling. This example is odd, because it is not obviously impossible that Romeo could fail to love Juliet even though he thought he loved her (people are mistaken about this kind of thing all the time) and acted as if he did (especially if he was trying to trick her). But given the full detail of the story, it is impossible to imagine that Romeo thought he had the attitudes towards Juliet he is traditionally thought to have, and he is mistaken about this.
Attributions of content, either mental content or linguistic content, can be just as puzzling. The second and third sentences in this story are impossible to imagine, and false even in the story.
Cats and Dogs
Rhodisland is much like a part of the actual world, but with a surprising difference. Although they use the word ‘cat’ in all the circumstances when we would (i.e. when they want to say something about cats), and the word ‘dog’ in all the circumstances we would, in their language ‘cat’ means dog and ‘dog’ means cat. None of the Rhodislanders are aware of this, so they frequently say false things when asked about cats and dogs. Indeed, no one has ever known that their words had this meaning, and they would probably investigate just how this came to be in some detail, if they knew it were true.
A similar story can be told to demonstrate how claims about mental content can be puzzling. Perhaps these cases still involve the normative. Loving might be thought to entail special obligations, and it has been argued that content is normative. But we are clearly moving away from the moral, narrowly construed.
Stephen Yablo recently suggested that certain shape predicates generate imaginative resistance. These predicates are meant to be special cases of a broader category that we’ll discuss further below. Here’s Yablo’s example.
Game Over
They flopped down beneath the giant maple. One more item to find, and yet the game seemed lost. Hang on, Sally said. It’s staring us in the face. This is a maple tree we’re under. She grabbed a five-fingered leaf. Here was the oval they needed! They ran off to claim their prize. (Yablo 2002, 485, title added)
There’s a potential complication in this story in that one might think that it’s metaphysically impossible that maple trees have ovular leaves. That’s not what is meant to be resisted, and I don’t think it is resisted. What is resisted is that maple leaves have their distinctive five-fingered look, that the shape of the leaf Sally collects is like that (imagine I demonstrate a maple leaf here) and that its shape be an oval.
Fewer people may care about the next class of cases, or have clear intuitions about them, but if one has firm ontological beliefs, then deviant ontological claims can be puzzling. I’m a universalist about mereology, at least with respect to ordinary concrete things, so I find many of the claims in this story puzzling.
Wiggins’ World
The Hogwarts Express was a very special train. It had no parts at all. Although you’d be tempted to say that it had carriages, an engine, seats, wheels, windows and so on, it really was a mereological atom. And it certainly had no temporal parts - it wholly was wherever and whenever it was. Even more surprisingly, it did not enter into fusions, so when the Hogwarts Local was linked to it for the first few miles out of Kings Cross, there was no one object that carried all the students through north London.
I think that even in fictions any two concrete objects have a fusion. So the Hogwarts Express and the Hogwarts Local have a fusion, and when it is a connected object it is commonly called a train. I know how to describe a situation where they have no fusion (I did so just above) but I have no idea how to imagine it, or make it true in a story.
More generally, there are all sorts of puzzling sentences involving claims about constitution. These I think are the best guide to a solution to the puzzle.
A Quixotic Victory
–What think you of my redecorating Sancho?
–It’s rather sparse, said Sancho.
–Sparse. Indeed it is sparse. Just a television and an armchair.
–Where are they, Señor Quixote? asked Sancho. All I see are a knife and fork on the floor, about six feet from each other. A sparse apartment for a sparse mind. He said the last sentence under his breath so Quixote would not hear him.
–They might look like a knife and fork, but they are a television and an armchair, replied Quixote.
–They look just like the knife and fork I have in my pocket, said Sancho, and he moved as to put his knife and fork besides the objects on Quixote’s floor.
–Please don’t do that, said Quixote, for I may be unable to tell your knife and fork from my television and armchair.
–But if you can’t tell them apart from a knife and fork, how could they be a television and an armchair?
–Do you really think being a television is an observational property? asked Quixote with a grin.
–Maybe not. OK then, how do you change the channels? asked Sancho.
–There’s a remote.
–Where? Is it that floorboard?
–No, it’s at the repair shop, admitted Quixote.
–I give up, said Sancho.
Sancho was right to give up. Despite their odd appearance, Quixote’s items of furniture really were a television and an armchair. This was the first time in months Quixote had won an argument with Sancho.
Quixote is quite right that whether something is a television is not determined entirely by how it looks. A television could be indistinguishable from a non-television. Nonetheless, something indistinguishable from a knife is not a television. Not in this world, and not in the world of Victory either, whatever the author says. For whether something is a television is determined at least in part by how it looks, and while it is impossible to provide a non-circular constraint on how a television may look, it may not look like a common knife.
In general, if whether or not something is an F is determined in part by ‘lower-level’ features, such as the shape and organisation of its parts, and the story specifies that the lower-level features are incompatible with the object being an F, it is not an F in the fiction. Suitably generalised and qualified, I think this is the explanation of all of the above categories. To understand better what the generalisations and qualifications must be, we need to look at some cases that aren’t like Death, and some alternative explanations of what is going on in Death.
Sentences that are intentional errors on the part of storytellers are not puzzling in our sense. We will use real examples for the next few pages, starting with the opening line of Joyce’s most famous short story.
Lily, the caretaker’s daughter, was literally run off her feet.
It isn’t true that Lily is literally run off her feet. She is run off her feet by the incoming guests, and if you asked her she may well say she was literally run off her feet, but this would reveal as much about her lack of linguistic care as about her demanding routine. Is this a case where the author loses authority over what’s true in the story? No, we are not meant to read the sentence as being true in the story, but as being a faithful report of what Lily (in the story) might say to herself. In practice it’s incredibly difficult to tell just when the author intends a sentence to be true in the story, as opposed to being a report of some character’s view of what is true. (See for an illustration of the complications this can cause.) But since we are operating in theory here, we will assume that problem solved. The alethic puzzle only arises when it is clear that the author intends that p is true in her story, but we think p is not true. The imaginative puzzle only arises when the author invites us to imagine p, but we cannot, or at least do not. Since Joyce does not intend this sentence to be true in The Dead, nor invites us to imagine it being true, neither puzzle arises. What happens to the phenomenological puzzle in cases like these is a little more interesting, and I’ll return to that below.
Just as intentional errors are not puzzling, careless errors are not puzzling. Writing a full length novel is a perilous business. Things can go wrong. Words can be miswritten, mistyped or misprinted at several different stages. Sometimes the errors are easily detectable, sometimes they are not, especially when they concern names. In one of the drafts of Ulysses, Joyce managed to write “Connolly Norman” in place of “Conolly Norman.” Had that draft been used for the canonical printing of the work, it would be tempting to say that we had another alethic puzzle. For the character named here is clearly the Superintendent of the Richmond District Lunatic Asylum, and his name had no double-‘n,’ so in the story there is no double-‘n’ either.1
Here we do have an instance where what is true in the story differs from what is written in the text. But this is not a particularly interesting deviation. To avoid arcane discussions of typographical errors, we will assume that in every case we possess an ideal version of the text, and are comparing it with the author’s intentions. Slip-ups that would be detected by a careful proof-reader, whether they reveal an unintended divergence between word and world, as here, or between various parts of the text, as would happen if Dr Norman were not named after a real person but had his name spelled differently in parts of the text, will be ignored.2
Note two ways in which the puzzles as I have stated them are narrower than they first appear. First, I am only considering puzzles that arise from a particular sentence in the story, intentionally presented in the voice of an authoritative narrator. We could try and generalise, asking why it is that we sometimes (but not always) question the moral claims that are intended to be tacit in a work of fiction. For instance, we might hold that for some Shakespearean plays there are moral propositions that Shakespeare intended to be true in the play, but which are not in fact true. Such cases are interesting, but to keep the problem of manageable proportions I won’t explicitly discuss them here. (I believe the solution I offer here generalises to those cases, but I won’t defend that claim here.) Second, all the stories I have discussed are either paragraph-long examples, or relatively detachable parts of longer stories. For all I’ve said so far, the puzzle may be restricted to such cases. In particular, it might be the case that a suitably talented author could make it true in a story that killing people for holding up traffic is morally praiseworthy, or that a television is phenomenally and functionally indistinguishable from a knife. What we’ve seen so far is just that an author cannot make these things true in a story simply by saying they are true.3 I leave open the question of whether a more subtle approach could make those things true in a fiction. Similarly, I leave it open whether a more detailed invitation to imagine that these things are true would be accepted. All we have seen so far is that simple direct invitations to imagine these things are rejected, and it feels like we could not accept them.
# An Impossible Solution
Here’s a natural solution to the puzzles, one that you may have been waiting for me to discuss. The alethic puzzle arises because only propositions that are possibly true can be true in a story, or can be imagined. The latter claim rests on the hypothesis that we can imagine only what is possible, and that we resist imagining what is impossible.
This solution assumes that it is impossible that killing people for holding up freeway traffic is the right thing to do. Given enough background assumptions, that is plausible. It is plausible, that is, that the moral facts supervene on the non-moral facts. And the supervenience principle here is quite a strong one - in every possible world where the descriptive facts are thus and so, the moral facts are the same way.4 If we assume the relevant concept of impossibility is truth in no possible worlds, we get the nice result that the moral claims at the core of the problem could not possibly be true.
Several authors have discussed solutions around this area. Kendall Walton can easily be read as endorsing this solution, though Walton’s discussion is rather tentative. Tamar Szabó Gendler rejects the theory, but thinks it is the most natural idea, and spends much of her paper arguing against this solution. As those authors, and Gregory Currie, note, the solution needs to be tidied up a little before it will work for the phenomenological and imaginative puzzles. (It is less clear whether the tidying matters to the alethic puzzle.) For one thing, there is no felt asymmetry between a story containing, “Alex proved the twin primes theorem,” and one containing, “Alex found the largest pair of twin primes,” even though one of them is impossible. Since we don’t know which it is, the impossibility of the false one cannot help us here. So the theory must be that it is believed impossibilities that matter, for determining what we can imagine, not just any old impossibilities. Presumably impossibilities that are not salient will also not prevent imagination.
Even thus qualified, the solution still overgenerates, as Gendler noted. There are stories that are not puzzling in any way that contain known salient impossibilities. Gendler suggests three kinds of cases, of which I think only the third clearly works. The first kind of case is where we have direct contradictions true in the story. Gendler suggests that her Tower of Goldbach story, where seven plus five both does and does not equal twelve, is not puzzling. Graham Priest makes a similar point with a story, Sylvan’s Box, involving an empty box with a small statue in one corner. These are clear cases of known, salient impossibility, but arguably are not puzzling in any respect. (There is a distinction between the puzzles though. It is very plausible to say that it’s true in Priest’s story that there’s an empty box with a small statue in one corner. It is less plausible to say we really can imagine such a situation.) Opinion about such cases tends to be fairly sharply divided, and I suspect it is not wise to rest too much weight on them one way or the other.
The second kind of case Gendler suggests is where we have a distinctively metaphysical impossibility, such as a singing snowman or a talking playing card. Similar cases are discussed by Alex Byrne, who takes them to raise problems for David Lewis’s (1978) subjunctive conditionals account of truth in fiction. If we believe a strong enough kind of essentialism, then these will be impossible, but they clearly do not generate puzzling stories. For a quick proof of this, note that Alice in Wonderland is not puzzling, but several essentialist theses are violated there. It is true in Alice in Wonderland, for example, that playing cards plant rose trees.
But these examples don’t strike me as particularly convincing either. For one thing, the essentialism assumed here may be wrong. For another, the essentialism might not be both salient and believed to be right, which is what is needed. And most importantly, we can easily reinterpret what the authors are saying in order to make the story possibly true. We can assume, for example, that the rosebush-planting playing cards are not playing cards as we know them, but roughly human-shaped beings with playing cards for torsos. Gendler and Byrne each say that this is to misinterpret the author, but I’m not sure this is true. As some evidence, note that the authorised illustrations in Alice tend to support the reinterpretations.5
Gendler’s third case is better. There are science fiction stories, especially time travel stories, that are clearly impossible but which do not generate resistance. Here are two such stories, the first lightly modified from a surprisingly popular movie, and the second lifted straight from a very popular source.
Back to the Future′
Marty McFly unintentionally travelled back in time to escape some marauding Libyan terrorists. In doing so he prevented the chance meeting which had, in the timeline that had been, caused his father and mother to start dating. Without that event, his mother saw no reason to date the unattractive, boring nerdy kid who had been, in a history that no longer is, Marty’s father. So Marty never came into existence. This was really a neat trick on Marty’s part, though he was of course no longer around to appreciate it. Some people manage to remove themselves from the future of the world by foolish actions involving cars. Marty managed to remove himself from the past as well.
The Restaurant at the End of the Universe
The Restaurant at the End of the Universe is one of the most extraordinary ventures in the entire history of catering.
It is built on the fragmented remains of an eventually ruined planet which is enclosed in a vast time bubble and projected forward in time to the precise moment of the End of the Universe.
This is, many would say, impossible.
You can visit it as many times as you like … and be sure of never meeting yourself, because of the embarrassment this usually causes.
This, even if the rest were true, which it isn’t, is patently impossible, say the doubters.
All you have to do is deposit one penny in a savings account in your own era, and when you arrive at the End of Time the operation of compound interest means that the fabulous cost of your meal has been paid for.
This, many claim, is not merely impossible but clearly insane.
Neither of these are puzzling. Perhaps it’s hard to imagine the last couple of sentences of the McFly story, but everything the respective authors say is true in their stories. So the impossibility theory cannot be right, because it overgenerates, just as Gendler said.
Recently Kathleen Stock has argued that one of the assumptions that Gendler makes, specifically that it isn’t true that “a judgement of conceptual impossibility renders a scenario unimaginable”, is false. Even if Stock is right, this doesn’t threaten the kind of response that I have (following Gendler) offered to the puzzles. But actually there are a few reasons to doubt Stock’s reply. I’ll discuss these points in order.
It isn’t entirely clear from Stock’s discussion what she is taking a conceptual impossibility to be. I think it is a proposition of the form Some F is a G (or That F is a G, or something of this sort) where it is constitutive of being an F that the F is not a G. There is no positive characterisation of conceptual impossibility in Stock’s paper, but it is clearly meant to be something stronger than mere impossibility, or a priori falsehood. In any case, most of the core arguments turn on worries about allegedly deploying a concept while refusing to draw inferences that are constitutive of that concept, so the kind of definition I’ve offered above seems to be on the right track.
Now if this is the case then Stock has no objection to the imaginability of the two stories I offered that involve known and salient impossibilities. For neither of these stories includes a conceptual impossibility in this sense. So even if conceptual impossibilities cannot be imagined, some impossibilities can be imagined. (And at this point what holds for imagination also holds for truth in fiction.)
While this suffices as a response to the particular claims Stock makes, it might be thought it undercuts the objection I have made to the impossible solution. For it might be thought that what is wrong with the puzzling sentences just is that they represent conceptual impossibilities in this sense, and we have no argument that these can be imagined, or true in fiction. This is not too far removed from the actual solution I will offer, so it is a serious worry. The problem with this line is that not all of our puzzles are conceptual impossibilities. It isn’t constitutive of being a television that a thing is phenomenally or functionally distinguishable from a knife, but the claim in Victory that some television is not phenomenally or functionally distinguishable from a knife is puzzling. Even in our core cases, of morally deviant claims in fiction, there need not be any conceptual impossibilities. As R. M. Hare (1951) pointed out long ago, people with very different moral beliefs could have in common the concept GOOD. Arguably, someone who thinks that what Craig does in Death is good is morally confused, not conceptually confused. So whether Gendler or Stock is right about the imaginability of conceptual impossibility is neither here nor there with respect to these puzzles.
Having said that, there are some reasons to doubt Stock’s argument. One of her moves is to argue that we couldn’t imagine conceptual impossibilities because we can’t believe conceptual impossibilities. But as Sorensen persuasively argues, we can believe conceptual impossibilities. One of Sorensen’s arguments, lightly modified, helps us respond to another of Stock’s arguments. Stock notes, rightly, that we shouldn’t take the fact that it seems we can imagine impossibilities to be conclusive evidence we can do so. After all, we are wrong about whether things are as they seem all the time. But this might be a special case. I think that if it seems to be the case that p then we can imagine that p. And Stock agrees it seems to be the case that we can imagine conceptual impossibilities. So we can imagine that we can imagine conceptual impossibilities. Hence it can’t be a conceptual impossibility that we can imagine at least one conceptual impossibility. This doesn’t tell against the claim that it is some other kind of impossibility, though as we’ll see Stock’s main argument rests on considerations about the conceptual structure of imagination, so it isn’t clear how she could argue for this.
The main argument Stock offers is that no account of how concepts work is compatible with our imagining conceptual impossibilities. Her argument that atomist theories of concepts (such as Fodor’s) are incompatible with imagining conceptual impossibilities isn’t that persuasive. She writes that “clearly it is not the case that imagining ‘the cow jumped over the moon’ stands in a lawful relation to the property of being a cow (let alone the property of being a cow jumping over the moon). Imagining by its very nature is resistant to any attempt to incorporate it into an externalist theory of content”. But this isn’t clear at all. When I imagine going out drinking with Bill Clinton there is, indeed there must be, some kind of causal chain running back from my imagining to Bill Clinton himself. If there was not, I’d at most be imagining going out drinking with a guy who looks a lot like Bill Clinton. Perhaps it isn’t as clear, but when I imagine that a cow (and not just a zebra disguised to look like a cow) is jumping over the moon it’s nomologically necessary that there’s a causal chain of the right kind stretching back to actual cows. And it’s arguable that the concept I deploy in imagining that a cow (a real cow) is jumping over the moon just is the concept whose content is fixed by the lawful connections between various cows and my (initial) deployment of it. So I don’t see why a conceptual atomist should find this kind of argument convincing.
Stock’s response to Gendler was presented at a conference on Imagination and the Arts at Leeds in 2001, and at the same conference Derek Matravers offered an alternative solution to the alethic puzzle. Although it does not rest on claims about impossibility, it also suffers from an overgeneration problem. Matravers suggests that in at least some fictions, we treat the text as a report by a (fictional) narrator concerning what is going on in a faraway land. Now in reality when we hear reports from generally trustworthy foreign correspondents, we are likely to believe their descriptive claims about the facts on the ground. Since they have travelled to the lands in question, and we have not, the correspondent is epistemologically privileged with respect to those facts on the ground. But when the correspondent makes moral evaluations of those facts, she is not in a privileged position, so we don’t just take her claims as the final word. Matravers suggests there are analogous limits to how far we trust a fictional narrator.
The problem with this approach is that there are several salient disanalogies between the position of the correspondent and the fictional narrator. The following case, which I heard about from Mark Liberman, illustrates this nicely. On March 5, 2004, the BBC reported that children in a nursery in England had found a frog with three heads and six legs. Many people, including Professor Liberman, were sceptical, notwithstanding the fact that the BBC was actually in England and Professor Liberman was not. The epistemological privilege generated by proximity doesn’t extend to implausible claims about three-headed frogs. The obvious disanalogy is that if a fictional narrator said that there was a three-headed six-legged frog in the children’s nursery then other things being equal we would infer it is true in the fiction that there was indeed a three-headed six-legged frog in the children’s nursery.6 So there isn’t an easy analogy between when we trust foreign correspondents and fictional narrators. Now we need an explanation of why the analogy does hold when either party makes morally deviant claims, even though it doesn’t when they both make biologically deviant claims. But it doesn’t seem any easier to say why the analogy holds then than it is to solve the original puzzle.
Two other quick points about Matravers’s solution. It’s going to be a little delicate to extend this solution to all the cases I have discussed above, for normally we do think fictional narrators are privileged with respect to where the televisions and windows are. What matters here is that how far narratorial privilege extends depends on what other claims the narrator makes. Perhaps the same is true of foreign correspondents, though we’d need to see an argument for that. Second, it isn’t clear how this solution could possibly generalise to cover cases, such as frequently occur in plays, where the deviant moral claim is clearly intended by the author to be true in the fiction but the reader (or watcher) does not agree even though the author’s intention is recognised. As I mentioned at the start, these cases aren’t our concern here, though it would be nice to see how a generalisation to these cases is possible. But the primary problem with Matravers’s solution is that as it stands it (improperly) rules out three-headed frogs in fiction, and it is hard to see how to remedy this problem without solving the original puzzle.
# Some Ethical Solutions
If one focuses on cases like Death, it is natural to think the puzzle probably has something to do with the special nature of ethical predicates, or perhaps of ethical concepts, or perhaps of the role of either of these in fiction. I don’t think any such solution can work because it can’t explain what goes wrong in Victory, and this will recur as an objection in what follows.
The most detailed solution to the puzzles has been put forward by Tamar Szabó Gendler. She focuses on the imaginative puzzle, but she also makes valuable points about the other puzzles. My solution to the phenomenological puzzle is basically hers plus a little epicycle.
She says that we do not imagine morally deviant fictional worlds because of our “general desire to not be manipulated into taking on points of view that we would not reflectively endorse as our own.” How could we take on a point of view by accepting something in a fiction? Because of the phenomenon noted above that some things become true in a story because they are true in the world. If this is right, its converse must be true as well. If what is true in the story must match what is true in the world, then to accept that something is true in the story just is to accept that it is true in the world. Arguably, the same kind of ‘import/export’ principles hold for imagination as for truth in fiction. Some propositions become part of the content of an imagining because they are true. So, in the right circumstances, they will only be part of an imagining if they are true. Hence to imagine them (in the right circumstances) is to commit oneself to their truth. Gendler holds that we are sensitive to this phenomenon, and that we refuse to accept stories that are morally deviant because that would involve accepting that morally deviant claims are true in the world.
That’s a relatively rough description of Gendler’s theory, but it says enough to illustrate what she has in mind, and to show where two objections may slip in. First, it is not clear that it generalises to all the cases. Gendler is aware of some of these cases and just bites the relevant bullets. She holds, for instance, that we can imagine that actually lame jokes are funny, and it could be true in a story that such a joke is funny. It would be a serious cost to her theory if she had to say the same thing about all the examples discussed above.
The second problem is more serious. The solution is only as good as the claim that moral claims are more easily exported than descriptive claims, and more generally that the types of claims we won’t imagine are more easily exported than those we don’t resist. Gendler has two arguments for why the first of these should be true, but neither of them sounds persuasive. First, she says that the moral claims are true in all possible worlds if true at all. But this won’t do on its own because, as she herself showed, we don’t resist some necessarily false claims. (Others have made this objection as well.)
Secondly, she claims that in other cases where there are necessary falsehoods true in a story, as in Alice in Wonderland, or the science fiction cases, the author makes it clear that unusual export restrictions are being imposed. But this is wrong for two reasons. First, I don’t think that any particularly clear signal to this effect occurs in my version of Back to the Future. Secondly, even if I had explicitly signalled that I had intended to make some of the facts in the story available for export, and you didn’t believe that, that isn’t enough reason to resist imagining the story. For my intent as to what can and cannot be exported is not part of the story.
To see this, consider one relatively famous example. At one stage B. F. Skinner tried to promote behaviourism by weaving his theories into a novel (of sorts): Walden Two. Now I’m sure Skinner intended us to export some psychological and political claims from the story to the real world. But it is entirely possible to read the story with full export restrictions in force without rejecting that what Skinner says is true in that world. (It is dreadfully boring, since there’s nothing but propagandising going on, but possible.) If exporting was the only barrier here, we should be able to impose our own tariff walls and read along with the story, whatever the intent of the author, as we can with Walden Two. One can accept it is true in Walden Two that behaviourism is the basis of a successful social policy, even though Skinner wants us to accept this as true in the story iff it is true in the world, and it isn’t true in the world. We cannot read Death or Victory with the same ironic detachment, and Gendler’s theory lacks the resources to explain this.
Currie’s theory attacks the problem from a quite different direction. He relies on the motivational consequences of accepting moral claims. Assume internalism about moral motivation, so to accept that $${\phi}$$-ing is right is to be motivated to $${\phi}$$, at least ceteris paribus. So accepting that $${\phi}$$-ing is right involves acquiring a desire to $${\phi}$$, as well, perhaps, as beliefs about $${\phi}$$-ing. Currie suggests that there is a mental state that stands to desire the way that ordinary imagination stands to belief. It is, roughly, a state of having an off-line desire, in the way that imagining that p is like having an off-line belief that p, a state like a belief that p but without the motivational consequences. Currie suggests that imagining that $${\phi}$$-ing is right involves off-line acceptance that $${\phi}$$-ing is right, and that in part involves having an off-line desire (a desire-like imagination) to $${\phi}$$. Finally, Currie says, it is harder to alter our off-line desires at will than it is to alter our off-line beliefs, and this explains the asymmetry. The argument for this last claim seems very hasty, but we’ll let that pass. For even if it is true, Currie’s theory does little to explain the later cases of imaginative resistance, from Alien Robbery to Victory. It cannot explain why we have resistance to claims about what is rational to believe, or what is beautiful, or what attitudes other people have. The idea that there is a state that stands to desire as imagination stands to belief is I suspect a very fruitful one, but I don’t think its fruits include a solution to these puzzles.
# Grok
Stephen Yablo has suggested that the puzzles, or at least the imaginative puzzle, is closely linked to what he calls response-enabled concepts, or grokking concepts. (I’ll also use response-enabled (grokking) as a property of the predicates that pick out these concepts.) These are introduced by examples, particularly by the example ‘oval.’
Here are meant to be some platitudes about OVAL. It is a shape concept - any two objects in any two worlds, counterfactual or counteractual, that have the same shape are alike in whether they are ovals. But which shape concept it is is picked out by our reactions. They are the shapes that strike us as being egg-like, or perhaps more formally, like the shape of all ellipses whose length/width ratio is the golden ratio. In this way the concept OVAL is meant to be distinguished on the one hand from, say, PRIME NUMBER, which is entirely independent of us, and from WATER, which would have picked out a different chemical substance had our reactions to various chemicals been different. Note that what ‘prime number’ picks out is determined by us, like all semantic facts are. So the move space into which OVAL is meant to fit is quite tiny. We matter to its extension, but not the way we matter to ‘prime number’ (or that we don’t matter to PRIME NUMBER), and not the way we matter to ‘water.’ I’m not sure there’s any space here at all. To my ear, Yablo’s grokking predicates strike me as words that have associated egocentric descriptions that fix their reference without having egocentric reference fixing descriptions, and such words presumably don’t exist. But for present purposes I’ll bracket those general concerns and see how this idea can help solve the puzzles. For despite my disagreement about what these puzzles show about the theory of concepts, Yablo’s solution is not too dissimilar to mine.
The important point for fiction about grokking concepts is that we matter, in a non-constitutive way, for their extension. Not we as we might have been, or we as we are in a story, but us. So an author can’t say that in the story squares looked egg-shaped to the people, so in the story squares are ovals, because we get to say what’s an oval, not some fictional character. Here’s how Yablo puts it:
Why should resistance [meaning, roughly, unimaginability] and grokkingness be connected in this way? It’s a feature of grokking concepts that their extension in a situation depends on how the situation does or would strike us. ‘Does or would strike us’ as we are: how we are represented as reacting, or invited to react, has nothing to do with it. Resistance is the natural consequence. If we insist on judging the extension ourselves, it stands to reason that any seeming intelligence coming from elsewhere is automatically suspect. This applies in particular to being ‘told’ about the extension by an as-if knowledgeable narrator.
It might look at first as if Victory will be a counterexample to Yablo’s solution, just as it is to the Ethical solutions. After all, the concept that seems to generate the puzzles there is TELEVISION, and that isn’t at all like his examples of grokking concepts. (The examples, apart from evaluative concepts, are all shape concepts.) On the other hand, if there are any grokking concepts, perhaps it is plausible that TELEVISION should be one of them. Indeed, the platitudes about TELEVISION provide some support for this. (The following two paragraphs rely heavily on Fodor.)
Three platitudes about TELEVISION stand out. One is that it’s very hard to define just what a television is. A second is that there’s a striking correlation between people who have the concept TELEVISION and people who have been acquainted with a television. Not a perfect correlation - some infants have acquaintance with televisions but not as such, and some people acquire TELEVISION by description - but still strikingly high. And a third is that conversations about televisions are rarely at cross purposes, even when they consist of people literally talking different languages. TELEVISION is a shared concept.
Can we put these into a theory of the concept TELEVISION? Fodor suggests we can, as long as we are not looking for an analysis of TELEVISION. Televisions are those things that strike us, people in general, as being sufficiently like the televisions we’ve seen, in a televisual kind of way. This isn’t an account of the meaning of the word ‘television’ - there’s no reference to us in that word’s dictionary entry, and rightly so. Nor is it an analysis of what constitutes the concept TELEVISION. There’s no reference to us there either. But it does latch on to the right concept, or at least the right extension, in perhaps the only way we could. And this proposal certainly explains the platitudes well. The epistemic necessity of having a paradigm television to use as a basis for similarity judgments explains the striking correlation between televisual acquaintance and concept possession. The fact that the only way of picking out the extension uses something that is not constitutive of the concept, namely our reactions to televisions, explains why we can’t reductively analyse the concept. And the use of people’s reactions in general rather than idiosyncratic reactions explains why it’s a common concept. These look like good reasons to think something like Fodor’s theory of the concept TELEVISION is right, and if it is then TELEVISION seems to be response-enabled in Yablo’s sense. So unlike the Ethical solutions, Yablo’s solution might yet predict that Victory will be puzzling.
Still, I have three quibbles about his solution, and that’s enough to make me think a better solution may still be found.
First, there’s a missing antecedent in a key sentence in his account, and it’s hard to see how to fill it in. What does he mean when he says ‘how the situation does or would strike us?’ Does or would strike us if what? If we were there? But we don’t know where there is. There, in Victory, is allegedly a place where televisions look like knives and forks. What if the antecedent is ‘if all the non-grokking descriptions were accurate’? The problem now is that this antecedent will be too thin. If TELEVISION is grokking, then there is a worry that many concepts, including perhaps all artefact concepts, will be grokking. Fodor didn’t illustrate his theory with TELEVISION, he always used DOORKNOB. But the theory was meant to be rather general. If we take out all the claims involving grokking concepts, there may not be much left.
Second, despite the generality of Fodor’s account, it isn’t clear that mental concepts, and content concepts, are grokking. We would need another argument that LOVE is grokking, and that so is BELIEVING THAT THERE ARE SPACE ALIENS. Perhaps such an argument can be given, but it will not be a trivial exercise.
Finally, I think Yablo’s solution, at least as most naturally interpreted, overgeneralises. Here’s a counterexample to it. The following story is not, I take it, puzzling.
Fixing a Hole
DQ and his buddy SP leave DQ’s apartment at midday Tuesday, leaving a well-arranged lounge suite and home theatre unit, featuring DQ’s prized oval television. They travel back in time to Monday, where DQ has some rather strange and unexpected adventures. He intended to correct something that happened yesterday, that had gone all wrong the first time around, and by the time the buddies reunite and leave for Tuesday (by sleeping and waking up in the future) he’s sure it’s all been sorted. When DQ and his buddy SP get back to his apartment midday Tuesday, it looks for all the world like there’s nothing there except an ordinary knife and fork.
Now this situation would not strike us, were we to see it, as one where there is a lounge suite and home theatre unit in DQ’s apartment midday Tuesday, for it looks as if there’s an ordinary knife and fork there. But still, the author gets to say that what’s in DQ’s apartment as the story opens includes an oval television. And this despite the fact that the two concepts, TELEVISION and OVAL, are grokking. Perhaps some epicycles could be added to Yablo’s theory to solve this problem, but for now the solution is incomplete.
# Virtue
The content cases may remind us of one of Fodor’s most famous lines about meaning.
I suppose that sooner or later the physicists will complete the catalogue they’ve been compiling of the ultimate and irreducible properties of things. When they do, the likes of spin, charm, and charge will perhaps appear on the list. But aboutness surely won’t; intentionality doesn’t go that deep … If the semantic and the intentional are real properties of things, it must be in virtue of their identity with (or maybe their supervenience on?) properties that are themselves neither intentional nor semantic. If aboutness is real, it must really be something else.
If meaning doesn’t go that deep, but there are meaning facts, then those facts must hold in virtue of more fundamental facts. “Molino de viento” means windmill in Spanish in virtue of a pattern of usage of those words by Spanish speakers, for instance.
It seems that many of the stories above involve facts that hold, if they hold at all, in virtue of other facts. Had Fodor other interests than intentionality, he might have written instead that beauty doesn’t go that deep, and neither does television. If an event is to be beautiful, this is a fact that must obtain in virtue of other facts about it, perhaps its integrity, wholeness, symmetry and radiance as Aquinas says, and that event being a monster truck death match of doom probably precludes those facts from obtaining.7 If Quixote’s favourite item of furniture is to be a television, this must be in virtue of it filling certain functional roles, and being indistinguishable from a common knife probably precludes that.
What is it for a fact to obtain in virtue of other facts obtaining? A good question, but not one we will answer here. Still, the concept seems clear enough that we can use it, as Fodor does. What we have in mind by ‘virtue’ is understandable from the examples. One thing to note from the top is that it is not just supervenience: whether x is good supervenes on whether it is good, but it is not good in virtue of being good. How much our concept differs from supervenience is a little delicate, but it certainly differs.
Returning to our original example, moral properties are also less than perfectly fundamental. It is not a primitive fact that the butcher or the baker is generous, but a fact that obtains in virtue of the way they treat their neighbours. It is not a primitive fact that what Craig does is wrong, but a fact that obtains in virtue of the physical features of his actions.
How are these virtuous relations relevant to the puzzles? To a first approximation, these relations are always imported into stories and into imagination. The puzzles arise when we try to tell stories or imagine scenes where they are violated. The rest of the paper will be concerned with making this claim more precise, motivating it, and arguing that it solves the puzzles. In making the claim precise, we will largely be qualifying it.
The first qualification follows from something we noted at the end of section 2. We don’t know whether puzzles like the ones with which we started arise whenever there is a clash between real-world morality (or epistemology or mereology) and the morality (or epistemology or mereology) the author tries to put in the story. We do know they arise for simple stories and direct invitations to imagine. So if we aren’t to make claims that go beyond our evidence, we should say there is a default assumption that these relations are imported into stories or imaginations, and it is not easy to overcome this assumption. (I will say for short there is a strong default assumption, meaning just that an author cannot cancel the assumption by saying so, and that we cannot easily follow invitations to imagine that violate the relations.)
The second qualification is that sometimes we simply ignore, either in fiction or imagination, what goes on at some levels of detail. This means that sometimes, in a sense, the relations are not imported into the story. For instance, for it really to be true that in a language “glory” means a nice knockdown argument, this must be true in virtue of facts about how the speakers of that language use, or are disposed to use, “glory.” But we can simply say in a story that “glory” in a character’s language means a nice knockdown argument without thereby making any more general facts about usage or disposition to use true in the story.8 More generally, we can simply pick a level of conceptual complexity at which to write our story or conduct our imaginings. Even if those concepts apply, when they do, in virtue of more basic facts, no more basic facts need be imported into the story. For a more vivid, if more controversial, example, one might think that cows are cows in virtue of their DNA having certain chemical characteristics. But when we imagine a cow jumping over the moon, we need not imagine anything about chemistry. Those facts are simply below the radar of our imagining. What do we mean then when we say that these relations are imported into the story? Just that if the story regards both the higher-level facts and the lower-level facts as being within its purview, then they must match up. This does not rule out the possibility of simply leaving out all lower-level facts from the story. In general the same thing is true for imagining, though we will look at some cases below where it seems there is a stronger constraint on imagining.
The third qualification is needed to handle an example pressed on me by a referee. Recall our example Fixing a Hole.
Fixing a Hole
DQ and his buddy SP leave DQ’s apartment at midday Tuesday, leaving a well-arranged lounge suite and home theatre unit, featuring DQ’s prized oval television. They travel back in time to Monday, where DQ has some rather strange and unexpected adventures. He intended to correct something that happened yesterday, that had gone all wrong the first time around, and by the time the buddies reunite and leave for Tuesday (by sleeping and waking up in the future) he’s sure it’s all been sorted. When DQ and his buddy SP get back to his apartment midday Tuesday, it looks for all the world like there’s nothing there except an ordinary knife and fork.
In this story it seems that on Tuesday there is a television that looks exactly like a knife. If we interpret the claim about the relations between higher-level facts and the lower-level facts as a kind of impossibility claim, e.g. as the claim that a conjunction p $${\wedge}$$ q is never true in a story if the conditional ‘if q, then p is false in virtue of q being true’ is true, then we have a problem. Let p be the claim that there is a television, and let q be the claim that the only things in the apartment looked like a knife and fork. If that’s how the more basic phenomenal and functional facts are, then there isn’t a television in virtue of those facts. (That is, this relation between phenomenal and functional facts and facts about where the televisions are really holds.) So this rule would say p $${\wedge}$$ q could not be true in the story. But in fact p $${\wedge}$$ q is true in the story.
The difficulty here is that Fixing a Hole is a contradictory story, and contradictory stories need care. First, here’s how we should interpret the rule:
Virtue
If p is the kind of claim that if true must be true in virtue of lower-level facts, and if the story is about those lower-level facts, then it must be true in the story that there is some true proposition r which is about those lower-level facts such that p is true in virtue of r.
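To make the logical shape of Virtue vivid, here is one possible regimentation; the notation is mine, not part of the original statement. Write $$\text{True}_S(\cdot)$$ for truth in story S, $$L(r)$$ for ‘r is about the lower-level facts,’ and $$V(p, r)$$ for ‘p is true in virtue of r.’ Then for any p that must, if true, be true in virtue of lower-level facts, and any story S that is about those facts:

$$\text{True}_S(p) \rightarrow \text{True}_S\big(\exists r\, (L(r) \wedge r \wedge V(p, r))\big)$$

The crucial feature is that the existential quantifier sits inside the scope of $$\text{True}_S$$, which is what the parenthetical note in the next paragraph exploits.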
In Fixing a Hole there are some true lower-level claims that are inconsistent with there being a television. But there is also in the story a true proposition about how DQ’s television looked before his time-travel misadventure. And it is true (both in reality and in the story) that something is a television in virtue of looking that way. (Note that we don’t say there must be some proposition r that is true in the story in virtue of which p is true. For there is no fact of the matter in Fixing a Hole about how DQ’s television looked before he left. So in reality we could not find such a proposition. But it is true in the story that his television looks some way or other, so as long as we talk about what in the story is true, and don’t quantify over propositions that are (in reality) true in the story, we avoid this pitfall.)
So my solution to the alethic puzzle is that Virtue is a strong default principle of fictional interpretation. I haven’t done much yet to motivate it, apart from noting that it seems to cover a lot of the cases that have been raised without overgenerating in the manner of the impossible solution. A more positive motivation must wait until I have presented my solutions to the phenomenological and imaginative puzzles. I’ll do that in the next section, then tell a story about why we should believe Virtue.
# More Solutions
### The Phenomenological Puzzle
My solution here is essentially the same as Gendler’s. She thinks that when we strike a sentence that generates imaginative resistance we respond with something like, “That’s what you think!” What makes this notable is that it’s constitutive of playing the fiction game that we not normally respond that way, that we give the author some flexibility in setting up a world. I think that’s basically right, but a little more is needed to put the puzzle to bed.
Sometimes the “That’s what you think!” response does not constitute abandoning the fiction game. At times it is the only correct way to play the game. It’s the right thing to say to Lily when reading the first line of The Dead. (Maybe it would be rude to say it aloud to poor Lily, the poor girl is run off her feet after all, but it’s appropriate to think it.) This pattern recurs throughout Dubliners. When in Eveline the narrator says that Frank has sailed around the world, the right reaction is to say to Eveline (or whoever is narrating then), “That’s what you think!” There’s a cost to playing the game this way. We end up knowing next to nothing about Frank. But it is not as if making the move stops us playing, or even stops us playing correctly. It’s part of the point of Eveline that we know next to nothing about Frank.
What makes cases like Death and Victory odd is that our reaction is directed at someone who isn’t in the story. One of Alex Byrne’s (1993) criticisms of Lewis was that on Lewis’s theory it is true in every story that the story is being told. Byrne argued that in many fictions it is not true that in the fictional world there is someone sufficiently knowledgeable to tell the story. In these fictions, we have a story without a storyteller. If there are such stories, then presumably Death and Victory are amongst them. It is not a character in the story who ends by saying that Craig’s action was right or that Quixote’s apartment contains a television. The author says that, and hence deserves our reproach, but the author isn’t in the story. Saying “That’s what you think!” directly to him or her breaks the fictional spell, for suddenly we have to recognise a character who is not in the fictional world.
This proposal for the phenomenological puzzle yields a number of predictions which seem to be true and interesting. First, a story that has a narrator should not generate a phenomenological puzzle, even when outlandish moral claims are made. The more prominent the narrator, the less striking the moral claim. Imagine, for example, a version of Death where the text purports to be Craig’s diary, and it includes naturally enough his own positive evaluation of what he did. We wouldn’t believe him, of course, but we wouldn’t be struck by the claim the same way we are in the actual version of Death.
One might have thought that what is shocking is what we discover about the author. But this isn’t right, as can be seen if we reflect on stories that contain Craig’s diary. It is possible, difficult but possible, to embed the diary entry corresponding to Death in a longer story where it is clear that the author endorses Craig’s opinions. (Naturally I won’t do this. Examples have to come to an end somewhere.) Such a story would, in a way, be incredibly shocking. But it wouldn’t make the final line shocking in just the way that the final line of Death is shocking. Our reactions to these cases suggest that the strikingness of the last line of Death is not a function of what it reveals about the author, but of how it reveals it.
The final prediction my theory makes is somewhat more contentious. Some novels announce themselves as works of fiction. They go out of their way to prevent you from ignoring the novel’s role as mediator to a fictional world. (For an early example of this, consider the sudden appearance of newspaper headlines in the ‘Aeolus’ episode of Ulysses.) In such novels we already have to recognise the author as a player in the fictional game, if not a character in the story. I predict that sentences where we do not take what is written to really be true in the story, even though this is what the author intended, should be less striking in these cases because we are already used to reacting to the author as such rather than just to the characters. Such books go out of their way to break the fictional spell, so spell breaking should matter less in these cases. I think this prediction is correct, although the works in question tend to be so complicated that it is hard to generate clear intuitions about them.
### The Imaginative Puzzle
Imagine, if you will, a chair. Have you done so? Good. Let me make some guesses about what you imagined. First, it was a specific kind of chair. There is a fact of the matter about whether the chair you imagined is, for example, an armchair or a dining chair or a classroom chair or an airport lounge chair or an outdoor chair or an electric chair or a throne. We can verbally represent something as being a chair without representing it as being a specific kind of chair, but imagination cannot be quite so coarse.9
Secondly, what you imagined was incomplete in some respects. You possibly imagined a chair that, if realised, would contain some stitching somewhere, but you did not imagine any details about the stitching. There is no fact of the matter about how the chair you imagined holds together, if indeed it does. If you imagined a chair by imagining bumping into something chair-like in the dead of night, you need not have imagined a chair of any colour, although in reality the chair would have some colour or other.10
Were my guesses correct? Good. The little I needed to know about imagination to get those guesses right goes a long way towards solving the puzzle.
Chairs are not very distinctive. Whenever we try to imagine that a non-fundamental property is instantiated, the content of our imagining will be to some extent more specific than just that the object imagined has the property, but not so much more specific as to amount to a complete description of a possible situation. It’s the latter fact that does the work in explaining how we can imagine impossible situations. If we were, foolishly, to try to fill in all the details of the impossible science fiction cases it would be clear they contained not just impossibilities, but violations of Virtue, and then we would no longer be able to imagine them. But we can imagine the restaurant at the end of the universe without imagining it in all its glorious gory detail. And when we do so our imagining appears to contain no such violations.
But why can’t we imagine these violations in fictions? It is primarily because we can only imagine the higher-level claim some way or another, just as we only imagine a chair as some chair or other, and the instructions that go along with the fiction forbid us from imagining any relevant lower-level facts that would constitute the truth of the higher-level claim. We have not stressed it much above, but it is relevant that fictions understood as invitations to imagine have a “That’s all” clause.11 We are not imagining Death if we imagine that Jack and Jill had just stopped arguing with each other and were about to shoot everyone in sight when Craig shot them in self-defence. The story does not explicitly say that wasn’t about to happen. It doesn’t include a “That’s all” clause. But such clauses have to be understood. So not only are we instructed to imagine something that seems incompatible with Craig’s action being morally acceptable, we are also instructed (tacitly) not to imagine anything that would make it the case that his action is morally acceptable. But we can’t simply imagine moral goodness in the abstract; to imagine it we have to imagine a particular kind of goodness.
### Two Thoughts Too Many?
I have presented three solutions to the three different puzzles with which we started. Might it not be better to have a uniform solution? No, because although the puzzles are related, they are not identical. Three puzzles demand three solutions.
We saw already that the phenomenological puzzle is different to the other two. If we rewrote Death as Craig’s diary there would be nothing particularly striking about the last sentence, certainly in the context of the story as so told. But the last sentence generates alethic and imaginative puzzles. Or at least it could generate these puzzles if the author has made it clear elsewhere in the story that Craig’s voice is authoritative. So we shouldn’t expect the same solution to that puzzle as to the other two.
The alethic puzzle is different to the other two because ultimately it depends on what the moral and conceptual truths are, not on what we take them to be. Consider the following story.
The Benefactor
Smith was a very generous, just, and in every respect moral man. Every month he held a giant feast for the village where they were able to escape their usual diet of grains, fruits and vegetables to eat the many and varied meats that Smith provided for them.
Consider in particular, what should be easy for some, how Benefactor reads to someone who believes that we are morally required to be vegetarian if this is feasible. In Benefactor it is clear that in the story most villagers can survive on a vegetarian diet. So it is morally wrong to serve them the many and varied meats that Smith does. Hence such a reader should disagree with the author’s assessment that Smith is moral ‘in every respect.’ Such a reader will think that in fact in the story Smith is quite immoral in one important respect.
Now for our final assumption. Assume it is really true that we morally shouldn’t eat meat if it is avoidable. Since the ethical vegetarians have true ethical beliefs about the salient facts here, it seems plausible that their views on what is true in the story should carry more weight than ours. (I’m just relying on a general epistemological principle here: other things being equal, trust the people who have true beliefs about the relevant background facts.) So it seems that it really is false in the story that Smith is in every respect moral. Benefactor raises an alethic puzzle even though for non-vegetarians it does not raise a phenomenological or imaginative puzzle.
This point generalises, so we need not assume for the general point that vegetarianism is true or that our typical reader is not vegetarian. We can be very confident that some of our ethical views will be wrong, though for obvious reasons it is hard to say which ones. Let p be a false moral belief that we have. And let S be a story in which p is asserted by the (would-be omniscient) narrator. For reasons similar to what we said about Benefactor, p is not true in S. But S need not raise any imaginative or phenomenological puzzles. Hence the alethic puzzle is different to the other two puzzles.
# Why Virtue Matters
I owe you an argument for why authors should be unable to easily generate violations of Virtue, though there is no general bar on making impossibilities true in a story. My general claims here are not too dissimilar to Yablo’s solution to the puzzles, but there are a couple of distinctive new points. Before we get to the argument, it’s time for another story.
Three design students walk into a furniture showroom. The new season’s fashions are all on display. The students are all struck by the pièce de résistance, though they are all differently struck by it. Over drinks later, it is revealed that while B and C thought it was a chair, A did not. But the differences did not end there. When asked to sketch this contentious object, A and B produced identical sketches, while C’s recollections were drawn somewhat differently. B clearly disagrees with both A and C, but her differences with each are quite different. With C she disagrees on some simple empirical facts, what the object in question looked like. With A she disagrees on a conceptual fact, or perhaps a semantic fact, whether the concept CHAIR, or perhaps just the term ‘chair,’ applies to the object in question. As it turns out, A and B agree that ‘chair’ means CHAIR, and agree that CHAIR is a public concept, so one of them is right and the other wrong about whether this object falls under the concept. In this case, their disagreement will have a quite different feel to B’s disagreement with C. It may well be that there is no analytic/synthetic distinction, and that questions about whether an object satisfies a concept are always empirical questions, but this is not how it feels to A and B. They feel that they agree on what the world is like, or at least what this significant portion of it is like, and disagree just on which concepts apply to it.
The difference between these two kinds of disagreement is at the basis of our attitudes towards the alethic puzzle. It may look like we are severely cramping authorial freedom by not permitting violations of Virtue.12 From A and B’s perspective, however, this is no restriction at all. Authors, they think, are free to stipulate which world will be the site of their fiction. But as their disagreement about whether the pièce de résistance was a chair showed, we can agree about which world we are discussing and disagree about which concepts apply to it. The important point is that the metaphysics and epistemology of concepts come apart here.
There can be no difference in whether the concept CHAIR applies without a difference in the underlying facts. But there can be a difference of opinion about whether a thing is a chair without a difference of opinion about the underlying facts. The fact that it’s the author’s story, not the reader’s, means that the author gets to say what the underlying facts are. But that still leaves the possibility for differences of opinion about whether there are chairs, and on that question the author’s opinion is just another opinion.
Authorial authority extends as far as saying which world is fictional in their story, it does not extend as far as saying which concepts are instantiated there. Since the main way that we specify which world is fictional is by specifying which concepts are instantiated at it, authorial authority will usually let authors get away with any kind of conceptual claim. But once we have locked onto the world being discussed, the author has no special authority to say which concepts, especially which higher-level concepts like RIGHT or FUNNY or CHAIR are instantiated there.
(Does it matter much that the distinction between empirical disagreements and conceptual disagreements with which I started might turn out not to rest on very much? Not really. I am trying to explain why we have the attitudes towards fiction that we do, which in turn determines what is true in fiction generally. All that matters is that people generally think that there is something like a conceptual truth/empirical truth distinction, and I think enough people would agree that A and B’s disagreement is different in kind from B and C’s disagreement to show that is true. If folks are generally wrong about this, if there is no difference in kind between conceptual truths and empirical truths, then our communal theory of truth in fiction will rest on some fairly untenable supports. But it will still be our theory, although any coherent telling of it will have to be in terms of things that are taken to be conceptual truths and things that are taken to be empirical truths.)
This explanation of why authorial authority collapses just when it does yields one fairly startling, and I think true, prediction. I argued above that authors could not easily generate violations of Virtue. That this is impossible is compatible with any number of hypotheses about how readers will resolve those impossibilities that authors attempt to slip in. The story here, that authors get to say which world is at issue but not which concepts apply to it, yields the prediction that readers will resolve the tension in favour of the lower-level claims. When given a physical description of a world and an incompatible moral description, we will take the physical description to fix which world is at issue and reduce the moral description to a series of questionable claims about the world. Compare what happens with A, B and C. We take A and B to agree about the world and disagree about concepts, rather than, say, taking B and C to agree about what the world is like (there’s a chair at the heart of the furniture show) and saying that A and B disagree about the application of some recognitional concepts. This prediction is borne out in every case discussed above. We do not conclude that Craig did not really shoot Jack and Jill, because after all the world at issue is stipulated to be one where he did the right thing. Even more surprisingly, we do not conclude that Quixote’s furniture does not look like kitchen utensils, because it consists of a television and an armchair. This is surprising because in Victory I never said that the furniture looked like kitchen utensils. The tacit low-level claim about appearances is given precedence over the explicit high-level claims about which objects populate Quixote’s apartment. The theory sketched here predicts that, and supports the solution to the alethic puzzle sketched above, which is good news for both the theory and the solution.
It’s been a running theme here that the puzzles do not have anything particularly to do with normativity. But some normative concepts raise the kind of issues about authority mentioned here in a particularly striking way. There is always some division of cognitive labour in fiction. The author’s role is, among other things, to say which world is being made fictional. The audience’s role is, among other things, to determine the artistic merit of the fictional work. On other points there may be some sharing of roles, but this division is fairly absolute. The division threatens to collapse when authors start commenting on the aesthetic quality of words produced by their characters. At the end of Ivy Day in the Committee Room Joyce has one character describe a poem just recited by another character as “A fine piece of writing.” Most critics seem to be happy to accept the line, because Joyce’s poem here really is, apparently, a fine piece of writing. But to me it seems rather jarring, even if it happens to be true. It’s easy to feel a similar reaction when characters in a drama praise the words of another character.13 This is a special, and especially vivid, illustration of the point I’ve been pushing towards here. The author gets to describe the world at whichever level of detail she chooses. But once it has been described, the reader has just as much say in which higher-level concepts apply to parts of that world. When the concepts are evaluative concepts that directly reflect on the author, the reader’s role rises from being an equal to having more say than the author, just as we normally have less say than others about which evaluative concepts apply to us.
This idea is obviously similar to Yablo’s point that we get to decide when grokking concepts apply, not the author. But it isn’t quite the same. I think that if any concepts are grokking, most concepts are, so it can’t be the case that authors never get to say when grokking concepts apply in their stories. Most of the time authors will get to say which grokking concepts apply, because they have to use them to tell us about the world. What’s special about the kind of concepts that cause puzzles is not that we get to decide when they apply full stop, but that we get to decide how they apply given how more fundamental concepts apply. So the conciliatory version of the relation between my picture here and Yablo’s is that I’ve been filling in, in rather laborious detail, his missing antecedent.
# Two Hard Cases
The first hard case is suggested by Kendall Walton (1994). Try to imagine a world where the over-riding moral duty is to maximise the amount of nutmeg in the world. If you are like me, you will find this something of a challenge. Now consider a story Nutmeg that reads (in its entirety!): “Nobody ever discovered this, but it turned out all along their over-riding moral duty was to maximise the amount of nutmeg in the world.” What is true in Nutmeg? It seems that there are no violations of Virtue here, but it is hard to imagine what is being described.
The second hard case is suggested by Tamar Szabó Gendler (2000). (I’m simplifying this case a little, but it’s still hard.) In her Tower of Goldbach, God decrees that 12 shall no longer be the sum of two primes, and from this it follows (even in the story) that it is not the sum of 7 and 5. (It is not clear why He didn’t just make 5 no longer prime - say the product of 68 and 57. That may have been simpler.) Interestingly, this has practical consequences. When a group of seven mathematicians from one city attempts to join a group of five from another city, they no longer form a group of twelve. Again, two questions. Can we imagine a Goldbachian situation, where 7 and 5 equal not 12? Is it true in Gendler’s story that 7 and 5 equal not 12? If we cannot imagine the Tower of Goldbach, where is the violation of Virtue?
First, a quick statement of my responses to the two cases; then I’ll end with my detailed responses. To respond properly we need to tease apart the alethic and imaginative puzzles. I claim that the alethic puzzle only arises when there’s a violation of Virtue. There’s no violation in either story, so there is no alethic puzzle. I think there are independent arguments for this conclusion in both cases. We can’t imagine either (if we can’t) because any way of filling in the more basic facts leads to violations.
It follows from my solution to the alethic puzzle that Nutmegism (Tyler Doggett’s name for the principle that we must maximise quantities of nutmeg) could be true in a story. There is no violation in Nutmeg, since there are no lower-level claims made. Still, the story is very hard to imagine. The reason for this is quite simple. As noted, we cannot just imagine a chair; we have to imagine something more detailed that is a chair in virtue of its more basic properties. (There is no particular more basic property we need imagine, as is shown by the fact that we can imagine a chair just by imagining something with a certain look, or we can imagine a chair in the dark with no visual characteristics. But there is always something more basic.) Similarly, to imagine a duty, we have to imagine something more detailed, in this case presumably a society or an ecology, in virtue of which the duty exists. But no such possible, or even impossible, society readily springs to mind. So we cannot imagine Nutmegism is true.
But it is hard to see how, or why, this inability should be raised into a restriction on what can be true in a story. One might think that what is wrong with Nutmeg is that the fictional world is picked out using high-level predicates. If we extend the story any way at all, the thought might go, we will generate a violation of Virtue. And that is enough to say that Nutmegism is not true in the story. But actually this isn’t quite right. If we extend the story by adding more moral claims (there is no duty to minimise suffering, there is no duty to help the poor, and so on), there are still no violations in the story. The restriction we would have to impose is that there is no way of extending the story to fill out the facts in virtue of which the described facts obtain, without generating a violation. But that looks like too strong a constraint, mostly because if we applied it here, to rule out Nutmegism being true in Nutmeg, we would have to apply it to every story written in a higher-level language than that of microphysics. It doesn’t seem true that we have to be able to continue a story all the way down to the microphysical before we can be confident that what the author says about, for instance, where the furniture in the room is, is true. So there’s no reason not to take the author’s word in Nutmeg, and since the default is always that what the author says is true, Nutmegism is true in the story.
The mathematical case is more difficult. The argument that 7 and 5 could fail to equal 12 in the story turns on an example by Gregory Currie (1990). (The main conclusions of this example are also endorsed by .) Currie imagines a story in which the hero refutes Gödel’s Incompleteness Theorem. Currie argues that the story could be written in such a way that it is true in the story not merely that everyone believes our hero refuted Gödel, but that she really did. But if it could be true in a story that Gödel’s Incompleteness Theorem is false, then it’s hard to see just why it could not be true in a story that a simpler arithmetic claim, say that 7 and 5 make 12, is false. If something can’t be true in a story, that must be in virtue of some feature it has. The only difference between Gödel’s Incompleteness Theorem and a simple arithmetic statement appears to be the simplicity of the simple statement. And it doesn’t seem possible, or advisable, to work that kind of feature into a theory of truth in fiction.
The core problem here is that how simple a mathematical impossibility seems is very much a function of the reader’s mathematical knowledge and acumen. Some readers probably find the unique prime factorisation theorem so simple and evident that for them a story in which it is false is as crashingly bad as a story in which 7 and 5 do not make 12. For other readers, it is so complex that a story in which it has a counterexample is no more implausible than a story in which Gödel is refuted. I think it cannot be true for the second reader that the unique prime factorisation theorem fails in the story and false for the first reader. That amounts to a kind of relativism about truth in fiction that seems preposterous. But I agree with Currie that some mathematical impossibilities can be true in a fiction. So I conclude that, whether it is imaginable or not, it could be true in a story that 7 and 5 not equal 12.
I think, however, that it is impossible to imagine that 7 plus 5 doesn’t equal 12. Can we explain that unimaginability in the same way we explained why Nutmeg couldn’t be imagined? I think we can. It seems that the sum of 7 and 5 is what it is in virtue of the relations between 7, 5 and other numbers. It is not primitive that various sums take the values they take. That would be inconsistent with, for example, it being constitutive of addition that it’s associative, and associativity does seem to be constitutive of addition. We cannot think about 7, 5, 12 and addition without thinking about those more primitive relations. So we cannot imagine 7 and 5 equalling anything else. Or so I think. There’s some rather sophisticated, or at least complicated, philosophy of mathematics in the story here, and not everyone will accept all of it. So we should predict that not everyone will think that these arithmetic claims are unimaginable. And, pleasingly, not everyone does. Gendler, for instance, takes it as a data point that the Tower of Goldbach is imaginable. So far so good. Unfortunately, if this story is right we should also expect that whether people find the story imaginable links up with the various philosophies of mathematics they believe. And the evidence for that is thin. So there may be more work to do here. But there is clearly a story that we can tell that handles the case.
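For instance, here is a standard Peano-style unfolding, offered as an illustration of mine rather than as part of the text, of how the value of 7 plus 5 is fixed by the recursive clauses $$a + 0 = a$$ and $$a + S(n) = S(a + n)$$:

$$7 + 5 = 7 + S(4) = S(7 + 4) = S(S(7 + 3)) = \dots = S(S(S(S(S(7 + 0))))) = 12$$

Each step is forced by the definition of addition, so there is no way to imagine the sum coming out differently without imagining away the relations in virtue of which it has the value it has.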
Adams, Douglas. 1980. The Restaurant at the End of the Universe. London: Pan Macmillan.
Byrne, Alex. 1993. “Truth in Fiction - the Story Continued.” Australasian Journal of Philosophy 71 (1): 24–35. https://doi.org/10.1080/00048409312345022.
Currie, Gregory. 1990. The Nature of Fiction. Cambridge: Cambridge University Press.
———. 2002. “Desire in Imagination.” In Conceivability and Possibility, edited by Tamar Szabó Gendler and John Hawthorne, 201–21. Oxford: Oxford University Press.
Fodor, Jerry A. 1987. Psychosemantics. Cambridge, MA: MIT Press.
———. 1998. Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University Press.
Gendler, Tamar Szabó. 2000. “The Puzzle of Imaginative Resistance.” Journal of Philosophy 97 (2): 55–81. https://doi.org/10.2307/2678446.
Hare, R. M. 1951. The Language of Morals. Oxford: Oxford University Press.
Holton, Richard. 1997. “Some Telling Examples: Reply to Tsohatzidis.” Journal of Pragmatics 28 (5): 625–28. https://doi.org/10.1016/s0378-2166(96)00081-1.
Hume, David. 1757. “On the Standard of Taste.” In Essays: Moral, Political and Legal, 227–49. Indianapolis: Liberty Press.
Jackson, Frank. 1998. From Metaphysics to Ethics: A Defence of Conceptual Analysis. Oxford: Clarendon Press.
Joyce, James. 1914/2000. Dubliners. Oxford: Oxford University Press.
———. 1944/1963. Stephen Hero. Norfolk, CT: New Directions.
———. 1922/1993. Ulysses. Oxford: Oxford University Press.
Kidd, John. 1988. “The Scandal of ‘Ulysses’.” The New York Review of Books 35 (11): 32–39.
Kripke, Saul. 1982. Wittgenstein on Rules and Private Language. Oxford: Basil Blackwell.
Lewis, David. 1978. “Truth in Fiction.” American Philosophical Quarterly 15 (1): 37–46.
Matravers, Derek. 2003. “Fictional Assent and the (so-Called) ‘Puzzle of Imaginative Resistance’.” In Imagination, Philosophy and the Arts, edited by Matthew Kieran and Dominic McIver Lopes, 91–108. London: Routledge.
Moran, Richard. 1995. “The Expression of Feeling in Imagination.” Philosophical Review 103 (1): 75–106. https://doi.org/10.2307/2185873.
Priest, Graham. 1997. “Sylvan’s Box: A Short Story and Ten Morals.” Notre Dame Journal of Formal Logic. 38 (4): 573–82. https://doi.org/10.1305/ndjfl/1039540770.
Skinner, B. F. 1948. Walden Two. New York: Macmillan.
Sorensen, Roy. 2001. Vagueness and Contradiction. Oxford: Oxford University Press.
Stock, Kathleen. 2003. “The Tower of Goldbach and Other Impossible Tales.” In Imagination, Philosophy and the Arts, edited by Matthew Kieran and Dominic McIver Lopes, 107–24. London: Routledge.
Walton, Kendall. 1990. Mimesis as Make Believe. Cambridge, MA: Harvard University Press.
———. 1994. “Morals in Fiction and Fictional Morality.” Proceedings of the Aristotelian Society, Supplementary Volume 68: 27–50. https://doi.org/10.1093/aristoteliansupp/68.1.27.
Yablo, Stephen. 2002. “Coulda, Woulda, Shoulda.” In Conceivability and Possibility, edited by Tamar Szabó Gendler and John Hawthorne, 441–92. Oxford: Oxford University Press.
1. For details on the spelling of Dr Norman’s name, and the story behind it, see Kidd (1988). The good doctor appears on page 6 of Ulysses.↩︎
2. At least, they will be ignored if it is clear they are errors. If there seems to be a method behind the misspellings, as in Ulysses there frequently is, the matter is somewhat different, and somewhat more difficult.
Tyler Doggett has argued that these cases are more similar to paradigm cases of imaginative resistance than I take them to be. Indeed, I would not have noticed the problems they raise without reading his paper. It may be a shortcoming of my theory here that I have to set questions about whether these sentences are puzzling to one side and assume an ideal proof-reader.↩︎
3. Thanks here to George Wilson for reminding me that we haven’t shown anything stronger than that.↩︎
4. Arguably the relevant supervenience principle is even stronger than that. To use some terminology of Stephen Yablo’s, there’s no difference in moral facts without a difference in non-moral facts between any two counteractual worlds, as well as between any two counterfactual worlds. This might be connected to some claims I will make below about the relationship between the normative and the descriptive.↩︎
5. Determining whether this is true in all such stories would be an enormous task, I fear, and somewhat pointless given the next objection. If anyone wants to say all clearly impossible statements in fiction are puzzling, I suspect the best strategy is to divide and conquer. The most blatantly impossible claims are most naturally fit for reinterpretation, and the other claims rest on an essentialism that is arguably not proven. I won’t try such a massive defence of a false theory here.↩︎
6. There is a complication here in that such a sentence might be evidence that the fictional work is not to be understood as this kind of report, and instead understood as something like a recording of the children’s thoughts. I’ll assume we’re in a story where it is clear that the sentences are not to be so interpreted.↩︎
7. Although it isn’t obvious just which of the Thomistic properties the death match lacks.↩︎
8. Do we make facts about the actual speaker’s usage true in the story? No. The character might have idiosyncratic reasons for not using the word “glory,” and for ignoring all others who use it. That’s consistent with the word meaning a nice knockdown argument.↩︎
9. This relates to another area in which my solution owes a debt to Gendler’s solution. Supposing can be coarse in a way that imagining cannot. We can suppose that Jack sold a chair without supposing that he sold an armchair or a dining chair or any particular kind of chair at all. Gendler concludes that what we do in fiction, where we try and imagine the fictional world, is very different to what we do, say, in philosophical argumentation, where we often suppose that things are different to the way they actually are. We can suppose, for the sake of argument as it’s put, that Kantian or Aristotelian ethical theories are entirely correct, even if we have no idea how to imagine either being correct. Thanks to Tyler Doggett for pointing out the connection to Gendler here.↩︎
10. Thanks to Kendall Walton for pointing out this possibility.↩︎
11. “That’s all” clauses play a distinct, but related, role in (Jackson 1998, Ch. 1). It’s also crucial to my solution to the alethic puzzle that there be a “That’s all” clause in the story. What’s problematic about these cases is that the story (implicitly) rules out there being the lower-level facts that would make the expressed higher-level claims true.↩︎
12. Again, it is worth noting that I am not ruling out any violation of Virtue, just easy violations of it. The point being made in the text is that even a blanket ban on violations would not be a serious restriction on authorial freedom.↩︎
13. For a while this would happen frequently on the TV series The West Wing. President Bartlett would deliver a speech, and afterwards his staffers would congratulate themselves on what a good speech it was. The style of the congratulations was clearly intended to convey the author’s belief that the speech they themselves had written was a good speech, not just the characters’ beliefs to this effect. When in fact it was a very bad speech, this became very jarring. In later series they would often not show the speeches in question and hence avoid this problem.↩︎
# 1.2 — Meet R — R Practice
## Getting Set Up
Before we begin, start a new file with File $$\rightarrow$$ New File $$\rightarrow$$ R Script. As you work through this sheet in the console in R, also add (copy/paste) your commands that work into this new file. At the end, save it, and run to execute all of your commands at once.
## Creating Objects
### 1
Work on the following parts:
- a. Create a vector called me with two objects: your first name and your last name.
- b. Call the vector to inspect it.
- c. Confirm it is a character class vector.
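A sketch of one possible answer (the name here is just a stand-in for your own):

```r
# create a character vector with a first and last name
me <- c("Ada", "Lovelace")

me          # calling the vector prints it
class(me)   # returns "character", confirming the class
```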
### 2
Use R’s help functions to determine what the paste() function does. Then paste together your first name and last name.
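For example (same stand-in name as above):

```r
?paste  # or help(paste); the help page explains that paste() concatenates strings

paste("Ada", "Lovelace")  # returns "Ada Lovelace", joined with the default sep = " "
```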
### 3
Create a vector called my_vector with all the even integers from 2 to 10.
### 4
Find the mean of my_vector with mean().
### 5
Take all the integers from 18 to 763, then get the mean. (Hint: use the : operator to create a sequence from a starting number to an ending number.)
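One way to do questions 3 through 5 (seq() is one option; typing c(2, 4, 6, 8, 10) works too):

```r
my_vector <- seq(from = 2, to = 10, by = 2)  # even integers from 2 to 10
mean(my_vector)  # 6

mean(18:763)  # the : operator builds the sequence 18, 19, ..., 763; the mean is 390.5
```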
## Playing with Data
For the following questions, we will use the diamonds dataset, included as part of ggplot2.
### 6
Install ggplot2.
### 7
Load ggplot2 with the library() command.
### 8
Get the structure of the diamonds data frame. What are the different variables and what kind of data does each contain?
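A possible answer for questions 6 through 8:

```r
install.packages("ggplot2")  # install once per machine
library(ggplot2)             # load once per session

str(diamonds)  # 53,940 rows and 10 columns: carat, depth, table, price, x, y, z
               # are numeric; cut, color, and clarity are ordered factors
```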
### 9
Get summary statistics separately for carat, depth, table, and price.
### 10
color, cut, and clarity are categorical variables (factors). Use the table() command to generate frequency tables for each.
### 11
Now rerun the summary() command on the entire data frame.
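Sketches for questions 9 through 11:

```r
# question 9: numeric summaries, one variable at a time
summary(diamonds$carat)
summary(diamonds$depth)
summary(diamonds$table)
summary(diamonds$price)

# question 10: frequency tables for the factors
table(diamonds$color)
table(diamonds$cut)
table(diamonds$clarity)

# question 11: the whole data frame at once; factors get counts,
# numeric variables get five-number summaries plus the mean
summary(diamonds)
```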
### 12
Now look only at (subset) the first 4 diamonds in the dataset.
### 13
Now look only at (subset) the third and seventh diamond in the dataset.
### 14
Now look only at (subset) the second column of the dataset.
### 15
Do this again, but look using the $ to pull up the second column by name.
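Possible answers for questions 12 through 15 (rows go before the comma, columns after):

```r
diamonds[1:4, ]      # first four diamonds (rows)
diamonds[c(3, 7), ]  # third and seventh diamonds
diamonds[, 2]        # second column, by position
diamonds$cut         # second column, by name (cut is column 2)
```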
### 16
Now look only at diamonds that have a carat greater than or equal to 1.
### 17
Now look only at diamonds that have a VVS1 clarity.
### 18
Now look only at diamonds that have a color of E, F, I, and J.
### 19
Now look only at diamonds that have a carat greater than or equal to 1 and a VVS1 clarity.
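Possible answers for questions 16 through 19, using logical conditions as row filters:

```r
diamonds[diamonds$carat >= 1, ]                         # question 16
diamonds[diamonds$clarity == "VVS1", ]                  # question 17
diamonds[diamonds$color %in% c("E", "F", "I", "J"), ]   # question 18
diamonds[diamonds$carat >= 1 &
         diamonds$clarity == "VVS1", ]                  # question 19
```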
### 20
Get the average price of diamonds in question 18. (Hint: use your subset command as an argument to the mean function. You will not need a comma here because you are subsetting a single column, not rows of the data frame.)
### 21
What is the highest price for a diamond with a 1.0 carat, D color, and VVS1 clarity?
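Sketches for the last two questions (subsetting a single column gives a vector, so no comma is needed):

```r
# question 20: average price of the E, F, I, J diamonds from question 18
mean(diamonds$price[diamonds$color %in% c("E", "F", "I", "J")])

# question 21: highest price among 1.0 carat, D color, VVS1 clarity diamonds
max(diamonds$price[diamonds$carat == 1 &
                   diamonds$color == "D" &
                   diamonds$clarity == "VVS1"])
```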
Save the R Script you created at the beginning and (hopefully) have been pasting all of your valid commands to. This creates a .R file wherever you choose to save it. Now, looking at the file in the upper-left pane of RStudio, find the button in the upper-right corner that says Run. Sit back and watch R redo everything you’ve carefully worked on, all at once.
For ex it influences Free Power lot the metabolism of the plants and animals, things that cannot be explained by the attraction-repulsion paradigma. Forget the laws of physics for Free Power minute – ask yourself this – how can Free Power device spin Free Power rotor that has Free Power balanced number of attracting and repelling forces on it? Have you ever made one? I have tried several. Gravity motors – show me Free Power working one. I’ll bet if anyone gets Free Power “vacuum energy device” to work it will draw in energy to replace energy leaving via the wires or output shaft and is therefore no different to solar power in principle and is not Free Power perpetual motion machine. Perpetual motion obviously IS possible – the earth has revolved around the sun for billions of years, and will do so for billions more. Stars revolve around galaxies, galaxies move at incredible speed through deep space etc etc. Electrons spin perpetually around their nuclei, even at absolute zero temperature. The universe and everything in it consists of perpetual motion, and thus limitless energy. The trick is to harness this energy usefully, for human purposes. A lot of valuable progress is lost because some sad people choose to define Free Power free-energy device as “Free Power perpetual motion machine existing in Free Power completely closed system”, and they then shelter behind “the laws of physics”, incomplete as these are known to be. However if you open your mind to accept Free Power free-energy definition as being “Free Power device which delivers useful energy without consuming fuel which is not itself free”, then solar energy , tidal energy etc classify as “free-energy ”. Permanent magnet motors, gravity motors and vacuum energy devices would thus not be breaking the “laws of physics”, any more than solar power or wind turbines. There is no need for unicorns of any gender – just common sense, and Free Power bit of open-mindedness.
Research in the real sense is unheard of to these folks. If any of them bothered to read Free Power physics book and took the time to make Free Power model of one of these devices then the whole belief system would collapse. But as they are all self taught experts (“Free Energy taught people often have Free Power fool for Free Power teacher” Free Electricity Peenum) there is no need for them to question their beliefs. I had Free Power long laugh at that one. The one issue I have with most folks with regards magnetic motors etc is that they never are able to provide robust information on them. Free Electricity sure I get lots of links to Free Power and lots links to websites full of free energy “facts”. But do I get anything useful? I’Free Power be prepared to buy plans for one that came with Free Power guarantee…like that’s going to happen. Has anyone who proclaimed magnetic motors work actually got one? I don’t believe so. Where, I ask, is the evidence? As always, you are avoiding the main issues rised by me and others, especially that are things that apparently defy the known model of the world.
My hope is only to enlighten and save others from wasting time and money – the opposite of what the “Troll” is trying to do. Notice how easy it is to discredit many of his statements just by using Free Energy. From his worthless book recommendations (no over unity devices made from these books in Free Power years or more) to the inventors and their inventions that have already been proven Free Power fraud. Take the time and read ALL his posts and notice his tactics: Free Power. Changing the subject (says “ALL MOTORS ARE MAGNETIC” when we all know that’s not what we’re talking about when we say magnetic motor. Free Electricity. Almost never responding to Free Power direct question. Free Electricity. Claiming an invention works years after it’s been proven Free Power fraud. Free Power. Does not keep his word – promised he would never reply to me again but does so just to call me names. Free Power. Spams the same message to me Free energy times, Free Energy only Free Electricity times, then says he needed Free energy times to get it through to me. He can’t even keep track of his own lies. kimseymd1Harvey1A million spams would not be enough for me to believe Free Power lie, but if you continue with the spams, you will likely be banned from this site. Something the rest of us would look forward to. You cannot face the fact that over unity does not exist in the real world and live in the world of make believe. You should seek psychiatric help before you turn violent. jayanth Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far!
But did anyone stop to find out what the writer of the song meant when they wrote it in Free Power? Yes, actually, some did, thankfully. But many didn’t and jumped on the hate bandwagon because nowadays many of us seem to have become headline and meme readers and take all we see as fact without ever questioning what we’re being told. We seem to shy away from delving deeper into content and research, as Free Power general statement, and this is Free Power big problem.
Does the motor provide electricity? No, of course not. It is simply an engine of sorts, nothing more. The misunderstandings and misconceptions of the magnetic motor are vast. Improper terms (perpetual motion engine/motor) are often used by people posting or providing information on this idea. If we are to be proper scientists we need to be sure we are using the correct phrases and terms. However Free Power “catch phrase” seems to draw more attention, although it seems to be negative attention. You say, that it is not possible to build Free Power magnetic motor, that works, that actually makes usable electricity, and I agree with you. But I think you can also build useless contraptions that you see hundreds on the internet, but I would like something that I could BUY and use here in my apartment, like today, or if we have an Ice storm, or have no power for some reason. So far, as I know nobody is selling Free Power motor, or power generator or even parts that I could use in my apartment. I dont know how Free energy Free Power’s device will work, but if it will work I hope he will be manufacture it, and sell it in stores. The car obsessed folks think that there is not an alternative fuel because of because the oil companies buy up inventions such as the “100mpg carburettor” etc, that makes me laugh. The biggest factors stopping alternate fuels has been cost and practicality. Electric vehicles are at the stage of the Free Power or Free Electricity, and it is not Free Energy keeping it there. Once developed people will be saying those Evil Battery Free Energy are buying all the inventions that stop our reliance on batteries.
Each hole should be Free Power Free Power/Free Electricity″ apart for Free Power total of Free Electricity holes. Next will be setting the magnets in the holes. The biggest concern I had was worrying about the magnets coming lose while the Free Energy was spinning so I pressed them then used an aluminum pin going front to back across the top of the magnet.
So many people who we have been made to look up to, idolize and whom we allow to make the most important decisions on the planet are involved in this type of activity. Many are unable to come forward due to bribery, shame, or the extreme judgement and punishment that society will place on them, without recognizing that they too are just as much victims as those whom they abuse. Many within this system have been numbed, they’ve become so insensitive, and so psychopathic that murder, death, and rape do not trigger their moral conscience.
Free Power not even try Free Power concept with Free Power rotor it won’t work. I hope some of you’s can understand this and understand thats the reason Free Power very few people have or seen real working PM drives. My answers are; No, no and sorry I can’t tell you yet. Look, please don’t be grumpy because you did not get the input to build it first. Gees I can’t even tell you what we call it yet. But you will soon know. Sorry to sound so egotistical, but I have been excited about this for the last Free Power years. Now don’t fret………. soon you will know what you need to know. “…the secret is in the “SHAPE†of the magnets” No it isn’t. The real secret is that magnetic motors can’t and don’t work. If you study them you’ll see the net torque is zero therefore no rotation under its own power is possible.
These functions have a minimum in chemical equilibrium, as long as certain variables ($T$, and $V$ or $p$) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than $p\,dV$ may be added, e.g., for electrochemical cells, or $f\,dx$ work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic (as in adiabatic demagnetization used in the approach to absolute zero), and work due to electric polarization. These are described by tensors.
I might have to play with it and see. Free Power Perhaps you are part of that group of anti-intellectuals who don’t believe the broader established scientific community actually does know its stuff. Ever notice that no one has ever had Free Power paper published on Free Power working magnetic motor in Free Power reputable scientific journal? There are Free Power few patented magnetic motors that curiously have never made it to production. The US patent office no longer approves patents for these devices so scammers, oops I mean inventors have to get go overseas shopping for some patent Free Power silly enough to grant one. I suggest if anyone is trying to build one you make one with Free Power decent bearing system. The wobbly system being shown on these recent videos is rubbish. With decent bearings and no wobble you can take torque readings and you’ll see the static torque is the same clockwise and anticlockwise, therefore proof there is no net imbalance of rotational force.
Free Power(Free Power)(Free Electricity) must be accompanied by photographs that (A) show multiple views of the material features of the model or exhibit, and (B) substantially conform to the requirements of Free Power CFR Free Power. Free energy. See Free Power CFR Free Power. Free Power(Free Electricity). Material features are considered to be those features which represent that portion(s) of the model or exhibit forming the basis for which the model or exhibit has been submitted. Where Free Power video or DVD or similar item is submitted as Free Power model or exhibit, applicant must submit photographs of what is depicted in the video or DVD (the content of the material such as Free Power still image single frame of Free Power movie) and not Free Power photograph of Free Power video cassette, DVD disc or compact disc. <“ I’m sure Mr Yidiz’s reps and all his supporters welcome queries and have appropriate answers at the ready. Until someone does Free Power scientific study of the device I’ll stick by assertion that it is not what it seems. Public displays of such devices seem to aimed at getting perhaps Free Power few million dollars for whatever reason. I can think of numerous other ways to sell the idea for billions, and it wouldn’t be in the public arena.
Both sets of skeptics will point to the fact that there has been no concrete action, no major arrests of supposed key Deep State players. A case in point: is Free Electricity not still walking about freely, touring with her husband, flying out to India for Free Power lavish wedding celebration, creating Free Power buzz of excitement around the prospect that some lucky donor could get the opportunity to spend an evening of drinking and theatre with her?
The magnitude of ΔG tells us that we don't have quite as far to go to reach equilibrium. The points at which the straight line in the above figure crosses the horizontal and vertical axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to 1. This point therefore describes the standard-state conditions, and the value of ΔG at this point is equal to the standard-state free energy of reaction, ΔG°. The key to understanding the relationship between ΔG° and K is recognizing that the magnitude of ΔG° tells us how far the standard state is from equilibrium. The smaller the value of ΔG°, the closer the standard state is to equilibrium. The larger the value of ΔG°, the further the reaction has to go to reach equilibrium. The relationship between ΔG° and the equilibrium constant for a chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is a shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when a sealed tube containing NO2 gas is immersed in liquid nitrogen. There is a drastic decrease in the amount of NO2 in the tube as it is cooled to −196 °C. Free energy is the idea that a low-cost power source can be found that requires little to no input to generate a significant amount of electricity. Such devices can be divided into two basic categories: "over-unity" devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from Free Energy, such as quantum foam in the case of zero-point energy devices. Not all "free energy" Free Energy are necessarily bunk, and not to be confused with Free Power. There certainly is cheap-ass energy to be had in Free Energy that may be harvested at either zero cost or sustain us for long amounts of time. Solar power is the most obvious form of this energy, providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. In Free Electricity Nokia announced they expect to be able to gather up to Free Electricity milliwatts of power from ambient radio sources such as broadcast TV and cellular networks, enough to slowly recharge a typical mobile phone in standby mode. [Free Electricity] This may be viewed not so much as free energy, but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect, which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. [Free Electricity] Maxwell's Demon — a thought experiment raised by James Clerk Maxwell in which a Demon guards a hole in a diaphragm between two containers of gas. Whenever a molecule passes through the hole, the Demon either allows it to pass or blocks the hole depending on its speed.
It does so in such a way that hot molecules accumulate on one side and cold molecules on the other. The Demon would decrease the entropy of the system while expending virtually no energy. This would only work if the Demon was not subject to the same laws as the rest of the universe or had a lower temperature than either of the containers. Any real-world implementation of the Demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules enter the hot container and vice versa) and prevent it from decreasing the entropy of the system. In chemistry, a spontaneous process is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of a diamond turning into graphite, which can be written as the following reaction: Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be a chemical reaction in a beaker. Do we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Gibbs free energy to determine the spontaneity of a process, we are only concerned with changes in $\text{G}$, rather than its absolute value. The change in Gibbs free energy for a process is thus written as $\Delta \text{G}$, which is the difference between $\text{G}_{\text{final}}$, the Gibbs free energy of the products, and $\text{G}_{\text{initial}}$, the Gibbs free energy of the reactants.
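For reference, the standard textbook relations behind this passage (added here; they are not recoverable verbatim from the garbled post) connect the reaction free energy to the reaction quotient $Q$ and the equilibrium constant $K$:

$$\Delta G = \Delta G^{\circ} + RT\ln Q, \qquad \Delta G^{\circ} = -RT\ln K$$

At equilibrium $Q = K$ and $\Delta G = 0$; the smaller the magnitude of $\Delta G^{\circ}$, the closer the standard state lies to equilibrium.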
Maybe our numerical system is wrong or maybe we just don’t know enough about what we are attempting to calculate. Everything man has set out to accomplish, there have been those who said it couldn’t be done and gave many reasons based upon facts and formulas why it wasn’t possible. Needless to say, none of the ‘nay sayers’ accomplished any of them. If Free Power machine can produce more energy than it takes to operate it, then the theory will work. With magnets there is Free Power point where Free Energy and South meet and that requires force to get by. Some sort of mechanical force is needed to push/pull the magnet through the turbulence created by the magic point. Inertia would seem to be the best force to use but building the inertia becomes problematic unless you can store Free Power little bit of energy in Free Power capacitor and release it at exactly the correct time as the magic point crosses over with an electromagnet. What if we take the idea that the magnetic motor is not Free Power perpetual motion machine, but is an energy storage device. Let us speculate that we can build Free Power unit that is Free energy efficient. Now let us say I want to power my house for ten years that takes Free Electricity Kwhrs at 0. Free Energy /Kwhr. So it takes Free energy Kwhrs to make this machine. If we do this in Free Power place that produces electricity at 0. 03 per Kwhr, we save money.
Years later, Free Power top U. S. General who was the liaison between DynCorp and the U. S. Military was implicated in the sexual assault of teenage girls. Earlier this year, Florida Air National Guard Col. Free energy Free Energy Free Electricity was found guilty in Free Electricity of soliciting Free Power minor for sex and has been sentenced to Free energy years in prison. Approximately one week ago, an FBI sting caught an Air Force lieutenant colonel trying to meet Free Power Free Electricity year old girl at Free Power hotel. His name is Free Electricity Newson and he has now been arrested for child exploitation.
Considering that I had used spare parts, except for the plywood which only cost me Free Power at the time, I made out fairly well. Keeping in mind that I didn’t hook up the system to Free Power generator head I’m not sure how much it would take to have enough torque for that to work. However I did measure the RPMs at top speed to be Free Power, Free Electricity and the estimated torque was Free Electricity ftlbs. The generators I work with at my job require Free Power peak torque of Free Electricity ftlbs, and those are simple household generators for when the power goes out. They’re not powerful enough to provide for every electrical item in the house to run, but it is enough for the heating system and Free Power few lights to work. Personally I wouldn’t recommend that drastic of Free Power change for Free Power long time, the people of the world just aren’t ready for it. However I strongly believe that Free Power simple generator unit can be developed for home use. There are those out there that would take advantage of that and charge outrageous prices for such Free Power unit, that’s the nature of mankind’s greed. To Nittolo and Free Electricity ; You guys are absolutely hilarious. I have never laughed so hard reading Free Power serious set of postings. You should seriously write some of this down and send it to Hollywood. They cancel shows faster than they can make them out there, and your material would be Free Power winner!
I’ve told you about how not well understood is magnetism. There is Free Power book written by A. K. Bhattacharyya, A. R. Free Electricity, R. U. Free Energy. – “Magnet and Magnetic Free Power, or Healing by Magnets”. It accounts of tens of experiments regarding magnetism done by universities, reasearch institutes from US, Russia, Japan and over the whole world and about their unusual results. You might wanna take Free Power look. Or you may call them crackpots, too. 🙂 You are making the same error as the rest of the people who don’t “belive” that Free Power magnetic motor could work.
Blind faith over rules common sense. Mr. Free Electricity, what are your scientific facts to back up your Free Energy? Progress comes in steps. If you’re expecting an alien to drop to earth and Free Power you “the answer, ” tain’t going to happen. Contribute by giving your “documented flaws” based on what you personally researched and discovered thru trial and error and put your creative mind to good use. Overcome the problem(s). As to the economists, they believe oil has to reach Free Electricity. Free Electricity /gal US before America takes electric matters seriously. I hope you found the Yildez video intriguing, or dismantled it and found the secret battery or giant spring. I’Free Power love to see Free Power live demo. Mr. Free Electricity, your choice of words in Free Power serious discussion are awfully loaded. It sounds like you have been burned along the way.
No “boing, boing” … What I am finding is that the abrupt stopping and restarting requires more energy than the magnets can provide. They cannot overcome this. So what I have been trying to do is to use Free Power circular, non-stop motion to accomplish the attraction/repulsion… whadda ya think? If anyone wants to know how to make one, contact me. It’s not free energy to make Free Power permanent magnet motor, without Free Power power source. The magnets only have to be arranged at an imbalanced state. They will always try to seek equilibrium, but won’t be able to. The magnets don’t produce the energy , they only direct it. Think, repeating decimal…..
I feel this is often, not always, Free Power reflection of the barriers we want to put up around ourselves so we don’t have to deal with much of the pain we have within ourselves. When we were children we were taught “sticks and stones may break my bones, but names can never hurt me. ” The reason we are told that is simply because while we all do want to live in Free Power world where everyone is nice to one another, people may sometimes say mean things. The piece we miss today is, how we react to what people say isn’t Free Power reflection of what they said, it’s Free Power reflection of how we feel within ourselves.
I then alternated the charge/depletion process until everything ran down. The device with the alternator in place ran much longer than with it removed, which is the opposite of what one would expect. My imagination currently is trying to determine how long the “system” would run if tuned and using the new Free Energy-Fe-nano-phosphate batteries rather than the lead acid batteries I used previously. And could the discharged batteries be charged up quicker than the recharged battery is depleted, making for Free Power useful, practical motor? Free Energy are claiming to have invented perpetual motion MACHINES. That is my gripe. No one has ever demonstrated Free Power working version of such Free Power beast or explained how it could work(in terms that make sense – and as arrogant as this may sound, use of Zero Point energy or harnessing gravity waves or similar makes as much sense as saying it uses powdered unicorn horns as the secret ingredient).
|
2021-05-19 00:18:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44679319858551025, "perplexity": 1262.0095929857498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00445.warc.gz"}
|
http://tex.stackexchange.com/questions/3892/how-do-you-draw-the-snake-arrow-for-the-connecting-homomorphism-in-the-snake-l
|
# How do you draw the “snake” arrow for the connecting homomorphism in the snake lemma?
How does one draw the "snake" arrow for the connecting homomorphism when using the snake lemma?
I'd also be interested in drawing similar arrows act as "carriage returns" when considering a long exact sequence of cohomology.
I'm sorry if this is a little vague. I'm hoping that someone who's already done this might be willing to share a template. I'd prefer things in xy-pic, but would also be interested to see other ways it can be done.
I'm glad you said:
but would also be interested to see other ways it can be done.
because I don't use xy-pic anymore (fantastic though it was when in the pre-TikZ days). Here's my best shot with TikZ.
Here's the result:
The code follows. A couple of comments on my choices. None of the libraries loaded is strictly necessary, but I felt that the matrix and calc libraries made for cleaner code and I don't like the standard arrows so the arrows library makes it look better. I labelled the matrix entries explicitly, which isn't strictly necessary, again because I felt it made the code easier to read. I shifted the horizontal of the snake arrow upwards to avoid the labels on the downward arrows. The key effect, of rounded corners, is the (wait for it) rounded corners option to the final path. The amsmath package is purely to get the \DeclareMathOperator command for \coker (\ker is already defined in latex). Again, this could easily be done another way.
(Added in edit) As pointed out in the comments, in the original code the arrow from the lower 0 to the A' was slightly slanted. This was because the prime adds a little height to the node containing the A' meaning that the natural targets for the arrow (east of 0 and west of A') aren't in a straight line. There are numerous ways to fix this, the one I chose was to align the nodes in the matrix according to their centres instead of their baselines.
Since this answer has proved so popular, I got a little more perfectionistic about it! I decided that the grey horizontal arrows (kernels and cokernels) were a little high for my taste. So I chose the 'mid' targets. Putting these in with the 'edge' command resulted in bizarre extra arrow heads (try it) so I had to put each one in as a separate \draw command. I also changed the spacing of the grid so that the spaces were measured from the centres of the nodes rather than their edges.
Idle thoughts on this version: I ought to be able to put the mid east/west targets in the edge commands. I also wondered if there was some neat way to say "make sure all the arrows are strictly horizontal" rather than using the explicit targets. I wondered about using the "coordinates at intersections" syntax but would need to play with it a little and I'm not sure it would be shorter.
Edit: 2012-09-10 I've just learnt about the asymmetrical rectangle of the tikz-cd package and it answers my idle thoughts above so I thought - as this is in many ways my "show case" answer - I'd update this to take advantage of it.
\documentclass{article}
\thispagestyle{empty}
\usepackage{amsmath}
\usepackage{tikz}
\usepackage{tikz-cd}
\usetikzlibrary{%
matrix,%
calc,%
arrows%
}
\DeclareMathOperator{\coker}{coker}
\begin{document}
\begin{tikzpicture}[>=triangle 60]
\matrix[matrix of math nodes,column sep={60pt,between origins},row
sep={60pt,between origins},nodes={asymmetrical rectangle}] (s)
{
&|[name=ka]| \ker f &|[name=kb]| \ker g &|[name=kc]| \ker h \\
%
&|[name=A]| A' &|[name=B]| B' &|[name=C]| C' &|[name=01]| 0 \\
%
|[name=02]| 0 &|[name=A']| A &|[name=B']| B &|[name=C']| C \\
%
&|[name=ca]| \coker f &|[name=cb]| \coker g &|[name=cc]| \coker h \\
};
\draw[->] (ka) edge (A)
(kb) edge (B)
(kc) edge (C)
(A) edge (B)
(B) edge node[auto] {$p$} (C)
(C) edge (01)
(A) edge node[auto] {$f$} (A')
(B) edge node[auto] {$g$} (B')
(C) edge node[auto] {$h$} (C')
(02) edge (A')
(A') edge node[auto] {$i$} (B')
(B') edge (C')
(A') edge (ca)
(B') edge (cb)
(C') edge (cc)
;
\draw[->,gray] (ka) edge (kb)
(kb) edge (kc)
(ca) edge (cb)
(cb) edge (cc)
;
\draw[->,gray,rounded corners] (kc) -| node[auto,text=black,pos=.7]
{$\partial$} ($(01.east)+(.5,0)$) |- ($(B)!.35!(B')$) -|
($(02.west)+(-.5,0)$) |- (ca);
\end{tikzpicture}
\end{document}
If you give a name to the \matrix like you do in your example, you don't need to assign a name to every single node. You can access them with (s-1-1), (s-1-2) and so on. – Thorsten Donig Oct 8 '10 at 9:00
@Thorsten: Perhaps I wasn't clear when I said: "I labelled the matrix entries explicitly, which isn't strictly necessary, again because I felt it made the code easier to read.". The "isn't strictly necessary" refers to the fact that if I didn't label them, then TikZ would label them automatically. However, I found it easier to typeset the arrows if I gave the nodes proper names. – Loop Space Oct 8 '10 at 10:37
Pretty amazing stuff you can do with TikZ! And very clean code too! – Juan A. Navarro Oct 8 '10 at 11:20
Could someone tell me why the 0 → A' arrow is slightly slanted? Is this just the pdf-viewer’s fault? – Caramdir Oct 8 '10 at 17:33
The arrow 0 to A' probably is slightly slanted. If you replace the 0 with . then it is definitely slanted. To avoid this, add text height=1.5ex, text depth=0.25ex] – Mephisto Oct 9 '10 at 12:30
Andrew has already produced a really beautiful snake (I think I never saw the diagram typeset nicer), so let me add a long exact sequence. For this I usually prefer curved arrow between the lines. These are easily produced using [out=...,in=...] giving the angles at which the lines starts and ends.
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix,arrows}
\begin{document}
\begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}]
\matrix (m) [
matrix of math nodes,
row sep=1em,
column sep=2.5em,
text height=1.5ex, text depth=0.25ex
]
{ 0 & H^0(\mathcal A) & H^0(\mathcal B) & H^0(\mathcal C) \\
& H^1(\mathcal A) & H^1(\mathcal B) & H^1(\mathcal C) \\
& H^2(\mathcal A) & H^2(\mathcal B) & H^2(\mathcal C) \\
& \mbox{} & & \mbox{} \\
& H^n(\mathcal A) & H^n(\mathcal B) & H^n(\mathcal C) \\
};
\path[overlay,->, font=\scriptsize,>=latex]
(m-1-1) edge (m-1-2)
(m-1-2) edge (m-1-3)
(m-1-3) edge (m-1-4)
(m-1-4) edge[out=355,in=175,red] node[descr,yshift=0.3ex] {$\alpha^0$} (m-2-2)
(m-2-2) edge (m-2-3)
(m-2-3) edge (m-2-4)
(m-2-4) edge[out=355,in=175,red] node[descr,yshift=0.3ex] {$\alpha^1$} (m-3-2)
(m-3-2) edge (m-3-3)
(m-3-3) edge (m-3-4)
(m-3-4) edge[out=355,in=175,dashed,red] (m-5-2)
(m-5-2) edge (m-5-3)
(m-5-3) edge (m-5-4);
\end{tikzpicture}
\end{document}
The code is pretty much the same as Andrew's except that I don't name the matrix entries (as they are used in a linear, non-confusing fashion). The descr style defined at the top allows placing nodes in the middle of a line by interrupting it (technically, by placing a white box on top of it). The text height=1.5ex, text depth=0.25ex option of the matrix is not necessary for this example, but helps with aligning the arrows and text when the nodes get more complicated. The overlay option makes it so that the path is not considered for the calculation of the bounding box, as TikZ often gets that wrong.
Like it! I agree with having a curved arrow, but I think I'd want the cross part a little more horizontal. Also, I'd lift the alpha's up a bit so that the line appeared to go through the centre of the alpha. – Loop Space Oct 9 '10 at 19:33
@Andrew You are right about the alphas. I moved them up a little bit. – Caramdir Oct 28 '10 at 15:58
Here is a solution with xy-pic. For the snake lemma:
\documentclass{minimal}
\usepackage[all,cmtip]{xy}
\usepackage{amsmath}
\DeclareMathOperator{\coker}{coker}
\newcommand*\pp{{\rlap{$'$}}}
\begin{document}
%
$\xymatrix@!{
&& {\ker(a)} \ar[r] & {\ker(b)} \ar[r] & {\ker(c)}
   \ar `r[d] `[l] `^d[lll]|!{[];[d]}\hole|!{[l];[dl]}\hole|!{[ll];[dll]}\hole
   `[dddll]|!{[ddllll];[ddll]}\hole [dddll] & \\
&& A \ar[r]^{f} & B \ar[r]^{g} & C \ar[r] & 0 \\
0 \ar[rr] && A\pp \ar[r]^{f'} & B\pp \ar[r]^{g'} & C\pp & \\
&& {\coker(a)} \ar[r] & {\coker(b)} \ar[r] & {\coker(c)} & \\
% vertical arrows
\ar"1,3";"2,3" \ar"1,4";"2,4" \ar"1,5";"2,5"
\ar"2,3";"3,3"^a \ar"2,4";"3,4"^b \ar"2,5";"3,5"^c
\ar"3,3";"4,3" \ar"3,4";"4,4" \ar"3,5";"4,5"
}$
\end{document}
And for the long exact sequence:
\documentclass{minimal}
\usepackage[all,cmtip]{xy}
% definitions for the calligraphic letters used below (missing from the post as extracted)
\newcommand*\mA{\mathcal{A}}
\newcommand*\mB{\mathcal{B}}
\newcommand*\mC{\mathcal{C}}
\begin{document}
$\xymatrix{
0 \ar[r] & H^0(\mA) \ar[r] & H^0(\mB) \ar[r] & H^0(\mC)
\ar@{->} `r/8pt[d] `/10pt[l] `^dl[ll]|{\alpha^0} `^r/3pt[dll] [dll] \\
& H^1(\mA) \ar[r] & H^1(\mB) \ar[r] & H^1(\mC)
\ar@{->} `r/8pt[d] `/10pt[l] `^dl[ll]|{\alpha^1} `^r/3pt[dll] [dll] \\
& H^2(\mA) \ar[r] & H^2(\mB) \ar[r] & H^2(\mC)
\ar@{-->} `r/8pt[d] `/10pt[l] `^dl[ll] `^d[dlll] `^r[ddlll] [ddll]\\
&&&\\
& H^n(\mA) \ar[r] & H^n(\mB) \ar[r] & H^n(\mC)
}$
\end{document}
Remark: there is a bug with bending arrows: they don't work well with colors and dotted line styles.
It is even possible to improve some details (the position of the alphas, the last arrow in the exact sequence) with low-level xy code.
Some explanation:
• \xymatrix@! forces all spacing to be equal
• \newcommand*\pp{{\rlap{$'$}}} hides the prime's width (not really necessary)
• \ar"1,3";"2,3" positions an arrow by explicit cell coordinates
• |!{[];[d]}\hole makes a hole in the present arrow at the position where it intersects the (straight) line from [] to [d]
• Finally, \ar`d[t1]`[t2]`[t3] begins in the d direction, goes to the t1 entry, makes a quarter turn, goes to the t2 entry, makes another quarter turn and ends at t3. A suffix like /4pt changes the turn radius. It is even possible to choose the direction of each turn with the ^d[t] or _ul[t] syntax. For more details, see the xy-pic user's guide.
Nice to see the xy version of this! – Loop Space Apr 25 '11 at 16:47
For the record, here is how I would do it with tikz-cd. One possibility is to use in=<angle> and out=<angle> options, as in Caramdir's answer. Let me remark that there is also a looseness parameter that can be used to stretch the lines a little bit.
\documentclass{article}
\usepackage{tikz-cd}
\begin{document}
\begin{tikzcd}
A^0 \rar & B^0 \rar & C^0 \ar[out=-30, in=150]{dll} \\
A^1 \rar & B^1 \rar & C^1 \ar[out=0, in=180]{dll} \\
A^2 \rar & B^2 \rar & C^2 \ar[out=0, in=180, looseness=2]{dll}\\
A^3 \rar & B^3 \rar & C^3 \ar[out=0, in=180, looseness=3]{dll}\\
A^4 \rar & \cdots &
\end{tikzcd}
\end{document}
Now, more on the lines of Andrew's answer:
\documentclass{article}
\usepackage{tikz-cd}
\begin{document}
\begin{tikzcd}
A \rar & B \rar
\ar[draw=none]{d}[name=X, anchor=center]{}
& C \ar[rounded corners,
to path={ -- ([xshift=2ex]\tikztostart.east)
|- (X.center) \tikztonodes
-| ([xshift=-2ex]\tikztotarget.west)
-- (\tikztotarget)}]{dll}[at end]{\delta} \\
D \rar & E \rar & F
\end{tikzcd}
\end{document}
The "arrow" with draw=none (which is of course not drawn) has an empty label; the node thereby generated lies exactly halfway between "B" and "E", and we call it X for later reference.
In the arrow starting at "C", we use the option to path to modify the path that is actually drawn. In the argument to to path, \tikztostart and \tikztotarget will expand to the start and end points of the arrow ("C" and "D" respectively). We use these and X to determine a few points we want the arrow to pass through. Also, \tikztonodes will expand to the list of nodes attached to the arrow.
|
2015-11-29 03:55:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9400650262832642, "perplexity": 1853.821615559029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398455246.70/warc/CC-MAIN-20151124205415-00240-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://componentlibrary.moodle.com/admin/tool/componentlibrary/docspage.php/bootstrap/layout/utilities-for-layout/
|
Utilities for layout
For faster mobile-friendly and responsive development, Bootstrap includes dozens of utility classes for showing, hiding, aligning, and spacing content.
Changing display
Use our display utilities for responsively toggling common values of the display property. Mix it with our grid system, content, or components to show or hide them across specific viewports.
Flexbox options
Bootstrap 4 is built with flexbox, but not every element’s display has been changed to display: flex as this would add many unnecessary overrides and unexpectedly change key browser behaviors. Most of our components are built with flexbox enabled.
Should you need to add display: flex to an element, do so with .d-flex or one of the responsive variants (e.g., .d-sm-flex). You’ll need this class or display value to allow the use of our extra flexbox utilities for sizing, alignment, spacing, and more.
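For instance (a minimal sketch using stock Bootstrap 4 class names, not markup taken from these docs), a container can stay block-level on phones and switch to flexbox from the sm breakpoint up:

<!-- block below sm; display: flex from sm up -->
<div class="d-block d-sm-flex">
  <div class="p-2">First item</div>
  <div class="p-2">Second item</div>
</div>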
Use the margin and padding spacing utilities to control how elements and components are spaced and sized. Bootstrap 4 includes a five-level scale for spacing utilities, based on a default $spacer variable with a value of 1rem. Choose values for all viewports (e.g., .mr-3 for margin-right: 1rem), or pick responsive variants to target specific viewports (e.g., .mr-md-3 for margin-right: 1rem starting at the md breakpoint).
Toggle visibility
When toggling display isn’t needed, you can toggle the visibility of an element with our visibility utilities. Invisible elements will still affect the layout of the page, but are visually hidden from visitors.
|
2023-03-21 01:06:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.238790825009346, "perplexity": 9931.637029770827}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00079.warc.gz"}
|
https://www.tutorialspoint.com/articles/category/ncert/3
|
## A copper wire, $3 \mathrm{~mm}$ in diameter, is wound about a cylinder whose length is $12 \mathrm{~cm}$, and diameter $10 \mathrm{~cm}$, so as to cover the curved surface of the cylinder. Find the length and mass of the wire, assuming the density of copper to be $8.88 \mathrm{~g} \mathrm{per} \mathrm{cm}^{3}$.
Given: A copper wire, $3\ \mathrm{mm}$ in diameter, is wound about a cylinder whose length is $12\ \mathrm{cm}$ and diameter $10\ \mathrm{cm}$, so as to cover the curved surface of the cylinder. The density of copper is $8.88\ \mathrm{g\ per\ cm^{3}}$. To do: We have to find the length and mass of the wire. Solution: Diameter of the cylinder $(d) = 10\ \mathrm{cm}$. This implies, radius of the cylinder $(r) = \frac{10}{2}\ \mathrm{cm} = 5\ \mathrm{cm}$. Length of the wire in one complete round $= 2\pi r = 2 \times 3.14 \times 5\ \mathrm{cm} = 31.4\ \mathrm{cm}$. The diameter of the wire $= 3\ \mathrm{mm} = \frac{3}{10}\ \mathrm{cm}$. This implies, radius ...
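The listing cuts the solution off mid-step; assuming the same values and $\pi \approx 3.14$, the remaining arithmetic would run roughly as follows (a sketch, not part of the original page). Number of rounds of wire $= \frac{\text{length of cylinder}}{\text{diameter of wire}} = \frac{12}{0.3} = 40$. Length of the wire $= 40 \times 31.4\ \mathrm{cm} = 1256\ \mathrm{cm}$. Radius of the wire $= 0.15\ \mathrm{cm}$, so its volume $= \pi r^{2} l = 3.14 \times (0.15)^{2} \times 1256 \approx 88.74\ \mathrm{cm}^{3}$, and its mass $= 88.74 \times 8.88 \approx 787.98\ \mathrm{g}$.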
## Formulate the following problems as a pair of equations, and hence find their solutions:(i) Ritu can row downstream 20 km in 2 hours, and upstream 4 km in 2 hours. Find her speed of rowing in still water and the speed of the current.(ii) 2 women and 5 men can together finish an embroidery work in 4 days, while 3 women and 6 men can finish it in 3 days. Find the time taken by 1 woman alone to finish the work, and also that taken by 1 man alone.(iii) Roohi travels 300 km to her home partly by train and partly by bus. She takes 4 hours if she travels 60 km by train and the remaining by bus. If she travels 100 km by train and the remaining by bus, she takes 10 minutes longer. Find the speed of the train and the bus separately
## Solve the following equations and check your results.$2 y+\frac{5}{3}=\frac{26}{3}-y$.
Given: $2y+\frac{5}{3}=\frac{26}{3}-y$. To do: We have to solve the given equation and check the result. Solution: $2y+\frac{5}{3}=\frac{26}{3}-y$ gives $2y+y=\frac{26}{3}-\frac{5}{3}$, so $3y=\frac{26-5}{3}=\frac{21}{3}=7$ and $y=\frac{7}{3}$. Substituting the value of $y$ in the LHS: $2y+\frac{5}{3}=2\left(\frac{7}{3}\right)+\frac{5}{3}=\frac{14}{3}+\frac{5}{3}=\frac{19}{3}$. Substituting the value of $y$ in the RHS: $\frac{26}{3}-y=\frac{26}{3}-\frac{7}{3}=\frac{19}{3}$. LHS $=$ RHS. The value of $y$ is $\frac{7}{3}$.
## Solve the following equations and check your results.$\frac{2 x}{3}+1=\frac{7 x}{15}+3$.
Given: $\frac{2x}{3}+1=\frac{7x}{15}+3$. To do: We have to solve the given equation and check the result. Solution: $\frac{2x}{3}-\frac{7x}{15}=3-1$, so $\frac{10x-7x}{15}=2$ (the LCM of 3 and 15 is 15), which gives $\frac{3x}{15}=2$, $3x=30$ and $x=10$. Substituting the value of $x$ in the LHS: $\frac{2x}{3}+1=\frac{2(10)}{3}+1=\frac{20}{3}+1=\frac{20+3}{3}=\frac{23}{3}$. Substituting the value of $x$ in the RHS: $\frac{7x}{15}+3=\frac{7(10)}{15}+3=\frac{70}{15}+3=\frac{70+45}{15}=\frac{115}{15}=\frac{23}{3}$. LHS $=$ RHS. The value of $x$ is $10$.
## Solve the following equations and check your results.$x=\frac{4}{5}(x+10)$.
|
2023-03-21 20:56:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.778317928314209, "perplexity": 1269.7364939418549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00669.warc.gz"}
|
https://cms.math.ca/10.4153/CMB-1997-034-7
|
location: Publications → journals → CMB
Abstract view
# Fonctions elliptiques et équations différentielles ordinaires
Published:1997-09-01
Printed: Sep 1997
• Raouf Chouikha
Format: HTML LaTeX MathJax PDF PostScript
## Abstract
In this paper, we detail some results of a previous note concerning a trigonometric expansion of the Weierstrass elliptic function $\{\wp(z);\, 2\omega, 2\omega'\}$. In particular, this implies its classical Fourier expansion. We use a direct integration method for the ODE
$$(E)\quad\left\{\begin{aligned} \frac{d^{2}u}{dt^{2}} &= P(u,\lambda)\\ u(0) &= \sigma\\ \frac{du}{dt}(0) &= \tau \end{aligned}\right.$$
where $P(u)$ is a polynomial of degree $n = 2$ or $3$. In this case, the bifurcations of $(E)$ depend on one parameter only. Moreover, this global method seems not to apply to the cases $n > 3$.
MSC Classifications: 33E05 - Elliptic functions and integrals; 34A05 - Explicit solutions and reductions; 33E20 - Other functions defined by series and integrals; 33E30 - Other functions coming from differential, difference and integral equations; 34A20 - unknown classification; 34C23 - Bifurcation [See also 37Gxx]
|
2018-08-21 06:19:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5042025446891785, "perplexity": 1962.2336086790065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217970.87/warc/CC-MAIN-20180821053629-20180821073629-00606.warc.gz"}
|
https://courses.lumenlearning.com/chemistryformajors/chapter/uses-of-radioisotopes-2/
|
## Uses of Radioisotopes
### Learning Outcomes
• List common applications of radioactive isotopes
Radioactive isotopes have the same chemical properties as stable isotopes of the same element, but they emit radiation, which can be detected. If we replace one (or more) atom(s) with radioisotope(s) in a compound, we can track them by monitoring their radioactive emissions. This type of compound is called a radioactive tracer (or radioactive label). Radioisotopes are used to follow the paths of biochemical reactions or to determine how a substance is distributed within an organism. Radioactive tracers are also used in many medical applications, including both diagnosis and treatment. They are used to measure engine wear, analyze the geological formation around oil wells, and much more.
Radioisotopes have revolutionized medical practice (see Half-Lives for Several Radioactive Isotopes), where they are used extensively. Over 10 million nuclear medicine procedures and more than 100 million nuclear medicine tests are performed annually in the United States. Four typical examples of radioactive tracers used in medicine are technetium-99 $\left({}_{43}^{99}\text{Tc}\right)$, thallium-201 $\left({}_{81}^{201}\text{Tl}\right)$, iodine-131 $\left({}_{53}^{131}\text{I}\right)$, and sodium-24 $\left({}_{11}^{24}\text{Na}\right)$. Damaged tissues in the heart, liver, and lungs absorb certain compounds of technetium-99 preferentially. After it is injected, the location of the technetium compound, and hence the damaged tissue, can be determined by detecting the γ rays emitted by the Tc-99 isotope. Thallium-201 becomes concentrated in healthy heart tissue, so the two isotopes, Tc-99 and Tl-201, are used together to study heart tissue. Iodine-131 concentrates in the thyroid gland, the liver, and some parts of the brain. It can therefore be used to monitor goiter and treat thyroid conditions, such as Grave’s disease, as well as liver and brain tumors. Salt solutions containing compounds of sodium-24 are injected into the bloodstream to help locate obstructions to the flow of blood.
Figure 1. Administering thallium-201 to a patient and subsequently performing a stress test offer medical professionals an opportunity to visually analyze heart function and blood flow. (credit: modification of work by “Blue0ctane”/Wikimedia Commons)
Radioisotopes used in medicine typically have short half-lives—for example, the ubiquitous Tc-99m has a half-life of 6.01 hours. This makes Tc-99m essentially impossible to store and prohibitively expensive to transport, so it is made on-site instead. Hospitals and other medical facilities use Mo-99 (which is primarily extracted from U-235 fission products) to generate Tc-99. Mo-99 undergoes β decay with a half-life of 66 hours, and the Tc-99 is then chemically extracted. The parent nuclide Mo-99 is part of a molybdate ion, ${\text{MoO}}_{4}^{2-}$; when it decays, it forms the pertechnetate ion, ${\text{TcO}}_{4}^{\text{-}}$. These two water-soluble ions are separated by column chromatography, with the higher charge molybdate ion adsorbing onto the alumina in the column, and the lower charge pertechnetate ion passing through the column in the solution. A few micrograms of Mo-99 can produce enough Tc-99 to perform as many as 10,000 tests.
Figure 2. (a) The first Tc-99m generator (circa 1958) is used to separate Tc-99 from Mo-99. The MoO42− is retained by the matrix in the column, whereas the TcO4 passes through and is collected. (b) Tc-99 was used in this scan of the neck of a patient with Grave’s disease. The scan shows the location of high concentrations of Tc-99. (credit a: modification of work by the Department of Energy; credit b: modification of work by “MBq”/Wikimedia Commons)
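For completeness (this equation is not written out in the text above; it is the standard decay step for the generator), Mo-99 reaches Tc-99m by β decay:

${}_{42}^{99}\text{Mo}\longrightarrow {}_{43}^{99\text{m}}\text{Tc}+{}_{-1}^{0}\beta$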
Radioisotopes can also be used, typically in higher doses than as a tracer, as treatment. Radiation therapy is the use of high-energy radiation to damage the DNA of cancer cells, which kills them or keeps them from dividing. A cancer patient may receive external beam radiation therapy delivered by a machine outside the body, or internal radiation therapy (brachytherapy) from a radioactive substance that has been introduced into the body. Note that chemotherapy is similar to internal radiation therapy in that the cancer treatment is injected into the body, but differs in that chemotherapy uses chemical rather than radioactive substances to kill the cancer cells.
Figure 3. The cartoon in (a) shows a cobalt-60 machine used in the treatment of cancer. The diagram in (b) shows how the gantry of the Co-60 machine swings through an arc, focusing radiation on the targeted region (tumor) and minimizing the amount of radiation that passes through nearby regions.
Cobalt-60 is a synthetic radioisotope produced by the neutron activation of Co-59, which then undergoes β decay to form Ni-60, along with the emission of γ radiation. The overall process is:
${}_{27}^{59}\text{Co}+{}_{0}^{1}\text{n}\longrightarrow {}_{27}^{60}\text{Co}\longrightarrow {}_{28}^{60}\text{Ni}+{}_{-1}^{0}\beta+2{}_{0}^{0}\gamma$
The overall decay scheme for this is shown graphically in Figure 4.
Figure 4. Co-60 undergoes a series of radioactive decays. The γ emissions are used for radiation therapy.
Radioisotopes are used in diverse ways to study the mechanisms of chemical reactions in plants and animals. These include labeling fertilizers in studies of nutrient uptake by plants and crop growth, investigations of digestive and milk-producing processes in cows, and studies on the growth and metabolism of animals and plants.
For example, the radioisotope C-14 was used to elucidate the details of how photosynthesis occurs. The overall reaction is:
${\text{6CO}}_{2}\left(g\right)+{\text{6H}}_{2}\text{O}\left(l\right)\longrightarrow {\text{C}}_{6}{\text{H}}_{12}{\text{O}}_{6}\left(s\right)+{\text{6O}}_{2}\left(g\right)$,
but the process is much more complex, proceeding through a series of steps in which various organic compounds are produced. In studies of the pathway of this reaction, plants were exposed to CO2 containing a high concentration of ${}_{6}^{14}\text{C}$. At regular intervals, the plants were analyzed to determine which organic compounds contained carbon-14 and how much of each compound was present. From the time sequence in which the compounds appeared and the amount of each present at given time intervals, scientists learned more about the pathway of the reaction.
Commercial applications of radioactive materials are equally diverse. They include determining the thickness of films and thin metal sheets by exploiting the penetration power of various types of radiation. Flaws in metals used for structural purposes can be detected using high-energy gamma rays from cobalt-60 in a fashion similar to the way X-rays are used to examine the human body. In one form of pest control, flies are controlled by sterilizing male flies with γ radiation so that females breeding with them do not produce offspring. Many foods are preserved by radiation that kills microorganisms that cause the foods to spoil.
Figure 5. Common commercial uses of radiation include (a) X-ray examination of luggage at an airport and (b) preservation of food. (credit a: modification of work by the Department of the Navy; credit b: modification of work by the US Department of Agriculture)
Americium-241, an α emitter with a half-life of 458 years, is used in tiny amounts in ionization-type smoke detectors. The α emissions from Am-241 ionize the air between two electrode plates in the ionizing chamber. A battery supplies a potential that causes movement of the ions, thus creating a small electric current. When smoke enters the chamber, the movement of the ions is impeded, reducing the conductivity of the air. This causes a marked drop in the current, triggering an alarm.
Figure 6. Inside a smoke detector, Am-241 emits α particles that ionize the air, creating a small electric current. During a fire, smoke particles impede the flow of ions, reducing the current and triggering an alarm. (credit a: modification of work by “Muffet”/Wikimedia Commons)
### Try It
1. How can a radioactive nuclide be used to show that the equilibrium $\text{AgCl}(s)\rightleftharpoons \text{Ag}^{+}(aq)+\text{Cl}^{-}(aq)$ is a dynamic equilibrium?
2. Technetium-99m has a half-life of 6.01 hours. If a patient injected with technetium-99m is safe to leave the hospital once 75% of the dose has decayed, when is the patient allowed to leave?
3. Iodine that enters the body is stored in the thyroid gland, from which it is released to control growth and metabolism. The thyroid can be imaged if iodine-131 is injected into the body. In larger doses, I-131 is also used as a means of treating cancer of the thyroid. I-131 has a half-life of 8.70 days and decays by β emission.
1. Write an equation for the decay.
2. How long will it take for 95.0% of a dose of I-131 to decay?
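A worked sketch of the decay arithmetic for these exercises (standard first-order kinetics; the numbers are computed from the half-lives quoted above, so treat this as a check rather than an official answer key):

$$\frac{N}{N_0}=\left(\frac{1}{2}\right)^{t/t_{1/2}} \quad\Rightarrow\quad t=t_{1/2}\,\frac{\ln(N/N_0)}{\ln(1/2)}$$

For Tc-99m, "75% decayed" means $N/N_0 = 0.25 = (1/2)^2$, so $t = 2\times 6.01\ \text{h}\approx 12.0\ \text{h}$. For I-131, $N/N_0 = 0.0500$ gives $t = 8.70\ \text{d}\times\ln(0.0500)/\ln(0.5)\approx 37.6\ \text{d}$, and the decay in part 1 is ${}_{53}^{131}\text{I}\longrightarrow {}_{54}^{131}\text{Xe}+{}_{-1}^{0}\beta$.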
## Glossary
chemotherapy: similar to internal radiation therapy, but chemical rather than radioactive substances are introduced into the body to kill cancer cells
external beam radiation therapy: radiation delivered by a machine outside the body
internal radiation therapy: (also, brachytherapy) radiation from a radioactive substance introduced into the body to kill cancer cells
radiation therapy: use of high-energy radiation to damage the DNA of cancer cells, which kills them or keeps them from dividing
radioactive tracer: (also, radioactive label) radioisotope used to track or follow a substance by monitoring its radioactive emissions
|
2021-01-18 16:43:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5188912153244019, "perplexity": 2555.997954758221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703515075.32/warc/CC-MAIN-20210118154332-20210118184332-00609.warc.gz"}
|
https://math.stackexchange.com/questions/495065/extending-integrals-from-continuous-functions-to-bounded-borel-functions
|
# Extending integrals from continuous functions to bounded Borel functions
Let $X$ be a compact, Hausdorff space. Let $B(X)$ be the Banach space of bounded, Borel-measurable, complex-valued functions on $X$ under the uniform norm. Let $C(X) \subset B(X)$ be the closed subspace of continuous, complex-valued functions.
By the Riesz representation theorem, we have an isomorphism $M(X) \cong C(X)^*$ where $M(X)$ denotes the (finite) regular, complex Borel measures on $X$ under the total variation norm. The isomorphism sends $\mu \in M(X)$ to integration against $\mu$. By the Hahn-Banach theorem, the element of $C(X)^*$ corresponding to $\mu \in M(X)$ extends to an element of $B(X)^*$ with the same norm. In fact, since the functions in $B(X)$ can also be integrated against $\mu$, we have a rather canonical choice of extension. It is not difficult to see that $f \mapsto \int_X f \ d\mu$ is in $B(X)^*$ and has the same norm as its restriction to $C(X)$.
Question: Is it true that every functional in $C(X)^*$ has a unique norm-preserving extension to $B(X)^*$ (given by integration against the corresponding measure)? Or, if not, what sort of functional analysis-type statements can be made in order to single out this extension which is obviously the preferred one from a measure theory standpoint?
From the Hildebrandt–Kantorovich theorem we know that $B(X)^*$ is isometrically isomorphic to the Banach space of finitely additive (not necessarily regular) complex-valued measures of bounded variation. This space is denoted by $ba(X)$. Recall also that $C(X)^*$ is $M(X)$. Now the mapping $$i: M(X)\to B(X)^*, \mu\mapsto \left(f\mapsto\int_X fd \mu\right)$$ is nothing more than the natural embedding of the space of good measures $M(X)$ into the space of weird measures $ba(X)$. It is worth saying that $M(X)$ is nicely situated in $ba(X)$: it is norm-one complemented via the projector $j_C^*:ba(X)\to M(X)$. Here $j_C:C(X)\to B(X)$ is the natural embedding.
• This does not seem to answer the question. The OP asks if what you denote by $i$ is the only way to "promote" objects from $M(X)$ to objects in $ba(X)$, not about its properties. – Alex M. Mar 30 '18 at 19:44
|
2021-03-07 02:29:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9297662973403931, "perplexity": 99.36996204082996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376006.87/warc/CC-MAIN-20210307013626-20210307043626-00183.warc.gz"}
|
https://www.ni.com/documentation/en/ni-daqmx/20.0/daqmx-prop-ref/task-ai-rvdt-sensitivity-903/
|
# AI.RVDT.Sensitivity
Specifies the sensitivity of the RVDT. This value is in the units you specify with the corresponding units property. Refer to the sensor documentation to determine this value.
Data type:
|
2020-09-25 20:24:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.864058792591095, "perplexity": 1810.1071156732064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228707.44/warc/CC-MAIN-20200925182046-20200925212046-00489.warc.gz"}
|
https://tex.stackexchange.com/questions/114039/xml-with-listings-different-colors-for-attributes-and-elements
|
XML with Listings: Different colors for attributes and elements
How can I make every attribute name in color X and every element name in color Y without adding keywords and stuff?
What I did so far:
\lstdefinelanguage{myXML}
{
morestring=[b]",
morecomment=[s]{<?}{?>},
morekeywords={
name,
type,
targetNamespace,
element,
xmlns,
xsd,
s0,
soap,
http
}
}
\lstdefinestyle{xmlStyle}{
language=myXML,
stringstyle=\color{mygreen},
identifierstyle=\color{blue},
keywordstyle=\color{mymauve},
}
What this results in:
As you can see, some of the element names (like xsd:something) are mauve instead of blue because they appear both as element and attribute names. The same happens with binding and element.
It seems as if the problem is unsolvable with the listings-package. As recommended by @Marco Daniel I switched to the minted-package which works like a charm.
The installation was complicated but the instructions here helped me.
How to install syntax highlight package minted on Windows 7?
What all of these instructions were missing: you need to reboot after installing everything (at least that worked for me).
If you use TeXlipse (like me) and want to add the -shell-escape flag
Right-click on Project > Properties > Latex Project Properties > Setup build tools... >
Select 'PdfLatex program' > Edit > Insert '-shell-escape' somewhere before '%input'
After that you can actually use minted.
I put this somewhere in the beginning (preamble) to make a reusable xml-style
\newminted[xml]{xml}{
bgcolor = mygray,
fontfamily = tt,
fontsize = \scriptsize,
gobble = 1,
samepage
}
In my actual content I embedded XML like this
\begin{listing}[!ht]
\begin{xml}
<element attribute="value" />
\end{xml}
\caption[Test]{Just a test caption}
\label{lst:test}
\end{listing}
The listing environment is to reference this piece of code later. If you are using \autoref{} you might find this helpful
\providecommand*{\listingautorefname}{Listing}
Thanks to Marco Daniel for showing me the minted-package :)
|
2019-10-22 15:56:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5679855346679688, "perplexity": 6594.46826509019}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822458.91/warc/CC-MAIN-20191022155241-20191022182741-00454.warc.gz"}
|
https://www.r-bloggers.com/2011/09/variogram-fit-with-rpanel/
|
[This article was first published on R Video tutorial for Spatial Statistics, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
During the UseR 2011 conference I saw lots of examples of the use of RPanel to create a GUI in R. Yesterday, because I was a bit bored of the work I was doing I started thinking about this and I decided to try this package.
My objective was to create a new panel with all the main setting for fitting a variogram model to an omnidirectional variogram with gstat.
This is the script:
library(rpanel)
library(gstat)
coordinates(data)=~Lat+Lon
##Variogram Fitting
variogram.plot <- function(panel) {
with(panel, {
variogram<-variogram(Oxigen~1,data,cutoff=Cutoff)
vgm.var<-vgm(psill=Sill,model=Model,range=Range,nugget=Nugget)
#fit<-fit.variogram(variogram,vgm.var)
plot(variogram$dist,variogram$gamma,xlab="Distance",ylab="Semivariance")
lines(variogramLine(vgm.var,maxdist=Range))
})
panel
}
var.panel <- rp.control("Variogram",Sill=20,Range=250,Nugget=0,Model="Mat",Cutoff=250)
rp.listbox(var.panel,Model,c("Mat","Sph","Gau"))
rp.slider(var.panel, Cutoff, 0,500,showvalue=T)
rp.slider(var.panel, Sill, 0,500,showvalue=T)
rp.slider(var.panel, Range, 0,1000,showvalue=T)
rp.slider(var.panel, Nugget, 0,15,showvalue=T)
rp.button(var.panel, title = "Fit", action = variogram.plot)
At this address you can find a zip file with a sample dataset that you can use to try this script; however, if you know a bit of gstat you can start customizing it straight away:
This is the screenshot from my R Console:
I also tried to embed the variogram plot into the panel in order to execute the script in batch mode, this is the result:
if(print(require(tcltk))==FALSE){install.packages("tcltk",repos="http://cran.r-project.org")}
if(print(require(tcltk))==TRUE){require(tcltk)}
if(print(require(rpanel))==FALSE){install.packages("rpanel",repos="http://cran.r-project.org")}
if(print(require(rpanel))==TRUE){require(rpanel)}
if(print(require(gstat))==FALSE){install.packages("gstat",repos="http://cran.r-project.org")}
if(print(require(gstat))==TRUE){require(gstat)}
coordinates(data)=~Lat+Lon
grid <- spsample(data, cellsize = 10, type = "regular")
gridded(grid) <- TRUE
##Variogram Fitting
variogram.plot <- function(panel) {
with(panel, {
variogram<-variogram(Oxigen~1,data,cutoff=Cutoff)
vgm.var<-vgm(psill=Sill,model=Model,range=Range,nugget=Nugget)
#fit<-fit.variogram(variogram,vgm.var)
plot(variogram$dist,variogram$gamma,xlab="Distance",ylab="Gamma")
lines(variogramLine(vgm.var,maxdist=Range*2))
})
panel
}
replot.smooth <- function(object) {
rp.tkrreplot(var.panel, plot)
object
}
var.panel <- rp.control("Variogram",Sill=20,Range=250,Nugget=0,Model="Mat",Cutoff=250,size=c(800,600))
rp.listbox(var.panel,Model,c("Mat","Sph","Gau"))
rp.slider(var.panel, Cutoff, 0,sqrt(areaSpatialGrid(grid)),showvalue=T)
rp.slider(var.panel, Sill, 0,var(data$Oxigen)*2,showvalue=T)
rp.slider(var.panel, Range, 0,sqrt(areaSpatialGrid(grid)),showvalue=T)
rp.slider(var.panel, Nugget, 0,var(data$Oxigen),showvalue=T)
rp.button(var.panel, title = "Fit",action=replot.smooth)
rp.tkrplot(var.panel, plot,variogram.plot)
rp.block(var.panel)
It probably needs some more work to be perfect, but at the moment I'm not interested in a perfectly automatic script.
If someone has time to spend on it and is able to make it perfectly automatic for every data file, please share the script with the community.
By the way, I also implemented a line from the tcltk package for the selection of the data file interactively.
This is the resulting panel:
|
2021-12-07 23:35:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1847722977399826, "perplexity": 2592.813165926784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363420.81/warc/CC-MAIN-20211207232140-20211208022140-00201.warc.gz"}
|
https://www.tutorialspoint.com/signals_and_systems/signals_sampling_theorem.htm
|
# Signals Sampling Theorem
Statement: A continuous time signal can be represented in its samples and can be recovered back when the sampling frequency fs is greater than or equal to twice the highest frequency component of the message signal, i.e.
$$f_s \geq 2 f_m.$$
Proof: Consider a continuous time signal x(t). The spectrum of x(t) is band-limited to fm Hz, i.e., the spectrum of x(t) is zero for |ω|>ωm.
Sampling of the input signal x(t) can be obtained by multiplying x(t) with an impulse train δ(t) of period Ts. The output of the multiplier is a discrete signal called the sampled signal, which is represented by y(t) in the following diagrams:
Here, you can observe that the sampled signal takes the period of the impulse train. The process of sampling can be explained by the following mathematical expression:
$\text{Sampled signal}\, y(t) = x(t) . \delta(t) \,\,...\,...(1)$
The trigonometric Fourier series representation of $\delta$(t) is given by
$\delta(t)= a_0 + \Sigma_{n=1}^{\infty}(a_n \cos n\omega_s t + b_n \sin n\omega_s t )\,\,...\,...(2)$
Where $a_0 = {1\over T_s} \int_{-T_s \over 2}^{ T_s \over 2} \delta (t)dt = {1\over T_s} \delta(0) = {1\over T_s}$
$a_n = {2 \over T_s} \int_{-T_s \over 2}^{T_s \over 2} \delta (t) \cos n\omega_s t\, dt = { 2 \over T_s} \delta (0) \cos n \omega_s 0 = {2 \over T_s}$
$b_n = {2 \over T_s} \int_{-T_s \over 2}^{T_s \over 2} \delta(t) \sin n\omega_s t\, dt = {2 \over T_s} \delta(0) \sin n\omega_s 0 = 0$
Substitute above values in equation 2.
$\therefore\, \delta(t)= {1 \over T_s} + \Sigma_{n=1}^{\infty} ( { 2 \over T_s} \cos n\omega_s t+0)$
Substitute δ(t) in equation 1.
$\to y(t) = x(t) . \delta(t)$
$= x(t) [{1 \over T_s} + \Sigma_{n=1}^{\infty}({2 \over T_s} \cos n\omega_s t) ]$
$= {1 \over T_s} [x(t) + 2 \Sigma_{n=1}^{\infty} (\cos n\omega_s t) x(t) ]$
$y(t) = {1 \over T_s} [x(t) + 2\cos \omega_s t.x(t) + 2 \cos 2\omega_st.x(t) + 2 \cos 3\omega_s t.x(t) \,...\, ...\,]$
Take Fourier transform on both sides.
$Y(\omega) = {1 \over T_s} [X(\omega)+X(\omega-\omega_s )+X(\omega+\omega_s )+X(\omega-2\omega_s )+X(\omega+2\omega_s )+ \,...]$
$\therefore\,\, Y(\omega) = {1\over T_s} \Sigma_{n=-\infty}^{\infty} X(\omega - n\omega_s )\quad\quad where \,\,n= 0,\pm1,\pm2,...$
To reconstruct x(t), you must recover input signal spectrum X(ω) from sampled signal spectrum Y(ω), which is possible when there is no overlapping between the cycles of Y(ω).
Possibility of sampled frequency spectrum with different conditions is given by the following diagrams:
## Aliasing Effect
The overlapped region in case of under sampling represents aliasing effect, which can be removed by
• by considering fs > 2fm
• by using anti-aliasing filters.
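A minimal numerical sketch of the statement above, assuming a single 5 Hz tone (all values here are illustrative, not from the text): sampling at 50 Hz (> 2fm) preserves the tone, while sampling at 8 Hz (< 2fm) aliases it to |8 − 5| = 3 Hz.
import numpy as np

f_m = 5.0                          # highest frequency of the message signal (Hz)

def spectral_peak(fs, n=4096):
    t = np.arange(n) / fs          # sampling instants at rate fs
    y = np.cos(2 * np.pi * f_m * t)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs[np.argmax(np.abs(np.fft.rfft(y)))]

print(spectral_peak(50.0))         # ~5 Hz: fs >= 2 fm, no aliasing
print(spectral_peak(8.0))          # ~3 Hz: under-sampling aliases the tone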
|
2021-04-11 18:29:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7788722515106201, "perplexity": 1396.3235422436458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064898.14/warc/CC-MAIN-20210411174053-20210411204053-00140.warc.gz"}
|
https://settheory.mathtalks.org/page/3/
|
## Jan Pachl: Topological centres for group actions
Place: Fields Institute (Room 210)
Date: December 1, 2017 (13:30-15:00)
Speaker: Jan Pachl
Title: Topological centres for group actions
Abstract: Based on joint work with Matthias Neufang and Juris Steprans. By a variant of Foreman’s 1994 construction, every tower ultrafilter on $\omega$ is the unique invariant mean for an amenable subgroup of $S_\infty$, the infinite symmetric group. From this we prove that in any model of ZFC with tower ultrafilters there is an element of $\ell_1(S_\infty)^{\ast\ast} \setminus \ell_1(S_\infty)$ whose action on $\ell_1(\omega)^{\ast\ast}$ is w* continuous. On the other hand, in ZFC there are always such elements whose action is not w* continuous.
## Scott Cramer: Algebraic properties of elementary embeddings
Time: Mon, 12/04/2017 – 4:00pm – 5:30pm
Location: RH 440R
Speaker: Scott Cramer (California State University San Bernardino)
Title: Algebraic properties of elementary embeddings
Abstract. We will investigate algebraic structures created by rank-into-rank elementary embeddings. Our starting point will be R. Laver’s theorem that any rank-into-rank embedding generates a free left-distributive algebra on one generator. We will consider extensions of this and related results. Our results will lead to some surprisingly coherent conjectures on the algebraic structure of rank-into-rank embeddings in general.
## Thematic Semester on Descriptive Set Theory and Polish groups, More info
Thematic Semester on Descriptive Set Theory and Polish groups
Bernoulli Center, Lausanne, Switzerland.
January – June, 2018.
During the period January 1st – June 30th, 2018, there will be a thematic semester on Descriptive Set Theory and Polish Groups held at the Bernoulli Center in Lausanne, Switzerland.
The semester is organised around five week long activities, including three conferences and two workshops, along with three Bernoulli Lectures held in connection with these events.
Conference: Borel combinatorics and ergodic theory (organised by C. Conley and D. Gaboriau), February 5-9.
Bernoulli Lecture: Stephen Jackson (Univ. North Texas), February 8.
Conference: Structure and dynamics of Polish groups (organised by A. Thom and T. Tsankov), March 19-23.
Workshop: Large scale geometry of Polish groups (organised by J. Moore and C. Rosendal), March 26-29.
Bernoulli Lecture: Mikhail Gromov (IHES) – What is Probability? March 27.
Workshop: Ideals and exceptional sets in Polish spaces (organised by M. Elekes and S. Solecki), June 4-8.
Conference: Descriptive set theory (organised by B. Miller, A. Kechris and S. Todorcevic), June 18-22.
Bernoulli Lecture: Slawomir Solecki (Cornell) – Projective Fraisse limits, approaching topology through logic. June 21.
In addition to these semester activities, the 11th Young Set Theory Workshop will be held at the Bernoulli Center during the week of June 25-29.
Detailed information about the semester and these event is available at the following link
https://bernoulli.epfl.ch/semesters/32/show
The Bernoulli Center has large capacity and everyone is invited to attend the events of the semester. Registration for the individual events can be done through the above link by clicking at the conference/workshop in the right column.
Funding for US based visitors is secured through an NSF grant, while limited funding for other participants is also available. Requests for funding should be made during online registration.
Please note that registration and funding requests for the events in March should be made before mid-December.
## David Chodounsky: A generalization of the Solovay–Tennenbaum theorem
Dear all,
The seminar meets on Wednesday November 29th at 11:00 in the Institute of Mathematics CAS, Zitna 25, seminar room, 3rd floor, front building.
Program:
David Chodounsky will present a generalization of the Solovay–Tennenbaum theorem: assuming a diamond principle and given a suitable class PHI of ccc posets, there is a poset in the class PHI which forces MA(PHI) and c = kappa.
## Maxwell Levine: Forcing Square Sequences
KGRC research seminar on 2017-11-30 at 4pm.
Speaker: Maxwell Levine (KGRC)
Abstract: In the 1970’s, Jensen proved that Gödel’s constructible universe $L$ satisfies a combinatorial principle called $\square_\kappa$ for every uncountable cardinal $\kappa$. Its significance is partially in that it clashes with the reflection properties of large cardinals – for example, if $\mu$ is supercompact and $\kappa \ge \mu$ then $\square_\kappa$ fails – and so it characterizes the minimality of $L$ in an indirect way. Schimmerling devised an intermediate hierarchy of principles $\square_{\kappa,\lambda}$ for $\lambda \le \kappa$ as a means of comparing a given model of set theory to $L$, the idea being that a smaller value of $\lambda$ yields a model that is more similar to $L$ at $\kappa$.
Cummings, Foreman, and Magidor proved that for any $\lambda<\kappa$, $\square_{\kappa,\lambda}$ implies the existence of a PCF-theoretic object called a very good scale for $\kappa$, but that $\square_{\kappa,\kappa}$ (usually denoted $\square_\kappa^\ast$) does not. They asked whether $\square_{\kappa,<\kappa}$ implies the existence of a very good scale for $\kappa$, and we resolve this question in the negative.
We will discuss the technical background of the problem, provide a complete solution, and discuss further avenues of research.
## Russell Miller: Isomorphism and Classification for Countable Structures
KGRC research seminar on 2017-11-23 at 4pm.
Speaker: Russell Miller (Queens College, City University of New York (CUNY), USA)
Abstract: We describe methods of classifying the elements of certain classes of countable structures: algebraic fields, finite-branching trees, and torsion-free abelian groups of rank 1. The classifications are computable homeomorphisms onto known spaces of size continuum, such as Cantor space or Baire space, possibly modulo a standard equivalence relation. The classes involved have arithmetic isomorphism problems, making such classifications possible, and the results help suggest exactly which properties of their elements must be known in order to produce a nice classification.
For algebraic fields, this homeomorphism makes it natural to transfer Lebesgue measure from Cantor space onto the class of these fields, although there is another probability measure on the same class which seems in some ways more natural than Lebesgue measure. We will discuss how certain properties of these fields — notably, relative computable categoricity — interact with these measures: the basic result is that only measure-0-many of these fields fail to be relatively computably categorical. (The work on computable categoricity is joint with Johanna Franklin.)
## Paul Szeptycki: Ladder systems after forcing with a Suslin tree
Place: Fields Institute (Room 210)
Date: November 24, 2017 (13:30-15:00)
Speaker: Paul Szeptycki
Title: Ladder systems after forcing with a Suslin tree
Abstract: Uniformization properties of ladder systems in models obtained by forcing with a Suslin tree S over a model of MA(S) are considered.
## Andres Caicedo: Real-valued measurability and the extent of Lebesgue measure (II)
Thursday, November 30, 2017, from 4 to 5:30pm
East Hall, room 3096
Speaker: Andres Caicedo (Math Reviews)
Title: Real-valued measurability and the extent of Lebesgue measure (II)
Abstract:
On this second talk I begin with Solovay’s characterization of real-valued measurability in terms of generic elementary embeddings, and build on results of Judah to prove that if there is an atomlessly measurable cardinal, then all (boldface) Delta-1-3 sets of reals are Lebesgue measurable. This is optimal in two respects: Just from the existence of measurable cardinals we cannot prove that lightface Delta-1-3 sets are measurable, and there are models with atomlessly measurable cardinals where there is a non-measurable Sigma-1-3 set. I will also discuss some related results.
## Merlin Carl: Complexity theory for ordinal Turing machines
Monday, November 27, 2017, 16.30
Seminar room 0.008, Mathematical Institute, University of Bonn
Speaker: Merlin Carl (Universität Konstanz)
Title: Complexity theory for ordinal Turing machines
Abstract:
Ordinal Turing Machines (OTMs) generalize Turing machines to transfinite working time and space. We consider analogues of theorems from complexity theory for OTMs, among them the Cook-Levin theorem, the P vs. NP problem and Ladner’s theorem. This is joint work with Benedikt Löwe and Benjamin Rin.
## Philipp Schlicht: The Hurewicz dichotomy for definable subsets of generalized Baire spaces
Monday, November 20, 2017, 16.30
Seminar room 0.008, Mathematical Institute, University of Bonn
Speaker: Philipp Schlicht (Universitat Bonn)
Title: The Hurewicz dichotomy for definable subsets of generalized Baire spaces
|
2018-01-16 11:41:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6139740347862244, "perplexity": 2001.8319238447343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886416.17/warc/CC-MAIN-20180116105522-20180116125522-00427.warc.gz"}
|
http://codeforces.com/blog/entry/6625
|
### Sereja's blog
By Sereja, 9 years ago,
We will brute-force the number of fingers Dima will show; if the total sum of fingers equals 1 modulo (n+1), Dima will clean the room. So we increase the answer whenever the remainder after division by (n+1) is not 1.
272B - Dima and Sequence
First of all, f(i) is the number of ones in the binary representation of a number. We replace every number with its f value. Now we have to find the number of pairs of equal values. Let Q[i] be the number of numbers with i bits; the answer is the sum of Q[i]*(Q[i]-1)/2 over all i.
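A minimal sketch of this counting in Python (the input list below is made up, not a test from the problem):
from collections import Counter

a = [1, 2, 4, 3, 5, 6]                     # hypothetical input sequence
q = Counter(bin(x).count("1") for x in a)  # Q[i]: how many numbers have i set bits
pairs = sum(c * (c - 1) // 2 for c in q.values())
print(pairs)  # 1, 2, 4 have one bit and 3, 5, 6 have two bits -> 3 + 3 = 6 pairs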
273A - Dima and Staircase
Let L be the landing height after the last block; the last block was (w1, h1) and the next block is (w2, h2). The next answer will be max(L+h1, A[w2]), where A is the given array. At the beginning we can set L = 0, w1 = 0, h1 = 0.
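A short sketch of this recurrence (the stairs and boxes below are sample-style data, not necessarily the official tests):
A = [1, 2, 3, 6, 6]                 # non-decreasing stair heights
boxes = [(1, 1), (3, 1), (1, 1), (4, 3)]

L, h_prev = 0, 0                    # landing height and height of the previous box
for w, h in boxes:
    L = max(L + h_prev, A[w - 1])   # box lands on the previous box or on stair w
    h_prev = h
    print(L)                        # prints 1, 3, 4, 6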
273B - Dima and Two Sequences
It is not hard to see that the answer is (number of pairs with first coordinate = 1)! * (number of pairs with first coordinate = 2)! * ... * (number of pairs with first coordinate = 10^9)! / 2^(number of indices i = 1..n such that Ai = Bi). The only problem was dividing under a non-prime modulus; it can be done easily if we count the number of prime factors 2 in all the factorials. Then we simply subtract the number of twos we need and multiply the answer by the appropriate power of 2.
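A minimal sketch of that division trick: strip all factors of 2 out of the factorials, track their exponent separately, subtract the k you need to divide by, and only then multiply the power of 2 back in. The modulus and inputs below are hypothetical.
MOD = 10**9 + 7

def fact_odd_part(n, mod):
    # returns (odd part of n! modulo mod, exponent of 2 in n!)
    prod, twos = 1, 0
    for i in range(2, n + 1):
        while i % 2 == 0:
            i //= 2
            twos += 1
        prod = prod * i % mod
    return prod, twos

p1, e1 = fact_odd_part(4, MOD)      # 4! = 3 * 2**3
p2, e2 = fact_odd_part(3, MOD)      # 3! = 3 * 2**1
k = 2                               # divide by 2**2
ans = p1 * p2 * pow(2, e1 + e2 - k, MOD) % MOD
print(ans)                          # (24 * 6) / 4 = 36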
273C - Dima and Horses
It is not hard to see that we have an undirected graph. Let's color all vertices in one color. Then we find some vertex that is incorrect, change the color of this vertex, and repeat the search while it is possible. After every move the number of bad edges decreases by 1 or 2, so this loop ends in no more than M operations. Hence a solution always exists and we perform no more than M recolorings in total, so we keep a queue of bad vertices and simply carry out all the changes.
273D - Dima and Figure
A good picture is a connected figure satisfying the following condition: the leftmost coordinates over the rows of the figure where we have some cells must be almost-ternary, and we have the same situation on the right side, but with the opposite sign. So it is not hard to write dp[i][j1][j2][m1][m2], the number of figures painted on a field of size i*m where the last row contains all cells from j1 to j2, the leftmost coordinate reached so far is m1, and the rightmost coordinate is m2. But this is not enough. We have to redefine m1 so that it records whether there were rows j and j+1 such that the leftmost coordinate of row j is bigger than the leftmost coordinate of row j+1. Now it is not hard to write a solution with complexity O(n*m*m*m*m). We should then optimize the transition to O(1), which can be done by precalculating sums over some rectangles.
273E - Dima and Game
• +12
» 9 years ago, # | ← Rev. 2 → 0 [Deleted]
• » » 9 years ago, # ^ | ← Rev. 6 → +6 [Deleted]Sorry, because of my browser's error.
» 9 years ago, # | 0 In solve D (div2), why do we divide by 2^(number of i = 1..n such that Ai=Bi)? I don't understand, please explain.
• » » 9 years ago, # ^ | +6 Consider the influence of the position of pair(Ai,i) and pair(Bi,i). In one sequence,if we change the position of (Ai,i) and (Bi,i),that will not form a new sequence. But when we solve the problem, what we are calculating is that all different positions for all pairs with same x-coordinate. And that contains the different positions of Ai and Bi. So we divide the answer with 2, because (Ai,i) and (Bi,i) will lead to 2 different answer, but in the problem they should be considered to be 1. For all different Ai==Bi, we divide the answer with 2^k where k is the number of pairs with Ai==Bi.
» 9 years ago, # | -8 "brtueforce"? I think you mean "bruteforce"
• » » 9 years ago, # ^ | 0 I think there are a lot of small mistakes. Anyway (as we like to read the tutorial as soon as possible) it's acceptable.
» 9 years ago, # | +27 It seems there is a mistake in posting tutorial. English version of tutorial is TopCoder SRM announcement and the post itself is for 8 days ago!
• » » 9 years ago, # ^ | -24 Nobody cares
• » » » 9 years ago, # ^ | +6 It was little mistake, sorry.
» 9 years ago, # | +17 I expected more detailed tutorial ...
• » » 9 years ago, # ^ | -22 Your expectation failed. Yours Cap
» 9 years ago, # | +6 the problem E(div 2), I want to know why it will operate at most M times?
• » » 9 years ago, # ^ | ← Rev. 2 → +4 I think DFS can explain it more easily. This solution is like magnetic repulsion. Consider two sets S1 and S2 (at first S2 is empty). We pick one element from S1 that has more than one enemy in S1 and put it in S2, so the number of bad edges decreases; but in S2 this may create some bad edges and bad elements, so we pick those bad elements out, put them into S1, and repeat. If there is an answer, then each time the bad edges decrease and we can reach a stable state. But I don't know how to prove that an answer must exist.
• » » » 9 years ago, # ^ | +3 As you said, each time the number of bad edges decreases. If at some point you cannot find a vertex to fix, then the solution is found. If not, you can always find a vertex to fix and the number of bad edges will decrease. The total number of bad edges is finite, so the process cannot go on forever: each time the number of bad edges decreases by 1, 2, or 3 (this is ensured by each horse having at most 3 enemies, so if a horse has more than 1 enemy, it has 1 or 0 enemies in the other party), and the total number of bad edges cannot go below zero. Therefore the process will end and a solution will emerge.
• » » » » 9 years ago, # ^ | 0 thanks to byijie and Bobgy.
• » » » » » 9 years ago, # ^ | 0 no problem
• » » » » » 9 years ago, # ^ | 0 What is meant by "bad edge"?
• » » » » 9 years ago, # ^ | 0 What is meant by "bad edge"?
• » » » » » 9 years ago, # ^ | +1 an edge that is connecting two vertex in the same set.
» 9 years ago, # | ← Rev. 3 → +7 C(div. 1). "Then we will find some vertex that is incorrect." what does it mean? What do you mean by saying "incorrect"?
• » » 9 years ago, # ^ | 0 Vertex (horse) is incorrect, if it has more than one enemy in other party.
» 9 years ago, # | ← Rev. 2 → 0 How can we solve problem "Dima and Horses" by 2-SAT method? Thanks Actually, I don't know method 2-SAT.
• » » 9 years ago, # ^ | 0 I'm not sure that this problem can be solved by 2-SAT.
• » » » 9 years ago, # ^ | 0 But in Problem Tags of "Dima and Horses" (in Div 2), there is "2-sat". Can anyone help me? thanks
» 9 years ago, # | 0 Isn't Problem D for Division 1 supposed to be solved in O(M*N*N) time? You don't need to keep track of m1 and m2, just whether m1 < j1 or not and whether m2 > j2 or not, that is, whether the figure has reached its leftmost cell and its rightmost cell.
» 9 years ago, # | +3 I still can't figure out how to make the transition in D in O(1).
• » » 9 years ago, # ^ | +3 Did you understand the rules by which the dp is built? I had a similar idea: I treated m1 and m2 as {0,1} flags — whether the next column can be extended down/up, respectively. Is it the same here?
• » » » 9 years ago, # ^ | +3 I understood little from the editorial, but in my O(N^5) "solution" it is exactly like that. And for the transition from a state I need to add the dp value of that state to several rectangles on the next "layer". Somehow this has to be done fast. Apparently the mysterious "precalculations of sums on some rectangles" is exactly what is needed.
• » » » » 9 years ago, # ^ | +8 It is better to make the dp transition backwards rather than forwards; then before the transitions you precompute partial sums for the current layer, so that you can later take the sum of any submatrix.
• » » » » » 9 years ago, # ^ | 0 Right, thanks.
• » » » 9 years ago, # ^ | ← Rev. 3 → 0 Create an array a[M][2][2][N+1][N+1]. a[i][k1][k2][j1][j2] stands for the number of figures whose last row is i, the last row has cells from j1 to j2, k1 is 0 if j1 <= m1 else 1, and k2 is 0 if j2 >= m2 else 1. Then a[i][0][0][j1][j2] = a[i][0][0][j1+1][j2]+a[i][0][0][j1][j2-1]-a[i][0][0][j1+1][j2-1]+a[i-1][0][0][j1][j2]. a[i][0][1][j1][j2] = a[i][0][1][j1+1][j2]+a[i][0][1][j1][j2+1]-a[i][0][1][j1+1][j2+1]+a[i-1][0][1][j1][j2]+a[i-1][0][0][j1][j2+1]. a[i][1][0][j1][j2] is similar to a[i][0][1][j1][j2]. a[i][1][1][j1][j2] = a[i][1][1][j1-1][j2]+a[i][1][1][j1][j2+1]-a[i][1][1][j1-1][j2+1]+a[i-1][1][1][j1][j2]+a[i-1][0][0][j1-1][j2+1]+a[i-1][0][1][j1-1][j2]+a[i-1][1][0][j1][j2+1]. Note that if you use bottom-up dp and do it in the correct order you can actually omit the i and only use a dp array of a[2][2][N+1][N+1]. So time complexity is O(M*N*N) and memory space is O(N*N).
• » » » » 8 years ago, # ^ | 0 what's the meaning of m1 and m2?
• » » » » » 8 years ago, # ^ | 0 the most left and right coordinates of the figure
• » » » » » » 8 years ago, # ^ | 0 thanks,then how should I initialize the array a's state?I think I am not good at dynamic programming.
» 9 years ago, # | ← Rev. 2 → +4 So when will the tutorial be available in Russian? I'm tired of waiting.
• » » 9 years ago, # ^ | +27 There is nothing to wait for — the English tutorial reads like lecture notes of a Burunduk1 editorial at some math-mech championship.
» 9 years ago, # | 0 Can Problem Setter post the solution of problem "Dima and Game"? Thank you so much.
• » » 9 years ago, # ^ | 0 ya, I'm waiting for it too.
• » » 9 years ago, # ^ | 0 I will post it a little bit later, sorry for such a long delay.
» 8 years ago, # | 0 The tutorial is too brief; I want more details.
» 7 years ago, # | 0 In the problem "Dima and Horses", why can this problem always be solved? I mean, why is the answer never -1?
• » » 7 years ago, # ^ | 0 If there is at least one unsatisfied vertex, then it can be moved to another group and the overall number of conflicts is decreased(otherwise we are done). Problem is obviously solved when the number of conflicts is zero. Since we can decrease the number of conflicts whenever the problem is not yet solved, then we simply do it. One day the process will finish because the initial number of conflicts is finite.
» 6 years ago, # | ← Rev. 2 → 0 273E — Dima and Game . please give this problem tutorial . Sereja
» 5 years ago, # | 0 It's very good
» 4 years ago, # | 0 For 272A, why is it an error when my program prints 1 as the output for the first example input? It is embarrassing.
» 3 years ago, # | 0 Please, can anybody help me with the Dima and Staircase problem from round 167 div 2?
» 2 years ago, # | 0 Problem A, which is supposed to have multiple answers is only accepting single/ specific answers.
» 17 months ago, # | 0 For 273A/272C (Div 2 C), the general case when blocks can fall from any position (not just 0) can be solved using lazy propagation of segment tree (for range max update and range max query) in $O(m\cdot \log n)$. See my submission 81852819
|
2021-10-17 10:23:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3795098066329956, "perplexity": 2991.858804408142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585171.16/warc/CC-MAIN-20211017082600-20211017112600-00231.warc.gz"}
|
http://spmphysics.onlinetuition.com.my/2013/08/ohms-law-relationship-between-current.html
|
# Ohm's Law - Relationship Between Current and Potential Difference
### Ohm’s Law
1. The relationship between the current passing through 2 points in a conductor and the potential difference between the 2 points is given by Ohm's law.
2. Ohm’s Law states that the current flowing in a metallic conductor is directly proportional to the potential difference applied across its ends, provided that the physical conditions (such as temperature) are constant: $I \propto V$ or $V=kI$, where k is a constant.
Example:
What is the current flow through an 800Ω toaster when it is operating on 240V?
Answer:
Resistance, R = 800Ω
Potential difference, V = 240V
Current, I = ?
$R = \frac{V}{I} \;\Rightarrow\; I = \frac{V}{R} = \frac{240}{800} = 0.3\,\text{A}$
|
2021-02-26 18:12:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.612594485282898, "perplexity": 461.27003620734354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00580.warc.gz"}
|
https://dimfour.com/en/questions/121678
|
# Find the solutions to the equation -5/2 x + 3/4 x = -3/4
Jaye Mathematics 2 3
Find the solutions to the equation -5/2 x + 3/4 x = -3/4
Posted by Jaye | Posted at May 11, 2002 | Categories: Mathematics
$-\frac{5}{2}x+\frac{3}{4}x=-\frac{3}{4}\\-\frac{5\times2}{2\times2}x + \frac{3}{4}x=-\frac{3}{4}\\\frac{-10}{4}x+\frac{3}{4}x=-\frac{3}{4}\\\frac{-10+3}{4}x=-\frac{3}{4}\\-\frac{7}{4}x=-\frac{3}{4}\\-\frac{7}{4}x \times (-4) =-\frac{3}{4} \times (-4)\\7x = 3\\\boxed{\mathbf{x =\frac{3}{7}}}$
|
2019-10-14 01:19:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6736165881156921, "perplexity": 4629.108616770806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00001.warc.gz"}
|
https://www.transtutors.com/questions/for-small-motion-the-matrix-eom-for-a-double-pendulum-can-be-stated-as-where-f-and-a-2215479.htm
|
# For small motion, the matrix EOM for a double pendulum can be stated as where φ and θ are the...
For small motion, the matrix EOM for a double pendulum can be stated as
where φ and θ are the rotation angles of the top and bottom bars, respectively.
(a) For m1 = m2 = 0.5 kg, l1 = l2 = 1.0 m, calculate the eigenvalues, eigenvectors, and natural frequencies.
(b) Stating the EOM symbolically as , develop (with numbers) the matrix of normalized eigenvectors [A*] and show that they satisfy
] are the diagonal matrix of eigenvalues and the identity matrix, respectively. (c) State the modal form, consisting of the modal differential equations and the coordinate transformation from the modal coordinates to the physical coordinates. (d) For the initial conditions, , state the modal initial conditions.
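Since the problem's mass and stiffness matrices were lost in extraction, here is only a generic sketch of parts (a)/(b): solve the generalized eigenproblem K v = ω² M v and mass-normalize the eigenvectors. The matrices below are placeholders, not the document's EOM.
import numpy as np
from scipy.linalg import eigh

M = np.array([[2.0, 0.5],           # placeholder mass matrix
              [0.5, 1.0]])
K = np.array([[30.0, 0.0],          # placeholder stiffness matrix
              [0.0, 10.0]])

evals, A = eigh(K, M)               # solves K v = w^2 M v
omega = np.sqrt(evals)              # natural frequencies in rad/s

# eigh with a mass matrix returns M-orthonormal eigenvectors,
# i.e. A^T M A = I and A^T K A = diag(w^2), as required in part (b)
print("natural frequencies:", omega)
print("A^T M A =\n", np.round(A.T @ M @ A, 10))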
|
2020-03-31 02:19:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.981312096118927, "perplexity": 920.3425316669648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370499280.44/warc/CC-MAIN-20200331003537-20200331033537-00204.warc.gz"}
|
https://physics.stackexchange.com/questions/202838/spring-mass-system-with-variable-stifness-m-ddotxkxx-0-time-period-is-k
|
# Spring-mass system with variable stiffness $m \ddot{x}+k(x)x=0$, time period is known, stiffness is unknown
This question is somewhat related to my previous question: What is the time period of an oscillator with varying spring constant?
In that question, the time period of a mass-spring system with variable stiffness was asked. The answers were really helpful, and with their help I managed to derive the time period of a certain oscillator system $$m \ddot{x}+b \dot{x}+k(x)x=0 \, .$$ This stiffness $k(x)$ depends on the displacement, and that dependence was assumed to be a second-order polynomial. I also put a small attenuation term $b$ into the spring equation, but that's just to check how the attenuation term affects the system. In reality, this term is quite small and can be dismissed. The equation I got is
$$k(x) = \alpha + \beta x +\gamma x^2$$
from which we find
$$T = 4 \int_{x_0}^{x_\text{max}} \frac{dx}{\sqrt{\left(v_0^2 - \dfrac{2}{m} \left(\dfrac{\alpha x^2}{2} + \dfrac{\beta x^3}{3} + \dfrac{\gamma x^4}{4}\right) - \dfrac{bx^2}{m}\right)}}$$
From this equation I got quite nice results when I calculated it numerically with Wolfram Alpha. However, something came to my mind: if we do not know the stiffness of the spring a priori, but we know the time period of the spring, we get into trouble. The approaches mentioned in the last post seem not to work in this situation. If we have a variable-stiffness mass-spring system $m \ddot{x} + k(x) x = 0$ and we always know its period of oscillation, the initial values $x(0)$ and $\dot{x}(0)$, and even $x(t)$, what is its stiffness? So what is the function $K=f(T)$? I am quite sure that $T=2 \pi \sqrt{\dfrac{m}{K}}$ is no good in this case.
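Not an answer to the inverse problem K = f(T), but a minimal numerical sketch of the forward period integral above, which may help in experimenting; all parameter values are made up and x0 = 0 is assumed.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

m, b = 1.0, 0.0
alpha, beta, gamma = 1.0, 0.1, 0.05
v0, x0 = 1.0, 0.0

def radicand(x):
    # the expression under the square root in the period integral
    return v0**2 - (2.0 / m) * (alpha * x**2 / 2 + beta * x**3 / 3 + gamma * x**4 / 4) - b * x**2 / m

x_max = brentq(radicand, 1e-9, 10.0)   # turning point: radicand vanishes here

# the integrand has an integrable 1/sqrt singularity at x_max;
# adaptive quadrature copes with it (possibly with a warning)
T, _ = quad(lambda x: 1.0 / np.sqrt(radicand(x)), x0, x_max)
print("x_max =", round(x_max, 4), " T =", round(4 * T, 4))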
• What do you mean by "we know its period of oscillation with any $x$"? Most oscillations will go through many different values of $x$. Do you mean "as a function of amplitude"? – Michael Seifert Aug 27 '15 at 13:58
• I suspect that the answer will not be unique. – march Aug 27 '15 at 16:47
• Michael Seifert: Good question. "We know its period of oscillation with any x" is now changed to "we always know its period of oscillation, initial values x(0) and x'(0) and even x(t)". Originally I was trying to say something like "we always know x(t)", but wrote some weird stuff instead. – dr_mushroom Aug 28 '15 at 7:03
|
2019-08-24 00:03:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8257748484611511, "perplexity": 498.4007171979951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319155.91/warc/CC-MAIN-20190823235136-20190824021136-00017.warc.gz"}
|
https://kops.uni-konstanz.de/handle/123456789/31504
|
Electrogenic Binding of Ions at the Cytoplasmic Side of the Na+,K+-ATPase
2015
Authors
Tashkin, Vsevolod Yu.
Gavrilchik, A.N.
Ilovaisky, Alexey I.
Apell, Hans-Jürgen
Sokolov, Valeriy S.
Journal article
Published in
Biochemistry (Moscow) Supplement Series A : Membrane and Cell Biology ; 9 (2015), 2. - pp. 92-99. - ISSN 1990-7478. - eISSN 1990-7494
Abstract
Electrogenic binding of ions from the cytoplasmic side of the Na+,K+-ATPase has been studied by measurements of changes of the membrane capacitance and conductance triggered by a jump of pH or of the sodium-ion concentration in the absence of ATP. The pH jumps were performed in experiments with membrane fragments containing purified Na+,K+-ATPase adsorbed to a bilayer lipid membrane (BLM). Protons were released in a sub-millisecond time range from a photosensitive compound (caged H+) triggered by a UV light flash. The sodium concentration jumps were carried out by a fast solution exchange in experiments with membrane fragments attached to a solid-supported membrane deposited on a gold electrode. The change of the membrane capacitance triggered by the pH jump depended on the sodium-ion concentration. Potassium ions had a similar effect on the capacitance change triggered by a pH jump. The effects of these ions are explained by their competition with protons in the binding sites on the cytoplasmic side of the Na+,K+-ATPase. The approximation of the experimental data by a theoretical model yields the dissociation constants, K, and the cooperativity coefficients, n, of the binding sites for sodium ions (K = 2.7 mM, n = 2) and potassium ions (K = 1.7 mM, n = 2). In the presence of magnesium ions the apparent dissociation constants of sodium increased. A possible reason for the inhibition of sodium-ion binding by magnesium ions can be an electrostatic or conformational effect of magnesium ions bound to a separate site of the Na+,K+-ATPase close to the entrance to the sodium-ion binding sites.
Subject (DDC)
570 Biosciences, Biology
Keywords
sodium pump, electrogenic ion transport, Na+,K+-ATPase, caged H+, solid-supported membranes
Cite This
ISO 690TASHKIN, Vsevolod Yu., A.N. GAVRILCHIK, Alexey I. ILOVAISKY, Hans-Jürgen APELL, Valeriy S. SOKOLOV, 2015. Electrogenic Binding of Ions at the Cytoplasmic Side of the Na+,K+-ATPase. In: Biochemistry (Moscow) Supplement Series A : Membrane and Cell Biology. 9(2), pp. 92-99. ISSN 1990-7478. eISSN 1990-7494. Available under: doi: 10.1134/S1990747815020105
BibTex
@article{Tashkin2015Elect-31504,
year={2015},
doi={10.1134/S1990747815020105},
title={Electrogenic Binding of Ions at the Cytoplasmic Side of the Na<sup>+</sup>,K<sup>+</sup>-ATPase},
number={2},
volume={9},
issn={1990-7478},
journal={Biochemistry (Moscow) Supplement Series A : Membrane and Cell Biology},
pages={92--99},
author={Tashkin, Vsevolod Yu. and Gavrilchik, A.N. and Ilovaisky, Alexey I. and Apell, Hans-Jürgen and Sokolov, Valeriy S.}
}
RDF
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:bibo="http://purl.org/ontology/bibo/"
xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:void="http://rdfs.org/ns/void#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#" >
<bibo:uri rdf:resource="http://kops.uni-konstanz.de/handle/123456789/31504"/>
<dcterms:issued>2015</dcterms:issued>
<dc:contributor>Ilovaisky, Alexey I.</dc:contributor>
<foaf:homepage rdf:resource="http://localhost:8080/"/>
<dc:creator>Gavrilchik, A.N.</dc:creator>
<dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2015-07-27T14:58:05Z</dc:date>
<dc:rights>terms-of-use</dc:rights>
<dcterms:title>Electrogenic Binding of Ions at the Cytoplasmic Side of the Na<sup>+</sup>,K<sup>+</sup>-ATPase</dcterms:title>
<dc:creator>Apell, Hans-Jürgen</dc:creator>
<dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/31504/3/Tashkin_0-292115.pdf"/>
<dc:contributor>Sokolov, Valeriy S.</dc:contributor>
<dc:creator>Ilovaisky, Alexey I.</dc:creator>
<dc:contributor>Tashkin, Vsevolod Yu.</dc:contributor>
<dcterms:abstract xml:lang="eng">Electrogenic binding of ions from the cytoplasmic side of the Na<sup>+</sup>,K<sup>+</sup>-ATPase has been studied by measurements of changes of the membrane capacitance and conductance triggered by a jump of pH or of the sodium-ion concentration in the absence of ATP. The pH jumps were performed in experiments with membrane fragments containing purified Na<sup>+</sup>,K<sup>+</sup>-ATPase adsorbed to a bilayer lipid membrane (BLM). Protons were released in a sub-millisecond time range from a photosensitive compound (caged H<sup>+</sup>) triggered by a UV light flash. The sodium concentration jumps were carried out by a fast solution exchange in experiments with membrane fragments attached to a solid-supported membrane deposited on a gold electrode. The change of the membrane capacitance triggered by the pH jump depended on the sodium-ion concentration. Potassium ions had a similar effect on the capacitance change triggered by a pH jump. The effects of these ions are explained by the their competition with protons in the binding sites on cytoplasmic side of the Na<sup>+</sup>,K<sup>+</sup>-ATPase. The approximation of the experimental data by a theoretical model yields the dissociation constants, K, and the cooperativity coefficients, n, of the binding sites for sodium ions (K = 2.7 mM, n = 2) and potassium ions (K = 1.7 mM, n = 2). In the presence of magnesium ions the apparent dissociation constants of sodium increased. A possible reason of the inhibition of sodium-ion binding by magnesium ions can be an electrostatic or conformational effect of magnesium ions bound to a separate site of the Na<sup>+</sup>,K<sup>+</sup>-ATPase close to the entrance to the sodium-ion binding sites.</dcterms:abstract>
<dc:contributor>Apell, Hans-Jürgen</dc:contributor>
<dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/31504/3/Tashkin_0-292115.pdf"/>
<dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
<dc:creator>Tashkin, Vsevolod Yu.</dc:creator>
<dc:language>eng</dc:language>
<dc:contributor>Gavrilchik, A.N.</dc:contributor>
<dc:creator>Sokolov, Valeriy S.</dc:creator>
<void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
<dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/28"/>
<dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2015-07-27T14:58:05Z</dcterms:available>
<dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/28"/>
</rdf:Description>
</rdf:RDF>
|
2023-03-28 03:20:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3879872262477875, "perplexity": 7161.8211082221305}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00784.warc.gz"}
|
https://www.groundai.com/project/a-view-of-large-magellanic-cloud-hii-regions-n159-n132-and-n166-through-the-345-ghz-window/
|
A view of some LMC HII regions through the 345 GHz window
# A view of Large Magellanic Cloud HII regions N159, N132, and N166 through the 345 GHz window
S. Paron, M. E. Ortega, C. Fariña, M. Cunningham, P. A. Jones, and M. Rubio
Instituto de Astronomía y Física del Espacio (IAFE), CC 67, Suc. 28, 1428 Buenos Aires, Argentina
Isaac Newton Group of Telescopes, E38700, La Palma, Spain
School of Physics, University of New South Wales, Sydney, NSW 2052, Australia
Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago, Chile
sparon@iafe.uba.ar
Accepted XXXX. Received XXXX; in original form XXXX
###### Abstract
We present results obtained towards the Hii regions N159, N166, and N132 from the emission of several molecular lines in the 345 GHz window. Using ASTE we mapped a 24 × 24 region towards the molecular cloud N159-W in the ¹²CO J=3–2 line and observed several molecular lines at an IR peak very close to a massive young stellar object. ¹²CO and ¹³CO J=3–2 were observed towards two positions in N166 and one position in N132. The ¹²CO J=3–2 map of the N159-W cloud shows that the molecular peak is shifted southwest compared to the peak of the IR emission. Towards the IR peak we detected emission from HCN, HNC, HCO⁺, C₂H J=4–3, CS J=7–6, and tentatively C¹⁸O J=3–2. This is the first reported detection of these molecular lines in N159-W. The analysis of the C₂H line yields more evidence supporting that the chemistry involving this molecular species in compact and/or UCHii regions in the LMC should be similar to that in Galactic ones. A non-LTE study of the ¹²CO emission suggests the presence of both cool and warm gas in the analysed region. The same analysis for CS, HCO⁺, HCN, and HNC shows that it is very likely that their emission arises mainly from warm gas with a density between to some cm. The obtained abundance ratio greater than 1 is compatible with warm gas and with a star-forming scenario. From the analysis of the molecular lines observed towards N132 and N166 we propose that both regions should have similar physical conditions, with densities of about 10 cm.
###### keywords:
galaxies: ISM – (galaxies:) Magellanic Clouds – (ISM:) Hii regions – ISM: molecules
## 1 Introduction
The Large Magellanic Cloud (LMC), at only 49.97 kpc (Pietrzyński et al., 2013), is a gas-rich environment with reduced metallicity (Z about half of the Galactic value) that allows us to obtain detailed observational data to study the physical processes leading to massive star formation in an interstellar medium (ISM) that may be comparable, to a certain degree, to the conditions of some star-forming sites in our Galaxy in the early stages (see for example Yamada et al. 2013). Several global surveys of the molecular component in the LMC have been done so far, mainly in the CO J=1–0 emission, at increasing resolutions, starting with an angular resolution of 10 (Cohen et al., 1988; Rubio et al., 1991), up to the latest made with NANTEN with 2′.6 of angular resolution (Fukui et al., 2008). These surveys were complemented with observations at better angular resolution of known individual cloud complexes, such as the ESO SEST Key Programme (e.g. Israel et al. 1993; Garay et al. 2002; Israel et al. 2003), the Magellanic Mopra Assessment (e.g. Wong et al. 2011), and the second survey of molecular clouds with NANTEN (Kawamura et al., 2009).
Most of the molecular line surveys and individual observations towards the LMC do not cover the 345 GHz window, which contains several molecular lines that provide substantial information about the physical and chemical conditions of the gas. With the aim of carrying out a comparative study of the physical conditions of the molecular gas in Hii regions in the LMC using molecular lines in the 345 GHz window, we have made observations with the Atacama Submillimetre Telescope Experiment (ASTE) towards different regions. In Paron et al. (2014) (hereafter Paper I) we published the results of the N113 study. In this paper we add the results from the study of three other Hii regions: N159, N132, and N166. Although these regions share the same global characteristics in terms of metallicity, they are located in completely different environments within the LMC (see Figure 1).
N159 is by far the most studied Hii region of the set. Situated approximately 600 pc in projection south of 30 Dor, in the CO molecular ridge, it is a region likely perturbed by the interaction with the Milky Way halo (Ott et al., 2008). The N159 complex was classified, in the NANTEN catalog compiled by Fukui et al. (2008), as a type III giant molecular cloud (GMC), that is a GMC with Hii regions and young star clusters. This complex is populated by young massive stars (e.g. Fariña et al. 2009) and presents numerous features characteristic of active star formation regions. Gatley et al. (1981) discovered the first extragalactic protostar here, and Caswell & Haynes (1981) the first Type I extragalactic OH maser. It is known that N159 hosts massive embedded young stellar objects (YSOs), a maser source, and several ultracompact Hii regions (Chen et al., 2010). The carbon in the gaseous phase of the whole complex was studied in detail by Bolatto et al. (2008), whereas Mizuno et al. (2010) studied the warm dense molecular gas. Recently Fukui et al. (2015), using ALMA CO J=2–1 observations, discovered the first extragalactic protostellar molecular outflows towards this region.
N166 is located in projection about 550 pc south-east of 30 Dor, between 30 Dor and N159, to the east of the CO molecular ridge. This region, associated with the molecular cloud DEM 310 (Davies et al., 1976) and the giant molecular Complex-37 (Garay et al., 2002), was catalogued as a Type II GMC (a GMC with Hii regions only) in the NANTEN catalogue. Minamidani et al. (2008) studied the CO J=1–0 and J=3–2 emission towards five clumps in N166, and suggested that this region is in an earlier phase of star formation than N159, as the density has not yet grown high enough for massive stars to be born.
N132 is located in projection about 1200 pc south-west of 30 Dor, on the northern edge of the LMC bar, and is associated with the molecular clouds DEM 172, 173, and 186 (Davies et al., 1976; Kawamura et al., 2009). As in the case of N166, this region is also a Type II GMC. It has not been studied in particular detail, apart from a global characterization in which the H₂ column density was estimated and the H₂-to-CO ratio determined (Israel, 1997).
In this paper we present the study we have carried out with new observations made towards the LMC Hii regions N159, N132, and N166 in a set of molecular lines in the 345 GHz window: ¹²CO and ¹³CO J=3–2, and the lines (so far unexplored in N159) HCN, HNC, and HCO⁺ J=4–3, C₂H N=4–3, CS J=7–6, and C¹⁸O J=3–2.
## 2 Observations and data reduction
The molecular observations were performed between July and August 2010 with the 10 m ASTE telescope (Ezawa et al., 2004). The CATS345 GHz band receiver, a single-sideband SIS receiver remotely tunable over the LO frequency range 324–372 GHz, was used. The XF digital spectrometer was set to a bandwidth and spectral resolution of 128 MHz and 125 kHz, respectively. The spectral velocity resolution was 0.11 km s⁻¹ and the half-power beamwidth (HPBW) was 22″ at 345 GHz. The system temperature varied up to about Tsys = 250 K and the main-beam efficiency was ηmb ≈ 0.65. The conversion factor from Kelvin to Jansky is 78.3 Jy K⁻¹ (referred to Tmb).
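As a quick numerical check on the set-up figures quoted above, the short Python sketch below recomputes the channel velocity width from the 125 kHz resolution at 345 GHz, and the Kelvin-to-Jansky factor for a 10 m dish; the aperture efficiency used in the second step is an assumed, illustrative value (it is not given in the text).

```python
import math

# Channel width in velocity units: dv = c * dnu / nu
c = 299792.458            # speed of light [km/s]
dnu = 125e3               # spectral resolution [Hz]
nu = 345.796e9            # 12CO J=3-2 rest frequency [Hz]
print(f"velocity resolution ~ {c * dnu / nu:.2f} km/s")    # ~0.11 km/s, as quoted

# Jy/K factor for a 10-m antenna: S/T = 2k / (eta_a * A_geom)
k = 1.380649e-23          # Boltzmann constant [J/K]
A = math.pi * 5.0**2      # geometric area of a 10-m dish [m^2]
eta_a = 0.45              # assumed aperture efficiency (illustrative)
print(f"conversion factor ~ {2*k/(eta_a*A)/1e-26:.1f} Jy/K")  # ~78 Jy/K, close to the quoted 78.3
```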
The data were reduced with NEWSTAR (a reduction software package based on AIPS developed at NRAO, extended to treat single-dish data with a graphical user interface) and the spectra were processed using the XSpec software package (a spectral-line reduction package for astronomy developed by Per Bergman at Onsala Space Observatory). The spectra were Hanning smoothed to improve the signal-to-noise ratio, and in some cases a boxcar smoothing was also applied. Polynomials between first and third order were used for baseline fitting.
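The smoothing and baseline steps can be mimicked in a few lines of NumPy; the sketch below works on a synthetic spectrum with illustrative parameters and is not the NEWSTAR/XSpec implementation actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(200.0, 280.0, 640)                      # velocity axis [km/s]
spec = (2.0 * np.exp(-0.5 * ((v - 237.0) / 3.0) ** 2)   # Gaussian line
        + 0.002 * (v - 240.0)                           # linear baseline
        + rng.normal(0.0, 0.15, v.size))                # noise

# Hanning smoothing: convolution with the normalized [0.25, 0.5, 0.25] kernel
spec_s = np.convolve(spec, [0.25, 0.5, 0.25], mode="same")

# First-order polynomial baseline fitted to line-free channels, then subtracted
linefree = np.abs(v - 237.0) > 15.0
coef = np.polyfit(v[linefree], spec_s[linefree], deg=1)
spec_b = spec_s - np.polyval(coef, v)
```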
Several molecular lines in the 345 GHz window were observed towards the regions N159, N132, and N166. The observed positions are indicated in Table 1. In the case of N166 two different positions were observed, indicated as A and B in the table. These observations were performed in position-switching mode. Additionally, we mapped a 2.4 × 2.4 arcmin region towards N159 centred at R.A. = 5:39:38.3, dec. = −69:45:19.6 (J2000) in the ¹²CO J=3–2 line. This observation was performed in on-the-fly mapping mode, achieving an angular sampling of 6″.
## 3 Results
Figure 2 is a three-colour image displaying the mid/far-IR emission in the N159 area, where the region mapped in the ¹²CO J=3–2 line is indicated with a yellow square. The ¹²CO J=3–2 emission integrated between 225 and 250 km s⁻¹ is presented in contours. The surveyed region corresponds to the molecular cloud N159-W (Johansson et al., 1998; Bolatto et al., 2000), which hosts several massive young stellar objects (MYSOs) (Chen et al., 2010). Moreover, Fukui et al. (2015), using ALMA CO J=2–1 observations, recently discovered the first extragalactic protostellar molecular outflows towards this region.
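An integrated-intensity (moment-0) map like the contours of Fig. 2 is the sum of the cube over the quoted velocity range times the channel width; a generic sketch (the file name and velocity grid below are hypothetical) could look as follows.

```python
import numpy as np
from astropy.io import fits

cube = fits.getdata("n159w_12co32_cube.fits")     # hypothetical cube, axes (v, y, x)
vaxis = np.linspace(200.0, 280.0, cube.shape[0])  # assumed velocity grid [km/s]
dv = vaxis[1] - vaxis[0]

# Moment 0 over 225-250 km/s: sum of T_mb times the channel width [K km/s]
sel = (vaxis >= 225.0) & (vaxis <= 250.0)
mom0 = cube[sel].sum(axis=0) * dv
```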
From the ¹²CO J=3–2 map of N159-W and by assuming local thermodynamic equilibrium (LTE) we roughly estimate the molecular mass following the same procedure as in Paper I. From the ¹²CO J=3–2 peak temperature (see Table 2) we derive the excitation temperature Tex. Using the ratio of the ¹²CO and ¹³CO peak temperatures, we obtain the optical depths τ(¹²CO) and τ(¹³CO), which show that the ¹³CO J=3–2 line is optically thin. Once the ¹³CO column density was obtained (see equation 3 in Paper I), we assumed the [¹³CO]/[H₂] abundance ratio of Heikkilä et al. (1999) to derive the H₂ column density. Finally, the molecular mass was estimated from:
M = μ m_H Σ_i [ D² Ω_i N_i(H₂) ],
where Ω_i is the solid angle subtended by the beam, D is the distance (50 kpc), m_H is the hydrogen mass, and μ is the mean molecular weight, assumed to be 2.8 to take into account a relative helium abundance of 25 per cent. The summation was performed over all the beam positions belonging to the molecular structure delimited by the 7 K km s⁻¹ contour displayed in Fig. 2, yielding the total LTE molecular mass of the clump.
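The whole LTE chain can be sketched as follows. The radiative-transfer relations are standard; the peak temperatures and the per-beam H₂ column densities are illustrative placeholders, since the paper's actual inputs come from Table 2 and equation 3 of Paper I.

```python
import numpy as np

TBG = 2.73                          # background temperature [K]

def jnu(T, T0):
    # Planck-corrected radiation temperature: J(T) = T0 / (exp(T0/T) - 1)
    return T0 / np.expm1(T0 / T)

# Excitation temperature from the optically thick 12CO J=3-2 peak
T0_12 = 16.60                       # h*nu/k for 12CO J=3-2 [K]
Tpeak_12 = 15.0                     # illustrative peak T_mb [K]
Tex = T0_12 / np.log(1.0 + T0_12 / (Tpeak_12 + jnu(TBG, T0_12)))

# 13CO J=3-2 optical depth from its peak temperature and the common Tex
T0_13 = 15.87                       # h*nu/k for 13CO J=3-2 [K]
Tpeak_13 = 3.0                      # illustrative peak T_mb [K]
tau13 = -np.log(1.0 - Tpeak_13 / (jnu(Tex, T0_13) - jnu(TBG, T0_13)))

# Mass sum: M = mu * m_H * sum_i [ D^2 * Omega_i * N_i(H2) ]
mu, m_H = 2.8, 1.6726e-24           # mean molecular weight; H mass [g]
D = 50e3 * 3.086e18                 # 50 kpc in cm
Omega = 1.133 * (22.0 / 206265.0) ** 2   # 22-arcsec Gaussian beam [sr]
NH2 = np.array([8e21, 1.2e22, 9e21])     # illustrative per-beam N(H2) [cm^-2]
M_sun = mu * m_H * D**2 * Omega * NH2.sum() / 1.989e33
```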
Figure 3 shows the spectra of the molecular lines observed towards N159-W (red cross in Fig. 2). This position lies about 4″ from the location of the MYSO 053937.56−694525.4 catalogued by Chen et al. (2010), which is very likely responsible for the molecular outflows detected by Fukui et al. (2015). Despite the high noise in the C¹⁸O J=3–2 line (about 25 mK), we included the spectrum as the signal is still quite evident. Figures 4 and 5 show the CO isotopologue spectra observed towards N132 and N166. The line parameters from these spectra are presented in Table 2. The peak main-beam temperature, the central velocity, and the FWHM line width (Cols. 3–5) were obtained from Gaussian fits (red curves in the spectra figures). Col. 6 lists the integrated line intensity. The C₂H N=4–3 line presents two peaks due to its fine-structure components: one peak should correspond to the blended C₂H N=4–3, J=9/2–7/2, F=5–4 and F=4–3 lines, and the other to the blended N=4–3, J=7/2–5/2, F=4–3 and F=3–2 lines (see Paper I, where the same detection is presented towards N113, and the NIST database). The HCN J=4–3 emission was fitted with two Gaussians, probably due to a fine-structure component; however, in Table 2 the integrated line intensity refers to the entire line.
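The Gaussian fits behind Table 2 can be reproduced with scipy; this sketch assumes the velocity axis v and baseline-subtracted spectrum spec_b from the reduction sketch in Section 2, and also evaluates the integrated intensity implied by the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, Tpeak, v0, fwhm):
    # Gaussian profile parameterized by peak temperature, centre and FWHM
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return Tpeak * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

popt, _ = curve_fit(gauss, v, spec_b, p0=[1.0, 237.0, 5.0])
Tpeak, vlsr, fwhm = popt
# Integrated intensity of a Gaussian: W = 1.064 * Tpeak * FWHM [K km/s]
W = Tpeak * fwhm * np.sqrt(np.pi / (4.0 * np.log(2.0)))
```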
Table 3 presents the integrated intensity ratios for some of the lines listed in Table 2. For comparison, the ratios obtained towards N159-W from the J=1–0 lines by Chin et al. (1997) are also included. Table 3 also lists the ratios obtained towards N113 in Paper I.
### 3.1 Non-LTE analysis of N159
Using the ¹²CO and ¹³CO J=1–0 line parameters from Chin et al. (1997), the convolved ¹²CO and ¹³CO J=2–1 data from Johansson et al. (1994), our results for the ¹²CO and ¹³CO J=3–2 lines, and the line parameters of ¹²CO J=4–3 and J=6–5 towards the same position as our observed point in N159-W (data kindly provided by Y. Okada; see Okada et al. 2015), we performed a non-LTE study of the CO using the RADEX code (van der Tak et al., 2007). The main-beam temperatures were corrected for beam dilution by calculating T′ = Tmb/f, where f is the beam filling factor; following Okada et al. (2015), we used their adopted value of f. Figure 6 presents the results obtained for kinetic temperatures of 20 and 80 K, displaying the expected H₂ density and CO column density pairs corresponding to a given Tmb. These kinetic temperature values were chosen to consider both the presence of a cold gas component (Ott et al. (2010) obtained a low kinetic temperature from ammonia lines) and a warmer one (likely due to the star-forming processes and the radiation from massive stars). In the warmer case the code was run for a grid of kinetic temperatures between 20 and 100 K, and the selected model was the one in which the intersection of the curves is tightest (the model with Tkin = 80 K). To perform this analysis, we assumed that the lower CO transitions arise mainly from the cold gas component, while the higher ones arise from the warmer one. Given that the J=3–2 transition likely arises from both components, fifty per cent of its emission was roughly assigned to each component.
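Schematically, preparing the observed temperatures for the RADEX comparison amounts to the beam-dilution correction and the 50/50 split of the J=3–2 line; the temperatures and filling factor below are illustrative stand-ins, not the actually adopted values.

```python
# Illustrative 12CO main-beam temperatures per transition [K]
Tmb = {"1-0": 6.0, "2-1": 7.5, "3-2": 9.0, "4-3": 8.0, "6-5": 5.0}
f = 0.5                                   # assumed beam filling factor

# Beam-dilution correction: T' = T_mb / f
Tc = {line: T / f for line, T in Tmb.items()}

# Lower-J lines -> cold component; higher-J -> warm; J=3-2 split fifty-fifty
cold = {"1-0": Tc["1-0"], "2-1": Tc["2-1"], "3-2": 0.5 * Tc["3-2"]}
warm = {"3-2": 0.5 * Tc["3-2"], "4-3": Tc["4-3"], "6-5": Tc["6-5"]}
# These two sets are then matched against RADEX model grids at Tkin = 20 and 80 K
```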
The same non-LTE analysis was done for CS, HCO⁺, HCN, and HNC. The parameters for the lowest transition of these molecular species were obtained from Chin et al. (1997), who observed a point located about 6 arcsec from our position. The results are presented in Figure 7. In the cold-gas case the results for HCO⁺ do not converge and, in addition, the intersections of the curves are not as tight as in the 80 K case. Table 4 presents the results for all analysed molecules.
## 4 Discussion
### 4.1 N159
Our ¹²CO J=3–2 map of N159-W is very similar to the map recently presented by Okada et al. (2015), and shows that the molecular peak is shifted southwest compared to the peak of the IR emission. It is important to note that we are presenting the first detections of HCO⁺, HCN, and HNC in the J=4–3 transition, C₂H in the N=4–3 transition, CS in the J=7–6 transition, and tentatively the C¹⁸O J=3–2 line towards N159-W, precisely towards an IR peak, revealing that this region has the physical conditions needed to excite these lines (for instance, the critical densities of HCO⁺ J=4–3, HCN J=4–3, and CS J=7–6 are 1.8 × 10⁶, 8.5 × 10⁶, and 2.9 × 10⁶ cm⁻³, respectively; Greve et al. 2009). Concerning the C₂H N=4–3 line, as done for N113 (Paper I), we analyse the measured FWHM Δv of the peaks and compare them with the analysis presented in Beuther et al. (2008) for a Galactic sample of star-forming regions in different evolutionary stages. Those authors showed that the C₂H N=4–3 lines towards ultracompact Hii regions are significantly broader (Δv ≈ 5.5 km s⁻¹ on average) than those obtained towards infrared dark clouds and high-mass protostellar objects, i.e. sources representing earlier stages of star formation. Our C₂H Δv values are indeed broad, which is consistent with the position of the molecular observation, almost coincident with a bright compact radio continuum source at R.A. = 5:39:37.48, dec. = −69:45:26.10 (J2000) (Hunt & Whiteoak, 1994; Indebetouw et al., 2004), catalogued as a compact Hii region likely generated by two O4V/O5V stars (Martín-Hernández et al., 2005). Thus, we present further evidence that the chemistry involving C₂H in compact and/or ultracompact Hii regions in the LMC should be similar to that in Galactic ones.
The comparison between the ratios from the higher transitions and those obtained from the lower one, presented in Table 3, shows the same trend for both N159 and N113, i.e. the ratios from J=4–3 are lower than those from J=1–0, except for the HCO⁺/HCN ratio in both regions. The discrepancy between the two sets of ratios is more pronounced in N159 than in N113. On the other hand, the HCO⁺/HCN ratio in the J=4–3 lines in N159 is almost twice the value derived in N113. This may suggest an overabundance of HCO⁺ in N159 that is not evidenced in the lower-transition line ratio, probably due to line saturation effects. However, as no excitation effects are taken into account in these line ratios, the overabundance statement is far from conclusive.
The RADEX results suggest the presence of both cool and warm gas in the analysed region. Indeed, the CO emission at the observed position likely arises both from gas at 20 K and from gas at 80 K, with the corresponding densities listed in Table 4. The CO column density in the warm gas component is several times lower than in the colder one. Our results for the warm gas component are in close agreement with those obtained by Pineda et al. (2008), who used the CO J=7–6, 4–3, and 1–0 lines and some lines of [CI] and [CII]. Additionally, the RADEX results for CS, HCO⁺, HCN, and HNC indicate that it is very likely that their emission arises mainly from warm gas with densities between 10⁵ and some 10⁶ cm⁻³, which is in agreement with the warm CO results.
It is known that the HCN/HNC abundance ratio depends on the kinetic temperature (Schilke et al., 1992). From the obtained column densities we derived an HCN/HNC ratio greater than 1, which is compatible with warm gas (Tkin between 50 and 100 K; Helmich & van Dishoeck 1997). In addition, a ratio greater than 1 agrees with a star-forming scenario (Schöier et al., 2002) and, in particular, our value is very similar to the values obtained towards active cores in Galactic infrared dark clouds (Jin et al., 2015). A ratio greater than 1 can be explained by the rapid HNC + C → HCN + C reaction, which works as long as the atomic carbon abundance is still high (Loison et al., 2014); this seems to be the case, since according to Okada et al. (2015) N159-W has the highest atomic carbon column density within the N159 complex. This ratio could confirm, in an independent way, the existence of warm gas in the studied region. However, we should be cautious with such ratios because the obtained column densities depend on the assumed beam filling factor.
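For concreteness, the ratio itself is a direct division of the fitted column densities; the values below are placeholders for the RADEX results of Table 4 and inherit the beam-filling-factor caveat just mentioned.

```python
# Placeholder column densities [cm^-2], not the paper's Table 4 values
N_HCN, N_HNC = 4.0e13, 2.0e13
ratio = N_HCN / N_HNC     # > 1 corresponds to the warm, star-forming regime
```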
### 4.2 N132 and N166
In the case of N132 and N166 only ¹²CO and ¹³CO were detected. The non-detection of higher-density gas tracers such as C¹⁸O, N₂H⁺, and DCO⁺ (which were observed in our run with integration times of 560 and 1440 s) may indicate that the density of the molecular gas in these regions is not so high. This is in agreement with what Minamidani et al. (2008) concluded for N166 and is consistent with the higher ¹²CO/¹³CO integrated intensity ratio compared with the denser regions N159 and N113 (see Table 3). The ratios obtained in N166 are in good agreement with those presented in Garay et al. (2002) using the J=1–0 line for several clouds of the giant molecular Complex-37. N166-A and -B lie about 19″ and 27″ from N166-Clump 2 and N166-Clump 1, respectively, belonging to Complex-37 (Minamidani et al., 2008). From an LVG analysis those authors point out that the studied clumps in N166 have densities between some 10³ and a few 10⁴ cm⁻³, with kinetic temperatures between 25 and 150 K. By assuming LTE we obtain similar values of Tex (between 24 and 30 K) and of the ¹²CO and ¹³CO optical depths for N132 and N166, suggesting that the physical conditions should be similar in both regions.
## 5 Summary
We have performed a molecular line study towards the LMC Hii regions N159, N132, and N166 in the 345 GHz window using ASTE. We mapped a 2.4 × 2.4 arcmin region towards the molecular cloud N159-W in the ¹²CO J=3–2 line and observed several molecular lines as single pointings at an IR peak, about 4″ from the position of a MYSO. In addition, several molecular lines were also observed towards two positions in N166 and one position in N132, with positive detections only in the ¹²CO and ¹³CO J=3–2 lines. Our main results can be summarized as follows:
(a) Our ¹²CO J=3–2 map of N159-W is very similar to the map recently presented by Okada et al. (2015) and shows that the molecular peak is shifted southwest compared to the peak of the IR emission. We also estimated the LTE mass of the molecular clump.
(b) Towards the IR peak position of N159-W we detected emission from HCN, HNC, and HCO⁺ J=4–3, C₂H N=4–3, CS J=7–6, and tentatively C¹⁸O J=3–2; these are the first reported detections of these molecular lines in this region. In addition, ¹²CO and ¹³CO J=3–2 spectra were obtained towards this position. The detection of the mentioned molecular species in the 345 GHz window proves the presence of high-density gas and shows the usefulness of performing surveys in this wavelength window to increase our knowledge of the physical and chemical conditions of the ISM in the LMC.
(c) The detection and the line width of C₂H N=4–3 towards N159-W are compatible with an environment affected by the action of an Hii region. Following our previous study of N113, we conclude that we are presenting further evidence that the chemistry involving this molecular species in compact and/or ultracompact Hii regions in the LMC should be similar to that in Galactic ones.
(d) Using our observed CO lines and several lines of this molecule from the literature, we performed a non-LTE study which suggests that the CO emission likely arises both from gas at 20 K and from gas at 80 K. The same non-LTE analysis for CS, HCO⁺, HCN, and HNC shows that we are indeed probing high-density gas (10⁵ to some 10⁶ cm⁻³), and it is very likely that their emission arises mainly from warm gas, in agreement with the warm CO results.
(e) Using the column densities derived from the non-LTE study we obtained an HCN/HNC abundance ratio greater than 1, which is compatible with warm gas and with a star-forming scenario. This is in agreement with the presence of MYSOs in the studied region, one of them driving molecular outflows.
(f) Based on the CO line analysis and the non-detection of higher-density tracers, we suggest that N132 and N166 should have similar physical conditions, with densities between some 10³ and a few 10⁴ cm⁻³ and kinetic temperatures between 25 and 150 K.
## Acknowledgments
We acknowledge the anonymous referee for her/his helpful comments and suggestions. We wish to thank Y. Okada for kindly providing us with the CO higher-transition data. Thanks to Bastiaan Zinsmeister for his contribution to this paper. The ASTE project is led by Nobeyama Radio Observatory (NRO), a branch of the National Astronomical Observatory of Japan (NAOJ), in collaboration with the University of Chile and Japanese institutes including the University of Tokyo, Nagoya University, Osaka Prefecture University, Ibaraki University, Hokkaido University, and the Joetsu University of Education. S.P. and M.O. are members of the Carrera del Investigador Científico of CONICET, Argentina. This work was partially supported by grants awarded by CONICET, ANPCYT and UBA (UBACyT) from Argentina. M.R. wishes to acknowledge support from FONDECYT (Chile) grant N1140839.
## References
• Beuther et al. (2008) Beuther, H., Semenov, D., Henning, T., & Linz, H. 2008, ApJL, 675, L33
• Bolatto et al. (2000) Bolatto, A. D., Jackson, J. M., Israel, F. P., Zhang, X., & Kim, S. 2000, ApJ, 545, 234
• Bolatto et al. (2008) Bolatto, A. D., Leroy, A. K., Rosolowsky, E., Walter, F., & Blitz, L. 2008, ApJ, 686, 948
• Bonnarel et al. (2000) Bonnarel, F., Fernique, P., Bienaymé, O., et al. 2000, A&AS, 143, 33
• Caswell & Haynes (1981) Caswell, J. L., & Haynes, R. F. 1981, MNRAS, 194, 33P
• Chen et al. (2010) Chen, C.-H. R., Indebetouw, R., Chu, Y.-H., et al. 2010, ApJ, 721, 1206
• Chin et al. (1997) Chin, Y.-N., Henkel, C., Whiteoak, J. B., et al. 1997, A&A, 317, 548
• Cohen et al. (1988) Cohen, R. S., Dame, T. M., Garay, G., et al. 1988, ApJL, 331, L95
• Davies et al. (1976) Davies, R. D., Elliott, K. H., & Meaburn, J. 1976, MemRAS, 81, 89
• Ezawa et al. (2004) Ezawa, H., Kawabe, R., Kohno, K., & Yamamoto, S. 2004, in Proc. SPIE, Vol. 5489, ed. J. M. Oschmann Jr., 763–772
• Fariña et al. (2009) Fariña, C., Bosch, G. L., Morrell, N. I., Barbá, R. H., & Walborn, N. R. 2009, AJ, 138, 510
• Fukui et al. (2008) Fukui, Y., Kawamura, A., Minamidani, T., et al. 2008, ApJS, 178, 56
• Fukui et al. (2015) Fukui, Y., Harada, R., Tokuda, K., et al. 2015, ApJL, 807, L4
• Garay et al. (2002) Garay, G., Johansson, L. E. B., Nyman, L.-Å., et al. 2002, A&A, 389, 977
• Gatley et al. (1981) Gatley, I., Becklin, E. E., Hyland, A. R., & Jones, T. J. 1981, MNRAS, 197, 17P
• Greve et al. (2009) Greve, T. R., Papadopoulos, P. P., Gao, Y., & Radford, S. J. E. 2009, ApJ, 692, 1432
• Heikkilä et al. (1999) Heikkilä, A., Johansson, L. E. B., & Olofsson, H. 1999, A&A, 344, 817
• Helmich & van Dishoeck (1997) Helmich, F. P., & van Dishoeck, E. F. 1997, A&AS, 124, 205
• Hughes et al. (2010) Hughes, A., Wong, T., Ott, J., et al. 2010, MNRAS, 406, 2065
• Hunt & Whiteoak (1994) Hunt, M. R., & Whiteoak, J. B. 1994, Proceedings of the Astronomical Society of Australia, 11, 68
• Indebetouw et al. (2004) Indebetouw, R., Johnson, K. E., & Conti, P. 2004, AJ, 128, 2206
• Israel et al. (1993) Israel, F. P., Johansson, L. E. B., Lequeux, J., et al. 1993, A&A, 276, 25
• Israel (1997) Israel, F. P. 1997, A&A, 328, 471
• Israel et al. (2003) Israel, F. P., Johansson, L. E. B., Rubio, M., et al. 2003, A&A, 406, 817
• Jin et al. (2015) Jin, M., Lee, J.-E., & Kim, K.-T. 2015, ApJS, 219, 2
• Johansson et al. (1994) Johansson, L. E. B., Olofsson, H., Hjalmarson, A., Gredel, R., & Black, J. H. 1994, A&A, 291, 89
• Johansson et al. (1998) Johansson, L. E. B., Greve, A., Booth, R. S., et al. 1998, A&A, 331, 857
• Kawamura et al. (2009) Kawamura, A., Mizuno, Y., Minamidani, T., et al. 2009, ApJS, 184, 1
• Loison et al. (2014) Loison, J.-C., Wakelam, V., & Hickson, K. M. 2014, MNRAS, 443, 398
• MacLaren et al. (1988) MacLaren, I., Richardson, K. M., & Wolfendale, A. W. 1988, ApJ, 333, 821
• Martín-Hernández et al. (2005) Martín-Hernández, N. L., Vermeij, R., & van der Hulst, J. M. 2005, A&A, 433, 205
• Meixner et al. (2006) Meixner, M., Gordon, K. D., Indebetouw, R., et al. 2006, AJ, 132, 2268
• Minamidani et al. (2008) Minamidani, T., Mizuno, N., Mizuno, Y., et al. 2008, ApJS, 175, 485
• Minamidani et al. (2011) Minamidani, T., Tanaka, T., Mizuno, Y., et al. 2011, AJ, 141, 73
• Mizuno et al. (2010) Mizuno, Y., Kawamura, A., Onishi, T., et al. 2010, PASJ, 62, 51
• Okada et al. (2015) Okada, Y., Requena-Torres, M. A., Güsten, R., et al. 2015, A&A, 580, A54
• Ott et al. (2008) Ott, J., Wong, T., Pineda, J. L., et al. 2008, PASA, 25, 129
• Ott et al. (2010) Ott, J., Henkel, C., Staveley-Smith, L., & Weiß, A. 2010, ApJ, 710, 105
• Paron et al. (2014) Paron, S., Ortega, M. E., Cunningham, M., et al. 2014, A&A, 572, A56
• Pietrzyński et al. (2013) Pietrzyński, G., Graczyk, D., Gieren, W., et al. 2013, Nature, 495, 76
• Pineda et al. (2008) Pineda, J. L., Mizuno, N., Stutzki, J., et al. 2008, A&A, 482, 197
• Roberts et al. (2011) Roberts, H., van der Tak, F. F. S., Fuller, G. A., Plume, R., & Bayet, E. 2011, A&A, 525, A107
• Rubio et al. (1991) Rubio, M., Garay, G., Montani, J., & Thaddeus, P. 1991, ApJ, 368, 173
• Schilke et al. (1992) Schilke, P., Walmsley, C. M., Pineau Des Forets, G., et al. 1992, A&A, 256, 595
• Schöier et al. (2002) Schöier, F. L., Jørgensen, J. K., van Dishoeck, E. F., & Blake, G. A. 2002, A&A, 390, 1001
• van der Tak et al. (2007) van der Tak, F. F. S., Black, J. H., Schöier, F. L., Jansen, D. J., & van Dishoeck, E. F. 2007, A&A, 468, 627
• Wong et al. (2011) Wong, T., Hughes, A., Ott, J., et al. 2011, ApJS, 197, 16
• Yamada et al. (2013) Yamada, S., Suda, T., Komiya, Y., Aoki, W., & Fujimoto, M. Y. 2013, MNRAS, 436, 1362
|
2021-01-23 13:28:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8489879369735718, "perplexity": 4040.6154880134677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538082.57/warc/CC-MAIN-20210123125715-20210123155715-00374.warc.gz"}
|
https://support.bioconductor.org/p/113497/
|
Question: dynamicTreeCut and WGCNA show clusters overlapping. Why is this?
1
14 months ago by
jol.espinoz20
jol.espinoz20 wrote:
Can somebody explain why I am seeing certain clusters overlap? I tried reproducing the results in Python and in R and I'm seeing a similar pattern.
I have 2 main questions:
(1) Do the colors from WGCNA get repeated? It looks like the 2nd red cluster from the left should be a different color based on my Python plots. Did I plot something incorrectly or is this a known bug of either WGCNA or dynamicTreeCut?
(2) If not (1), why are the clusters overlapping? Is there a parameter I can adjust that controls for this?
Here's my R code:
library(dynamicTreeCut)
library(fastcluster)
library(WGCNA)

# Load the data and build a dissimilarity matrix.
# (The original loading function was truncated in the post; this is a
# minimal reconstruction -- adjust the path and metric to your data.)
load_data = function(path){
    df = read.table(path, header=TRUE, row.names=1, sep="\t")
    return(df)
}
df = load_data("data.tsv")        # hypothetical input file
# Convert a correlation similarity to dissimilarity
df_dism = 1 - cor(t(df))

# Hierarchical clustering on the dissimilarity
Z = hclust(as.dist(df_dism), method="ward.D2")

# Cut the dendrogram
treecut_output = cutreeDynamic(
    dendro=Z,
    method="hybrid",
    distM=df_dism,
    minClusterSize=10,
    deepSplit=2
)

# Plot dendrogram; labels2colors() maps the numeric labels returned by
# cutreeDynamic to distinct colour names -- passing raw integers can make
# R recycle its small default palette, so unrelated clusters share a hue
plotDendroAndColors(
    dendro=Z,
    colors=labels2colors(treecut_output)
)
Here is my python representation using the same parameters:
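A minimal sketch of such a Python pipeline, assuming the dissimilarity matrix df_dism is available as a square NumPy array and using the dynamicTreeCut port on PyPI (treat the exact call signature and returned keys as assumptions):

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform
from dynamicTreeCut import cutreeHybrid   # PyPI port of the R package

# Ward linkage on the condensed distances (mirrors hclust with ward.D2)
Z = linkage(squareform(df_dism, checks=False), method="ward")

# Hybrid tree cut with the same parameters as the R call above
res = cutreeHybrid(Z, df_dism, minClusterSize=10, deepSplit=2)
labels = res["labels"]    # integer cluster labels; 0 = unassigned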
|
2019-12-13 01:24:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32524052262306213, "perplexity": 3391.757950105397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00022.warc.gz"}
|