https://mathblag.wordpress.com/2012/06/23/a-product-rule-for-triangular-numbers/
### A product rule for triangular numbers The nth triangular number is $T(n) = 1 + 2 + \ldots + n = \frac12 n(n+1)$. It represents the number of dots in a triangular arrangement, with 1 dot in the first row, 2 dots in the second row, and so on. The triangular numbers satisfy many interesting properties, including a product rule: $T(mn) = T(m)T(n) + T(m-1)T(n-1)$. This rule can be demonstrated visually by subdividing a triangle into smaller triangles; for example, the case $T(20) = T(5)T(4) + T(4)T(3)$. Inspired by a question of James Tanton, I sought to find all sequences that satisfy this product rule. This problem has a lovely solution, and I encourage you to discover it for yourself. I will outline my solution in subsequent posts.
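The identity is easy to check numerically; here is a quick Python verification (mine, not from the post):

```python
def T(n):
    """n-th triangular number: 1 + 2 + ... + n = n(n+1)/2."""
    return n * (n + 1) // 2

# The case shown in the post: T(20) = T(5)T(4) + T(4)T(3).
assert T(20) == T(5) * T(4) + T(4) * T(3)

# Verify the product rule over a grid of values.
for m in range(1, 50):
    for n in range(1, 50):
        assert T(m * n) == T(m) * T(n) + T(m - 1) * T(n - 1)
```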
2018-03-17 14:32:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8578186631202698, "perplexity": 366.21137198938214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645177.12/warc/CC-MAIN-20180317135816-20180317155816-00284.warc.gz"}
https://www.gamedev.net/blogs/entry/2266597-dungeonbot3000-stats-combat-and-items/?tab=comments
# DungeonBot3000: Stats, Combat, and Items

I have to admit to an ulterior motive or two when it comes to this Challenge entry. My other project, Goblinson Crusoe, will certainly benefit from some of the iteration on stats, combat and item design that I am doing. In fact, it has already benefited. For one thing, I've polished the way I handle combat stats.

In a previous entry I talked about a system I had built to implement combat stats in GC. While the system had some good ideas, the implementation was clunky as hell, and the data file format was painfully verbose and awkward. To sum up, I built a system of stat containers, or sets, that held stats and managed the various linkages between them. Dependencies could be specified in JSON files, using specialized description formats. Stats could depend upon other stats in various ways; for example, the MaximumLife stat could depend upon Level (the character's level), IncreasedMaximumLife boosts, and so forth. The engineering challenge was to come up with a way to manage having stats and stat modifiers coming from various sources: equipment, base character stats, buffs/debuffs, and so forth. If you read that previous entry, you can see that the first iteration of the system was ugly and the specifiers for stat dependencies were convoluted and extremely limited. The code backing that shit up was even worse, I assure you.

In revisiting the idea for this challenge entry, though, I decided to clean up and simplify the way I build the stat sets. The first example from that previous entry now would look like this:

```json
{
    "MajorStatPerLevel": [["Flat", "5"]],
    "MinorStatPerLevel": [["Flat", "2"]],
    "Arrogance": [["Flat", "3"], ["Flat", "Level*MinorStatPerLevel"]],
    "Cunning": [["Flat", "3"], ["Flat", "Level*MinorStatPerLevel"]]
}
```

Gone are the various "StatFlat", "CalcLinear", "Scale", etc. specifiers. In their place is a simple expression parsing system.
A stat modifier is specified as a Type (one of Flat, Multiplier, Scale, Min or Max) and an expression string that is parsed to provide the dependency linkages. Stat values are calculated by summing the Flat contributions, multiplying that sum by one plus the sum of the Multiplier contributions, multiplying the result by the product of the Scale contributions, and capping the result by the values of the Min and Max contributions, if specified:

```
StatValue = SumOfFlat * (1.0 + SumOfMultiplier) * (Scale1 * Scale2 * Scale3 * ...)
```

Additionally, I have expanded the system to implement things like global or local mods, and mods from various sources. An action, be it an attack or other combat-related activity, requests from the combatant the relevant set of stat sets as a StatSetCollection. If it wants to use the local mods for the weapon equipped in the left hand, it requests those. If it wants to use the stats for a certain skill, it requests those. This collection is used to evaluate the final values for the stats required for an action.

A simple example. In the game currently, DB3000 can perform a spin attack. The spin attack uses the damage values specified by the equipped blades (the item system is still in development, so those are hard-coded for the moment), as well as the character's base stats and a set of damage values specified by the spin attack spell itself, to calculate the final damage values for the attack. Say that DB3000 has a Steel Blade equipped:

```json
"Steel Blade": {
    "MinLevel": 8,
    "Random": ["PhysicalDamageLocalTiers", "LifeRegenTiers", "EnergyGenTiers"]
},
```

This item structure specifies the fixed implicit (local) mods for a Steel Blade, as well as the set of mod tiers that can be randomly rolled on a Steel Blade. The SteelBladeImplicit mod StatSet looks like:

```json
"SteelBladeImplicit": ["Implicit", "Damage: 20 to 50", {
    "PhysicalLow": [["Flat", "20"]],
    "PhysicalHigh": [["Flat", "50"]]
}],
```

So a basic Steel Blade deals physical damage in the range of 20 to 50.
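The aggregation rule above can be sketched in a few lines of Python (function and names are mine, not the actual engine code; I'm also assuming Min acts as a floor and Max as a ceiling, per the "capping" description):

```python
import math

def stat_value(mods):
    """Combine stat modifiers per the rule:
    value = SumOfFlat * (1 + SumOfMultiplier) * product(Scale), capped by Min/Max.
    `mods` is a list of (type, value) pairs, where type is one of
    'Flat', 'Multiplier', 'Scale', 'Min', 'Max'."""
    flat = sum(v for t, v in mods if t == "Flat")
    mult = sum(v for t, v in mods if t == "Multiplier")
    scale = math.prod(v for t, v in mods if t == "Scale")  # empty product is 1
    value = flat * (1.0 + mult) * scale
    # Assumption: Min contributions act as floors, Max contributions as ceilings.
    mins = [v for t, v in mods if t == "Min"]
    maxs = [v for t, v in mods if t == "Max"]
    if mins:
        value = max(value, max(mins))
    if maxs:
        value = min(value, min(maxs))
    return value
```

For example, `stat_value([("Flat", 100), ("Multiplier", 0.5)])` gives 150, and adding `("Max", 120)` would cap that at 120.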
The Spin Attack skill itself also provides a StatSet:

```json
"SpinAttack": {
    "PhysicalLow": [["Flat", "5+SpinAttackLevel*5 + PhysicalDamageToSpinAttackLow"]],
    "PhysicalHigh": [["Flat", "10+SpinAttackLevel*8 + PhysicalDamageToSpinAttackHigh"]]
},
```

This means that using Spin Attack adds physical damage to the physical damage the blades provide, based on the skill level of the Spin Attack skill. It will also add damage based on any PhysicalDamageToSpinAttack bonuses provided by equipment, buffs or other sources, if any. When the skill is used, all of these various StatSets are concatenated into a single set (conceptually, at least), then the final values for PhysicalLow and PhysicalHigh are determined, and a damage roll is made. A Level 1 Spin Attack would do 30 to 68 damage with a Steel Blade with no random mods.

Items can also roll a list of randomized mods. One such mod that a Steel Blade can roll comes from the PhysicalDamageLocalTiers set, which provides local (meaning they apply only to skills using the equipped blade) physical damage modifiers:

```json
"PhysicalDamageLocalTiers": {
    "Weighting": 2,
    "Tables": [
        {"Level": 1, "Mods": ["Cruel"]},
        {"Level": 5, "Mods": ["Barbarous", "Cruel"]},
        {"Level": 10, "Mods": ["Brutal", "Barbarous", "Cruel"]}
    ]
},
```

From this tier list you can see that the physical damage mods come in 3 tiers. A Steel Blade can start to drop on Dungeon Level 8 and up, so if one drops on L8 it can only roll from the first set, which includes "Cruel", or from the second set, which includes "Barbarous" and "Cruel". These are specified as:

```json
"Cruel": ["Local", "Increase physical damage by 10 to 15.", {
    "PhysicalLow": [["Flat", "10"]],
    "PhysicalHigh": [["Flat", "15"]]
}],
"Barbarous": ["Local", "Increase physical damage by 20 to 35.", {
    "PhysicalLow": [["Flat", "20"]],
    "PhysicalHigh": [["Flat", "35"]]
}],
```

If the blade drops with the Barbarous mod, then that will provide an additional flat 20 to 35 damage to the Spin Attack.
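The damage numbers quoted above are just sums of the Flat contributions; a quick check (my sketch, not the engine's expression evaluator):

```python
def spin_attack_range(blade_low, blade_high, skill_level,
                      bonus_low=0, bonus_high=0):
    """Sum the Flat contributions for PhysicalLow/PhysicalHigh as described:
    blade implicit + (5 + level*5 + bonus) low, (10 + level*8 + bonus) high."""
    low = blade_low + (5 + skill_level * 5 + bonus_low)
    high = blade_high + (10 + skill_level * 8 + bonus_high)
    return low, high

# Level 1 Spin Attack with a plain Steel Blade (20-50 implicit):
print(spin_attack_range(20, 50, 1))          # (30, 68)

# Same blade with the Barbarous roll (+20 to +35 local physical):
print(spin_attack_range(20 + 20, 50 + 35, 1))  # (50, 103)
```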
These stats exist as StatSets local to the item, so they are collected only if the item is equipped and, since it is a local mod, only if the skill is using the blade, which Spin Attack does. So, once the various StatSets (from blade and from the Spin Attack skill) are collected, you end up with:

```json
"PhysicalLow": [["Flat", "20"], ["Flat", "20"], ["Flat", "5+SpinAttackLevel*5 + PhysicalDamageToSpinAttackLow"]],
"PhysicalHigh": [["Flat", "35"], ["Flat", "50"], ["Flat", "10+SpinAttackLevel*8 + PhysicalDamageToSpinAttackHigh"]],
```

for the final physical damage calculation of the Spin Attack. At Level 1, that means it does 50 to 103 physical damage per hit.

I know this has been a bit long-winded. A lot of this is second- or even third-generation stuff derived from what Goblinson Crusoe already has, with many much-needed fixes, so when I'm done working on this challenge I'll be able to fold a lot of this back into GC. It should be fun.

Looking good! can't wait to see the final boss.

Thanks, guys. @Awoken: The final boss is gonna be spectacular. I'm pretty sure I'm friends with @khawk on facebook, so I could probably hit his profile to model the boss based on what he looks like for accuracy, but I'm gonna go with what I imagine him to look like instead. Probably around 6 foot 5, 300 lbs of raw muscle and a big-ass ban-hammer to swing around. And plenty mad, on account of being in cryo-sleep for 650 years and waking up to all his shit smashed. That ban hammer will swing freely, my friends, and wide.

11 hours ago, JTippetts said: Probably around 6 foot 5, 300 lbs of raw muscle and a big-ass ban-hammer to swing around. The accuracy is startling.

13 hours ago, JTippetts said: Probably around 6 foot 5, 300 lbs of raw muscle and a big-ass ban-hammer to swing around. And plenty mad, on account of being in cryo-sleep for 650 years and waking up to all his shit smashed. That ban hammer will swing freely, my friends, and wide.
lol
2019-02-19 18:55:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.302803635597229, "perplexity": 4312.673706948277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247491141.23/warc/CC-MAIN-20190219183054-20190219205054-00533.warc.gz"}
https://kppspkoluszki.pl/1600769002/which-is-produced-when-calcium-metal-reacts-with-water-angola.html
# which is produced when calcium metal reacts with water angola ### Cambridge International Examinations … A calcium carbonate → calcium oxide + carbon dioxide B carbon + oxygen → carbon dioxide C methane + oxygen → carbon dioxide + water 1 A flammable gas is produced in reaction P. 2 Water is formed in all reactions. 3 All the salts formed are soluble in water. ### Chemistry: Chemical Word Equations calcium carbonate + hydrochloric acid → calcium chloride + carbon dioxide + water CaCO3(s) + 2HCl(aq) → CaCl2(aq) + H2CO3(aq); H2CO3(aq) → CO2(g) + H2O(l) 5. aqueous zinc chloride reacts with dihydrogen monosulfide gas to yield a zinc sulfide precipitate and ### 306 minutes 259 marks - Isaac Newton Academy white solid. Each bottle contained a compound of a different Group 2 metal (magnesium, calcium, strontium and barium). Some tests were carried out on the solids or, if the compound was soluble, on the aqueous solution. The results are given in the table. Test ### CBSE 10, Chemistry, CBSE- Chemical Reactions and … Download free PDF of best NCERT Solutions, Class 10, Chemistry, CBSE- Chemical Reactions and Equations. All NCERT textbook questions have been solved by our expert teachers. You can also get free sample papers, Notes, Important Questions. The reaction of aluminium with water is too slow to come into notice. But when steam is passed over aluminium metal, aluminium oxide and hydrogen gas are produced. Reaction of Zinc metal with Water: Zinc metal produces zinc oxide and hydrogen gas when steam is passed over it. Zinc does ### Sodium in Water Chemistry Demonstration - ThoughtCo 28/10/2019· The sodium in water chemistry demonstration is a spectacular demonstration that illustrates the reactivity of an alkali metal with water. What to Expect A small piece of sodium metal will be placed in a bowl of water. If a phenolphthalein indicator has been added to the water, the sodium will leave a pink trail behind it as the metal sputters and reacts.
### Reactions of Group 2 Elements with Acids - Chemistry … This page discusses the reactions of the Group 2 elements (beryllium, magnesium, calcium, strontium and barium) with common acids. Reactions with nitric acid These reactions are more complicated. When a metal reacts with an acid, the metal usually reduces ### ELEMENT: CALCIUM The metal has a silvery color, is rather hard, and is prepared by electrolysis of the fused chloride to which calcium fluoride is added to lower the melting point. Chemically it is one of the alkaline earth elements; it readily forms a white coating of nitride in air, reacts with water, burns with a yellow-red flame, forming largely the nitride. ### Acid-Base Reactions | Types Of Reactions | Siyavula A salt is still formed as the only product, but no water is produced. It is important to realise how useful these neutralisation reactions are. Below are some examples: Domestic uses Calcium oxide ($$\text{CaO}$$) is a base (all metal oxides are bases) that is put ### Metal-Water Reactions Chemistry Tutorial metal + water → metal hydroxide + hydrogen gas The ease with which a metal reacts is known as its activity. A more active metal will react more readily with water than a less active metal. In general, Group 1 (IA or alkali) metals and Group 2 (IIA or alkaline. ### Calcium metal reacts with water to produce Hydrogen … Answer to Calcium metal reacts with water to produce Hydrogen gas. Determine the mass of H2 produced at 25 °C and 0.967 atm when 525 mL of the gas is collected ### Calcium hydride reacts with water to form calcium … 20/3/2016· Approx.
85 g of calcium hydride are required. CaH2(s) + 2H2O(l) → Ca(OH)2(aq) + 2H2(g)↑ The balanced equation gives a 1:2 stoichiometry between calcium hydride and dihydrogen gas. We need to (i) work out the molar quantity of dihydrogen, and (ii) ### Reactions of the Group 2 elements with water 18/8/2020· Calcium, strontium and barium These all react with cold water with increasing vigour to give the metal hydroxide and hydrogen. Strontium and barium have reactivities similar to lithium in Group 1 of the Periodic Table. Calcium, for example, reacts fairly vigorously ### HIGHER TIER CHEMISTRY MINI-MOCK UNIT 2 [C2.1, C2.2&C2.3, … Uranium metal can be produced by reacting uranium hexafluoride with calcium. UF6 + 3Ca → 3CaF2 + U (a) Describe how calcium and fluorine bond together to form calcium fluoride. The electron arrangement of each atom is shown. (5) (b) Uranium ### Calcium Chloride - an overview | ScienceDirect Topics As previously indicated, calcium salts appear to have a superior activity compared with most other metal salts, but they commonly suffer from a low solubility in water. Calcium formate acts in a manner similar to calcium chloride, but high dosages are required and its solubility is considerably less (approximately 17 g/100 g compared with 75 g/100 g at 20 °C). ### metal - Students | Britannica Kids | Homework Help Calcium reacts vigorously with water, though not as vigorously as sodium and potassium. A few metal-water reactions vary with the temperature of the water. For example, magnesium reacts slowly with cool water, forming magnesium hydroxide and hydrogen gas. ### KFG XX Title - Royal Society of Chemistry When an acid reacts with metal, a salt and hydrogen are produced: acid + metal → salt + hydrogen An example: nitric acid + calcium → calcium nitrate + hydrogen The salt that is produced depends upon which acid and which metal react.
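One of the snippets above poses a complete worked problem (mass of H2 collected at 25 °C and 0.967 atm in a 525 mL sample); the ideal-gas arithmetic can be sketched as follows (the constants and variable names are mine, not from the page):

```python
# Ideal-gas calculation for the collected hydrogen:
# n = PV / RT, then mass = n * molar mass of H2.
P = 0.967     # pressure, atm
V = 0.525     # volume, L (525 mL)
R = 0.08206   # gas constant, L·atm/(mol·K)
T = 298.15    # temperature, K (25 °C)

n = P * V / (R * T)    # moles of H2 collected
mass = n * 2.016       # grams; molar mass of H2 ≈ 2.016 g/mol
print(round(mass, 4))  # ≈ 0.0418 g of H2
```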
The following table ### Calcium Carbonate - an overview | ScienceDirect Topics Calcium carbonate is often the first scale type to deposit. It is characteristic of produced oilfield brines and is very commonly found at the onset of first water production, and typically severe at points of large pressure drop such as at the production zone and at ### Calcium Nitrate And Sodium Phosphate Precipitate As soon as it forms, it "precipitates," or drops out of solution. Identify the precipitate in this reaction: calcium nitrate reacts with sodium phosphate. The first answer you have been given by jerid_28 is correct. 147 g of calcium phosphate precipitate was produced. ### CBSE Class 10 Science Lab Manual - Properties of Acids … HCl reacts with sodium carbonate (aqueous/solid) to liberate carbon dioxide (CO2), which turns lime water milky due to the formation of calcium carbonate. When excess CO2 is passed through the solution, the milkiness disappears. Procedure Litmus Test Take ### Lakhmir Singh Chemistry Class 10 Solutions Chemical … The metal M reacts vigorously with water to form a solution S and a gas G. The solution S turns red litmus to blue whereas gas G, which is lighter than air, burns with a pop sound. Metal M has a low melting point and it is used as a coolant in nuclear reactors. ### What is produced when Sodium Carbonate reacts with … Since any metal carbonate reacts with any acid to produce a salt and water and carbon dioxide, the resulting product from that specific reaction will be sodium chloride, water and carbon dioxide. Answered by Patrick L. • Chemistry tutor ### Calcium hydroxide - Wikipedia Calcium hydroxide (traditionally called slaked lime) is an inorganic compound with the chemical formula Ca(OH)2. It is a colorless crystal or white powder and is produced when quicklime (calcium oxide) is mixed, or slaked, with water. It has many names including hydrated lime, caustic lime, builders' lime, slack lime, cal, or pickling lime.
### GCSE CHEMISTRY - The Reactivity of Metals with Water - … The Reactivity Series The Reaction of Metals with Water. Potassium, sodium, lithium and calcium react with cold water, see alkali metals and alkaline earth metals. Metals in the reactivity series from magnesium to iron react with steam - H2O(g) - but not water - H2O(l).
2021-10-16 06:26:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5951787233352661, "perplexity": 6314.683981415817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583423.96/warc/CC-MAIN-20211016043926-20211016073926-00218.warc.gz"}
https://support.bioconductor.org/p/42495/
Rsubread package not available?

NDowell ▴ 20 @ndowell-4997

Wei Shi ★ 3.5k @wei-shi-2183

Dear Noah, We are still thinking about whether we should make Rsubread available on Mac or Windows laptops. I can certainly see the advantage of doing this. However, it would require a considerable amount of work to develop and maintain, because many functions in the package are written in C, and we are going to include more C functions in the package in the future. Moreover, the read alignment will consume most of the computational resources of a laptop, and I do not think you could use your laptop for other work during the read mapping. So it might be best to run the read alignment on a server/supercomputer rather than on a laptop. Cheers, Wei

Dear Wei, i think you might consider the following three reasons why it would be interesting to have Rsubread available on Mac OS X systems: 1. i've used Rsubread to teach my students about mapping reads without leaving the R console by using a small sample of reads (e.g. 1e5, 1e6). if eventually one makes a teaching course where students come with laptops, a significant fraction of them now come with a mac. this could apply to all the BioC courses that take place throughout the world :) 2. apple not only sells laptops but also high-performance computing solutions. those users would also benefit from having Rsubread running on Mac OS X. 3. Mac OS X runs on top of a unix system, so it should not be all too difficult to have and maintain ANSI C code running on both a linux and a Mac OS X system. cheers, robert.

Dear Wei, Thanks for your response. I was going to use Rsubread with a small(ish) data set on a desktop with a little more memory than the standard laptop.
I like the approach behind Rsubread and I wanted to do some comparisons with other aligners on small data sets before making a choice. Of course I understand that as a maintainer you have to make some careful choices about where and what to put your effort into. I should have a server up and running soon (at my new institute; not UCLA) and will consider using Rsubread at that time. Best, Noah

FWIW, I've had pretty good results with some older 36-cycle Illumina RNAseq data using Rsubread, just aligning it on my laptop (granted it's a dual-core i7 with 16GB of RAM and 2 SSDs). For some of the RNAseq data from CD34+CD38- cells and neutrophils that Andrew Smith was kind enough to send, it took about 30 minutes, and I was able to do other things at the same time. Again, that might have something to do with my using 7.4GB for the index, having another 8.6GB for other processes, and running Linux. But mostly the RAM, I think. YMMV... Rsubread has some terrific features. I am torn between using it all the time, using BowTie/TopHat/Cufflinks/cummeRbund all the time, or both. Last time I just used both, but with our RNAseq/QC pipeline, if I want to re-align against hg18 I either have to use bedtools and liftOver, re-run the pipeline, or use Rsubread. The latter is more convenient, even if the former does have some more tools for removing PCR dupes, looking at alternative splicing, etc. Either one runs fine on our servers, of course, and both go a lot faster with 24 cores and 48GB of RAM :-)

One other thing Noah, do you have Xcode and the R tools for Mac OSX installed? I'm using my wife's Macbook Air at this particular instant, and I'm tempted to try compiling Rsubread, but I don't really have time to debug something for fun right now. Still -- it would be interesting to see if that resolved your issues.
It's not like laptops are being produced with less RAM and smaller processors as time goes by, and there are old-but-still-very-useful RNAseq datasets out there. And Rsubread is FAST, so it's by far the most sensible choice for a laptop.

Dear Tim, Robert and Noah, I think we will eventually make Rsubread available on the Mac OS X system (but maybe not on Windows). Hopefully it will become available in the next bioc release. But Yang Liao, who wrote the entire C program for the read alignment, has to submit his Ph.D. thesis in February next year, and we are also busy with publishing Rsubread. But I'm quite sure it should be available sometime next year. Rsubread uses the spinlock library to enable multithreaded running of the read alignment, which is not supported on Mac. We'll have to change it to the mutex library, which is supported by both Mac and Linux, for portability (however, we found spinlock is more efficient than mutex :-( ). The align() function in Rsubread can map junction reads from the RNA-seq data in addition to the exonic reads, although it won't give you the information of where the junction locations are (it gives the mapping location of the junction read in one of the exons it spans). The subjunc() function in the Rsubread package can however be used to find the exact exon junction locations in the reference genome. Cheers, Wei

Tim, Thanks for providing your experiences on using different aligners. I used R/Bioconductor for tiling and expression arrays as a grad student but now I am using next generation sequencing as a postdoc, so I am getting up to speed with some new packages like Rsubread. Yes, I have the Xcode developer tools installed and have installed from source recently for other packages with no errors. Best, Noah

Hi all, On Thu, Dec 8, 2011 at 12:10 PM, Noah Dowell <noahd at ucla.edu> wrote: > Tim, > > Thanks for providing your experiences on using different aligners.
> I used R/Bioconductor for tiling and expression arrays as a grad student but now I am using next generation sequencing as a postdoc so I am getting up to speed with some new packages like Rsubread. > > Yes, I have the Xcode developer tools installed and have installed from source recently for other packages with no errors.

Wei can correct me if I am wrong, but I believe it's not as simple as building Rsubread from source on Mac (or Windows). If it were, Bioconductor would provide a binary package on those platforms. I believe the problem is that certain constructs (e.g. pthread spinlocks) are used in such a way that is not portable to Mac OS X or Windows, Dan

Hi Dan, Yes, that's right. The pthread spinlock is a construct which is not portable between Linux and Mac. But there might be other such constructs as well. We will need to have a close look at them. Thanks, Wei

@gordon-smyth WEHI, Melbourne, Australia 29 August 2019: Starting from version 1.35.24, Rsubread is now available for Windows as well as the other platforms. For the moment, this is only for the developmental version of Bioconductor but it will migrate to the release version soon.
2022-08-17 05:09:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2644524574279785, "perplexity": 1962.159500589976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00263.warc.gz"}
https://mathspace.co/textbooks/syllabuses/Syllabus-1000/topics/Topic-19907/subtopics/Subtopic-263766/?textbookIntroActiveTab=overview&activeTab=theory
# 9.03 The area of a circle

Lesson

The area of a circle is the 2D space within the circle's boundary. Knowing how this area relates to the other features of the circle can let us calculate the area of a circle from its other features, or it can be used to find different measurements of a circle with a given area.

### The area formula for a circle

We can calculate the area of a circle using the formula $A=\pi r^2$, where $A$ is the area and $r$ is the radius of the circle. Using this formula, we can find the area of a circle using its radius and vice versa.

#### Worked Examples

##### Example 1

The radius of a circle is $6$. What is the exact area of the circle?

Think: To find the area of the circle using the radius, we can substitute the value for the radius into the formula $A=\pi r^2$ and solve for $A$.

Do: If we substitute the radius $r=6$ into the formula, we get:

$A=\pi r^2=\pi\times6^2=\pi\times36=36\pi$

As such, the area of this circle is $36\pi$.

##### Example 2

The area of a circle is $16$. What is the exact radius of the circle?

Think: To find the radius of the circle using the area, we can substitute the value for the area into the formula $A=\pi r^2$ and solve for $r$.

Do: If we substitute the area $A=16$ into the formula, we get $16=\pi r^2$. Dividing both sides by $\pi$ gives $r^2=\frac{16}{\pi}$, and taking the square root of both sides gives:

$r=\sqrt{\frac{16}{\pi}}=\frac{\sqrt{16}}{\sqrt{\pi}}=\frac{4}{\sqrt{\pi}}$

As such, the exact radius of this circle is $\frac{4}{\sqrt{\pi}}$.
Reflect: In both cases, we substituted the known value into the circle's area formula and solved to find the missing value. We also treated $\pi$ as a pronumeral since we wanted exact values. In the case where we want a rounded answer, we evaluate and round the answer as required.

#### Practice questions

##### Question 1

The formula for the area of a circle is $A=\pi r^2$, where $A$ is the area and $r$ is the radius. Consider the circle below:

1. What is the exact area of the circle?
2. What is the area of the circle rounded to two decimal places?

##### Question 2

Christa is finding the exact radius of a circle, knowing only that its area is $64\pi$.

1. Fill in the blanks to complete Christa's working out.

$\pi r^2=A$ (Formula for the area of a circle)
$\pi r^2=$ ___ (Substitute the given area)
$r^2=$ ___ (Divide both sides by $\pi$ to make $r^2$ the subject)
$r=$ ___ (Take the square root of both sides to find $r$)

### Connecting other features to the area

Since we now have a way to relate the area of a circle to its radius, we can use the radius to connect the area to the other distances in a circle. Since the radius is equal to half the diameter, we can replace the radius $r$ in the area formula with $\frac{d}{2}$, giving us $A=\pi\left(\frac{d}{2}\right)^2$, which expands to $A=\frac{1}{4}\pi d^2$. In the cases where the diameter is a nicer number to work with than the radius, this version of the area formula can be useful.

In a similar way, we can connect the area of a circle to its circumference. We know that the circumference of a circle is related to the radius by the formula $C=2\pi r$, while the area is related by the formula $A=\pi r^2$. Unlike with the diameter, there isn't a nice formula that emerges when combining these two relationships.
Instead, we can find the radius using the given area or circumference, and then use that radius to calculate the missing value.

#### Worked Examples

##### Example 1

A circle has a diameter of $9$. What is the area of the circle, rounded to two decimal places?

Think: We can substitute our value for the diameter into the formula $A=\frac{1}{4}\pi d^2$ and solve for $A$ to find the area of the circle.

Do: Substituting the diameter $d=9$ into the formula gives:

$A=\frac{1}{4}\pi d^2$
$A=\frac{1}{4}\pi\times9^2$ (substitute in the value for the diameter)
$A=\frac{1}{4}\pi\times81$ (evaluate the square)
$A=\frac{81}{4}\pi$ (multiply $81$ by the coefficient $\frac{1}{4}$)

This means that the exact area of the circle is $\frac{81}{4}\pi$. Evaluating this gives us:

$$\frac{81}{4}\pi=63.6172512352\dots$$

Rounding this to two decimal places gives a value of $63.62$ for the area of the circle.

Reflect: We could also have calculated the radius from the diameter and used that value in the formula $A=\pi r^2$. This would give the same answer, but would require us to square $\frac{9}{2}$, which is a bit more effort than squaring just $9$.

##### Example 2

A circle has a circumference of $22\pi$. What is the exact area of the circle?

Think: Using the formula $C=2\pi r$, we can find the radius from the circumference. We can then substitute this value for the radius into the formula $A=\pi r^2$ to find the area of the circle.

Do: We can find the radius of the circle by substituting $C=22\pi$ into the formula:

$C=2\pi r$
$22\pi=2\pi r$ (substitute in the value for the circumference)
$11\pi=\pi r$ (reverse the multiplication of $2$)
$11=r$ (reverse the multiplication of $\pi$)

We have found that the radius of this circle is $11$.
Substituting this value into the area formula of the circle gives:

$A=\pi r^2$
$A=\pi\times11^2$ (substitute in the value for the radius)
$A=\pi\times121$ (evaluate the square)
$A=121\pi$ (write $121$ as the coefficient of $\pi$)

As such, the area of this circle is $121\pi$.

We can perform similar calculations when finding the diameter or circumference from a given area.

#### Practice questions

##### Question 3

The engineering team at Rocket Surgery are building a rocket for an upcoming Mars mission. A critical piece is the circular connective disc that connects the booster rocket to the rest of the spacecraft. This disc must completely cover the top of the booster rocket. The booster rocket has a diameter of precisely $713.5$ centimetres.

1. Find the required area of the connective disc.
2. Instead of using the exact value, an engineer uses the approximation $3.14$ for $\pi$. What value does the engineer calculate for the area?
3. If the connective disc is more than $100$ cm$^2$ too small, the disc will malfunction, resulting in catastrophic launch failure. Is there a risk of malfunction if the disc is built according to the engineer's calculation? (Yes / No)

##### Question 4

A circle has an area of $25\pi$ cm$^2$.

1. What is the radius of the circle?
2. What is its exact circumference?

### Outcomes

#### MA4-13MG

Uses formulas to calculate the areas of quadrilaterals and circles, and converts between units of area.
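The diameter and circumference relationships worked through above can also be sketched numerically. This Python sketch is not part of the original lesson; the helper names are assumptions for illustration:

```python
import math

def area_from_diameter(diameter):
    """A = (1/4) * pi * d^2, equivalent to pi * (d/2)^2 since r = d/2."""
    return 0.25 * math.pi * diameter ** 2

def radius_from_circumference(circumference):
    """C = 2*pi*r, so r = C / (2*pi)."""
    return circumference / (2 * math.pi)

# Worked Example 1: d = 9 gives A = (81/4)*pi, about 63.62
print(round(area_from_diameter(9), 2))   # 63.62
# Worked Example 2: C = 22*pi gives r = 11, hence A = 121*pi
r = radius_from_circumference(22 * math.pi)
print(r, math.pi * r ** 2)               # approximately 11 and 121*pi
# Practice Question 4: A = 25*pi gives r = 5 and C = 10*pi
r4 = math.sqrt(25 * math.pi / math.pi)
print(r4, 2 * math.pi * r4)              # approximately 5 and 31.42
```

As the lesson notes, there is no single tidy formula linking area and circumference here; the code mirrors the lesson's two-step approach of recovering the radius first.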
“Homework-like” closures The closure of the question https://math.stackexchange.com/questions/958420/about-a-particular-class-of-finite-groups surprises me quite a bit, and I believe it illustrates something that is wrong with the approach being taken on MSE to closing questions. It is clear that, in mathematical terms, the question is not missing "context" or "details." Everything is there that would be needed in order to give a good answer. (This is not a case where the question is trivially easy, and one is simply looking to see "where the OP got stuck" to target a particular point in the proof. Rather, this is a question where many people with good mathematical maturity might not know where to begin.) My questions are: 1) In light of current practices on MSE, does the prevalence of this kind of closure enhance or reduce the usefulness of MSE as a resource? I won't define the words "this kind" because I think it should be left open to people replying to my question to determine what features of a question are significant in this respect. 2) Should there be an expectation that, where a person votes to close a clearly formulated math question as being "homework-like" or as lacking information about the OP's "thoughts," the voter should at minimum have entirely thought through what an answer to the question would be? (Trivial calculations are not included.) I have answered these questions below. • The overall goal of the site, in my opinion, is to have excellent answers and excellent questions. This means that, unlike in the first years of this site, it is no longer reasonable to just ask a question, with no motivation or other discussion. Unlike other math sites, this site does not discriminate between "basic" and "advanced" questions - all questions need to be well written. On the other hand, our sister site MathOverflow only accepts research-level questions, so they can get by without requiring as much motivation. 
– Carl Mummert Oct 6 '14 at 0:15 • I also think questions should be well written. I do not believe the objection to that question was that it was not well written, in the way that this notion would be commonly understood by mathematicians. – user180040 Oct 6 '14 at 0:18 • But it is not "well written" in the sense that it has no motivation, no explanation of the context of the question, no description of what the asker has thought about already. We are inundated with these poorly-composed questions at the moment! For truly advanced questions, the asker may want to try MathOverflow instead, where the level of the question can speak for itself. But the only way that I see to maintain the quality of this site is to apply the same standards to all new questions. – Carl Mummert Oct 6 '14 at 0:21 • I think those are the way people relate to discussing math. If I walk up to someone at tea and say "here is a problem I can't answer", I will usually tell them where I encountered the problem, and what I have thought about already. I would not walk up to someone at tea and ask them a random question as if it was a quiz! That, in my mind, would differ from the social norms of the wider world. But, yes, it also has the desired goal of discouraging homework questions that people haven't thought about, which many people here think is an important goal, even if we can't tell which are homework. – Carl Mummert Oct 6 '14 at 0:34 • Let me add that I might not want to add my attempts at solving the problem because I didn't want to reveal my ignorance. – user180040 Oct 6 '14 at 0:43 • Indeed, but this is not the site for that. The way that I view this site is like asking someone a question at tea: I will explain to them the question and the way I am thinking about it, and they will give me an explanation (if they can). The asker has already revealed, by asking the question, that they can't answer it. But who walks up to someone else at tea and just says "Answer this: ...." 
as if they are posing an examination question? – Carl Mummert Oct 6 '14 at 0:44 • I can tell a lot more about a person's ignorance in some cases by their attempt to answer a question than by the mere fact of their asking it. And you sometimes don't know if what you write, beyond the minimum, is going to show that. I agree with what you said that in many cases, people do relate that way. However, people would generally not expect that it would be compulsory on a Q&A site like this one, particularly in situations where it doesn't assist in providing an answer. At most, they might think that fewer people would try to answer their question. – user180040 Oct 6 '14 at 0:50 • "I might not want to add my attempts at solving the problem because I didn't want to reveal my ignorance." I am stunned by this statement. Revealing one's lack of knowledge is one of the most effective ways to learn. – Did Oct 6 '14 at 7:19 • If you're asking the question here, it's already assumed that you can't answer it. How much "worse" can it get in terms of ignorance? There's no reason to be ashamed when you don't know the answer, but nobody would be fooled if you don't include your attempts. In fact, it's probably even worse. – Najib Idrissi Oct 6 '14 at 8:52 • Obviously, the analogy can only go so far. I am not swamped at tea with scores of students, and students can't ask me questions anonymously. On this site, a key challenge we face is a large number of poorly-composed questions, and the ease of account creation. The easiness of creating an account and asking a question has real benefits. So the only even-handed way I see to maintain some sort of quality standards is to require all users to meet the same quality goals. These goals are not very high: a question with almost any sort of background or context is unlikely to be closed. 
@user180040 – Carl Mummert Oct 6 '14 at 19:33 • As an aside, it would be good to take a look at how Physics.SE handles homework questions, and how EE.SE handles questions by people who are over their heads. While I don't recommend/condone going this far, the EE.SE site maintains a professional-level of questions (for the most part) while still keeping a strong user-base. – apnorton Oct 7 '14 at 4:38 • You seem to forget that MSE and MO are different websites applying different quality standards. On MO the quality is maintained by asking questions to be of research level; this prevents them from being flooded by dozens of almost identical questions with almost identical answers by people looking for homework solutions. Questions are "hard" enough that people able to answer them have already a good idea of the context behind them. Here we don't discriminate on the level of questions, so there needs to be another kind of filter. – Najib Idrissi Oct 7 '14 at 6:13 • I don't think anyone is claiming that these questions are rude or disrespectful, merely that they bring down the quality of the website. – Najib Idrissi Oct 7 '14 at 7:11 • @NajibIdrissi: I'm sorry, I claim this, and similar questions, are rude; in fact, extremely so.This is not an "answer this for me, robot, site;" it is an online forum where you can ask for help from or pose questions to another human being. Where and the way I grew up, in and outside of math, asking a friendly question involves minimal humanizing context - for math, "here is where I am stuck", for directions in a city, "I am lost." This site is no text book, and self-appointed crusaders condoning and encouraging rude behavior seem a recent nuisance I wish would go away. – gnometorule Oct 7 '14 at 16:19 • @user180040 A full week later... the OP who asked the question you wished to discuss the closure of, is fully active on the site and they still did not post a single word, either as a comment or to modify their question. 
In view of this observation, do you still maintain that this specific closure "illustrates something that is wrong with the approach being taken on MSE to closing questions"? – Did Oct 13 '14 at 16:41 Full disclosure: I voted to close the question referenced in the OP. 1) In light of current practices on MSE, does the prevalence of this kind of closure enhance or reduce the usefulness of MSE as a resource? Enhance. Hovering over the "upvote" arrow on a question says "This question shows research effort; it is clear and useful." Hovering over the "downvote" arrow says "this question does not show any research effort; it is unclear or not useful." PSQs (problem-statement-questions) do not, by nature, show research effort. They usually happen to be unclear and not useful to future users, but that's another story. I believe that this closure enhances Math.SE's usefulness because it increases the signal-to-noise ratio. If someone really wants their question answered after it's been closed, they can edit the question into a good question, not a lazy question. We close questions as "lacking context" not because we can't understand the question, but because the OP has lazily asked the question. A good "how do I do this type of calculation/proof" question shows effort. 2) Should there be an expectation that, where a person votes to close a clearly formulated math question as being "homework-like" or as lacking information about the OP's "thoughts," the voter should at minimum have entirely thought through what an answer to the question would be? (Trivial calculations are not included.) I don't think this is necessary, or even something reasonable to implement. No one knows why people vote the way they do, and attempts to standardize people's voting processes don't end well. I don't need to solve a problem to determine it lacks context, I just need to look at it and realize: "Hey. This question doesn't tell me anything about what's been done on this problem." 
To clarify, we don't close "homework-like" questions. We close questions that lack context; for examples of what I mean by context: Where did you encounter this problem? What are related problems? What attempts have you (or others) made to solve this problem? Basically, treat it like a research paper: tell me everything that's been done to solve this problem by anyone in the past, before I go and duplicate a bunch of work. • It is also important, in my mind, to consider the volume of questions that this site currently receives. We do not have the problem that we have too many well-written questions to answer. – Carl Mummert Oct 6 '14 at 0:24 • How would any of the "context" you mention have helped answer the question better in this instance? – user180040 Oct 6 '14 at 0:24 • @user180040 In the comments to the referenced question, you say "This is a difficult problem that a person wouldn't reasonably be expected to have any ideas about, particularly if they are starting out in group theory." I am just starting out in group theory, and could easily see this problem and think "Oh! I'll try this!" ...only to waste a bunch of time because it's over my head. If I know the approximate level of a problem before approaching, it helps filter which users attempt to answer. Also, if someone researches a question, they may find some source (e.g. the book you listed)... – apnorton Oct 6 '14 at 0:28 • (cont)... that answers their question. If someone can answer their own question, all the better! Showing attempts can also help avoid the XY problem--someone may ask about "how do I prove this theorem?" when their real confusion is on "why does $(n-1)\mid (n-1)!\implies\text{ n is prime}$? (or something similar) – apnorton Oct 6 '14 at 0:28 • Yes, someone may find a book, but if someone hasn't found a book, and they don't describe their fruitless attempts to find one, that is not a deficiency in the question. 
Also, including pointless attempts at solving a problem is not the way all people relate when talking about math; many people would think that was counterproductive. – user180040 Oct 6 '14 at 0:32 • @user180040: even if the OP cannot approach the question, they can at least say where they encountered it and what they have already thought about. These things do often help answer the question: they tell the rough level of sophistication that the OP is working at, and the methods that are on the mind of the OP. You wrote that someone would not have an idea how to start the question if they are "starting out in group theory" - but that leads immediately to the problem: where did they find the question in the first place if they are just starting out in group theory? – Carl Mummert Oct 6 '14 at 0:32 • Saying what textbook they found it in would not reasonably be expected to assist in a solution in this instance. In any case, why not ask that in the comments? Then if you don't hear back, you don't have to answer the question. But why close it? Maybe someone else doesn't need that information to answer the question. I didn't need it to provide the reference with the solution. – user180040 Oct 6 '14 at 0:38 • @CarlMummert I think that if a person who intends to answer the question needs the information, they can ask for it in the comments. There are cases where the level of sophistication of the answer will change, others where it won't. Whether an answerer wishes to invest time in an answer without that information is best left up to him. And often, the best questions have answers at different levels of sophistication. – user180040 Oct 6 '14 at 0:57 • @user180040: I agree that would be a reasonable approach in an ideal world. But we are currently overrun with ill-motivated problem-statement-only questions. That is not the sort of site I want this to become, and the main tool that I have to influence the site are upvotes/downvotes and close votes. 
On the other hand, someone with genuinely interesting, advanced questions can always ask on MathOverflow, which is more amenable to problem-statement-only questions. – Carl Mummert Oct 6 '14 at 1:03 • @Carl Mummert I am concerned about the effect this has on people coming to the site for the first time. I am afraid that the reaction they get comes across as unfriendly and unreasonable. If I had asked the question I linked to, I would have been quite upset at the reaction. I think this is a serious problem in its own right, and a different balance needs to be struck to handle the kinds of questions you said are a problem. I don't think Math Overflow would have been suitable in this case, for a number of reasons. – user180040 Oct 6 '14 at 1:10 • I think it is a bit lazy to claim that the OP asked the question lazily. A person might have any number of reasons for only stating the problem. I think that a fairly common scenario might be this. A person who is minimally familiar with the site comes and asks a decent math question, which as far as he can tell is like the other ones asked here. He then receives a bizarre message saying that his question is "off-topic" for MSE. After concluding that the people at MSE are somewhat off their rocker, he walks away from the question and perhaps the site. Maybe he would have been a contributor. – user180040 Oct 7 '14 at 3:12 • @user180040 You seem to forget the purpose of putting questions "on-hold" first instead of closing them as off topic. If the OP returned, he would see a message saying "your question is lacking context and details. Please edit your question to provide this." That's perfectly reasonable, and is no cause to abandon a site. If he was confused, he could have reached out and asked. If someone doesn't show any effort to understand, then what is the likelihood of being a good contributor? Also, you think too highly of people. I anticipate this will change after a few years. 
;) – apnorton Oct 7 '14 at 3:19 • You say that that's reasonable, but I am saying that it is a reasonable reaction for people to find it crazy. Why would I "reach out and ask" after being answered with that? I personally do find all of this very strange, yet I've provided several answers on the site that have been accepted. Perhaps I'm not a good contributor. Perhaps that's so of the author of the question, who has answered several questions on the site. I would appreciate it if you could acknowledge that some people can be put off by this, rather than concluding facilely that they must not be worth having anyway. – user180040 Oct 7 '14 at 3:30 • @user180040 I just looked back over what I wrote, and I believe what I've said hasn't accurately reflected my thoughts. I willingly acknowledge that some people can be put off by this behavior. My natural response if someone closed my question is to say "why?" but I realize this may not be everyone's. My response to this concern is that I believe the risk of repulsing a prospective contributor is worth it, given the size of the site. With so many questions asked per day, we must close early and often to stem the tide of "junk" questions. Sure, there will be mistakes. Sure, some (cont...) – apnorton Oct 7 '14 at 3:38 • ...questions will be unjustly closed. But the site's state renders it advantageous to err on closing too many, rather than too few, questions. The average asker of a closed question won't be a major contributor (anecdotal-statistically speaking ;)), so I feel OK with perhaps scaring off a couple of possible contributors to gain a cleaner site. – apnorton Oct 7 '14 at 3:38 I'm as annoyed as anyone about the preponderance of boring homework questions on MSE, I'm enthusiastic about downvoting and closing "solve this integral" or "compute this stabilizer", but I think this closure was inappropriate (and have voted to re-open). 
I claim that the question does not demand justification because it is mathematically interesting even without any further exploration. When I choose whether or not to answer a question, there are basically two angles it might appeal to me on: • as a mathematician, I look for problems that I would enjoy thinking about and exploring further, and questions whose answer I would like to discover, • as an (amateur) teacher, I look for opportunities for exploring how people make mistakes, form misunderstandings, which concepts they find difficult, and what kind of exposition can make those concepts clear to them. Only really in the second capacity do I care if the author has shown effort, because only then do I need to develop any insight into their thought processes. The linked question strikes me as something I would enjoy thinking about even if I could never tell the original asker what I came up with, and as such their particular attempt at the question is of no more interest to me than anyone else's. Moreover, the question seems to me a natural enough problem (not the most natural, perhaps, but I would not be so surprised to see it as a theorem in a textbook) that it represents a positive contribution to the general library of quality mathematical results presented in the MSE format, and could well be useful to other visitors in the future. • While the question does not need justification, it needs context. This is not a question you come up with yourself (for several reasons), so at least the source should be indicated to give potential answerers an idea of what tools would be appropriate for it. – Tobias Kildetoft Oct 7 '14 at 10:54 1) I believe that MSE becomes less useful as a resource as soon as there becomes a significant possibility that an intelligent person who comes and asks a well-formulated question that he has thought about is told to clean up his question, particularly in the way it's being done. 
Normal people coming here for the first time can perceive this response as hostile, and I don't blame them. I don't think that the procedure for having questions re-opened is really a complete solution for this, because once this has happened once, the damage is done; and in any case a person shouldn't have to run an obstacle course to get help with their question. They're much more likely just to leave. Although this is subjective, I think well-formulated questions above a certain level of difficulty should never be closed for being too homework-like. I would propose the following categorization. Type A Questions Questions which, if asked as a homework question, are likely to present difficulties primarily to students who have not mastered the background knowledge appropriate to people studying at that level. A student with appropriate background knowledge would not ordinarily have difficulty with the problem, or at least, most people with that knowledge would start the problem in the same correct way. Type B Questions The difficulty of these problems is inherent in the problems themselves. A student with generally appropriate background knowledge could reasonably not know what facts or methods are applicable to solve the problem. Such a student might make some progress with the question, but be unable to say in advance whether those initial steps have the potential to lead to a correct solution. The "background knowledge" I'm talking about can generally be inferred from the question. This means the facts and methods usually taught in the kind of course in which the OP's question might appear as homework. I understand that there are a spectrum of problems ranging from Type A ones to Type B ones, and that this evaluation is subjective, but I think it's important to make some effort to distinguish between them. 
I feel emphatically that people asking Type B questions should not be pestered to "share their thoughts"; I really feel that this is being rude to them, as mathematically mature people do not normally communicate this way, at least not compulsorily. An unknown number of people are having a negative first experience with MSE and not coming back, and despite the fact that I haven't been here for long, I am convinced from what I've observed that that number is high. I don't want to debate whether clear "Type A" questions should be closed. However, I think it might be more appropriate and friendlier to give these people hints than to close their questions. Indeed, hints may be more appropriate even for Type B questions in some cases. 2) Yes, this would be a significant safeguard against closures of Type B questions, including extreme Type B questions like the one given as an example. EDIT: I would like to quote from the FAQ on homework. How to ask a homework question? First note that this advice is presented in the FAQ as assisting the asker in "getting better answers," not avoiding having their question closed. The advice is not described as compulsory, and a person reading the FAQ couldn't reasonably anticipate the kind of reaction that seems to happen frequently. (Not to mention that many questions that have been closed were clearly not homework.) Also, this advice is of limited applicability for Type B questions in which a person is unsure of the value of what they've achieved so far, since the proof is still incomplete for them. Who would think that it would be beneficial for an answerer to see their meandering thoughts on a difficult problem? EDIT: I'm adding the following to respond to some objections from Did and Najib. Let's say I want to know why the group $(\mathbf{Z}/p\mathbf{Z})^{*}$ is cyclic when $p$ is prime. Here are two ways I could ask the question. 1. Can anybody tell me how you prove that $(\mathbf{Z}/p\mathbf{Z})^{*}$ is cyclic when $p$ is prime? 
Thanks. 2. Can anybody tell me how you prove that $(\mathbf{Z}/p\mathbf{Z})^{*}$ is cyclic when $p$ is prime? Here is my attempt. The element $1$ generates the group because every element is of the form $n \cdot 1$. Thanks. Yes, Question 1 reveals a degree of ignorance. Question 2 suggests a good deal more, although it could sometimes be that a usually able person is simply having a bad day. (For example, I missed that $(1-x)(1+x) = 1 - x^2$ in a comment recently.) As idealistic as we'd all like to be, can anybody tell me with a straight face that we will all think as highly, in terms of mathematical ability, of a person who asks Question 2 as we might of a person who asks Question 1? I think it is human nature that many people will feel ashamed after receiving a reply to #2, particularly if it was the case that they were simply having a bad day. In any event, a reply to #1 will quickly dispel their misunderstanding, in addition to providing the answer to their problem, without any likelihood of embarrassment. Some people might say that it is easier to teach a person who is frank about their level of ability and understanding. That is generally the case. However, imagine that that person is using their real name, that they are sensitive by nature, or that they are afraid of the kind of situation I mentioned. Even if you do not want to answer the person in these circumstances, how can you justify preventing others who want to from answering them? I think in these circumstances, it is not for a few people to decide that the person must conform to standards that they may have legitimate reasons for not wanting to meet, or else be unworthy of receiving help. • No, I really think 2 is better. At least the asker tried something. Okay, the attempt doesn't work, but at least there's some thought behind it, and it at least proves the asker knows what "cyclic" means. And anyway... Why should that matter? We all ask "dumb" (and I put that in quotes) sometimes. 
It shouldn't prevent you from asking questions, and it doesn't prevent people from answering them. I think I can speak on behalf of most people here if I say that I prefer questions that are "basic" but show some thought, however misguided, rather than PSQ where it feels like we're assigned homework. – Najib Idrissi Oct 6 '14 at 18:02 • I don't disagree with you that those questions are often easier to handle. However, I think it is quite paternalistic to make it imperative rather than letting a person decide for themselves. Just because some people aren't bothered about asking a question like #2, doesn't mean nobody will be. In any case, what is the justification for preventing other people from answering if they want to? Why can't the penalty just be receiving fewer replies, as a natural consequence of the way the person interacts in the comments, etc.? – user180040 Oct 6 '14 at 18:08 • 2. is good, as soon as I read it, I start to see a pair of approaches, based on the mistake, that I could develop to dispel the confusion the mistake reveals and to solve the question in the same movement. Then I will probably ponder them a few minutes to gauge which of them is the most effective from what the OP told me. And then I will answer--and perhaps I will have missed the target but perhaps the OP will say so and I will be able to aim better by modifying my first try. 1. is opaque and makes me feel I should run away. – Did Oct 6 '14 at 19:32 • The justification for preventing other people from answering (i.e., closing the question) is to keep the site at Stack Exchange level, as opposed to Yahoo/Quora/etc. "If you don't like it, don't answer it" is the conventional wisdom of internet forums, and the fruit of that wisdom is a conventional internet forum. Stack Exchange strives to be better than that. – user147263 Oct 6 '14 at 22:40 • @CareBear One of the results of this approach is that MSE presents an unwelcoming and unfriendly face to many innocent first-time users. 
– user180040 Oct 7 '14 at 3:44 • @user180040: It is unwelcoming only to those who just "crash the party". Those who spend time observing how the regulars behave will quickly enough learn what is ok, and what is not. This is normal courtesy. When you join in a new group of people (say, at a bar or at a work place), you play it safe first, and don't start picking fights or such right away. It's the same here: the advice is to observe first for a while. – Jyrki Lahtonen Oct 7 '14 at 11:15 • @JyrkiLahtonen No, it is unwelcoming to many people who post their first question here, who do not do anything to deserve the response. – user180040 Oct 7 '14 at 17:20 • @user180040: We disagree about some things, but agree about some others. I do recognize the difference between your types A and B. I posted a wall of text as an answer, but deleted it because it didn't add anything to what I have said before. A quick summary: 1) If it is part of HW dump, close it on sight. 2) Otherwise I will shamelessly apply different standards to questions at different levels or from different askers. 2A) I only vote to put on hold for this reason (no effort shown) those questions that I can solve myself right away. – Jyrki Lahtonen Oct 7 '14 at 18:32 • (cont'd) 2B) If it is from a new poster ("first offense", if you like), and I cast the first close vote, then I strive to leave a suggestion on how to improve the question (see my comment to the question you linked to). BUT, my approach is not MSE main stream. Also, the flood of low quality questions is a serious problem for our site. Thus I also sympathize with those members who have lately become more aggressive. Higher level question (the linked one is borderline) are not much of a problem, because the volume of traffic is not there. But some members want to apply a uniform policy. – Jyrki Lahtonen Oct 7 '14 at 18:37 • @JyrkiLahtonen Thanks for sharing your point of view. – user180040 Oct 7 '14 at 19:54
https://gmatclub.com/forum/which-of-the-following-describes-all-values-of-n-for-which-n-156751.html
# Which of the following describes all values of n for which n² − 1 ≥ 0?

Intern (Joined: 03 Dec 2012) — 26 Jul 2013, 14:58

Which of the following describes all values of n for which $$n^2-1\geq{0}$$?

(A) $$n\geq{1}$$
(B) $$n\leq{1}$$
(C) $$0\leq{n}\leq{1}$$
(D) $$n\leq{-1}$$ or $$n\geq{1}$$
(E) $$-1\leq{n}\leq{1}$$

Disclaimer: I have used the Search Box Before Posting. I used the first sentence of the question, or a string of words exactly as they show up in the question below, for my search.
I did not receive an exact match for my question.

Source: Veritas Prep, Book 04; Chapter: Homework; Topic: Algebra; Question: 77; Page: 210; Edition: Third

My work:

$$n^2-1\geq{0}$$
$$(n+1)\cdot(n-1)\geq{0}$$

The above inequality can be broken down into the following two inequalities:

$$(n+1)\geq{0}$$, so $$n\geq{-1}$$
$$(n-1)\geq{0}$$, so $$n\geq{1}$$

The Official Answer is D. Why am I not getting $$n\leq{-1}$$? What am I doing wrong above in my calculation?

Math Expert (Bunuel) — 26 Jul 2013, 15:09

You are missing the case when both factors are negative:

$$(n+1)\leq{0}$$ --> $$n\leq{-1}$$;
$$(n-1)\leq{0}$$ --> $$n\leq{1}$$.

Common range: $$n\leq{-1}$$.

This can be solved in another way: $$n^2-1\geq{0}$$ --> $$n^2\geq{1}$$ --> $$n\leq{-1}$$ or $$n\geq{1}$$.

Or: $$n^2-1\geq{0}$$ --> $$n^2\geq{1}$$ --> $$|n|\geq{1}$$ --> $$n\leq{-1}$$ or $$n\geq{1}$$.
Hope it helps.

Intern (Joined: 30 Oct 2010) — 29 Oct 2015, 01:55

I tried to solve it using this method, but I am not able to figure out where I went wrong. Any help would be much appreciated.

n² − 1 ≥ 0
(n+1)(n−1) ≥ 0
n+1 ≥ 0, therefore n ≥ −1 --> (1)
n−1 ≥ 0, therefore n ≥ 1 --> (2)

Combining (1) & (2), n ≥ 1 is my solution, but it is wrong as per the official answer.

EMPOWERgmat Instructor (Rich) — 29 Oct 2015, 17:53

Hi iikarthik,

From a logic standpoint, since we're dealing with a squared term, there MUST be some values of N that 'fit' this inequality and are NEGATIVE. Your solution doesn't account for any negative answers, so something must be 'off' about it.

You would probably find it easiest to avoid a 'math' approach altogether and TEST VALUES. Since $$n^{2}$$ − 1 ≥ 0...

IF N = 2: 4 − 1 = 3, which IS ≥ 0. So N COULD be 2.
IF N = −2: 4 − 1 = 3, which IS ≥ 0. So N COULD also be −2.

There's only one answer that accounts for BOTH of those possibilities...

GMAT assassins aren't born, they're made,
Rich
Target Test Prep Representative (Jeffery Miller) — 31 May 2018, 15:16

Simplifying, we have:

n² ≥ 1
|n| ≥ 1

n ≥ 1
or
−n ≥ 1, i.e. n ≤ −1

Senior Manager — 31 May 2018, 18:47

(D) $$n\leq{-1}$$ or $$n\geq{1}$$

The fastest way for quadratic inequalities: first find the points at which the function equals 0, then place them on the number line (for a parabola opening up the sign pattern is + − +; for a parabola opening down it is − + −). See the attachment below.

[Attachment: 1.png — sign chart for the parabola]

CEO (Brent, GMAT Prep Now) — 19 Sep 2018, 06:38

One approach is to test values and eliminate answer choices.

For example, one value of n that satisfies the inequality n² − 1 ≥ 0 is n = 2. Notice that 2² − 1 = 4 − 1 = 3, and 3 ≥ 0.

Now check the answer choices...
Answer choice B says that n CANNOT equal 2 (since it says n ≤ 1). As such, we can ELIMINATE B.
Likewise, C and E also say that n CANNOT equal 2, so ELIMINATE C and E.

We're left with A and D.

Let's find another value of n that satisfies the inequality n² − 1 ≥ 0. Notice that (−2)² − 1 = 4 − 1 = 3, and 3 ≥ 0, so n = −2 also works.

Now check the remaining answer choices...
Answer choice A says that n CANNOT equal −2 (since it says n ≥ 1). As such, we can ELIMINATE A.

By the process of elimination, the correct answer is D.

Cheers,
Brent
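The thread's conclusion can also be double-checked by brute force (an illustrative Python snippet, not part of any post in the thread): evaluate the inequality over a grid of values and compare the solution set against choice (D).

```python
# Brute-force check of n^2 - 1 >= 0 over a small grid of values.
# Confirms that the solutions on the grid are exactly n <= -1 or n >= 1 (choice D).

def satisfies(n):
    return n**2 - 1 >= 0

grid = [i * 0.5 for i in range(-6, 7)]          # -3.0, -2.5, ..., 3.0
solutions = [n for n in grid if satisfies(n)]
choice_d  = [n for n in grid if n <= -1 or n >= 1]
print(solutions == choice_d)                     # the two sets agree
```

Note that the boundary points n = ±1 are included, since the inequality is non-strict.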
https://deepai.org/publication/network-moments-extensions-and-sparse-smooth-attacks
# Network Moments: Extensions and Sparse-Smooth Attacks

The impressive performance of deep neural networks (DNNs) has immensely strengthened the line of research that aims at theoretically analyzing their effectiveness. This has incited research on the reaction of DNNs to noisy input, namely developing adversarial input attacks and strategies that lead to DNNs robust to these attacks. To that end, in this paper, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input. In particular, we generalize the second-moment expression of Bibi et al. to arbitrary input Gaussian distributions, dropping the zero-mean assumption. We show that the new variance expression can be efficiently approximated, leading to much tighter variance estimates as compared to the preliminary results of Bibi et al. Moreover, we experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, where we investigate the effect of the linearization sensitivity on the accuracy of the moment estimates. Lastly, we show that the derived expressions can be used to construct sparse and smooth Gaussian adversarial attacks (targeted and non-targeted) that tend to lead to perceptually feasible input attacks.
## 1 Introduction

Deep neural networks (DNNs) have revolutionized not only the computer vision and machine learning communities but several other fields throughout science and engineering such as natural language processing, bioinformatics and medicine [lecun2015yoshua]. While major advances in the areas of object classification [krizhevsky2012imagenet] and speech recognition [hinton2012deep], to name a few, have been attributed to DNNs, a rigorous theoretical understanding of their effectiveness remains elusive. For instance, while DNNs have shown impressive performance on visual recognition tasks, they still exhibit uncouth behaviour when they are subject to carefully tailored inputs [szegedy2013intriguing]. Many prior works show that it is rather easy, through simple routines, to craft imperceptible input perturbations, referred to as adversarial attacks. Such attacks can result in a drastic negative effect on the classification performance of many popular deep models [goodfellow2014explaining, moosavi2016deepfool, szegedy2013intriguing]. Even more surprisingly, one can design such adversarial perturbations to be agnostic to both the input image and the network architecture [moosavi2016universal], which are referred to as universal perturbations. Unfortunately, less progress has been made towards systematically addressing and understanding this challenge. One of the early and naive approaches towards addressing this nuisance is simply through augmenting the training dataset with data corrupted with adversaries.
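A minimal sketch of this augmentation idea follows (illustrative Python only: Gaussian noise stands in for the crafted adversarial perturbations, and the toy dataset is made up). It also makes the scaling problem visible: every extra noisy copy multiplies the dataset size.

```python
import random

def augment_with_noise(dataset, copies=2, sigma=0.1, seed=0):
    """Naive robustness-by-augmentation: append perturbed copies of each sample.

    `dataset` is a list of (feature_vector, label) pairs; each copy perturbs
    every feature with i.i.d. Gaussian noise of scale `sigma` (a stand-in for
    a crafted adversarial perturbation). Labels are left unchanged.
    """
    rng = random.Random(seed)
    augmented = list(dataset)
    for x, y in dataset:
        for _ in range(copies):
            noisy = [v + rng.gauss(0.0, sigma) for v in x]
            augmented.append((noisy, y))
    return augmented

data = [([0.0, 1.0], 0), ([1.0, 0.0], 1)]
bigger = augment_with_noise(data)
print(len(bigger))  # 2 originals + 2 copies each = 6
```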
While this has been shown to improve network robustness against such adversaries [goodfellow2014explaining, moosavi2016deepfool], unfortunately, this is a vacuous brute-force approach that does not provide insights on the reasons behind such behaviour. Moreover, it does not scale for large dimensional inputs, as the amount of corresponding augmentation necessarily has to be prohibitively large to capture the variation in input space. This effectively deems the augmentation approach infeasible in large dimensions. In this paper, we derive expressions for the first and second moments (the mean and consequently the variance), referred to as Network Moments, of a small piecewise linear (PL) network in the form (Affine, ReLU, Affine) subject to a general Gaussian input. The preliminary version of these Network Moments was derived and analyzed in [bibi2018analytic]. Beyond these preliminary results, we derive in this paper a new variance expression, which makes no assumptions on the mean or the covariance of the input Gaussian. This generalizes the previous result in [bibi2018analytic], which only holds under a zero-mean input assumption. These expressions provide a powerful tool for analyzing deeper PL-DNNs by means of two-stage linearization (as shown in Figure 1), with a plethora of applications. For instance, it has been shown that such expressions can be quite useful in training robust networks very efficiently [alfadly2019train], avoiding any need for noisy data augmentation. In particular, empirical evidence in [alfadly2019train] indicates that simple regularizers based on the mean and variance expressions can boost network robustness by two orders of magnitude, not only against Gaussian attacks but also against other popular adversarial attacks (e.g. PGD, LBFGS [szegedy2013intriguing], FGSM [goodfellow2014explaining] and DF2 [moosavi2016deepfool]).
In this paper, we show that network moments can be used to systematically design Gaussian distributions that can serve as input adversaries. In particular, we conduct several experiments on the MNIST [lecun1998mnist] and Facial Emotion Recognition [goodfellow2015] datasets to demonstrate that these expressions can be used to craft sparse and smooth Gaussian attacks that are structured and perceptually feasible, i.e. they exhibit interesting semantic information aligned with human perception.

Contributions. (i) We provide a fresh perspective on analyzing PL-DNNs by deriving closed form expressions for the output mean and variance of a network in the form (Affine, ReLU, Affine) in the presence of general Gaussian input noise. In particular, we generalize the results of [bibi2018analytic] and derive a closed form expression for the second moment under no assumptions on either the mean or the covariance of the input Gaussian. Through network linearization, extensive experiments show that the new expression for the output variance can be efficiently approximated, leading to much tighter second-moment estimates than that of [bibi2018analytic]. (ii) We formalize a new objective as a function of the derived output mean and variance to construct sparse and smooth Gaussian adversarial attacks. We conduct extensive experiments on both the MNIST and Facial Emotion datasets demonstrating that the constructed adversaries are perceptually feasible.

## 2 Related Work

Despite the impressive performance of deep neural networks on visual recognition tasks, their performance can still be drastically obstructed in the presence of small imperceptible adversarial noise [goodfellow2014explaining, moosavi2016deepfool, szegedy2013intriguing]. Alarmingly, such adversaries are abundant and easy to construct, and in some scenarios constructing an adversary is as simple as performing a single gradient ascent step of some loss function with respect to the input [szegedy2013intriguing].
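To make the "single gradient ascent step" concrete, here is a minimal FGSM-style sketch on a toy logistic model (an illustration only: the model, loss, weights and step size are made up, not the networks attacked in the cited works). The input is nudged along the sign of the loss gradient with respect to the input, which increases the loss.

```python
import math

# Toy logistic 'network': p(y=1|x) = sigmoid(w.x), loss = -log p(correct class).
w = [2.0, -1.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(x, y):
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return -math.log(p if y == 1 else 1.0 - p)

def grad_x(x, y):
    # d loss / d x = (p - y) * w for the logistic loss above
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * wi for wi in w]

x, y = [1.0, 1.0], 1
eps = 0.25
# Single FGSM-style ascent step: move along the sign of the input gradient.
g = grad_x(x, y)
x_adv = [xi + eps * (1 if gi > 0 else -1) for xi, gi in zip(x, g)]
print(loss(x_adv, y) > loss(x, y))  # the single step increased the loss
```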
More surprisingly, there exist deterministic perturbations that are agnostic to both the input and the network architecture and that can cause a severe reduction in network performance [moosavi2016universal]. Moreover, in some extreme cases, it can be sufficient to perturb a single input pixel to cause remarkably high misclassification rates on popular benchmarks [su2017one]. This nuisance is serious and menacing and has to be addressed, particularly since DNNs are now deployed in sensitive real-world applications (e.g. self-driving cars). Thereafter, there have been several directions towards understanding and circumventing this. Early works aimed at analyzing the behaviour of DNNs in the general presence of input noise. For instance, Fawzi et al. [fawzi2016measuring] proposed a generic probabilistic framework for analyzing the robustness of a classifier under different nuisance factors. Another seminal work particularly assessed the robustness of a classifier undergoing geometric transformations [fawzi2015manitest]. On the other hand, there have been several other works on the design and training of networks that are robust against adversarial attacks. One of the earliest approaches to this was the direct augmentation of adversarial samples to the training data, which has been shown to indeed lead to more robust networks [goodfellow2014explaining, moosavi2016deepfool]. Later, the work of [madry2017towards] adopted a similar strategy but by incorporating the adversarial augmentation during the iterative training process. In particular, it was shown that one can achieve significant boosts in network robustness against first-order adversarial attacks, i.e. attacks that depend only on gradient information, by minimizing the worst adversarial loss over all bounded-energy (often measured in an ℓp norm) perturbations around a given input. Since then, there has been a surge in literature studying verification approaches for DNNs.
In this line of work, the aim is to design networks that are accurate and provably robust against all bounded input attacks. In general, verification approaches can be coarsely categorized as exact or relaxed verifiers. The former try to find the exact largest adversarial loss over all possible bounded inputs. Such verifiers often require piecewise linear networks and rely on either Mixed Integer Solvers (MIS) [cheng2017maximum, lomuscio2017approach] or Satisfiability Modulo Theories (SMT) solvers [scheibler2015towards, katz2017reluplex]. These verifiers are too expensive for DNNs due to their NP-complete nature. Relaxed verifiers, on the other hand, scale better, since they only find an upper bound to the worst adversarial loss [zhang2018efficient, wong2017provable]. There have been several new directions that aim at addressing the verification problem by constructing networks with smoothed decision boundaries [lecuyer2019certified, cohen_randomized_1]. In this paper, we are not concerned with such techniques but only focus on analyzing the behaviour of networks in the presence of input noise. We focus our analysis on PL-DNNs with ReLU activations. Unlike previous work, we study how the probabilistic moments of the output of a PL-DNN with a Gaussian input can be computed analytically. A similar work to ours is [gast2018lightweight], where the probabilistic output mean and variance of a deep network are estimated by propagating the estimates of the moments per layer, under the assumption that the joint distribution after each affine layer is still Gaussian (through the central limit theorem). On the contrary, we derive the exact first and second moments of a simple two-layer (Affine, ReLU, Affine) network. We extrapolate these expressions to deeper PL-DNNs by employing a simple two-stage linearization step that locally approximates them with a (Affine, ReLU, Affine) network.
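This reduction can be sketched numerically in a few lines (illustrative Python: the toy network, its weights, and the use of finite differences in place of exact Jacobians are all assumptions for the example). A deeper PL network is collapsed, around a linearization point, into the (Affine, ReLU, Affine) surrogate T(x) = B max(Ax + c1, 0) + c2, which matches the original network exactly inside the local linear region.

```python
import math

# Toy deeper PL network f(x) = W3 @ relu(W2 @ relu(W1 @ x)); sizes/weights are made up.
W1 = [[1.0, -0.5], [0.5, 1.0]]
W2 = [[1.0, 1.0], [-1.0, 2.0]]
W3 = [[1.0, -1.0]]

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def relu(v):
    return [max(vi, 0.0) for vi in v]

def f(x):   # the full network
    return matvec(W3, relu(matvec(W2, relu(matvec(W1, x)))))

def g1(x):  # everything before the chosen (last) ReLU layer
    return matvec(W2, relu(matvec(W1, x)))

# Stage 1: linearize g1 around x0 (central finite differences) -> A, c1.
x0, h = [2.0, 1.0], 1e-5
A = [[0.0, 0.0] for _ in range(2)]
for j in range(2):
    xp = list(x0); xp[j] += h
    xm = list(x0); xm[j] -= h
    gp, gm = g1(xp), g1(xm)
    for i in range(2):
        A[i][j] = (gp[i] - gm[i]) / (2 * h)
c1 = [gi - ai for gi, ai in zip(g1(x0), matvec(A, x0))]

# Stage 2: the head after the chosen ReLU is already affine -> B = W3, c2 = 0.
B, c2 = W3, [0.0]

def T(x):   # the (Affine, ReLU, Affine) surrogate
    hidden = relu([a + b for a, b in zip(matvec(A, x), c1)])
    return [ti + c for ti, c in zip(matvec(B, hidden), c2)]

print(f(x0), T(x0))  # identical up to finite-difference rounding error
```

Since both stages of the toy network are piecewise linear, the surrogate agrees with the network not only at x0 but on a whole neighbourhood sharing the same activation pattern.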
Since these expressions are a function of the noise parameters, they are particularly useful in analyzing and inferring the behaviour of the original PL-DNN without having to probe the network with inputs sampled from the noise distribution, as regularly done in previous work [goodfellow2014explaining, moosavi2016deepfool].

## 3 Network Moments

All proofs are deferred to the Appendix.

We start by analyzing a particularly shaped network in the form (Affine, ReLU, Affine) in the presence of Gaussian input noise. The functional form of the network of interest is $g(\mathbf{x}) = \mathbf{B}\max(\mathbf{A}\mathbf{x} + \mathbf{c}_1, \mathbf{0}) + \mathbf{c}_2$, where $\max(\cdot, \cdot)$ is an element-wise operator. The affine mappings can be of any size, and we assume throughout the paper that $\mathbf{A} \in \mathbb{R}^{p \times n}$ and $\mathbf{B} \in \mathbb{R}^{d \times p}$, where $d$ is the number of output logits. Note that $\mathbf{A}$ and $\mathbf{B}$ can be of any structure (circular or Toeplitz), generalizing both fully connected and convolutional layers. In this section, we analyze $g(\mathbf{x})$ when $\mathbf{x}$ is a Gaussian random vector, i.e. $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}_x, \boldsymbol{\Sigma}_x)$. Seeking the probability density function (PDF) of the output through this nonlinear random variable mapping is possible in special cases but much more difficult in general. Thus, we instead focus on deriving the probabilistic moments of the unknown distribution of $g(\mathbf{x})$. For ease of notation, we denote $g_i$ as the $i^{\text{th}}$ element of $g$, i.e. $g_i(\mathbf{x}) = \mathbf{B}(i,:)\max(\mathbf{A}\mathbf{x} + \mathbf{c}_1, \mathbf{0}) + \mathbf{c}_2(i)$. At first, and for completeness, we present the results of our preliminary work [bibi2018analytic], where the first moment (mean) expression is derived for a general Gaussian input distribution, while the second moment is derived under a zero input mean assumption, i.e. $\boldsymbol{\mu}_x = \mathbf{0}$, with $\mathbf{c}_1 = \mathbf{0}$. We then derive and generalize the expression for the second moment of $g(\mathbf{x})$ for a generic Gaussian distribution under no assumptions in Lemma 4.

### 3.1 Deriving the 1st Output Moment: E[g(x)]

To derive the first moment of $g(\mathbf{x})$, we first consider the scalar function $q = \max(x, 0)$ acting on a single Gaussian random variable $x \sim \mathcal{N}(\mu_x, \sigma_x^2)$.

###### Remark 1.
The PDF of $q = \max(x, 0)$, where $x \sim \mathcal{N}(\mu_x, \sigma_x^2)$, is:

$$f_q(x) = Q\left(\frac{\mu_x}{\sigma_x}\right)\delta(x) + f_x(x)\,u(x)$$

where $Q(\cdot)$ is the Gaussian Q-function, $\delta(\cdot)$ is the Dirac delta function, $f_x(\cdot)$ is the Gaussian PDF of $x$, and $u(\cdot)$ is the unit step function. It follows directly that $Q(\mu_x/\sigma_x) = 1/2$ when $\mu_x = 0$. Now, we present the first moment of $g(\mathbf{x})$.

###### Theorem 1.

For any function in the form $g(\mathbf{x}) = \mathbf{B}\max(\mathbf{A}\mathbf{x} + \mathbf{c}_1, \mathbf{0}) + \mathbf{c}_2$, where $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}_x, \boldsymbol{\Sigma}_x)$, we have:

$$\mathbb{E}[g_i(\mathbf{x})] = \sum_{v=1}^{p} \mathbf{B}(i,v)\left(\frac{1}{2}\bar{\mu}_v - \frac{1}{2}\bar{\mu}_v\,\mathrm{erf}\!\left(\frac{-\bar{\mu}_v}{\sqrt{2}\,\bar{\sigma}_v}\right) + \frac{\bar{\sigma}_v}{\sqrt{2\pi}}\exp\!\left(\frac{-\bar{\mu}_v^2}{2\bar{\sigma}_v^2}\right)\right) + \mathbf{c}_2(i)$$

where $\bar{\boldsymbol{\mu}} = \mathbf{A}\boldsymbol{\mu}_x + \mathbf{c}_1$, $\bar{\sigma}_v^2 = (\mathbf{A}\boldsymbol{\Sigma}_x\mathbf{A}^\top)_{vv}$, and $\mathrm{erf}(\cdot)$ is the error function.

### 3.2 Deriving the 2nd Output Moment: E[g²(x)]

Here, we need three prerequisite lemmas: one that characterizes the PDF of a squared ReLU (Lemma 1), another that extends Price's theorem [price1958useful] (Lemma 2), and one that derives the first moment of the product of two ReLU functions (Lemma 3).

###### Lemma 1.

The PDF of $q^2$, where $q = \max(x, 0)$ and $x \sim \mathcal{N}(0, \sigma_x^2)$, is:

$$f_{q^2}(x) = \frac{1}{2}\delta(x) + \frac{1}{2\sqrt{x}}\,f_x(\sqrt{x})\,u(\sqrt{x})$$

and its first moment is $\mathbb{E}[q^2] = \sigma_x^2/2$.

###### Lemma 2.

Let $g: \mathbb{R}^p \to \mathbb{R}$ for any even $p$, where $\bar{\mathbf{x}} \sim \mathcal{N}(\mathbf{0}, \bar{\boldsymbol{\Sigma}})$. Under mild assumptions on the nonlinear map $g$, we have:

$$\frac{\partial^{p/2}\,\mathbb{E}[g(\bar{\mathbf{x}})]}{\prod_{\text{odd } i}\partial\sigma_{i,i+1}} = \mathbb{E}\left[\frac{\partial^{p} g(\bar{\mathbf{x}})}{\partial x_1 \cdots \partial x_p}\right]$$

Lemma 2 relates the mean of the gradients/subgradients of any nonlinear function to the gradients/subgradients of the mean of that function. This lemma has Price's theorem [price1958useful] as a special case, where the function has a separable product structure over pairs of variables. It is worthwhile to note that there is an extension to Price's theorem [mcmahon1964extension] where some of these structural assumptions are dropped; however, it only holds for the bivariate case, i.e. $p = 2$, and thus is also a special case of Lemma 2.

###### Lemma 3.

For any bivariate Gaussian random variable $(x_1, x_2)^\top \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$, the following holds for $T(x_1, x_2) = \max(x_1, 0)\max(x_2, 0)$:

$$\mathbb{E}[T(x_1, x_2)] = \frac{1}{2\pi}\left(\sigma_{12}\sin^{-1}\!\left(\frac{\sigma_{12}}{\sigma_1\sigma_2}\right) + \sigma_1\sigma_2\sqrt{1 - \frac{\sigma_{12}^2}{\sigma_1^2\sigma_2^2}}\right) + \frac{\sigma_{12}}{4}$$

where $\sigma_1^2 = \boldsymbol{\Sigma}(1,1)$, $\sigma_2^2 = \boldsymbol{\Sigma}(2,2)$ and $\sigma_{12} = \boldsymbol{\Sigma}(1,2)$.

###### Theorem 2.

For any function in the form $g(\mathbf{x}) = \mathbf{B}\max(\mathbf{A}\mathbf{x}, \mathbf{0}) + \mathbf{c}_2$ (i.e. with $\mathbf{c}_1 = \mathbf{0}$), where $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_x)$, we have:

$$\mathbb{E}[g_i^2(\mathbf{x})] = 2\sum_{v_1=1}^{p}\sum_{v_2=1}^{v_1-1}\mathbf{B}(i,v_1)\mathbf{B}(i,v_2)\left(\frac{\bar{\sigma}_{v_1v_2}}{2\pi}\sin^{-1}\!\left(\frac{\bar{\sigma}_{v_1v_2}}{\bar{\sigma}_{v_1}\bar{\sigma}_{v_2}}\right) + \frac{\bar{\sigma}_{v_1}\bar{\sigma}_{v_2}}{2\pi}\sqrt{1 - \frac{\bar{\sigma}_{v_1v_2}^2}{\bar{\sigma}_{v_1}^2\bar{\sigma}_{v_2}^2}} + \frac{\bar{\sigma}_{v_1v_2}}{4}\right) + \frac{1}{2}\sum_{r=1}^{p}\mathbf{B}(i,r)^2\bar{\sigma}_r^2 + \mathbf{c}_2(i)^2$$

where $\bar{\sigma}_{v_1v_2} = (\mathbf{A}\boldsymbol{\Sigma}_x\mathbf{A}^\top)_{v_1v_2}$. Lastly, the variance of $g_i(\mathbf{x})$ can be directly derived: $\mathrm{var}[g_i(\mathbf{x})] = \mathbb{E}[g_i^2(\mathbf{x})] - \mathbb{E}[g_i(\mathbf{x})]^2$. While the previous expression assumes a zero-mean Gaussian input and bias-free first layer, i.e.
$\boldsymbol{\mu}_x = \mathbf{0}$ and $\mathbf{c}_1 = \mathbf{0}$, we extend these results next to arbitrary Gaussian distributions without assumptions on $\boldsymbol{\mu}_x$ or $\mathbf{c}_1$. The key element here is to extend the result of Lemma 3.

###### Lemma 4.

For any bivariate Gaussian $\mathbf{x} = (x_1, x_2)^\top \sim \mathcal{N}(\boldsymbol{\mu}_x, \boldsymbol{\Sigma})$, where $\boldsymbol{\mu}_x = (\mu_1, \mu_2)^\top$ and $\boldsymbol{\Sigma}$ has marginal variances $\sigma_1^2$, $\sigma_2^2$ and correlation $\rho$, we have that

$$\mathbb{E}[\max(x_1,0)\max(x_2,0)] = \Omega(\mu_1,\mu_2,\sigma_1,\sigma_2,\rho) + \begin{cases} \dfrac{\mu_1\mu_2 + \rho\sigma_1\sigma_2}{\pi}\left(I_{a_1,b_1}(\infty) - I_{a_1,b_1}\!\left(\dfrac{-\mu_2}{\sqrt{2}\sigma_2}\right)\right), & \text{for } |\rho| < \dfrac{1}{\sqrt{2}},\\[2ex] \dfrac{\mu_1\mu_2 + \rho\sigma_1\sigma_2}{\pi}\left[\dfrac{\pi}{4}\mathrm{sign}(\rho) + \dfrac{\pi}{4}\mathrm{erf}\!\left(\dfrac{\mathbf{e}_1^\top\tilde{\boldsymbol{\Sigma}}\boldsymbol{\mu}_x}{\sqrt{2}}\right)\mathrm{erf}\!\left(\dfrac{\mu_2}{\sqrt{2}\sigma_2}\right) - \mathrm{sign}(\rho)\left(I_{a_2,b_2}(\infty) - I_{a_2,b_2}\!\left(\mathrm{sign}(\rho)\dfrac{\mathbf{e}_1^\top\tilde{\boldsymbol{\Sigma}}\boldsymbol{\mu}_x}{\sqrt{2}}\right)\right)\right], & \text{for } |\rho| > \dfrac{1}{\sqrt{2}}, \end{cases} \tag{1}$$

where

$$\Omega = \frac{\sqrt{|\boldsymbol{\Sigma}|}}{2\pi}\exp\!\left(-\frac{1}{2}\boldsymbol{\mu}_x^\top\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_x\right) + \frac{\mu_1\sigma_2}{2\sqrt{2\pi}}\exp\!\left(\frac{-\mu_2^2}{2\sigma_2^2}\right)\left(1 + \mathrm{erf}\!\left(\frac{\mathbf{e}_1^\top\tilde{\boldsymbol{\Sigma}}\boldsymbol{\mu}_x}{\sqrt{2}}\right)\right) + \frac{\mu_2\sigma_1}{2\sqrt{2\pi}}\exp\!\left(\frac{-\mu_1^2}{2\sigma_1^2}\right)\left(1 + \mathrm{erf}\!\left(\frac{\mathbf{e}_2^\top\tilde{\boldsymbol{\Sigma}}\boldsymbol{\mu}_x}{\sqrt{2}}\right)\right) + \frac{\mu_1\mu_2 + \rho\sigma_1\sigma_2}{4}\left(1 + \mathrm{erf}\!\left(\frac{\mu_2}{\sqrt{2}\sigma_2}\right)\right), \tag{2}$$

and where

$$I_{a,b}(x) = \frac{\pi}{4}\mathrm{erf}(x)\,\mathrm{erf}(b) + \frac{\sqrt{\pi}}{2}\exp(-b^2)\sum_{u=0}^{\infty}\left\{\frac{(a/2)^{2u+1}}{\Gamma(u + \frac{3}{2})}P(u+1, x^2)H_{2u}(b) - \mathrm{sign}(x)\frac{(a/2)^{2u+2}}{\Gamma(u+2)}P\!\left(u + \frac{3}{2}, x^2\right)H_{2u+1}(b)\right\}. \tag{3}$$

Note that $\mathbf{e}_1$ and $\mathbf{e}_2$ are the two-dimensional canonical vectors, and $\tilde{\boldsymbol{\Sigma}}$ is a normalized form of the inverse covariance, where $\text{Diag}(\cdot)$ rearranges the elements of a vector into a diagonal matrix and $|\cdot|$ denotes the matrix determinant. The constants are $a_1 = \frac{\rho}{\sqrt{1-\rho^2}}$, $b_1 = \frac{\mu_1}{\sqrt{2}\sigma_1\sqrt{1-\rho^2}}$, $a_2 = \frac{1}{|a_1|}$ and $b_2 = \frac{-b_1}{a_1}$, as derived in the proof below. Lastly, $H_n(\cdot)$ is the Hermite polynomial, $P(\cdot,\cdot)$ is the normalized incomplete Gamma function and $\Gamma(\cdot)$ is the standard Gamma function.

###### Proof.

This is a sketch of the proof.

$$I_0 = \mathbb{E}[\max(x_1,0)\max(x_2,0)] = \int_0^\infty\!\!\int_0^\infty x_1 x_2\, f_{X_1,X_2}(x_1,x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2 = \int_0^\infty x_2 f_{X_2}(x_2)\int_0^\infty x_1 f_{X_1|X_2}(x_1|x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2 \tag{4}$$

where $f_{X_1,X_2}$, $f_{X_1|X_2}$ and $f_{X_2}$ are the joint bivariate, conditional and marginal Gaussian distributions, respectively. By integration by parts, Leibniz's rule, some identities and substitutions, Equation (4) reduces to:

$$I_0 = \Omega(\mu_1,\mu_2,\sigma_1,\sigma_2,\rho) + \frac{\mu_1\mu_2 + \rho\sigma_1\sigma_2}{2\sqrt{\pi}}\int_{\frac{-\mu_2}{\sqrt{2}\sigma_2}}^{\infty}\exp(-z^2)\,\mathrm{erf}\!\left(\frac{\rho z + \frac{\mu_1}{\sqrt{2}\sigma_1}}{\sqrt{1-\rho^2}}\right)\mathrm{d}z, \tag{5}$$

where $\Omega$ is given by Equation (2). As for the remaining integral, we exploit identities (2.1) and (2.2) in [fayed2014evaluation], which state that $\frac{\sqrt{\pi}}{2}\int_0^x \exp(-t^2)\,\mathrm{erf}(at+b)\,\mathrm{d}t$ has the closed form solution given in Equation (3). Thus, one can represent the integral in Equation (5) as $\frac{2}{\sqrt{\pi}}\left(I_{a,b}(\infty) - I_{a,b}(\kappa)\right)$, where $\kappa = \frac{-\mu_2}{\sqrt{2}\sigma_2}$, $a = \frac{\rho}{\sqrt{1-\rho^2}}$ and $b = \frac{\mu_1}{\sqrt{2}\sigma_1\sqrt{1-\rho^2}}$.
Now note that the infinite series corresponding to $H_{2u}(b)$ and $H_{2u+1}(b)$ in Equation (3) converges when $|a| < 1$, or equivalently $|\rho| < \frac{1}{\sqrt{2}}$, which proves the first case in Equation (1). As for the case $|\rho| > \frac{1}{\sqrt{2}}$, by integrating the integral in Equation (5) by parts, we have

$$\frac{\sqrt{\pi}}{2}\int_\kappa^\infty \exp(-t^2)\,\mathrm{erf}(at+b)\,\mathrm{d}t = -\frac{\pi}{4}\mathrm{erf}(a\kappa+b)\,\mathrm{erf}(\kappa) + \mathrm{sign}(a)\left(\frac{\pi}{4} - I_{\frac{1}{|a|},\frac{-b}{a}}(\infty) + I_{\frac{1}{|a|},\frac{-b}{a}}\big((a\kappa+b)\,\mathrm{sign}(a)\big)\right). \tag{6}$$

Note that the series from the identity, with $a$ replaced by $\frac{1}{|a|}$, converges when $\frac{1}{|a|} < 1$, or equivalently $|\rho| > \frac{1}{\sqrt{2}}$. Thus, substituting this result back in Equation (5) derives the second case of Equation (1), completing the proof. ∎

Following Theorem 2, a closed form expression for the second moment under generic Gaussian distributions can be derived by substituting the result from Lemma 4 (in lieu of Lemma 3) in the proof of Theorem 2, yielding an expression for the variance of $g_i(\mathbf{x})$. Moreover, we show in the Appendix that Equation (1) recovers Lemma 3 when $\boldsymbol{\mu}_x = \mathbf{0}$.

### 3.3 Extension to Deeper PL-DNNs

To extend the previous results to deeper DNNs that are not in the form (Affine, ReLU, Affine), we first denote the larger DNN as $\mathcal{N}: \mathbb{R}^n \to \mathbb{R}^d$ (e.g. a mapping of the input to the logits of $d$ classes). By choosing the $l^{\text{th}}$ ReLU layer, any $\mathcal{N}$ can be decomposed into $\mathcal{N}(\mathbf{x}) = \left(\mathcal{N}_2 \circ \max(\cdot, \mathbf{0}) \circ \mathcal{N}_1\right)(\mathbf{x})$. In this paper, we employ a simple two-stage linearization based on Taylor series approximation to cast $\mathcal{N}$ into the form (Affine, ReLU, Affine). For example, we can linearize $\mathcal{N}_1$ and $\mathcal{N}_2$ around the points $\mathbf{x}_0$ and $\max(\mathcal{N}_1(\mathbf{x}_0), \mathbf{0})$, respectively, such that $(\mathbf{A}, \mathbf{c}_1)$ and $(\mathbf{B}, \mathbf{c}_2)$ are the resulting Jacobians and offsets of the two stages. The resulting function after linearization is $T(\mathbf{x}) = \mathbf{B}\max(\mathbf{A}\mathbf{x} + \mathbf{c}_1, \mathbf{0}) + \mathbf{c}_2$. Figure 1 shows this two-stage linearization. Details regarding the selection of the layer of linearization and the points of linearization are discussed thoroughly next.

## 4 Experiments

In this section, we discuss a variety of experiments that provide the following insights. (i) Although the derived output variance of the Affine-ReLU-Affine network based on Equation (1) is impractical to evaluate exactly, the infinite sum can be accurately approximated with as few as 20 terms, leading to an efficient computation.
(ii) We conduct several controlled experiments to investigate the choice of the linearization layer $l$ at which two-stage linearization is performed. We also validate the tightness of both the first and second moment expressions for deeper networks under different linearization points, as well as showing that the new derived variance based on Lemma 4 is much tighter than the one based on Lemma 3 for general input Gaussian distributions. (iii) Lastly, extensive experiments on the MNIST and Emotion datasets validate that our derived expressions can be used to construct targeted and non-targeted adversarial Gaussian attacks. In particular, and following the recent successes of sparse pixel attacks [modas019sparsefool], we demonstrate that our expressions can indeed be utilized to design sparse and smooth Gaussian perturbations leading to perceptually feasible input attacks.

### 4.1 On the Efficacy of Approximating Equation (1)

Computing the variance of the Affine-ReLU-Affine network under a general Gaussian input, as per Equation (1) in Lemma 4, requires the evaluation of Equation (3), which is impractical as it involves an infinite series. We show here that the series can be sufficiently well approximated with as few as 20 terms. To demonstrate this, along with the sensitivity of Equation (1) to $\mu_1$, $\mu_2$, $\sigma_1$, $\sigma_2$ and $\rho$, we report the maximum absolute error between Monte Carlo estimates of $\mathbb{E}[\max(x_1,0)\max(x_2,0)]$ and truncated versions of the sum in Equation (1), for an increasing number of retained terms, over a grid of all combinations of the five arguments. In particular, the means and standard deviations are sampled uniformly over dense grids, and the correlation $\rho$ is sampled uniformly over its range, including its boundary values. Figure 2 reports the maximum absolute error of all possible combinations of the aforementioned parameters in log-scale with an increasing number of terms of Equation (3).
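A lightweight version of this validation, restricted to the zero-mean special case where Equation (1) reduces to the closed form of Lemma 3 (no series truncation needed), can be scripted as follows (illustrative Python; the parameter values and sample count are arbitrary choices):

```python
import math
import random

def lemma3(sigma1, sigma2, rho):
    """Closed form of E[max(x1,0)max(x2,0)] for a zero-mean bivariate Gaussian (Lemma 3)."""
    s12 = rho * sigma1 * sigma2
    return (s12 * math.asin(rho)
            + sigma1 * sigma2 * math.sqrt(1.0 - rho**2)) / (2.0 * math.pi) + s12 / 4.0

def mc(sigma1, sigma2, rho, n=120_000, seed=0):
    """Monte Carlo estimate of the same quantity, sampling via a Cholesky-style construction."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x1 = sigma1 * z1
        x2 = sigma2 * (rho * z1 + math.sqrt(1.0 - rho**2) * z2)
        total += max(x1, 0.0) * max(x2, 0.0)
    return total / n

print(abs(lemma3(1.0, 2.0, 0.5) - mc(1.0, 2.0, 0.5)))  # small Monte Carlo error
```

Two deterministic checks are also instructive: at $\rho = 0$ the expression factorizes into $\mathbb{E}[x_1^+]\mathbb{E}[x_2^+] = \sigma_1\sigma_2/(2\pi)$, and at $\rho = 1$ (with unit variances) it reduces to $\mathbb{E}[(x^+)^2] = 1/2$, consistent with Lemma 1.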
We observe from Figure 2 that, with as few as 20 terms, the maximum absolute error between the Monte Carlo estimates and the truncated version of Equation (1) is already negligible. This holds regardless of the choice of $\mu_1$, $\mu_2$, $\sigma_1$ and $\sigma_2$, and in particular when $|\rho|$ is close to $\frac{1}{\sqrt{2}}$, which is the disjunction in Equation (1). Recall that the disjunction occurs at these values of $\rho$, since the infinite series diverges in such cases. On the other hand, the maximum absolute error decreases rapidly so long as $|\rho|$ is away from $\frac{1}{\sqrt{2}}$. Now that Equation (1) can reliably and efficiently be approximated with a small number of terms, its closed form expression can be used to compute the output variance of $g(\mathbf{x})$ for various applications. Throughout all remaining experiments, we use only 5 terms, since the absolute error remains negligible for all choices of $\rho$ except at the two improbable singularities $\rho = \pm\frac{1}{\sqrt{2}}$.

### 4.2 Tightness of Network Moments

Choice of the Two-Stage Linearization Layer $l$. The derived expressions for the first and second moments are for a small network in the form Affine-ReLU-Affine. As detailed in Subsection 3.3, such results can be extended and applied to deeper networks through the proposed two-stage linearization. However, it is not clear how to choose the layer of linearization $l$. This subsection addresses this design choice by conducting an ablation to study the impact of varying $l$. In particular, we show that there is an intrinsic trade-off between memory efficiency and linearization error in the choice of the layer $l$ around which two-stage linearization is performed. To illustrate this, note that performing two-stage linearization requires the memory of storing the two Jacobians $\mathbf{A} \in \mathbb{R}^{p \times n}$ and $\mathbf{B} \in \mathbb{R}^{d \times p}$, a total of $p(n + d)$ elements. When $l$ is chosen to be small (early convolutional layers), the value $p$ is usually very large, as it is the total number of pixels across all feature maps.
Meanwhile, when is large, is usually only the number of nodes in a fully connected layer. However, a large in general leads to a larger linearization error. To demonstrate this, we conduct experiments on the LeNet architecture [lecun1999object] pretrained on the MNIST digit dataset [lecun1998mnist]. Note that LeNet has a total of four layers, two of which are convolutional with max pooling and the other two fully connected. We perform two-stage linearization on LeNet with a varying choice of , where we compare the difference between the prediction scores of LeNet and its two-stage linearized version, with the point of linearization taken to be a noisy version of a random image from the MNIST validation set. Table I demonstrates that a smaller choice of is best, in the sense, for the two-stage linearization across all the various levels of noisy versions of the input. This implies a trade-off between memory efficiency (better memory complexity with larger ) and accuracy (better linearization error with smaller ). Therefore, owing to memory constraints, is chosen to be the fully-connected layer just before the last ReLU activation in all experiments, unless stated otherwise.

Tightness of Moment Expressions on LeNet. It is conceivable that the two-stage linearization might impact the tightness of the derived moment expressions when applied to deeper real PL-DNNs. Here, we empirically study their tightness by comparing them against Monte Carlo estimates over samples on LeNet. Using the MNIST dataset, the input to the network is with 10 output classes (i.e. ). In this case, following Section 3.3, the two-stage linearization is performed such that , and for memory efficiency, where is an image selected from the MNIST testing set. Thus, the input is where we randomly generate a covariance matrix such that with reasonable noise levels when .
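For intuition, moment propagation through a network linearization can be sketched as follows. Note that this is a plain one-stage linearization around a point x0 (mean ≈ f(x0), covariance ≈ JΣJᵀ), not the paper's two-stage scheme, which keeps one ReLU stage exact; the toy weights and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy piecewise-linear network with hypothetical weights
W1 = rng.normal(size=(6, 3)); W2 = rng.normal(size=(4, 6)); W3 = rng.normal(size=(1, 4))

def net(x):
    return W3 @ np.maximum(W2 @ np.maximum(W1 @ x, 0.0), 0.0)

def jacobian(f, x0, eps=1e-6):
    # Finite-difference Jacobian at the linearization point
    y0 = f(x0)
    J = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        e = np.zeros_like(x0); e[j] = eps
        J[:, j] = (f(x0 + e) - y0) / eps
    return J

x0 = rng.normal(size=3)          # linearization point (e.g. a dataset image)
J = jacobian(net, x0)

# First-order moments of the linearized network under x ~ N(x0, Sigma)
Sigma = 0.01 * np.eye(3)
lin_mean = net(x0)               # E[net(x)] ≈ net(x0)
lin_cov = J @ Sigma @ J.T        # Cov[net(x)] ≈ J Σ Jᵀ
print(lin_mean.ravel(), lin_cov.ravel())
```

The memory trade-off discussed above comes from storing such Jacobians: the earlier the linearization layer, the larger the intermediate dimension they must span.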
Since the LeNet architecture has , we report the tightness of the analytic mean from Theorem 1, the variance from Theorem 2, and the new general variance expression based on Lemma 4 for . As for the metric, we report the average absolute relative difference of the analytic mean and variance expressions (Theorems 1 and 2) to their Monte Carlo counterparts. We refer to these as and , respectively. Similarly, we refer to the error of the Monte Carlo estimates to the new variance expression based on Lemma 4 as , where we find that the summation in Equation (3) can be truncated to only terms without sacrificing much accuracy. We average the results over the complete MNIST test set. We report the tightness results across all classes in Table II, where the closer the errors are to the better. For instance, at , the absolute relative difference for the mean expression of Theorem 1 is close to , i.e. . That is to say, the mean expression is tight even though two-stage linearization is being performed on a real network. By contrast, the variance expression of Theorem 2 is less accurate, , and this can be attributed to assumptions that do not hold (zero-mean input Gaussian and ). On the other hand, the new general expression for the output variance based on Lemma 4 is significantly tighter than the one from Theorem 2, as the errors compared to the Monte Carlo estimates are closer to , i.e. . This shows that our new variance expression is far tighter and less sensitive to two-stage linearization despite the truncation of the infinite series to as few as terms. Furthermore, complementing the results in Table II, and instead of reporting the absolute relative difference alone, we visualize the histogram of LeNet output variances for all testing MNIST images under varying noise levels in Table III for better interpretability of the results.

Sensitivity to the Point of Linearization.
In all previous tightness validation experiments of the moment expressions, the point at which two-stage linearization is performed was restricted to be the input image, i.e. . Clearly, this strategy suffers from limited scalability, since analyzing the output moment expressions of deep networks over a large dataset requires performing the expensive two-stage linearization for every image in the dataset. To circumvent this difficulty, we study the sensitivity of the tightness of the expressions under two-stage linearization around only a small set of input images from the dataset. That is to say, we choose a set of representative input images, at which the two-stage linearization parameters and are computed only once and offline for each input image. To evaluate the network moments for an unseen input, we then simply use the two-stage linearization parameters of the closest linearization point to this input. In this experiment, we study the tightness of our expressions under this more relaxed linearization strategy using LeNet on the MNIST testing set. We cluster the images in the testing dataset using k-means on the image intensity space with different values of . We use the cluster centers as the linearization points. Table IV summarizes the tightness of the expressions for and compares them against a weak baseline, where the linearization point is set to be the farthest image in each cluster from the cluster center with . It is clear that the new variance expression based on Lemma 4 remains very close to the Monte Carlo estimate across different numbers of linearization points , even when is as low as , i.e. only of the testing set. On the other hand, the analytic variance derived from Theorem 2 is less accurate but stays within an acceptable range with .
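The linearization-point selection just described can be sketched as follows. This is a toy stand-in: random vectors replace MNIST images, and a plain Lloyd's k-means replaces whatever clustering implementation was actually used:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "images" (flattened intensities); a real run would use MNIST test images
X = rng.random((500, 28 * 28))

def kmeans(X, k, iters=50, seed=0):
    # Plain Lloyd's algorithm on the intensity space
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

centers = kmeans(X, k=4)

# At evaluation time, an unseen input reuses the two-stage linearization
# parameters precomputed offline at its nearest cluster center:
x_new = rng.random(28 * 28)
nearest = np.argmin(((centers - x_new) ** 2).sum(-1))
print(nearest)
```

In the experiment above, the tightness numbers in Table IV are obtained with exactly this kind of nearest-center lookup.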
This indeed reaffirms that, even upon truncating the infinite series in Equation (3) to only terms, the new variance expression is much tighter and more accurate under network linearization than the preliminary result of Theorem 2 in [bibi2018analytic]. As for the analytic mean, it is more sensitive to the point of linearization, but even in the worst case, i.e. and for example, the average error does not exceed . When compared with the baseline experiments, i.e. using the farthest point from the cluster center, the contrast becomes more obvious, where the error is about .

### 4.3 Noise Construction

Targeted Attacks. On the MNIST dataset, we specify a target class and construct noise that can fool LeNet in expectation by solving the following optimization:

$$\operatorname*{arg\,min}_{\mu_x,\,\sigma}\ \Big(\max_{k\neq j}\mathbb{E}M_k(\mu_x,\sigma^2)-\mathbb{E}M_j(\mu_x,\sigma^2)\Big) \tag{7}$$
$$\text{s.t.}\quad 0<\sigma^2\leq 2,\qquad -\beta\,\mathbf{1}_n\leq\mu_x\leq\beta\,\mathbf{1}_n.$$

Note that for any pair for which the objective is negative, the largest expected prediction among all classes occurs at the target class . In this experiment, we set and solve problem (7) with an interior-point solver. Note that the range of pixel values of MNIST images is . Figure 3 shows examples of noisy versions of an image from class that fool LeNet in expectation with multiple target classes (i.e. ). Not every target class is easily targeted with small because of the distance in their prediction scores. We verify that the constructed noise actually fools the network by drawing 10 samples from the learned distribution, passing each noisy input through LeNet, and verifying that at least of the predicted class flips are from to the target class .

Non-Targeted Attacks with -Pixel Support. Inspired by the findings of some recent work [su2017one], we demonstrate that we can construct additive noise that corrupts only of the pixels in an input image but still changes the class prediction. Here, we use LeNet on MNIST and AlexNet on ImageNet.
In this case, we do not specify the target class; rather, we optimize for the prediction score of the correct class to be less than the maximum prediction score. The underlying optimization is formulated as follows:

$$\operatorname*{arg\,min}_{\mu_x^{\alpha},\,\sigma}\ \Big(\mathbb{E}M_i(\mu_x^{\alpha},\sigma^2)-\max_{k\neq i}\mathbb{E}M_k(\mu_x^{\alpha},\sigma^2)\Big) \tag{8}$$
$$\text{s.t.}\quad 0<\sigma^2\leq 2,\qquad -\beta\,\mathbf{1}_{\alpha n}\leq\mu_x^{\alpha}\leq\beta\,\mathbf{1}_{\alpha n}.$$

The optimization variable indicates the set of sparse pixels ( of the total number of pixels) in that will be corrupted, while the rest of the pixels are set to . The locations of the corrupted pixels are randomly chosen and fixed before solving the optimization. Two experiments are conducted on a few images, one on MNIST and the other on ImageNet. Figures 4 and 5 show examples of noisy images constructed by solving Equation (8) with to fool LeNet on MNIST and to fool AlexNet on ImageNet. Since there are far fewer pixels available to flip the prediction of the network, and similar to the single pixel attack in [su2017one], we increase the permissible range of the mean noise by setting for MNIST and
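The margin-style objectives in Equations (7) and (8) can be sketched in miniature. Here a linear classifier stands in for the expected network scores (for a linear model, E[Wx + c] = Wμ + c exactly, so σ drops out and is omitted); the paper's analytic moment expressions would replace this surrogate, and an interior-point solver would replace L-BFGS-B. All sizes and weights are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy stand-in for the expected prediction scores E[M_k(mu, sigma^2)]
n, n_classes, j = 5, 3, 2        # input dim, number of classes, target class
W = rng.normal(size=(n_classes, n))
c = rng.normal(size=n_classes)

def expected_scores(mu):
    return W @ mu + c

def objective(mu):
    # Equation (7)-style margin: a negative value means the target class j
    # attains the largest expected score.
    s = expected_scores(mu)
    return np.delete(s, j).max() - s[j]

beta = 0.5                        # box constraint on the mean perturbation
res = minimize(objective, x0=np.zeros(n),
               bounds=[(-beta, beta)] * n, method="L-BFGS-B")
print(res.x, res.fun)
```

The sparse variant (8) restricts the optimization variable to the chosen pixel support and flips the sign of the margin, so the correct class loses rather than a target class winning.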
https://mathoverflow.net/questions/125323/what-conditions-on-a-filtration-guarantee-that-a-submartingale-has-a-continuou
# What conditions on a filtration guarantee that a (sub)martingale has a continuous modification? There is a theorem as follows: Theorem. Let $\mathcal{F}_t$ be a filtration which is right-continuous and complete. Assume $M_t$ is a submartingale adapted to $\mathcal{F}_t$ such that $t \mapsto \mathbb{E}M_t$ is right-continuous (which is always true of martingales on right-continuous filtrations). Then there is a RCLL (Cadlag) modification of $M_t$. Question. If I change "right-continuous" and "RCLL" to "continuous", is this still true? In other words, if the filtration is continuous and the map $t\mapsto \mathbb{E}M_t$ is continuous, can I get the stronger conclusion that there is a continuous (not just RCLL) modification? If it is true, is there a reference (or obvious proof)? If it is false: Is there a nice counterexample? Are there known conditions on the filtration that would guarantee the continuous modification? I think I have a proof for martingales (it involves algorithmic randomness, so it is not at all standard), but since I cannot find this written anywhere, I am worried I might be mistaken. Also, I know it is true for martingales on the augmented filtration of Brownian motion, but that proof goes through the Martingale Representation Theorem (I believe) and seems like that is overkill (again making me worried I am missing something). Notes: This question started out as a question on math.stackexchange. After a few weeks with no answer, I moved it here. Also, my question looks similar to another question on Mathoverflow, but they are different. • No. Consider the case where M is a compensated Poisson process. – George Lowther Mar 22 '13 at 21:15 • George, yes this is exactly the kind of example I was looking for. (Just to be clear, the completed filtration of a compensated Poisson process is continuous since the jumps are measure zero events for any particular time $t$, correct?) If you want to put this as an answer, I will accept it. 
– Jason Rute Mar 23 '13 at 1:46 • George, also I think I found the general condition I was looking for. It is Lemma 6 in your notes on "Predictable stopping times" (almostsure.wordpress.com/2011/05/26/…). Namely, for a completed filtered probability space, the following are equivalent: (1) all local martingales are continuous, (2) all stopping times are predictable, and (3) all RCLL adapted processes are predictable. – Jason Rute Mar 23 '13 at 1:53 • That's correct. I'll post it as an answer later – George Lowther Mar 23 '13 at 15:02
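The compensated Poisson counterexample can also be checked numerically: simulating $M_t = N_t - \lambda t$ on a time grid shows the martingale property in mean ($\mathbb{E}M_t = 0$) while every realized path moves by unit jumps, so no continuous modification exists. A rough simulation sketch (parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Compensated Poisson process M_t = N_t - lambda*t: a martingale whose
# completed natural filtration is continuous, yet whose paths all jump.
lam, T, n_paths, n_steps = 2.0, 1.0, 20_000, 200
dt = T / n_steps
increments = rng.poisson(lam * dt, size=(n_paths, n_steps)) - lam * dt
M = increments.cumsum(axis=1)

# Martingale property in mean: E[M_t] = 0 at every grid time
print(np.abs(M.mean(axis=0)).max())

# Paths are cadlag but not continuous: jumps of size ~1 occur
max_jump = np.max(np.abs(np.diff(M, axis=1)))
print(max_jump)
```

Of course this only illustrates the phenomenon; the mathematical point is that the jumps happen at totally inaccessible stopping times.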
https://math.stackexchange.com/questions/1490787/definition-of-conditional-probability-p-i-pi-k-and-tsallis-entropy
# definition of conditional probability $(p_i|\pi_k)$ and Tsallis entropy

Let $\Omega$ be a set of $W$ possible outcomes of an experiment with probability assignments $p_i$, so that $\sum_{i=1}^{W}p_i=1$. Now, let's divide $\Omega$ into $K$ non-intersecting subsets each containing $W_i$ elements, $i=1,2,\dots,K$ (so $\sum_{k=1}^{K}W_k=W$, $1\le K\le W$). Let us define the following probabilities: $$\pi_1\equiv \sum_{i\in W_1}p_i,\\ \pi_2\equiv \sum_{i\in W_2}p_i,\\ \dots,\\ \pi_K\equiv \sum_{i\in W_K}p_i$$ It holds that $\sum_{k=1}^K\pi_k=1$. If we say "$\{p_i|\pi_k\}$ are conditional probabilities", how is that defined? I know that a conditional probability for events $A$ and $B$ can be deduced as $P(A|B)=\frac{P(A\cap B)}{P(B)}$, but in this case we do not have events but probabilities, so I would guess $\{p_i|\pi_k\}=\frac{p_i\cdot\pi_k}{\pi_k}=p_i$, which is obviously wrong. At least I hope that $P(B)=\pi_k$, and if so, what is equivalent to $P(A\cap B)$? It should also hold that $\sum_{i\in W_k}(p_i|\pi_k)=1$, ($k=1,2,\dots,K$). A more general problem is understanding how this conditional probability would work in Tsallis entropy, defined as: $$S_q(\{p_i\})\equiv k\frac{1-\sum_{i=1}^{W}p_i^q}{q-1}$$ So what actually is $S_q(\{p_i|\pi_k\})$? This is based on the book Tsallis, Constantino (2009). Introduction to nonextensive statistical mechanics: approaching a complex world (Online-Ausg. ed.). New York: Springer. Specifically page 47.

• Your notation is unclear. Is $W_k$ the count of outcomes in the $k$-th subset of the partition, or is it the set of indices for the outcomes in that subset? – Graham Kemp Oct 21 '15 at 14:54

Your notation is unclear. Is $W_k$ the count of outcomes in the $k$-th subset of the partition, or is it the set of indices for the outcomes in that subset?

Let $\Omega$ be the set of $W$ atomic outcomes of an experiment. Let $\omega_i$ represent the $i$-th indexed outcome of $\Omega$, and $p_i$ be its probability assignment.
Let us partition $\Omega$ into $K$ disjoint subsets, such that $D_k$ is the $k$-th indexed block of the partition, and let $W_k$ be its size. Then the probability assignment of $D_k$ is: $$\pi_k = \sum_{i:\omega_i\in D_k} p_i$$ Then, using your notation that $\{p_i\mid \pi_k\}$ is the conditional probability that outcome $\omega_i$ occurs given that one of the outcomes in event $D_k$ occurs, we have: $$\{p_i\mid \pi_k\} = \dfrac{p_i \cdot [\omega_i\in D_k]}{\pi_k}$$ Where $[\omega_i\in D_k]$ is the Iverson bracket notation for the indicator function that outcome $\omega_i$ is in partition block $D_k$.
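A quick numerical sanity check of this definition, with a made-up 6-outcome distribution and a 2-block partition, confirms that the conditional probabilities sum to 1 within each block:

```python
import numpy as np

# Outcome probabilities p_i over W = 6 outcomes, partitioned into K = 2 blocks
p = np.array([0.1, 0.2, 0.1, 0.25, 0.15, 0.2])
partition = [np.array([0, 1, 2]), np.array([3, 4, 5])]   # index sets D_k

pi = np.array([p[D].sum() for D in partition])            # block probabilities pi_k

def conditional(i, k):
    # {p_i | pi_k} = p_i * [omega_i in D_k] / pi_k  (Iverson bracket)
    return p[i] * (i in partition[k]) / pi[k]

# Within each block the conditional probabilities sum to 1;
# outcomes outside the block get conditional probability 0.
for k in range(2):
    print(sum(conditional(i, k) for i in range(6)))
```

The Iverson bracket is what makes the full sum over all $i$ collapse to the sum over the block.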
https://mathematica.stackexchange.com/questions/108551/putting-label-on-other-side-in-timelineplot
# Putting label on other side in TimelinePlot

How can I put the label on the other side of the interval? For instance, in the following code, how do I push the label "End" to the other end?

displayLaTeX[string_] := DisplayForm[ TimelinePlot[{Interval[{DateObject[{2015, 6, 1}], DateObject[{2016, 2, 29}]}] -> "End "}, AxesOrigin -> Center, PlotTheme -> "Classic"]

Courtesy of this answer given by Jens.

• I can not make out from your question how you want to change what you have now. Could you please show the result you want? – m_goldberg Feb 28 '16 at 14:57
• The label 'End' is at June in the above. I want to put that label at March. I.e., there are two extremes of the interval; how do I move the label from one extreme to the other? – kaka Feb 28 '16 at 20:06

It is simply a matter of getting the date objects into a sensible grouping, getting the labels attached to the right date objects, and removing the option AxesOrigin -> Center.

TimelinePlot[
 {{Interval[{DateObject[{2015, 1, 15}], DateObject[{2015, 9, 9}]}]},
  {DateObject[{2015, 6, 1}] -> Row[{"Start ", displayLaTeX["\\sum_{i=0}^{10} f(x_i)"]}],
   Interval[{DateObject[{2015, 6, 1}], DateObject[{2016, 2, 29}]}],
   DateObject[{2016, 3, 6}] -> "End "},
  {Interval[{DateObject[{2016, 7, 27}], DateObject[{2016, 8, 6}]}]}},
 PlotTheme -> "Classic"]

As far as I can determine, bubble labels can only be placed on point events or at the start of time lines. Therefore, I think what you are asking for cannot be done. Perhaps someone more knowledgeable will prove me wrong.
Therefore, you can have this:

TimelinePlot[
 {{Interval[{DateObject[{2015, 1, 15}], DateObject[{2015, 9, 9}]}]},
  {Labeled[
    Interval[{DateObject[{2015, 6, 1}], DateObject[{2016, 2, 29}]}],
    Row[{"Start ", displayLaTeX["\\sum_{i=0}^{10} f(x_i)"]}]],
   Labeled[DateObject[{2016, 3, 6}], "End"]},
  {Interval[{DateObject[{2016, 7, 27}], DateObject[{2016, 8, 6}]}]}},
 PlotTheme -> "Classic"]

You can also have a time line with standard labels as follows:

TimelinePlot[
 {{Interval[{DateObject[{2015, 1, 15}], DateObject[{2015, 9, 9}]}]},
  {Labeled[
    Interval[{DateObject[{2015, 6, 1}], DateObject[{2016, 2, 29}]}],
    {Row[{"Start ", displayLaTeX["\\sum_{i=0}^{10} f(x_i)"]}], "End"},
    {Before, After}]},
  {Interval[{DateObject[{2016, 7, 27}], DateObject[{2016, 8, 6}]}]}},
 PlotTheme -> "Classic"]

• I have edited the question. There was ambiguity. – kaka Feb 29 '16 at 2:09
• The points are disconnected from the actual interval, so it is not exactly what I need. – kaka Feb 29 '16 at 2:25
• The last option you provided might help me. – kaka Feb 29 '16 at 4:32
http://sepwww.stanford.edu/data/media/public/docs/sep70/reinaldo/paper_html/node3.html
Next: Meaning of the results Up: Michelena & Muir: Anisotropic Previous: FORWARD MODELING

# INVERSE MODELING

As mentioned above, expression (5) will be used to estimate the 2N-dimensional slowness vector given the traveltimes from a cross-well experiment. However, we can investigate some of the difficulties in estimating such a vector by first studying the case of a homogeneous medium (N=1). When the model is isotropic, we usually estimate the slowness S of the homogeneous medium that best fits the traveltimes by simply averaging all the slownesses Si obtained from the individual rays: (7) where li is the source-receiver distance and M the total number of traveltimes. When the model is anisotropic, the 2-D vector that best fits the traveltimes can be obtained by generalizing the average (7). This generalization is, as expected, in a least-squares sense. Note that expression (1b) is linear in Sx2 and Sz2. Therefore, for a given set of traveltimes and source-receiver locations, it is possible to set up a least-squares problem to find the vector of the homogeneous medium. Defining and , the least-squares problem is (8) where and . Equation (8) can be solved in different ways. The most popular approach is by using the normal equations, resulting in (9). However, the normal equations may have undesirable features with respect to numerical stability because the condition number of is the square of the condition number of . If is only moderately ill-conditioned, is severely ill-conditioned. For this reason, methods that do not amplify the condition number of should be used to solve systems like (8) (for example QR factorization; Gill et al., 1990). For estimating Wx and Wz simultaneously and accurately, has to be well conditioned. Note that this is not the case when most of the elements of the matrix satisfy either or . These two conditions describe cases when rays are traveling close to the horizontal or the vertical.
In such cases, it is impossible to determine both components of the vector simultaneously, because the limited view of the measurements translates immediately into severe ill-conditioning. This can be understood by trying to estimate Wx and Wz from the simple cross-well experiment shown in Figure 2, where . In this case, the eigenvalues of this matrix are . Because , the eigenvalues are approximately .

Figure 2: Cross-well experiment with two rays.

In other words, the smallest eigenvalue (zero in this case) is related to the vertical component of the slowness, whereas the largest one is related to the horizontal component. On the contrary, for a VSP-like geometry the largest eigenvalue is related to Sz and the smallest one is related to Sx (Dellinger, 1989). Having more rays (M) without increasing the aperture does not solve the problem. In such a case, the largest eigenvalue of the matrix tends to and the smallest one tends to zero again. The previous inversion scheme for homogeneous models can be generalized for estimating in a heterogeneous medium. All we have to do is solve systems of equations like (8) at each cell. In other words, the problem in the heterogeneous model is separated into many subproblems in homogeneous models. This approach might be easily implemented when the ray paths are used as basis functions for describing the slowness (Harris et al., 1990a) if, instead of averaging the slownesses of the different rays where they intersect, systems of equations like (8) are solved to estimate the two components of the slowness. Although this idea will not be exploited in the present paper, it can help us to understand intuitively which components of the slowness vector are easier (or more difficult) to estimate from cross-well traveltime measurements. In general, vertical variations in the medium are easier to estimate than horizontal variations.
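The conditioning issue can be illustrated numerically. This sketch uses a common elliptical form consistent with the linearity of (1b), t² = Δx²·Sx² + Δz²·Sz², with hypothetical slowness values and a cross-well-like geometry of nearly horizontal rays:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical elliptical slownesses (squared): Sx^2, Sz^2
Sx2, Sz2 = 0.25, 0.36

# Cross-well-like geometry: rays nearly horizontal (limited aperture)
dx = np.full(20, 1.0)                      # fixed well separation
dz = rng.uniform(-0.05, 0.05, size=20)     # small vertical offsets
G = np.column_stack([dx**2, dz**2])
t2 = G @ np.array([Sx2, Sz2])              # noiseless squared traveltimes

# Forming the normal equations squares the condition number
print(np.linalg.cond(G), np.linalg.cond(G.T @ G))

# A QR/SVD-based least-squares solve avoids forming G^T G
est, *_ = np.linalg.lstsq(G, t2, rcond=None)
print(est)   # ≈ [0.25, 0.36]
```

With noiseless data the SVD-based solve still recovers both components, but the condition numbers printed above show why adding noise would contaminate the Sz² estimate first: the vertical-aperture column is nearly degenerate.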
Vertical variations correspond to singular vectors associated with the largest singular values of the problem, whereas lateral variations are associated with the smallest singular values (Pratt and Chapman, 1990). As explained earlier, in homogeneous models Sx is related to the larger singular value and Sz to the smaller one. Therefore, if the problem in a heterogeneous model is solved as many separate subproblems in homogeneous models, the largest singular values will be related to vertical variations in Sx and the smallest ones to lateral variations in Sz. We will demonstrate in the field data examples that estimating horizontal variations in Sz is indeed a difficult problem, whereas it is always easier to estimate vertical variations in Sx. Equation (5) can be used to estimate for all the cells at the same time (rather than on a cell-by-cell basis, as explained before). This equation is obviously non-linear in Sj and Sj+N. One way to solve the problem is by a sequence of linearized steps. We start by approximating (5) by a first-order Taylor series expansion centered at a given model : (10) where the elements of the Jacobian are and tij is the traveltime of the ith ray in the jth cell of the model (equation (2)). If we assume that represents one component of the vector of measured traveltimes, we can compute the perturbations once the traveltimes in the reference model have been calculated. The perturbation is the solution of the following system of equations (11) where . Note that the matrix depends explicitly on the slowness of the reference model, in contrast with the isotropic case where the matrix depends only on the lengths of the rays in each pixel. In the isotropic case, if the rays are straight, the estimation of the slowness becomes a linear problem because is a constant. In the anisotropic case, however, the problem is still non-linear even if the rays are straight. Ray bending introduces another source of non-linearity.
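One linearized update of the kind in system (11) can be sketched with a sparse iterative solver. The Jacobian below is a random sparse stand-in (each ray crossing only a few cells), not the true traveltime derivatives, and the problem sizes are hypothetical:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(6)

# Hypothetical sparse Jacobian of traveltimes w.r.t. the 2N slowness unknowns
n_rays, n_unknowns = 300, 2 * 100
J = sprandom(n_rays, n_unknowns, density=0.05, random_state=7)

true_ds = rng.normal(scale=0.01, size=n_unknowns)   # slowness perturbation
dt = J @ true_ds                                    # traveltime residuals

# A few LSQR (conjugate-gradient-type) iterations per linearized step
# handle the ill-conditioning better than forming the normal equations
sol = lsqr(J, dt, iter_lim=20)
ds_est = sol[0]
print(np.linalg.norm(J @ ds_est - dt) / np.linalg.norm(dt))
```

Stopping after a limited number of iterations acts as a regularizer: the components along the large singular values (vertical variations) converge first, while the poorly constrained lateral components are left near zero.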
In the examples shown later, equation (11) will be solved using the LSQR variant of the conjugate gradients algorithm (Nolet, 1987). We will show that by doing a few iterations with this method at each linearized step, the ill-conditioning of caused by the limited view of the measurements is handled better than by solving the normal equations (in the overdetermined case).

Stanford Exploration Project 12/18/1997
https://mathematica.stackexchange.com/questions/186570/problem-with-ndsolve-the-function-diverges
# Problem with NDSolve: the function diverges

I'm trying to numerically simulate heat flux in a 3D power cable model that I built. Essentially the equation is:

I can get an analytic solution from Mathematica easily, and when I plot T(x) I get no trouble (imposing the constraints T'(0)=0 and T'(l)=0); the second picture shows T'(x)=0.

Because I want to add some nonlinear thermal behavior I had to use NDSolve, and first I tried to solve the same equation as before to see if it works:

sol = NDSolve[{-T[x] + \[Tau] + P[x]*(Subscript[\[Rho], t]) +
     (Subscript[\[Rho], t])/(Subscript[\[Rho], c])*T''[x] == 0,
   T'[0] == 0, T'[l] == 0}, T[x], {x, 0, l}]

and the numerical solution gives me this:

I can't understand what's the deal here and why it doesn't converge!!

Here is the complete code:

P[x_] := 3*r*(\[Kappa] + \[Lambda]*((x - l/2)/(l/2))^2)^2
T0 = P[0]*Subscript[\[Rho], t]
\[Kappa] = 1597.14
\[Lambda] = 1788 - \[Kappa]
r = 0.0000133
\[Tau] = 20
Subscript[\[Rho], t] = 0.54877
Subscript[\[Rho], c] = 0.332502
l = 100000
\[Alpha] = 0
sol = NDSolve[{-h[x] + \[Tau] + P[x]*(Subscript[\[Rho], t]) +
     (Subscript[\[Rho], t])/(Subscript[\[Rho], c])*h''[x] == 0,
   h'[0] == 0, h'[l] == 0}, h[x], {x, 0, l}]

• Works for me with my chosen parameters: i.stack.imgur.com/BP3lR.png -- You'll need to give complete code to get much help. – Michael E2 Nov 23 '18 at 14:39
• complete code added – Mattia Nov 23 '18 at 14:47
• Parameters not defined. How can we verify the solution? – Alex Trounev Nov 23 '18 at 15:14
• In your picture the analytical solution does not satisfy the boundary conditions T'[0] == 0, T'[l] == 0 – Alex Trounev Nov 23 '18 at 15:33
• Yes, it's satisfied: the slope of T(x) goes to zero at 0 and l, but you can't see it from the picture because T''[x] is zero at 13 and the graphic is from 0 to 100000; this is the reason why you can't see it. Anyway, I need help!
– Mattia Nov 23 '18 at 15:49

1) Increase the order of the equation to solve the Dirichlet problem; 2) map the solution to the interval (0,1) by replacing x -> l*x; 3) use the method of the false transient. Then the system of equations and the solution are:

P[x_] := 3*r*(\[Kappa] + \[Lambda]*(2*x - 1)^2)^2
\[Kappa] = 1597.14;
\[Lambda] = 1788 - \[Kappa];
r = 0.0000133;
\[Tau] = 20;
rt = 0.54877;
rc = 0.332502;
l = 100000;
k = rt/rc/l^2;
eq = {-h[x, t] + D[P[x], x]*rt + k*D[h[x, t], x, x] - D[h[x, t], t] == 0};
ic = h[x, 0] == 0;
bc = {DirichletCondition[h[x, t] == 0, x == 0],
   DirichletCondition[h[x, t] == 0, x == 1]};
sol = NDSolveValue[{eq, ic, bc}, h, {x, 0, 1}, {t, 0, 20}]

Plot3D[sol[x, t], {x, 0, 1}, {t, 1, 20}, PlotRange -> All, Mesh -> None,
 ColorFunction -> Hue, AxesLabel -> {"x", "t", "\!\(\*SubscriptBox[\(h\), \(x\)]\)"}]
Plot[Table[sol[x, t], {t, 5, 20, 5}], {x, 0, 1}, PlotRange -> All,
 AxesLabel -> {"x", "\!\(\*SubscriptBox[\(h\), \(x\)]\)"}]

The solution of the problem reaches a stationary state already at t > 1. Integrating over x, we find the solution to the original problem up to a constant:

hS[x_] := NIntegrate[sol[x1, 20], {x1, 0, x}]
{Plot[hS[x], {x, 0, .01}], Plot[hS[x], {x, 1 - .01, 1}],
 Plot[hS[x], {x, 0, 1}, AxesLabel -> {"x", "T"}]}

We choose the integration constant = 90; then the solution is similar to the analytical one. Note that with numerical integration, a numerical instability arises at the right-hand boundary as x -> 1, although the function sol[x1, 20] is rather smooth.
https://physicscatalyst.com/Class10/class10-CONTROL-AND-COORDINATION-ncert_solutions.php
# NCERT Solutions for Class 10 Science: Control and Coordination

In this page we have NCERT Solutions for Class 10 Science, Control and Coordination. We hope you like them, and do not forget to like, share and comment at the end of the page.

Question 1 What is the difference between a reflex action and walking? • A reflex action is a response we make without thinking about it and without conscious control. It is mediated by the spinal cord alone, without the help of the brain. • Walking, by contrast, is a voluntary process performed under the control of the brain.

Question 2 What happens at the synapse between two neurons? An electrical impulse triggers the release of chemicals at the synapse between two neurons. These chemicals cross the synapse and start a similar electrical impulse in a dendrite of the next neuron.

Question 3 Which part of the brain maintains posture and equilibrium of the body? Posture and equilibrium of the body are maintained by the cerebellum, a part of the hindbrain.

Question 4 How do we detect the smell of an agarbatti (incense stick)? The smell of an agarbatti is detected by the forebrain, which has separate areas of association where sensory information is interpreted by putting it together.

Question 5 What is the role of the brain in reflex action? The nerves from all over the body meet in a bundle in the spinal cord. Reflex arcs are formed in the spinal cord itself, so the brain has no direct role in a reflex, although the information input also goes on to reach the brain.

Question 6 What are plant hormones? Answer: The chemical substances released by various parts of a plant to control growth and other activities are called plant hormones.

Question 7 How is the movement of leaves of the sensitive plant different from the movement of a shoot towards light? The movement of the leaves of the sensitive plant is neither towards nor away from the stimulus (touch).
The movement of the shoot, by contrast, is towards the stimulus (light). The movement of the leaves of the sensitive plant is non-directional, while the movement of the shoot is directional.

Question 8 Give an example of a plant hormone that promotes growth. (i) Auxins help to increase the length of plants. (ii) Gibberellins help in the growth of the stem.

Question 9 How do auxins promote the growth of a tendril around a support? Tendrils are sensitive to touch. When a tendril comes in contact with a support, auxin diffuses towards the side away from the support, so that side grows more rapidly than the side touching the support. This causes the tendril to circle around the support and thus climb upwards.

Question 10 How does chemical coordination take place in animals? Chemical coordination takes place in animals with the help of chemical substances called hormones. Hormones are secreted by endocrine glands. The timing and amount of hormone released are regulated by feedback mechanisms.

Question 11 Why is the use of iodized salt advisable? The use of iodized salt is advisable because iodine is necessary for the thyroid gland to produce the hormone thyroxine. Thyroxine regulates carbohydrate, protein and fat metabolism in the body so as to provide the best balance for growth, and iodine is essential for its synthesis.

Question 12 How does our body respond when adrenalin is secreted into the blood? Adrenalin is secreted directly into the blood and is carried to different parts of the body. It acts on the heart, which beats faster in order to supply more oxygen to our muscles; these muscles then carry out the various movements of the body.

Question 13 Why are some patients of diabetes treated by injections of insulin? Patients with diabetes are treated by giving injections of insulin. Insulin is a hormone produced by the pancreas that helps in regulating blood sugar levels.
If it is not secreted in proper amounts, the sugar level in the blood rises, causing many harmful effects.

Question 14 What is the function of receptors in our body? Think of situations where receptors do not work properly. What problems are likely to arise? • The main function of receptors is to detect information from the environment. These receptors are located in our sense organs. • If receptors do not work properly, stimuli from the environment are not detected, so the body cannot respond in time. For example, if the heat receptors of the skin are damaged, a hand touching a hot object may get burnt before it is withdrawn.

Question 15 Draw the structure of a neuron and explain its function. Function of a neuron: The neuron is the structural and functional unit of the nervous system. It contains the following three parts: (i) dendrites, (ii) cell body, (iii) axon. Impulses of information travel from the dendrites to the cell body, and then along the axon to its end. There the impulses cross a synapse and travel from one neuron to the next, up to the spinal cord or to the concerned part of the body.

Question 16 How does phototropism occur in plants? The directional (tropic) movement towards or away from light is called phototropism. Shoots respond by bending towards light, while roots respond by bending away from light.

Question 17 Which signals will get disrupted in case of a spinal cord injury? (i) All the signals and responses that pass to and from the brain through the spinal cord will be disrupted. (ii) Reflex actions will be disrupted, since reflex arcs are formed in the spinal cord.

Question 18 How does chemical coordination occur in plants? In plants, stimulated cells release chemical compounds called plant hormones. Different plant hormones help to coordinate growth, development and responses to the environment.
They are synthesized at places away from where they act and simply diffuse to the area of action.

Question 19 What is the need for a system of control and coordination in an organism? Every little change in the environment evokes an appropriate movement in response. For example, if we want to talk to our friends in class, we whisper rather than shouting loudly. Thus, the movement to be made depends on the event that is triggering it. Therefore, such controlled movement must be connected to the recognition of various events in the environment, followed by only the correct movement in response. In other words, living organisms must use systems providing control and coordination. In multicellular organisms, specialized tissues are used to provide these control and coordination activities.

Question 20 How are involuntary actions and reflex actions different from each other? Involuntary action: (i) An action which we cannot perform by thinking about it, e.g. the beating of the heart. (ii) Involuntary actions are controlled by the brain. Reflex action: (i) A response which is immediate and does not need processing by the brain, e.g. the immediate removal of the hand on touching a hot plate. (ii) Reflex actions are controlled by the spinal cord.

Question 21 Compare and contrast nervous and hormonal mechanisms for control and coordination in animals. • In human beings, the nervous system controls various functions through small units called neurons. Neurons receive information through sensory nerves and transfer instructions through motor nerves. • Besides this, certain important functions such as blood sugar level, metabolism, growth and development are controlled by hormones secreted by various endocrine glands. Hence, the nervous and hormonal systems together perform the function of control and coordination in human beings.
Question 22 What is the difference between the manner in which movement takes place in a sensitive plant and the movement in our legs? Movement in the sensitive plant: Movement of the sensitive plant's leaves takes place in response to a touch (shock) stimulus. When a terminal pinnule is touched, the stimulus is conducted to its base and the pinnules droop down. This happens due to a change (decrease) in osmotic pressure causing the cells to shrink; when the stimulus is over, the osmotic pressure increases and the cells swell, so the pinnules return to normal. This is an example of a growth-independent movement. The movement happens at a point different from the point of touch (stimulus), so the information that a touch has occurred is communicated by electrical-chemical means from cell to cell, not through specialized tissues. Plant cells change shape by changing the amount of water in them, swelling or shrinking during the movement. Movement in our legs: Our legs are provided with nerves which are connected to muscles. To lift a leg, the brain passes information to the nerves. The information travels as an electrical impulse. On reaching the leg muscles, the impulse is converted into a chemical signal and the muscles contract to lift the leg. Movement of the legs thus takes place through muscle contraction and relaxation, under the control of the nervous system. In the nervous system, electrical impulses allow quick transmission of information, but there are limitations: (i) Impulses will reach only those cells that are connected by nervous tissue. (ii) Once an electrical impulse is generated in a cell and transmitted, the cell takes some time to generate another impulse; that is, cells cannot continuously create and transmit electrical impulses. Hormones, on the other hand, are chemical messengers that diffuse to all cells of the body. Body cells, using special molecules on their surfaces, recognize the information and respond to it.
Hormones are synthesized at places away from where they act. They can reach all cells of the body (through the blood, in animals) regardless of nervous connections, and this can be done steadily and persistently.

Question 23 Name the systems in animals which help in the process of control and coordination. (i) Nervous system (ii) Hormonal (endocrine) system

Question 24 Name the largest cell in the human body. The nerve cell or neuron.

Question 25 Have the old parts of the shoot and root changed direction? The old parts of the roots and shoots change their direction only slightly, while the newly growing parts move much more.

Question 26 What are the main divisions of the nervous system? The nervous system is broadly divisible into two parts: (i) the central nervous system, (ii) the peripheral nervous system.

Question 27 Give four examples of simple human reflexes. (i) The knee-jerk reflex, in which the leg is involuntarily extended forward as a result of a sharp tap below the knee-cap on a relaxed (freely hanging) leg. (ii) Closing of the eyelids when an object suddenly approaches the eye or when a strong beam of light is flashed across it. (iii) Withdrawal of the hand on being pricked by a pin or a thorn. (iv) Sneezing or coughing when the nasal passage or throat is irritated.

Question 28 Design an experiment to demonstrate hydrotropism. Positive hydrotropism can be demonstrated with germinated seedlings allowed to grow in soil. The soil below the roots is separated by a polythene partition; the left side is kept moist but the right side is kept dry. The radicles at first grow in a downward direction due to the effect of gravity (positive geotropism), but after some time the roots bend towards the moist soil (positive hydrotropism). This is evidently due to the closeness of the germinating roots to water.

### Practice Questions

Question 1 Which among the following is not a base? A) NaOH B) $NH_4OH$ C) $C_2H_5OH$ D) KOH

Question 2 What is the minimum resistance which can be made using five resistors, each of 1/2 ohm?
A) 1/10 ohm B) 1/25 ohm C) 10 ohm D) 2 ohm

Question 3 Which of the following statements is incorrect? A) For every hormone there is a gene B) For production of every enzyme there is a gene C) For every molecule of fat there is a gene D) For every protein there is a gene
https://socratic.org/questions/kinetic-energy-of-an-object-is-25-j-velocity-is-5-m-s-what-will-be-its-kinetic-e
# Kinetic energy of an object is 25 J and its velocity is 5 m/s. What will be its kinetic energy if the velocity is doubled?

Feb 17, 2018

$\text{100 J}$

#### Explanation:

$\text{KE} = \frac{1}{2} \cdot m \cdot v^2$

Simple explanation: since the KE is proportional to the velocity squared, increasing $v$ by a factor of $2$ increases the KE by a factor of $2^2 = 4$, i.e. $4 \times 25 = \text{100 J}$.

Detailed explanation: $25 = 0.5 \cdot m \cdot 5^2$. Find $m$: $25 = \frac{25 m}{2}$, so $m = 2$. Hence the new KE: $\text{KE} = 0.5 \cdot 2 \cdot 10^2 = \text{100 J}$.

Feb 17, 2018

The kinetic energy of the body is $\text{100 J}$.

#### Explanation:

$\text{kinetic energy} = \frac{1}{2} \times m \times v^2 = \text{25 J}$ and $v = \text{5 m/s}$. Let the mass of the body be $m$. Then $\frac{1}{2} \times m \times 5^2 = 25$, so $m \times 25 = 50$ and $m = \frac{50}{25} = \text{2 kg}$. Now, if the velocity is doubled, $v' = 2 \times v = \text{10 m/s}$. The final kinetic energy $= \frac{1}{2} \times m \times (v')^2 = \frac{1}{2} \times 2 \times 10^2 = \text{100 J}$.

Feb 17, 2018

$\text{100 J}$

#### Explanation:

We know that the equation for kinetic energy is $KE = \frac{1}{2} m v^2$. Here, $v = 5$ and $KE = 25$, so $25 = \frac{1}{2} \cdot m \cdot 5^2 = \frac{1}{2} \cdot 25 \cdot m$, giving $m = 2$: the object's mass is $\text{2 kg}$. If the velocity doubles, then $v = 5 \cdot 2 = 10$, and $KE = \frac{1}{2} \cdot 2 \cdot 10^2 = \text{100 J}$. So the object's kinetic energy will be $\text{100 J}$.
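The arithmetic in these answers can be checked with a few lines of Python (a sketch of ours; the function name is an assumption, not from the answers):

```python
def kinetic_energy(m, v):
    """KE = (1/2) * m * v**2, in joules for SI inputs (kg, m/s)."""
    return 0.5 * m * v**2

# Recover the mass from KE = 25 J at v = 5 m/s:  m = 2*KE / v^2
m = 2 * 25 / 5**2             # 2.0 kg

# Doubling v quadruples KE, since KE scales with v squared
print(kinetic_energy(m, 5))   # 25.0
print(kinetic_energy(m, 10))  # 100.0
```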
https://mathstodon.xyz/@enumerator/100985642865703467
@enumerator well now we know you regard 0 as a natural number @btcprox @enumerator there are no Romans around to hurt us any more. @christianp @enumerator you say that, but some of my analysis courses still implicitly assumed 1-based natural numbers A Mastodon instance for maths people. The kind of people who make $\pi z^2 \times a$ jokes. Use \( and \) for inline LaTeX, and \[ and \] for display mode.
https://cordova.apache.org/docs/en/3.0.0/guide/platforms/android/tools.html
# Android Command-line Tools

The cordova command-line utility is a high-level tool that allows you to build applications across several platforms at once. An older version of the Cordova framework provides sets of command-line tools specific to each platform. To use them as an alternative to the CLI, you need to download this version of Cordova from cordova.apache.org. The download contains separate archives for each platform. Expand the platform you wish to target. The tools described here are typically available in the top-level bin directory; otherwise, consult the README file for more detailed directions.

## Create a project

Run the create command, specifying the existing path to the project, the reverse-domain-style package identifier, and the app's display name. Here is the syntax for both Mac and Windows:

    $ /path/to/cordova-android/bin/create /path/to/project com.example.project_name ProjectName
    $ C:\path\to\cordova-android\bin\create.bat C:\path\to\project com.example.project_name ProjectName

## Build

This cleans and then builds a project. Debug, on Mac or Windows:

    $ /path/to/project/cordova/build --debug
    $ C:\path\to\project\cordova\build.bat --debug

Release, on Mac or Windows:

    $ /path/to/project/cordova/build --release
    $ C:\path\to\project\cordova\build.bat --release

## Run the App

The run command accepts the following optional parameters:

• Target specification. This includes --emulator, --device, or --target=<targetID>.
• Build specification. This includes --debug, --release, or --nobuild.

    $ /path/to/project/cordova/run [Target] [Build]
    $ C:\path\to\project\cordova\run.bat [Target] [Build]

Make sure you create at least one Android Virtual Device, otherwise you're prompted to do so with the android command. If more than one AVD is available as a target, you're prompted to select one. By default the run command detects a connected device, or a currently running emulator if no device is found.
## Logging

    $ /path/to/project/cordova/log
    $ C:\path\to\project\cordova\log.bat

## Cleaning

    $ /path/to/project/cordova/clean
    $ C:\path\to\project\cordova\clean.bat
https://www.futurelearn.com/info/courses/behaviour-change-interventions/0/steps/241941
# Identifying relevant Intervention Types How to consider all possible Intervention Types that could potentially be effective for changing the behaviour of interest It is important to consider every possible Intervention Type that could potentially be effective for changing the behaviour of interest, even if we later decide that some are not relevant or feasible. This is presented here through a real-world example.
https://pandaquaerensintellectum.wordpress.com/2015/06/14/the-big-bang-theory-references-explained-part-1/
“The Big Bang Theory” References Explained – Part 1 The CBS sitcom “The Big Bang Theory” is, among other things, particularly remarkable for its many references to physics, science, the “geek culture” it portrays, and even subjects like history or philosophy, the first scientific allusion of course already being its very title. So I thought it might be fun to research some of them and explain them here. I tried not to assume too much prior scientific knowledge beyond basic arithmetic, not even simple algebra. Perhaps that also means I will have to ask better-informed readers for their patience. This is intended to be the first of several parts (probably three or four). Here it goes: 1. Free fall and basic classical mechanics Let’s begin with a scene from “The Gorilla Experiment” (Season 3, Episode 10): Penny, the only non-scientist main character of the show, wants to surprise her physicist boyfriend Leonard by trying to understand what he’s working on. She therefore asks his string theorist roommate Sheldon to tutor her in physics, but is quickly lost: Sheldon: Now, remember, Newton realized Aristotle was wrong, and that force was not necessary to maintain motion, so let’s plug in our 9.8 meters per second squared as a, and we get force – earth gravity – equals mass times 9.8 meters per second per second. So, we can see that m x a equals m x g, and what do we know from this? Penny: We know that… Newton was a really smart cookie… Oh! Is that where Fig Newtons come from? Sheldon: No. Fig Newtons are named after a small town in Massachusetts… No don’t write that down! Now, if m x a equals m x g, what does that imply? Penny: I don’t know. Sheldon: How can you not know, I just told you! […] In the 17th century, Galileo Galilei and Isaac Newton founded classical mechanics, which is concerned with the movement of objects. 
This is generally seen as the birth of modern science in general and physics in particular, and some of it is already taught to middle school kids (most of whom, of course, forget all about it), so it makes sense to start a physics course there. Or it would make sense, as Sheldon instead begins his tutoring with the ancient Greeks. Nevertheless, the physics of antiquity as found e.g. in the writings of Aristotle provides us with some prominent spokespeople for prejudices and wrong intuitions a beginner in modern physics might share, so that's also not necessarily a bad idea. One thing that both Aristotle and even some people today believe is that for an object to maintain motion with a constant speed and in a constant direction, a force must continually act upon it. This seems to be somewhat confirmed by everyday experience: If a ball rolls over the floor, it gets slower and slower, until it stops at some point. You may wonder how the ball manages to move at all once it has lost contact with your feet, but the Aristotelian explanation, or rather, what I was once told was the Aristotelian explanation and have never bothered to actually check, is that there is air moving behind the ball that continues to propel it for a while. Anyway, Newton realized this was wrong: If you set something in motion in the vacuum of outer space, it would continue to move in a straight line, with constant speed, for all eternity. In fact, if it were a spaceship and you were in it, there would be no way to experimentally prove that you are the one moving, rather than the rest of the universe moving in the counter-direction while your ship stays still. Instead, the role force plays in the universe is to change the motion of an object, either by changing its direction or its speed. Physically, both of these are called acceleration, even though in everyday language, the term seems mostly reserved for changes of speed only.
The reason the ball stands still after a while is that the ground it moves on provides friction as a counterforce, slowing it down until it has zero velocity. The resistance a given object provides against acceleration by a force is measured by its mass. Hence Newton’s formula that F = m x a – force equals mass times acceleration. The next thing to consider here is that the weight of an object depends on its mass. That can be seen from everyday experience: A ball is lighter than an elephant, and it is easier to accelerate the ball to some speed you want it to have than it would be for the elephant – one can be done by the smallest expenditure of muscle force, the other not so much. In fact, the weight of an object is proportional to its mass, meaning that the weight can be calculated as “mass multiplied by some constant number”. The constant number is known as “g” and depends slightly upon the region of earth you are in, but, as Sheldon states, it’s roughly 9.8 meters per square second (we will get to what the unit means in a moment) in the USA and any part of the world that has about the same latitude. Hence, the weight of an object is m x g. (As a sidenote, what most people call their weight actually is their mass, but since the two depend on each other, it doesn’t matter much in your everyday life, at least as long as you remain on earth and the factor g stays roughly the same. On the other hand, if you went to moon, the factor between mass and weight and hence your weight would change, while your mass itself would still be the same.) But “weight” is nothing but an expression for “force that pulls something down on earth”, and when there is no other force to counter that something’s weight (such as the material stability of a platform you’re standing on), we get the process commonly known as “falling”. Therefore, the m x g must in this case be equal to the m x a which gives the force by Newton’s law (more usually called “Newton’s second axiom”): m x a = m x g. 
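As a quick numerical aside, the cancellation in m x a = m x g can be checked with a short Python sketch (our own illustration; the roughly 56 m height used below for the Leaning Tower of Pisa is an approximation):

```python
import math

g = 9.8  # m/s^2, near-surface gravitational acceleration

def acceleration(mass_kg):
    """a = F/m with F = m*g: the mass cancels, leaving g for every object."""
    weight = mass_kg * g      # the force pulling the object down, in newtons
    return weight / mass_kg   # equals g regardless of the mass

def fall_time(height_m):
    """From h = (1/2) * g * t**2: t = sqrt(2*h/g), independent of mass."""
    return math.sqrt(2 * height_m / g)

print(acceleration(0.1), acceleration(10.0))  # both ~9.8: wooden ball, cannonball
print(round(fall_time(56), 2))                # 3.38 seconds from ~56 m
```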
Since mass times "something" is equal to mass times "something else", "something" must equal "something else", so the acceleration a of a weighty object is equal to g, i.e. 9.8 meters per square second. The unit "meters per square second" is the same as "meters per second per second", i.e. if you experience an acceleration of 9.8 meters per square second, you get 9.8 meters per second faster every single second that passes. The interesting thing here is that this overthrows another intuition of both the classical Greeks and many modern-day people: An object that falls on earth gets faster and faster, but the acceleration is the same, no matter the exact mass or weight the object possesses. That means, contrary to what one might have thought, heavier objects need exactly as long to fall down from the same height as lighter ones. At least that is true as long as other forces don't play a significant role – e.g. air resistance of course makes a feather fall slower than a stone, even though both would be equally fast if there was no air on earth. Still, Galileo Galilei is famously said to have demonstrated our new insight by dropping a cannonball and a wooden ball from the Leaning Tower of Pisa (though historians doubt the experiment actually took place): they hit the ground at the exact same moment and provided one of the first experimental confirmations of classical physics. 2. Abelian groups In Season 6, Episode 3, "The Higgs Boson Observation", Sheldon receives a package from his mother containing all the scientific (and potty training) diaries he kept in his childhood. His plan to comb these journals for some Nobel-prize-worthy discovery eventually leads to him hiring a research assistant, but first, Penny asks if she could help with that. In response, Sheldon turns on his trademark condescension: Sheldon: Really? You can assess the quality of my work? Okay, uhm… here! I wrote this when I was five years old.
Penny: “A proof that algebraic topology can never have a non-selfcontradictory set of Abelian groups.” (Pause, sarcastically:) I’m just a blonde monkey to you, aren’t I? Sheldon: You said it, not me. The idea of Sheldon having kept journals on mathematical discoveries since he could barely use a potty might be inspired by 19th-century German mathematician Carl Friedrich Gauss who did just that (and alongside Archimedes, Isaac Newton and Leonhard Euler is one of the usual candidates people name for the title of “greatest mathematician who ever lived”). Aside from that, for the purposes of this article, there is a problem with this dialogue: The title of Sheldon’s work is mathematically meaningless. That’s rare for “The Big Bang Theory”, where usually every scientific reference corresponds to something in the real world of physics, mathematics, etc., and even if it is a tad sloppy (we might come to some examples in later installments of this series), it does at least make some sense. Nevertheless, every single one of the expressions “algebraic topology”, “non-selfcontradictory” and “Abelian groups” does have a meaning: Algebraic topology is a mathematical subfield, not some kind of object mathematicians study, which is the reason why Sheldon’s title is senseless. (It should be noted, however, that a topology is a well-defined kind of mathematical object, as well as the name of the mathematical field that studies said objects.) Entire graduate-level textbooks have been written about the subject, and neither the space of a blog post nor my knowledge of it is sufficient to tell you much of interest about algebraic topology, so I will just point to Wikipedia.
“Non-selfcontradictory” is pretty, uhm… self-explanatory, so only one thing remains to further dissect: Abelian groups. Now, if mathematicians encounter something that has some structure, one thing they might do is abstract from whatever concrete context they have found it in and define the structure itself as a new mathematical object, providing a kind of “logical template” to investigate an infinity of similar things. To see what that means, let’s recapitulate what we know about the integer numbers: We have an operation called “addition” that takes two of them (like 4 and 5) as “input” and spits out another one of them (like 9, the result of 4+5). This kind of thing is called a binary operation. And we know that this operation obeys the laws of associativity (that is, (a+b)+c = a+(b+c)) and commutativity (a+b = b+a), it has 0 as a neutral element (a+0 = a), and every integer number a has an inverse element in -a, meaning a+(-a) = 0 (recall from 6th grade that -(-a)=a, e.g. -(-4)=4). Are there other examples of binary operations that obey some of these laws? Well, one that satisfies most of them is found by considering permutations: Imagine you are a street con artist working a shell game: Whenever it is played, three shells are being shuffled on a table, with a small coin being placed under one of them. After you are done, the victim of the fraud will make attempts to guess which of the shells the coin is under, all of which are going to be futile thanks to your sleight-of-hand skills. One day, you are bored and, to kill time, try to classify all the ways you could potentially do the shuffling. At the start of the game, the shells are laid out in a row. To keep track of which shell is which, you number the shells from left to right: $S1$, $S2$, $S3$ (“S” standing for “Shell”). After you are finished, they will still lie in a row, but their order will have changed, e.g. from left to right, $S2$ might now be the first one, $S3$ might lie in the middle and $S1$ be the last one.
Since you are only interested in classifying the end result, you might describe that in terms of a wizard having turned $S1$ into $S2$, $S2$ into $S3$, and $S3$ into $S1$, as it would give you the exact same configuration. The wizard has many such spells in his arsenal, the only condition all of them have to meet is that, after he has worked his magic, all of the shells we had before must be in one and only one of the three conceivable positions again. E.g., it is not an acceptable spell to change $S1$ into $S2$, but turn both $S2$ and $S3$ into $S1$, as this would correspond to the configuration $S2$, $S1$, $S1$ which could not possibly arise from just rearranging $S1$, $S2$, $S3$. Now, let’s denote “turning $S1$ into $S2$” as $S1 \rightarrow S2$, and correspondingly for all other shells, then the magic charm from above becomes $S1 \rightarrow S2$, $S2 \rightarrow S3$, $S3 \rightarrow S1$. Another spell might be $S1 \rightarrow S1$, $S2 \rightarrow S3$, $S3 \rightarrow S2$ (leave $S1$ as it is, then switch $S2$ and $S3$ by turning each of them into the other). You can easily imagine formulating these kinds of prescriptions for any number of shells, not just three. Mathematically, they correspond precisely to what is called “permutations”. Now, what would happen if we executed the above two permutations in direct succession? First, we would get from $S1$, $S2$, $S3$ to $S2$, $S3$, $S1$ (still from left to right). Then the second permutation tells us to turn $S2$ into $S3$, $S3$ into $S2$ and leave $S1$ alone, so the end result would be $S3$, $S2$, $S1$ – which would also be the result of applying the permutation $S1 \rightarrow S3$, $S2 \rightarrow S2$ and $S3 \rightarrow S1$. In other words, what we did gave us another permutation, and thus, we can conceive of applying two permutations in succession as a binary operation taking both of them as input and yielding a new one. 
This composition of permutations has virtually all the properties that addition of integers had: there is a neutral element (just leave all of your shells where they are), there are inverses (just do a given permutation “backwards”, e.g. if it sent S1 to S2, send S2 back to S1 for your inverse permutation, and so on), and it is also associative (because in the end, applying the composed permutation to a given ordering of shells might still be conceived as applying the two permutations it is composed of, in order – this one might be slightly harder to wrap your head around if you aren’t used to this). The one thing missing from the picture is commutativity: if you reverse the order in which we executed the two permutations at the beginning of this paragraph, you first get from $S1$, $S2$, $S3$ to $S1$, $S3$, $S2$, and then to $S2$, $S1$, $S3$ – a different end result from what we got before.

Now we are ready to understand the term “Abelian group”: a group is a structure with a binary operation where the properties of associativity, a neutral element, and inverse elements are present. Both addition of integer numbers and composition of permutations are examples. From these few prerequisites, you can already prove some simple properties that every group must share, e.g. that there can only be one neutral element and that each member of a group can only have one inverse, and then later proceed to more complex stuff. An Abelian group is a structure that has all the properties of a group, but also possesses commutativity – like the integers with addition. Another example of an Abelian group would be the fractions greater than zero with the usual arithmetic multiplication.
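The shell-game discussion above can be sketched in a few lines of Python (my illustration, not from the original post). A permutation of three shells is encoded as a tuple p, where p[i] says which shell the wizard turns shell i into; composing two permutations yields a third, and the example permutations from the text demonstrate closure, the neutral element, inverses, and the failure of commutativity:

```python
# Permutations of three shells as tuples: p[i] is what shell i is turned into.

def compose(p, q):
    """Apply permutation p first, then q."""
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    """Undo p: if p sends i to j, the inverse sends j back to i."""
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

identity = (0, 1, 2)   # leave every shell where it is (the neutral element)
cycle = (1, 2, 0)      # S1 -> S2, S2 -> S3, S3 -> S1
swap = (0, 2, 1)       # leave S1, switch S2 and S3

# Group properties: neutral element and inverses
assert compose(cycle, identity) == cycle
assert compose(cycle, inverse(cycle)) == identity

# Composition is NOT commutative, so this group is non-Abelian:
assert compose(cycle, swap) == (2, 1, 0)   # the S1 -> S3, S2 -> S2, S3 -> S1 result from the text
assert compose(swap, cycle) == (1, 0, 2)   # reversed order gives a different permutation
```

Integer addition, by contrast, would pass the same checks *and* the commutativity one, which is exactly what makes it Abelian.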
As stated before, as long as we are only interested in the structure that the binary operation possesses, it actually doesn’t matter what kind of objects (numbers, permutations, in other contexts stuff like symmetries, geometric transformations and matrices) we are considering, and you might as well just write down a bunch of abstract symbols like a, b, c, …, then assign the result of what happens when you apply the operation to any two of them (e.g. a composed with c is d) in a way that satisfies the properties necessary for a group.

3. Münchhausen trilemma

In “The Bad Fish Paradigm” (Season 2, Episode 1), Sheldon finds out that Penny, who has been dating Leonard, is insecure about her lack of formal education, so much so that she lied to him about finishing community college. Unable to keep a secret to himself, Sheldon opts for the, in his mind, second best option: moving out of his shared apartment with Leonard. Here is how their dialogue goes:

Sheldon: Leonard, I’m moving out.

Leonard: What do you mean, you’re moving out? Why?

Sheldon: There doesn’t have to be a reason.

Leonard: Yeah, there kinda does.

Sheldon: Not necessarily. This is a classic example of Münchhausen’s trilemma. Either the reason is predicated on a series of subreasons, leading to an infinite regression, or it tracks back to arbitrary axiomatic statements, or it’s ultimately circular, i.e. I’m moving out because I’m moving out.

Leonard: I’m still confused.

The term “Münchhausen trilemma” was coined by German philosopher and sociologist Hans Albert in his 1969 book “Treatise on Critical Reason”. Albert was a professor at the University of Mannheim who had come under the influence of philosopher of science Karl Popper’s notion of critical rationalism in the 1950s.
Popper, who had closely followed the development of physics in the early 20th century, had seen the doctrines of classical mechanics (see above) partially overturned by Einstein with his theory of relativity and, even more radically, by the founders of quantum mechanics, which we might get to in a later entry in this series. Central parts of the scientific worldview that had, for several centuries, been confirmed by every single experiment were found to be wrong. In fact, some people in the late 19th century had believed that physics was an almost utterly completed endeavour and we were approaching a perfect understanding of the universe we live in. While strictly speaking, the new developments meant classical physics was a wrong theory, scientists and engineers continue to use it to the present day for physical calculations, ranging from construction jobs to astronautics. That is because it is approximately true in most of the realm of our everyday experience, where speeds don’t get close to the speed of light, masses are very large compared to those of the elementary particles our world is composed of, but gravity does not get too large (see the film “Interstellar” for further reference on the last point). This led Popper to formulate a theory of how science should proceed, not by attempting to prove that certain theories are correct, but by scientists using their imagination to essentially guess audacious theories and then attempting to falsify them, i.e. using experiments and further theoretical work to find out what, if anything, is wrong with them, using the results to modify them into better theories that give a more accurate description of reality, attempting to falsify the new theories again to improve them even more, and so on, and so on. This way, Popper argued, while we would never arrive at an absolutely certain truth, we could be certain that, as theory building and critical examination of the theories go hand in hand, we get closer and closer to it. 
Rather than attaining absolutely “true” scientific knowledge, we thus have to go for “truth-similarity”. By Popper’s interpretation, the Greek philosophers Socrates and Xenophanes already held more or less the same view. What does this have to do with the Münchhausen trilemma? Well, it is the starting point for Albert’s argument in favour of critical rationalism. The more traditional approach to science, going back to ancient Greek thinkers like Aristotle, demands a reason for every scientific claim that forces every rational person to accept it as true. Afterwards, all doubts about the claim’s validity have been erased and it becomes absolute knowledge. This is called the principle of sufficient reason (well, actually, there are several versions and formulations of this principle, but this is the one Albert refers to in the aforementioned book). But that immediately leads to a problem: What if someone is determined to ask “Why?” again and again, much like a three-year-old would? What if this person demands to hear a reason why your scientific reason is true, and then a reason for the reason for the reason, and so on? This leads to three alternatives: 1. The questioning procedure does, indeed, go on forever, and you have to answer infinitely many “Why?”-questions. This is called infinite regress and does not seem like a very satisfying state of affairs. 2. You enter a logical circle where you have some statement that you take to be justified by another statement, but when you ask for its justification and the justification for any further reason that follows it, you ultimately get back to the first statement, leaving the entire construction hanging in the air. E.g., you may believe, “Pandas are God-like creatures.”, and ground that on the fact that “The great prophet Pandatagoras told us so, and he is absolutely trustworthy.”. 
When asked to give a reason to believe that, you reply that, “The great prophet Pandatagoras was sent to us by the giant pandas to spread their gospel, and a person chosen by them is absolutely trustworthy.” But why, proceeds your insistent questioner, is such a person absolutely trustworthy? “Because pandas are God-like creatures.”, you might reply, and you have now gone full (logical) circle. Obviously, that does not seem acceptable either.

3. The only remaining option is to dogmatize some statements as “obviously true” and “beyond questioning”, and then build all of your logical reasoning on these, as Sheldon puts it, “arbitrary axiomatic statements”.

The last option was, according to Albert, indeed how the Münchhausen trilemma (so named after the 18th century German Baron Münchhausen, who claimed to have once pulled himself out of a swamp by his own hair) has traditionally been resolved: you can just “see” the truth of certain claims, an approach he calls the “revelation model of knowledge” (my translation; I am not familiar with the English edition of his book). The most obvious example would be religions which claim that the truths in their holy scriptures are plain for everyone to see, thanks to the benevolence of some higher being that chose to reveal them to us. But that immediately leaves us with the problem of on what grounds we believe that some particular piece of writing is or isn’t sent to us by some higher power, and also the question of correct interpretation. But Albert also counts rationalism (which tries to comprehend the world purely by rational thinking) and empiricism (which tries to ultimately ground all human knowledge in sensual perception), in the form in which 17th and 18th century philosophers advocated them, as variants of the “revelation model”: rationalists like René Descartes claimed that some truths could be so clearly and evidently comprehended by the human mind that no room for doubt was left.
The counterargument Albert gives is that there are many examples in the history of science where this hasn’t worked out. E.g., Aristotle and his disciples might have claimed that the premises of their physics are “immediately evident to be true”, but they were still shown to be wrong (see above). And if we have even one example where the feeling of something being “obviously correct” is wrong, it might be argued it can never again serve as a basis of absolute certainty about anything. Empiricists like Francis Bacon, on the other hand, believe you can only trust your sensual observations and then generalize them to more and more universal natural laws (e.g. you proceed from, “I dropped this stone and it fell down. And then did it again. And again. And again.”, to, “This stone always falls down when I drop it.”, to, “All stones fall down when I drop them.”, to, “Every object heavier than air falls down when I drop it.”). But this so-called principle of induction, Albert says, either has a theoretical basis in some rational argument, or it is some sort of theoretical dogma/immediately evident axiom itself, in which case you are back to where we were before, or you try to base your belief in induction itself on observations and thus on induction itself, which is a logical circle. Either way, you haven’t escaped the Münchhausen trilemma.

A possible solution, Albert claims, is to embrace his and Popper’s method of critical examination, as it dispenses with the principle of sufficient reason altogether and replaces it with a continued process of trying to falsify old theories and find better new ones. I am not sure if a majority of natural scientists would fully embrace critical rationalism, although it has certainly had some influence (Albert Einstein is reported to have sent a telegram to Popper saying that he “agreed about most things” with him).
Neither does Albert’s argument seem quite uncontroversial among philosophers themselves (he has apparently clashed over it quite a bit with another German philosopher named Karl-Otto Apel). In any case, given that only one or two of Hans Albert’s roughly 40 books have ever been translated into English, one of the central terms of his philosophy making it onto a mainstream American sitcom is a rather impressive feat. Next time: Cats! Magnets! Pasta! And so much more…
https://www.physicsforums.com/threads/matlab-problems-with-ode45.635074/
# MatLAB: problems with ODE45

• MATLAB

Niles

Hi. I am trying to solve a simple set of coupled ODEs by ODE45. The coupled system is given by:

Code:
    function xprime = eoms(t, x)
    xprime = [ 1e9 + 5.0e4*x(3) - 50*x(1);
               4.0e1*x(1) - 3.3e3*x(2);
               2.0e3*x(2) - 5e4*x(3) + 3.5e7*heaviside(t-1)*x(4);
               1.0e3*x(2) - heaviside(t-1)*5.0e7*x(4)];

I solve it using the following command:

Code:
    x0 = [0 0 0 0];
    tspan = [0, 2];
    [t, x] = ode45(@eoms, tspan, x0);

However, when I run it, MATLAB just keeps calculating; it doesn't give me a result. Maybe it is due to the very rapid rates in the equations. Do I have any options here, or am I not able to solve for the transient behavior?

Best, Niles.

Homework Helper

Try making the time interval much smaller, for example 0 to 1e-8, and see what happens. If that works, try increasing the interval (say by factors of 10) till it blows up.

Since you have a constant of 1e9 in your definition of xprime, it's likely the solution contains functions similar to exp(1e9 t) or exp(-1e9 t). A conditionally stable integration method like ODE45 may need to take time steps smaller than 1e-9 to avoid going unstable. It may not be able to find a suitable step length, and even if it does, more than 10^9 steps may take a long time.

You might have more luck changing ODE45 to one of the solver names ending in "s", which should work with a bigger step size without blowing up.

Niles

Thanks for taking the time to reply. I have actually already tried your first suggestion, and it blows up as soon as the Heaviside step function is different from 0. I also suspected that. I tried changing ODE45 to ode23s, and now the solution pops up almost instantly! Wow, that is really good. Thanks!

Best, Niles.

Homework Helper

In that case, you can probably solve it with ODE45 by splitting it into two parts, 0 to 1 and 1 to 2, so you force one time point to be exactly on the "edge" of the step function at t = 1. You might need to make two versions of xprime, with and without the step function included, so in effect you have heaviside(0) = 0 at the end of the first half of the solution and heaviside(0) = 1 at the start of the second half. That might give you a more accurate solution than ode23s, which will "round off" the edge of the step a bit in order to keep going.

Last edited:

Niles

Thanks for helping, that is kind of you. I tried extending the system of ODEs, but I get the message:

Warning: Failure at t=1.000000e+000. Unable to meet integration tolerances without reducing the step size below the smallest value allowed (3.552714e-015) at time t.

Something tells me choosing a different solver won't help me here. And I can't even change the time-step. Do I have any alternatives left?

Best, Niles.
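For readers without MATLAB, the thread's two pieces of advice (use a method suited to stiff systems, and split the integration at the step-function edge so t = 1 is exactly a segment boundary) can be sketched in Python. This is my translation, not code from the thread: the system is linear, x' = A x + b, so an A-stable implicit method such as backward Euler can take steps far larger than the 1/5e7 an explicit method would need. The step size and helper names are my own choices.

```python
import numpy as np

def backward_euler(A, b, x0, t0, t1, h):
    """Integrate the linear system x' = A x + b from t0 to t1 with fixed step h.
    Each step solves (I - h A) x_new = x_old + h b; since A is constant,
    the matrix on the left is the same every step."""
    n = round((t1 - t0) / h)
    M = np.eye(len(x0)) - h * A
    x = np.asarray(x0, dtype=float)
    for _ in range(n):
        x = np.linalg.solve(M, x + h * b)
    return x

b = np.array([1e9, 0.0, 0.0, 0.0])
A_off = np.array([[-50.0,     0.0,  5.0e4,  0.0],   # heaviside(t-1) = 0
                  [ 40.0, -3.3e3,    0.0,   0.0],
                  [  0.0,  2.0e3, -5.0e4,   0.0],
                  [  0.0,  1.0e3,    0.0,   0.0]])
A_on = A_off.copy()                                  # heaviside(t-1) = 1
A_on[2, 3] = 3.5e7
A_on[3, 3] = -5.0e7

# Segment 1: the step function is off; segment 2 restarts from the t = 1 state.
x_at_1 = backward_euler(A_off, b, [0, 0, 0, 0], 0.0, 1.0, 1e-4)
x_at_2 = backward_euler(A_on, b, x_at_1, 1.0, 2.0, 1e-4)

assert np.all(np.isfinite(x_at_1)) and np.all(np.isfinite(x_at_2))
```

The same split-at-the-edge idea carries over directly to ode45/ode23s in MATLAB, with one call per segment and the second call started from the final state of the first.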
http://en.wikipedia.org/wiki/Talk:Even_and_odd_functions
Talk:Even and odd functions

WikiProject Mathematics (Rated C-class, High-importance)

This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. Mathematics rating: C Class, High Importance, Field: Analysis

Extensions?

are there extensions of these simple ideas to higher dimensions? --achab 16:44, 28 May 2007 (UTC)

This concept is in no way bound to real numbers. The definition can be applied verbatim to any function $f : G \to H$, where $(G,+)$ and $(H,+)$ are arbitrary groups. Thus it also works without any change for vector spaces of any dimension. --131.188.3.21 (talk) 11:47, 16 June 2009 (UTC)

Origin

Okay, so what *is* the origin of the terms even/odd, if not from Taylor series? It's certainly not just "coincidence", as no sane person would keep the term "odd" for the even-powered monomials or vice versa. 216.167.142.196 04:46, 17 November 2005 (UTC)

Originally, the word "even" comes from "level", while "odd" comes from "sticking out". [1] says that the first instance of "even function" was in 1727 by Leonhard Euler, "odd function" in 1819 ([2]). — Omegatron 21:56, 17 January 2007 (UTC)

Starting with quotients, there is a word missing in the properties.

Why?

The choice of even and odd seems arbitrary; I've never seen it explained anywhere. Could somebody explain the motivation for defining even and odd functions? --yoshi 05:32, 23 January 2006 (UTC)

Even when you divide in half you have a mirror image on each side of the divisor (so it equals itself). Odd you're one man short, which makes people sad. To signify sadness/negativity, we use the minus sign. 69.143.236.33 06:29, 12 October 2007 (UTC)

As far as I'm aware, the terms odd and even are derived from the exponents of some basic odd and even functions; x² has the property that f(x)=f(-x) -- i.e. x²=(-x)².
Similarly with x⁴, x⁶ and so on. Since these have even exponents, all other functions which have this property are referred to as even. The opposite is true for x, x³, x⁵ and so on, so they are referred to as odd functions. --86.165.254.170 (talk) 16:08, 6 May 2008 (UTC)

Negative exponents

So is xⁿ an odd function if n is a negative odd integer (even if it's undefined at zero)? — Loadmaster 20:03, 17 January 2007 (UTC)

Yes. --Spoon! 03:33, 13 March 2007 (UTC)

Proofs

The properties listed here http://en.wikipedia.org/wiki/Even_and_odd_functions#Basic_properties are quite plain. Someone should add a short proof for each property. —Preceding unsigned comment added by stdazi (talk • contribs)

I'm not sure that's a good idea. The properties are so simple, I think the proofs can be left to the reader. Perhaps a proof or two could be given, but we don't need one for every property. Doctormatt 23:07, 11 August 2007 (UTC)

Definitions

I think we should make the definitions of odd and even functions more strict. My suggestions are:

Let $f:A\to\mathbb{R}$ where $A\subseteq\mathbb{R}$.

ƒ is even if and only if $f(x)=f(-x)$ for all $x\in A$.

Similarly, ƒ is odd if and only if $f(x)=-f(-x)$ for all $x\in A$.

DanielEriksson87 15:06, 11 September 2007 (UTC)

real-valued

The definition in the article restricts f to be real valued. There is no need for this restriction. Actually it is often useful to also consider complex valued even or odd functions. --131.188.56.77 (talk) 09:16, 16 June 2009 (UTC)

I think for complex functions you have to use the conjugate. --Royi A (talk) 20:12, 24 September 2009 (UTC)
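The symmetry definitions discussed on this talk page are easy to check numerically; here is a small Python sketch (my illustration, not part of the talk page) of the standard decomposition of a function into an even part f_e(x) = (f(x) + f(-x))/2 and an odd part f_o(x) = (f(x) - f(-x))/2:

```python
# Split a function with a symmetric domain into its even and odd parts.

def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

f = lambda x: x**3 + 2 * x**2 + 5   # neither even nor odd
fe, fo = even_part(f), odd_part(f)

for x in (-2.0, -0.5, 0.0, 1.5, 3.0):
    assert fe(x) == fe(-x)                     # even: f(x) = f(-x)
    assert fo(x) == -fo(-x)                    # odd:  f(x) = -f(-x)
    assert abs(fe(x) + fo(x) - f(x)) < 1e-12   # the two parts recover f
```

Here fe recovers the even-exponent terms (2x² + 5) and fo the odd-exponent term (x³), matching the exponent-based naming described above.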
https://tex.stackexchange.com/questions/554139/math-equation-goes-out-of-the-right-margin
# math equation goes out of the right margin

I have the following LaTeX code written in an Overleaf document:

```latex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}

\title{test}
\author{spanos.nikolaos}
\date{July 2020}

\begin{document}

\maketitle

\section{Introduction}

\textbf{Assign each array of weights to the relative actor name}

$$\textrm{Daniel Craig} = \begin{bmatrix}
-0.42056742 & -0.3540595 & -0.25417486 & -0.50596726 & -0.29918054\\
-0.23971583 & -0.39325562 & -0.35581827 & -0.3175518 & -0.2992685\\
-0.26149312 & -0.3268542 & -0.34264958 & -0.50005287 & -0.41450888\\
\ldots (m=60)
\end{bmatrix} = shape(5x60)$$

$$\textrm{Tobey Maguire} = \begin{bmatrix}
-0.30834767 & -0.26681098 & -0.2173222 & -0.11151562 & -0.27951762\\
-0.1721798 & -0.25406063 & -0.38693774 & -0.19798501 & -0.257399\\
-0.05970115 & -0.2399106 & -0.21202469 & -0.28024384 & -0.2577843\\
\ldots (m=60)
\end{bmatrix}= shape(5x60)$$

\end{document}
```

However, both equations go out of the right margin, as shown in the image below. I know that there are many similar questions on this matter; however, I am new to LaTeX and I don't understand how to apply most of the solutions I found to my much simpler case. Thank you in advance for any suggestion, and I apologize if this is a duplicate.

Update 1

Based on an answer in the comments section, by using the command \textrm{Tobey Maguire} I managed to overcome the extra space and wrong-names problem, so I got this,

Update 2 - relative to the reproducible code added

I opened a new Overleaf blank page and I added the code above (with packages), and I got the following shot. Since the 2 screenshots added earlier are from a pre-defined template, I don't know where the page margins are. Also, I don't know the default LaTeX margins of Overleaf, because you can see in the screenshots that the letters are much smaller. How can I modify the page margins in LaTeX?

• Hi and welcome. Please give a fully compilable code.
Jul 19 '20 at 9:44

• @AndréC Hey, what do you mean by compilable? Like also the libraries (packages) I used? Sorry, it's my 1st time here. :) Jul 19 '20 at 9:49

• @NikSp: A minimal working example (MWE) also contains the documentclass as well as the relevant packages. Jul 19 '20 at 9:53

• you should not have \ \\ in an equation (it doesn't do anything useful) and words should not be in math italic, so \textrm{David Lewis}, but we can not tell you how to make it fit if you do not show an example showing how wide your page is. Jul 19 '20 at 9:53

• @NikSp LaTeX is a language that, like C, Pascal and others, is compiled, unlike JavaScript and PHP, which are interpreted. Thus, a document is fully compilable when you only need to copy and paste the code to produce the PDF output. This saves users from having to search painfully for the packages your document needs to produce the PDF. Then, as leandriis says, having an MWE is the basis of the questions; see on this subject How to make a minimum example. Jul 19 '20 at 10:56

Do you really need to give the values to so many decimal places? If so you need to shrink the font, something like this, but if possible I'd print at normal size but use 3 decimal places or whatever is suitable.
```latex
\documentclass{article}
\usepackage{amsmath}

\begin{document}

\paragraph{Assign each array of weights to the relative actor name}

\small
\begin{align}
\begin{split}
\rlap{\text{David Lewis}}\\
&= \begin{bmatrix}
-0.42056742 & -0.3540595 & -0.25417486 & -0.50596726 & -0.29918054\\
-0.23971583 & -0.39325562 & -0.35581827 & -0.3175518 & -0.2992685\\
-0.26149312 & -0.3268542 & -0.34264958 & -0.50005287 & -0.41450888\\
\ldots (m=60)
\end{bmatrix}\\
& = \operatorname{shape}(5\times60)
\end{split}
\\
\begin{split}
\rlap{\text{James Gandolfini}}\\
& = \begin{bmatrix}
-0.30834767 & -0.26681098 & -0.2173222 & -0.11151562 & -0.27951762\\
-0.1721798 & -0.25406063 & -0.38693774 & -0.19798501 & -0.257399\\
-0.05970115 & -0.2399106 & -0.21202469 & -0.28024384 & -0.2577843\\
\ldots (m=60)
\end{bmatrix}\\
&= \operatorname{shape}(5\times60)
\end{split}
\end{align}
%no!!\ \\

\end{document}
```

• Thanks for your answer David. It seems to fit perfectly to my case. Jul 19 '20 at 10:09

• From your answer, I can understand how new I am to LaTeX, that even there is a shape operator and I was using a word for it. Thanks again David, very well structured answer, and it actually helps me for similar cases. Jul 19 '20 at 10:13

• @NikSp there isn't a shape operator predefined but \operatorname{foobar} makes a roman foobar operator with the same spacing as the \log or \sin predefined operators Jul 19 '20 at 10:16

You need to reduce the font size if you want to fit such big objects. I propose a solution with siunitx and a tabular built in text mode via lrbox. You can choose the font size as an optional argument to weightmatrix.
```latex
\documentclass{article}
\usepackage{amsmath,siunitx}

\DeclareMathOperator{\shape}{shape}

\newsavebox{\weightmatrixbox}
\newenvironment{weightmatrix}[1][\normalsize]
 {%
  \left[
  \begin{lrbox}{\weightmatrixbox}#1% a size changing command
  \setlength{\tabcolsep}{0.5\tabcolsep}%
  \begin{tabular}{@{}*{5}{S[table-format=-1.8]}@{}}%
 }
 {\end{tabular}\end{lrbox}\usebox{\weightmatrixbox}\right]}

\title{test}
\author{spanos.nikolaos}
\date{July 2020}

\begin{document}

\maketitle

\section{Introduction}

\subsection*{Assign each array of weights to the relative actor name}

\begin{align}
&\text{Daniel Craig} \notag\\
& = \begin{weightmatrix}[\footnotesize]
-0.42056742 & -0.3540595 & -0.25417486 & -0.50596726 & -0.29918054 \\
-0.23971583 & -0.39325562 & -0.35581827 & -0.3175518 & -0.2992685 \\
-0.26149312 & -0.3268542 & -0.34264958 & -0.50005287 & -0.41450888 \\
\ldots (m=60)
\end{weightmatrix} \\
& = \shape(5 \times 60) \notag \\[2ex]
&\text{Tobey Maguire} \notag\\
& = \begin{weightmatrix}[\footnotesize]
-0.30834767 & -0.26681098 & -0.2173222 & -0.11151562 & -0.27951762 \\
-0.1721798 & -0.25406063 & -0.38693774 & -0.19798501 & -0.257399 \\
-0.05970115 & -0.2399106 & -0.21202469 & -0.28024384 & -0.2577843 \\
\ldots (m=60)
\end{weightmatrix} \\
&= \shape(5 \times 60) \notag
\end{align}

\end{document}
```

• Way too much advanced for my level of LaTeX knowledge. Also an exceptional answer. Thanks for your reply egreg! Jul 19 '20 at 11:03

Another possibility is to position the page in landscape mode.
```latex
\documentclass[landscape]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage[margin=10mm]{geometry}

\title{test}
\author{spanos.nikolaos}
\date{July 2020}

\begin{document}

\maketitle

\section{Introduction}

\textbf{Assign each array of weights to the relative actor name}

$$\textrm{Daniel Craig} = \begin{bmatrix}
-0.42056742 & -0.3540595 & -0.25417486 & -0.50596726 & -0.29918054\\
-0.23971583 & -0.39325562 & -0.35581827 & -0.3175518 & -0.2992685\\
-0.26149312 & -0.3268542 & -0.34264958 & -0.50005287 & -0.41450888\\
\ldots (m=60)
\end{bmatrix} = shape(5x60)$$

$$\textrm{Tobey Maguire} = \begin{bmatrix}
-0.30834767 & -0.26681098 & -0.2173222 & -0.11151562 & -0.27951762\\
-0.1721798 & -0.25406063 & -0.38693774 & -0.19798501 & -0.257399\\
-0.05970115 & -0.2399106 & -0.21202469 & -0.28024384 & -0.2577843\\
\ldots (m=60)
\end{bmatrix}= shape(5x60)$$

\end{document}
```
https://ireggae.com/1bgg4m/which-one-is-not-used-in-verifying-ohms-law-7fa3eb
Ohm's law is used to figure out which resistors are needed. Consider a circuit with a cell and an ohmic resistor, R. If the resistor has a resistance of 5 Ω and the voltage across the resistor is 5 V, then we can use Ohm's law to calculate the current flowing through the resistor. That relationship is $I = V/R$; the mnemonic arrangement of $V$, $I$ and $R$ is sometimes known as the Ohm's law triangle. The Ohm's law equation is often explored in physics labs using a resistor, a battery pack, an ammeter, and a voltmeter. Calculate the value in each trial. Use the formula to calculate the resistance of the coil. One way to think of this conceptually is that as a current, I, flows across a resistor (or even across a non-perfect conductor, which has some resistance), R, the current loses energy; measuring that voltage drop against the current is how Ohm's law is verified. Neon, fluorescent and sodium lamps are examples of gas-discharge devices used in other applications. Select the best current range: whichever one gives the strongest meter indication without over-ranging the meter. If your circuit is not properly connected, it is possible to damage the electronic equipment used in this lab. Ohm's law is also not applicable to non-linear elements. The connecting wires are covered with cotton to avoid short-circuiting, and the ammeter is connected in series. Set up your circuit with the power supply OFF and the output voltage turned DOWN TO ZERO. For example, if you know a given unit is supplied with 110 V and draws 15 A, dividing the 110 V by the 15 A gives about 7.3 Ω of resistance (multiplying them instead gives the power, 1650 W). For resistors that are "Ohmic," that is, that follow Ohm's law, there is a relationship between the electric potential difference V across that resistor and the current I passing through that resistor.
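The worked numbers quoted in this section (5 V across a 5 Ω resistor, and a unit supplied with 110 V drawing 15 A) can be checked with a short Python sketch. The function name is ours, for illustration only; note that dividing volts by amps yields ohms, while multiplying them yields watts.

```python
# Ohm's law: I = V / R, with rearrangements R = V / I and P = V * I.
def current(voltage_v, resistance_ohm):
    return voltage_v / resistance_ohm

# Worked example from the text: 5 V across a 5 ohm resistor.
i = current(5.0, 5.0)  # 1.0 A

# The 110 V / 15 A example: dividing volts by amps gives resistance
# (about 7.3 ohm); multiplying them gives power (1650 W).
r = 110.0 / 15.0
p = 110.0 * 15.0

print(i, round(r, 1), p)
```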
This law states that the amount of current flowing in a circuit depends upon the amount of voltage in the circuit and the amount of resistance in the circuit. The fundamental relationship among voltage, current, and resistance was discovered by Georg Simon Ohm. The Ohm's law formula can be used to calculate current, voltage and resistance: use Ohm's law to relate any two of them to the third. Ohm's law is not applicable to unilateral networks. Fuse design: Ohm's law tells us the amount of resistance we need to establish a certain current with a certain amount of voltage. One ohm is equal to the resistance of a conductor through which a current of one ampere flows when a potential difference of one volt is applied across its ends. But in the case of a bulb, the temperature will not be constant. The resistance of resistors is indicated using colour-coded bands on the body of the resistor. A megohm is equal to one million ($10^6$) ohms. Just enter 2 known values and the calculator will solve for the others. Therefore the resistance R is viewed as a constant independent of the voltage and the current. The equipment is downright ancient but I like the physically moving needles. Limitations of Ohm's law. Ohm's law allows you to figure out the amperage, voltage or wattage when one of them is missing and you need to know it for your testing or application. They are connected in series in the device. Ans: Thick copper wire has negligible resistance. You'll learn the use of the voltmeter and ammeter in parallel and series, resistors, DC power supply, wires and all other equipment which is used in doing the practical. We can use Ohm's law to determine the resistor value that will give us the desired current value. Rearranging for R and plugging in the values 5 volts and 0.018 amps, the resistor value we need for R1 is around 277 ohms to keep the current through the LED under the maximum current rating. Ohms (Ω): by analogy, a measure of how difficult it is for water to flow in a pipe. The ammeter has low impedance.
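The LED series-resistor calculation sketched in this section (R = V / I with 5 V and 18 mA) can be written out as a few lines of Python. The stock list below is illustrative only, not a real kit inventory, and the LED's forward-voltage drop is ignored, as it is in the text.

```python
# Sizing a series resistor for an LED: R = V / I keeps the current
# under the LED's maximum rating.
supply_voltage = 5.0  # V
max_current = 0.018   # A (18 mA)

r_needed = supply_voltage / max_current  # about 277.8 ohm

# Round up to the next value actually on hand; this stock list is a
# hypothetical example, not a real parts-kit inventory.
stock = [100, 220, 470, 1000]
r_chosen = min(r for r in stock if r >= r_needed)

print(round(r_needed, 1), r_chosen)
```

With this stock list the sketch picks 470 Ω, the next value above the computed 277.8 Ω.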
Ohm's law synonyms, Ohm's law pronunciation, Ohm's law translation, English dictionary definition of Ohm's law. DP Physics Electricity Phet Simulation Team: _____ Use the Phet circuit simulation and create the given circuits. A means ampere, the unit of current. I hope you have liked this post on the law of Ohm. Georg Simon Ohm (16 March 1789 – 6 July 1854) was a Bavarian (German) physicist and mathematician. Calculate the current through the lamp. Calculate power, current, voltage or resistance. Fractional prefix multipliers are seldom used for resistances or reactances; rarely will you hear or read about a milliohm or a microhm. 5 practical applications of Ohm's law in daily life: Ohm's law is a fundamental law of electrical engineering. Unilateral networks allow the current to flow in one direction. Hart and MJ use the other setup in the laboratory to verify Ohm's law. The resistance of a conductor depends on its length, cross-sectional area and the material of the conductor. An ammeter is a device used to measure the current at a given location. Today you'll learn a step-by-step guide to performing the Ohm's law experiment. A voltmeter is a device that is used to measure the potential difference between two points. Limitations of Ohm's law. Ohm's law is also used in DC ammeters and other DC shunts to divert the current. DESCRIPTION OF APPARATUS USED. In the top corner of the Ohm's law triangle is the letter V; in the left-hand corner, the letter I; and in the bottom right-hand corner, R. Fuses are protection components that limit the amount of current flowing through the circuit. Belsin and Pendura work on verifying Ohm's law. The device that is used to detect the flow of current is called an ammeter.
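The battery-and-lamp exercise quoted in this section (a 10 V battery across a lamp of resistance 4 Ω) is a one-liner to verify:

```python
# Quoted exercise: a 10 V battery connected to a lamp of resistance 4 ohm.
v, r = 10.0, 4.0
i = v / r  # 2.5 A, matching the answer given in the text
print(i)
```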
Do not proceed with your experiments until your TA has checked the circuit. (vi) What is the material of the wire used for making a rheostat? The number of batteries is increased to push up the voltage and current for a fixed resistance set on the resistance box. This is true as long as the temperature remains constant. Ans: Ω·m. (v) Why are connecting wires thick and covered with cotton thread? Reading resistor values. V = iR, so 12 = 4R and R = 3.0 Ω. Such types of network consist of elements like a diode, transistor, etc. In an experiment of verification of Ohm's law, the following observations are obtained: potential difference V (in volt): 0.5, 1.0, 1.5, 2.0, 2.5; current I (in ampere): 0.2, 0.4, 0.6, 0.8, 1.0. From the observation table, the resistance in the circuit is 2.5 Ω. To verify that voltage and current are directly proportional, use a 1 kΩ resistor. In National 5 Physics, calculate the resistance for combinations of resistors in series and parallel. MJ plots data in Desmos on his tablet. We are now ready to see how Ohm's law is used to analyse circuits. One fundamental experiment that every engineer will need to complete during a lab class is a validation of Ohm's law using measurements from a real circuit. Q: A 10 V battery is connected to a lamp of resistance 4 Ω. A: From the law of Ohm, current i = V/R = (10/4) A = 2.5 A. How to use Ohm's law formula to solve numerical problems. The voltmeter has a high resistance. As a high school teacher, Ohm began his research with the new electrochemical cell. Set up the circuit diagram as shown below. The constant R represents the opposition to a flow of electrical charges in a conductor. One of the fundamental laws describing how electrical circuits behave is Ohm's law. This article demonstrates the Ohm's law practical experiment.
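The observation table quoted in this section (potential difference V from 0.5 to 2.5 volt against current I from 0.2 to 1.0 ampere) can be checked mechanically; this sketch just verifies that every trial gives the same ratio V/I:

```python
# Observation table from the quoted Ohm's-law verification:
# potential difference V (volt) against current I (ampere), five trials.
volts = [0.5, 1.0, 1.5, 2.0, 2.5]
amps = [0.2, 0.4, 0.6, 0.8, 1.0]

ratios = [v / i for v, i in zip(volts, amps)]

# Every trial yields the same ratio V/I -- the resistance in the circuit.
assert all(abs(x - ratios[0]) < 1e-9 for x in ratios)
print(round(ratios[0], 6))  # 2.5 ohm
```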
If your multimeter is autoranging, of course, you need not bother with setting ranges. The experiment is repeated for different values of current and the corresponding potential difference is noted. Ohm's law states that the current through a conductor between two points is directly proportional to the voltage across the two points. 277 Ω is not a common value for off-the-shelf resistors, so for this experiment use a 470 Ω resistor (yellow-purple-brown), which is the next closest value greater than 277 in the ADALP2000 parts kit. According to Ohm's law, the current flowing through a conductor is directly proportional to the potential difference applied across the ends of the conductor, provided the physical conditions remain the same. In equation form, Ohm's law is: V = IR. The first three colour bands indicate the value of the resistor in ohms. (Georg Simon Ohm was a Bavarian (German) physicist and mathematician.) If the ammeter is connected in parallel, it would cause a short circuit in the circuit. Extremely small resistances and reactances are usually referred to in terms of conductance. According to Ohm's law, there is a linear relationship between the voltage drop across a circuit element and the current flowing through it. Objectives of the Ohm's law lab report. List of components. It is equal to the resistance of a wire of length one metre and cross-sectional area 1 m². (iv) What is the unit of resistivity? Discussion: The purpose of this experiment was to verify Ohm's law, which states that the potential difference across a conductor and the current through it are directly proportional. On the right, a resistor used in the electronic industry.
Variable DC power supply; 1 kΩ resistor (colour code brown, black, red, gold); breadboard; connecting wires (jumper wires); ammeter; circuit diagram. A simple-to-use Ohm's law calculator. So this one does obey Ohm's law, but that one does not. Ohm's law is one of the most frequently used laws in the analysis of electrical circuits. n. The law stating that the direct current flowing in a conductor is directly proportional to the potential difference between its ends. Circuit #1: set the voltage to 9 V and use a 33 Ω resistor; calculate the current using Ohm's law and verify it with an ammeter placed after the resistor. State Ohm's law and draw a labelled diagram to verify it. Ohm's law is based on the condition that the temperature is constant. 1 Ω = 1 V/A. It has a large number of practical applications in almost all electrical circuits and electronic components. Below is what your circuit should look like all put together. Ans: ohm-metre. It is measured in amps. It is connected in parallel. Record this current value along with the resistance and voltage values previously recorded. To help remember the formula it is possible to use a triangle with one side horizontal and the peak at the top, like a pyramid. Following are the limitations of Ohm's law: Ohm's law is not applicable for unilateral electrical elements like diodes and transistors, as they allow the current to flow in one direction only. These values will be found to be a constant. Electric current is the flow of electrons through a conductor. You can find the lab report, reading, observations, and theory here. Verify the values on the amp meter and volt meter by using Ohm's law for the calculations. Use Ohm's law to determine the resistance in a circuit if the voltage is 12.0 volts and the current is 4.0 amps.
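The "Circuit #1" exercise mentioned in this section (a 9 V supply with a 33 Ω resistor) can be checked the same way:

```python
# "Circuit #1": 9 V supply with a 33 ohm resistor; the ammeter placed
# after the resistor should read I = V / R.
v, r = 9.0, 33.0
i = v / r
print(round(i, 3))  # about 0.273 A
```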
2021-04-17 04:31:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5934339165687561, "perplexity": 730.5605046995595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038101485.44/warc/CC-MAIN-20210417041730-20210417071730-00270.warc.gz"}
https://kar.kent.ac.uk/19038/
# Growth Orders Occurring In Expansions Of Hardy-Field Solutions Of Algebraic Differential-Equations

Shackell, John (1995) Growth Orders Occurring In Expansions Of Hardy-Field Solutions Of Algebraic Differential-Equations. Annales de l'institut Fourier, 45 (1). pp. 183-221. ISSN 0373-0956. (The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided) (KAR id:19038)

## Abstract

We consider the asymptotic growth of Hardy-field solutions of algebraic differential equations, i.e. solutions with no oscillatory component, and prove that no 'sub-term' occurring in a nested expansion of such a solution can tend to zero more rapidly than a fixed rate depending on the order of the differential equation. We also consider series expansions. An example of the results obtained may be stated as follows. Let g be an element of a Hardy field which has an asymptotic series expansion in x, e(x) and lambda, where lambda tends to zero at least as rapidly as some negative power of exp(e(x)). If lambda actually occurs in the expansion, then g cannot satisfy a first-order algebraic differential equation over R(x).

Item Type: Article
Keywords: ASYMPTOTICS; DIFFERENTIAL EQUATIONS; HARDY FIELDS; DIFFERENTIAL ALGEBRA
Subjects: Q Science
Divisions: Division of Computing, Engineering and Mathematical Sciences > School of Mathematics, Statistics and Actuarial Science
Deposited by: I.T. Ekpo, 25 Oct 2009 09:36 UTC; last modified 16 Nov 2021 09:57 UTC
https://kar.kent.ac.uk/id/eprint/19038 (The current URI for this page, for reference purposes)
2023-01-27 04:54:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221558332443237, "perplexity": 1331.8781089414567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00267.warc.gz"}
https://mathoverflow.net/questions/100265/not-especially-famous-long-open-problems-which-anyone-can-understand/100391
# Not especially famous, long-open problems which anyone can understand

Question: I'm asking for a big list of not especially famous, long open problems that anyone can understand. Community wiki, so one problem per answer, please.

Motivation: I plan to use this list in my teaching, to motivate general education undergraduates, and early year majors, suggesting to them an idea of what research mathematicians do.

Meaning of: not too famous. Examples of problems that are too famous might be the Goldbach conjecture, the $3x+1$-problem, the twin-prime conjecture, or the chromatic number of the unit-distance graph on ${\Bbb R}^2$. Roughly, if there exists a whole monograph already dedicated to the problem (or narrow circle of problems), no need to mention it again here. I'm looking for problems that, with high probability, a mathematician working outside the particular area has never encountered.

Meaning of: anyone can understand. The statement (in some appropriate, but reasonably terse formulation) shouldn't involve concepts beyond (American) K-12 mathematics. For example, if it weren't already too famous, I would say that the conjecture that "finite projective planes have prime power order" does have barely acceptable articulations.

Meaning of: long open. The problem should occur in the literature or have a solid history as folklore. So I do not mean to call here for the invention of new problems or to collect everybody's laundry list of private-research-impeding unproved elementary technical lemmas. There should already exist at least a small community of mathematicians who will care if one of these problems gets solved.

I hope I have reduced subjectivity to a minimum, but I can't eliminate all fuzziness -- so if in doubt please don't hesitate to post!

To get started, here's a problem that I only learned of recently and that I've actually enjoyed describing to general education students.
http://en.wikipedia.org/wiki/Union-closed_sets_conjecture Edit: I'm primarily interested in conjectures - yes-no questions, rather than classification problems, quests for algorithms, etc. - You might get more success if you sampled certain open problem lists and indicated which ones fit your list and which ones did not. I could mention various combinatorial problems such as integer complexity, determinant spectrum, covering design optimization, but I can't tell from your description if they would be suitable for you. Gerhard "They Are Suitable For Me" Paseman, 2012.06.21 –  Gerhard Paseman Jun 21 '12 at 19:11 Here is some collection of some other "collect open problems" quests. on MO: mathoverflow.net/questions/96202/… PS Nice question ! PSPS may be add tag "open-problems" –  Alexander Chervov Jun 21 '12 at 20:53 Nice question!! –  Suvrit Jun 22 '12 at 3:25 To save the search for explanation of cryptic acronyms for those of us outside US, K-12 means high school. @Mahmud: You are using a wrong meaning of the word “problem”. The TSP is not an unproved mathematical statement, it is a computational task. –  Emil Jeřábek Jun 22 '12 at 12:05 More precisely, K-12 means anything up to high school (K = Kindergarten, 12 = 12th grade, and K-12 covers this range). –  Henry Cohn Jun 22 '12 at 13:05 One problem which I think is mentioned in Guy's book is the integer block problem: does there exist a cuboid (aka "brick") where the width, height, breadth, length of diagonals on each face, and the length of the main diagonal are all integers? update 2012-07-12 Since the question has returned to the front page, I'm taking the liberty to add some links that I found after Scott Carnahan's comments. (Scott deserves the credit, really, but I thought the links belonged in the answer rather than in the comments.) - Because so much has been known about Pythagorean triples for so long, I'm shocked that this problem is open. Is there an intuitive explanation of why the problem is so hard? 
–  Vectornaut Jun 22 '12 at 5:38 I'm afraid I have no idea (mind you, I can think of no intuitive reason why it wouldn't be hard). Further details at mathworld.wolfram.com/PerfectCuboid.html –  Yemon Choi Jun 22 '12 at 5:49 The solution space forms an algebraic surface in the projectivized space of box dimensions. The surface has rather high degree, and in fact van Luijk showed that it is of general type, (and therefore rather resistant to standard methods). –  S. Carnahan Jun 22 '12 at 6:03 Arguably, the brick violates the "outside mathematician" condition. I even tried to convince some crank (may one say this word here? :-) not to waste as many of his lifetime to the problem as I did. –  Hauke Reddmann May 14 at 14:49 Can we cover a unit square with $\dfrac1k \times \dfrac1{k+1}$ rectangles, where $k \in \mathbb{N}$? (Note that the areas sum to $1$ since $\displaystyle \sum_{k \in \mathbb{N}}\dfrac1{k(k+1)} = 1$) Here is an MO thread discussing some of the progress on this problem. - The moving sofa problem: What rigid two-dimensional shape has the largest area $A$ that can be maneuvered through an L-shaped planar region with legs of unit width? So far the best results are $2.219531669\lt A\lt 2.8284$ (approximately). - The Casas-Alvero conjecture: let the characteristic of the field $k$ be $0$. If a monic polynomial $f\in k[X]$ of degree $n$ has a common root with each of its derivatives $f',\ldots,f^{(n-1)}$, then $f(X)=(X-a)^n$ for some $a\in k$. - I guess $k$ must be of characteristic $0$ –  Joël Jun 22 '12 at 19:35 @Joel. Right! If $k$ is of finite characteristic $p$, then $X^{2p}+X^p$ does share a root with every derivative, but is not a monomial. –  Denis Serre Jun 22 '12 at 20:38 For those interested in this conjecture, here is what I believe the current state of knowledge on the conjecture : arxiv.org/abs/math/0605090 The first open case is $n=12$. Interestingly, the proofs in the known cases use scheme theory (over $\mathbf{Z}$). 
–  François Brunault Jun 22 '12 at 22:57 @François Brunault. Some months ago I asked this question mathoverflow.net/questions/94838/… with the Casas-Alvero conjecture in mind. It appeared from the answers that instead of the argument using scheme theory, the simpler Lefschetz principle ( proofwiki.org/wiki/Lefschetz_Principle_(First-Order) ) can be used. (answering to my question, Qiaochu Yuan also indicated an ultraproduct construction which is even simpler than the Lefschetz Principle, since no completeness result is used). –  js21 Jun 25 '12 at 6:22 arxiv.org/abs/1504.00274 –  Andreas Thom Jul 31 at 7:34 The lonely runner conjecture. As Wikipedia puts it: Consider $k + 1$ runners on a circular track of unit length. At $t = 0$, all runners are at the same position and start to run; the runners' speeds are pairwise distinct. A runner is said to be lonely if at distance of at least $1/(k + 1)$ from each other runner. The lonely runner conjecture states that every runner gets lonely at some time. - Also, I suspect this is equivalent to the lonely starting post conjecture, which is the conjecture above except that one of the runners has speed 0 and the statement is that he/she gets lonely. Gerhard "Ask Me About Going Slow" Paseman, 2012.06.22 –  Gerhard Paseman Jun 22 '12 at 18:59 Observe that the human condition implies that everyone gets lonely at some time. In particular the runners get lonely. –  Asaf Karagila Jun 22 '12 at 20:59 This is open for $k\geq 7$. The proof for $k=6$ was done by Barajas and Serra using elaborate computer-assisted casework, and many simplifications that rely on the fact that $6+1$ is prime. It is worth noting that when the ratio of two speeds is irrational, the problem is made easier by density arguments, so the essentially hardest case is when all the speeds are integers. Therefore this is a combinatorial number theory question disguised as basic calculus. 
–  Andrew Dudzik Jul 2 '12 at 2:14 This is the second time I've seen this question on mathoverflow and this will be the second time I've posted this answer. Singmaster's conjecture says there is a finite upper bound on the number of times a number (other than the $1$s on the edge) can appear in Pascal's triangle. The upper bound may be as low as $8$. If so, then no number (besides those $1$s) appears more than eight times in Pascal's triangle. Only one number is known to appear that many times: $$\binom{3003}{1} = \binom{78}{2} = \binom{15}{5} = \binom{14}{6}$$ It has been proved that infinitely many numbers appear twice; similarly three times, four times, and six times. It is unknown whether any number appears five times or seven times. Singmaster states that Erdős said the conjecture is probably true but probably difficult to prove. - We don't really need Erdős to tell us it's probably true when we can do straightforward probabilistic estimates (plus some geometry of plane curves). A short computation shows that there are no numbers less than $10^{1000}$ that have odd multiplicity greater than 3, and heuristics suggest it is quite unlikely that such numbers exist. –  S. Carnahan Jul 2 '12 at 9:49 @S.Carnahan : How did you do that "short computation"? –  Michael Hardy Jul 6 '12 at 21:49 Odd multiplicity means you have a number of the form $\binom{2k}{k}$. It's not hard to check whether a number has the form of a binomial coefficient $\binom{m}{n}$ in SAGE, since you have a built-in function that estimates integer $n$-th roots. –  S. Carnahan Jul 12 '12 at 7:02 I love this problem! Everything about it is simple and compelling, and it can be understood by anyone who knows how to add. Is there also a simple heuristic argument for why it should be true? @S. Carnahan , can you flesh out your heuristics a little more? What's this stuff about geometry of plane curves?
–  Vectornaut Jul 22 '12 at 18:44 Gourevitch's conjecture: $$\sum_{n=0}^\infty \frac{1+14n+76n^2+168n^3}{2^{20n}}\binom{2n}{n}^7 = \frac{32}{\pi^3}.$$ - wow, at first look it seems hard to believe that this is still a conjecture! –  Suvrit Jun 22 '12 at 3:26 As I understand it, this kind of identity is amenable in principle to automatic theorem-proving methods, but (using known techniques) is out of reach of current computers. –  Timothy Chow Jun 22 '12 at 14:40 Tim, there is also an example, from December 2011, for $1/\pi^4$ due to Jim Cullen (members.bex.net/jtcullen515), another mathematics amateur; I cannot easily fine it online though. –  Wadim Zudilin Aug 25 '12 at 11:13 Is the sequence $(3/2)^n \mod 1$ dense in the unit interval? In the other direction, Mahler's 3/2 problem: Do all elements of this sequence with large enough index $n$ lie in the interval $(0,1/2)$? It is known that $\beta^n$ is uniformly distributed modulo one for almost all $\beta>1$, but explicit examples of $\beta$ for which density holds are not known. This question seems to originate in work of Weyl and Koksma on uniform distribution. Update: Since posting this answer I've attempted to find some references with which to flesh it out, with only modest success. The earlier paper I have identified which deals with this question directly is T. Vijayaraghavan's 1940 article On the fractional parts of the powers of a number, in which it is shown that the sequence $(3/2)^n \mod 1$ has infinitely many limit points. Mahler conjectured in 1968 that the answer to his question is negative. Jeffrey Lagarias' 1985 survey on the Collatz problem, The 3x + 1 Problem and Its Generalizations, includes a one-page overview of the literature on the distribution of this sequence. Flatto, Lagarias and Pollington subsequently proved that the diameter of the set of accumulation points is at least 1/3; Mahler's question would be answered in the negative if this is improved to "at least 1/2". 
- An excellent reference is the recent book Distribution modulo one and Diophantine approximation, by Yann Bugeaud. –  Andres Caicedo Jan 6 '14 at 19:35 There are a lot of elementary number theory conjectures, but one that is especially elementary is the so-called Giuga Conjecture (or Agoh-Giuga Conjecture), from the 1950s: a positive integer $p>1$ is prime if and only if $$\sum_{i=1}^{p-1} i^{p-1} \equiv -1 \pmod{p}$$ - @temp: $i$ is the summation variable. –  Emil Jeřábek Jun 25 '12 at 10:25 Thank you, Emil. I was under the impression that $i^2=-1$, I couldn't help it. –  Włodzimierz Holsztyński May 4 '13 at 3:51 Is $e+\pi$ rational? - Too famous? Most mathematicians have heard of this one, haven't they? –  Timothy Chow Jun 22 '12 at 14:41 Popular books, I don't know, but David Feldman wrote, "I'm looking for problems that, with high probability, a mathematician working outside the particular area has never encountered." (Emphasis mine.) It feels to me that most professional mathematicians, even those not working in transcendental number theory, are familiar with this one. –  Timothy Chow Jun 22 '12 at 18:17 I have never heard this before... –  Filippo Alberto Edoardo Jul 3 '12 at 14:53 Even though I've seen this one at many different places, what I don't know is: why is this question so exceedingly tricky? –  Suvrit Jul 27 '12 at 20:59 @Suvrit: The way I think of it, it's not so much that this particular question is exceedingly tricky; it's that we don't know that many ways to prove that a specific number is irrational. For "most" numbers that one can name, irrationality is unknown. This just happens to be one of the simplest examples. Similarly, it's not hard to write down a simple Diophantine equation or PDE whose solvability is unknown. –  Timothy Chow Aug 27 '12 at 3:19 It is currently unknown if all triangles have a periodic billiard path.
(See, for example, http://en.wikipedia.org/wiki/Outer_billiard#Existence_of_Periodic_Orbits) - Additional info: The best known result is that all triangles of maximum angle 100 degrees admit a periodic orbit. It is also known that all triangles (in fact, all polygons) with angles that are rational multiples of $\pi$ admit periodic orbits. –  Alex Becker Jul 3 '12 at 4:08 From "An Invitation to Mathematics": Are there any integer solutions to $x^3 + y^3 + z^3 = 33$ ? I thought this might be a good candidate since that book was meant as a bridge from competitive Mathematics to research. There are a few other examples, but I am quoting only one here due to your requirement. - Is there something special about 33?! –  Vectornaut Jun 22 '12 at 5:42 For small numbers (<100), 33, 42 and 74 are still unresolved. See this: asahi-net.or.jp/~kc2h-msm/mathland/math04/matb0100.htm . @Vectornaut when I saw your comment the first thing I thought of was the irrational solution $(\sqrt[3]{33/3},\sqrt[3]{33/3},\sqrt[3]{33/3})$. –  Ng Yong Hao Jun 22 '12 at 13:45 But I feel like this is not a good introduction to what actual research, at least for a beginning researcher, is like. Usually, you are taught fairly advanced methods and some result that was achieved using those methods and then are asked to modify it a little bit to see what you can do. –  David Corwin Jun 24 '12 at 18:30 –  KConrad Oct 22 '13 at 1:38 Problem: The partition function $p(n)$ is even (resp. odd) half of the time. Of course you need to explain to a general audience what the partition function is, but that's not hard, my daughter in K1 got as an assignment to compute $p(n)$ for $n$ up to 4. You also need to explain "half of the time", which means that the number of $n < x$ such that $p(n)$ is even, divided by $x$, has limit 1/2 when $x$ goes to infinity, so you need the notion of limit of a sequence, which is in K12, isn't it? The problem is certainly famous among specialists, but not too famous.
I don't think there are books on it, for instance. It is old (formulated as a conjecture during the '50s), with a history going back to Ramanujan. And I like it very much. UPDATE (28/2/2015) Here is a useful reference: Ken Ono, The parity of the partition function, Electronic Res. Ann. (1995) - The notion of limit of a sequence is not usually taught in the US until a real analysis course, which is usually taken only by students in mathematics and frequently not until the third (or even last) year of university. (But I think this case is concrete enough that the necessary ideas here could be explained to a high school student.) –  Alexander Woo Jun 22 '12 at 4:05 Sequences are taught before real analysis, usually in Calc 2 along with infinite series. And the more basic material is suitable for high school, even a decent precalculus class. These are only sequences of reals so it isn't very general, and while they are taught, students might not really "understand" them until later. –  Francis Adams Jun 22 '12 at 12:51 Yes, there is an option for seniors in a good high school to learn some calculus, but most calculus courses in the United States no longer give a rigorous definition of a limit. Without a rigorous definition, there are some subtle possibilities for what might go wrong that won't be appreciated. (Of course, very few students at that level have the mathematical maturity to understand a rigorous definition well enough to appreciate the subtle possibilities anyway, which is why the rigorous definition isn't taught anymore.) –  Alexander Woo Sep 6 '12 at 4:11 Also, "half of the time" can be restated in probabilistic terms. In other words, instead of framing it as a real analysis question, appeal to probabilistic intuition. Alexander Woo's remarks about subtle possibilities notwithstanding, vastly larger numbers of students learn elementary probability and statistics than calculus.
–  Victor Protsak Jan 6 '14 at 19:28 There are infinitely many primes $p$ such that the repeating part of the decimal expansion of $1/p$ has length $p-1$. First explicitly asked by Gauss, now generally thought of as a corollary of Artin's primitive root conjecture. - I think Artin's primitive root conjecture counts as pretty well known. –  John Pardon Jun 22 '12 at 1:51 @unknown: That's a fair comment. Still, if the goal is to find conjectures that are accessible to the general math-loving public that they may not have heard of before, I think the decimal expansion problem counts. Perhaps David Feldman can clarify whether he really means that 90% of non-number theorists haven't heard of the conjecture of which this happens to be a corollary, or whether he means something weaker than that. –  Timothy Chow Jun 22 '12 at 14:37 The circulant Hadamard matrix conjecture, first stated in print by Ryser in 1963. It can be stated as follows. If $n>4$, then there does not exist a sequence $(a_1,a_2,\dots,a_n)$ of $\pm 1$'s satisfying $$\sum_{i=1}^n a_i a_{i+k}=0,\ 1\leq k\leq n-1,$$ where the subscript $i+k$ is taken modulo $n$. - Related to this, the Hadamard conjecture : there exist Hadamard matrices of order $4k$ for every $k$. en.wikipedia.org/wiki/Hadamard_matrix#The_Hadamard_conjecture –  François Brunault Jun 22 '12 at 9:19 Further related: Let m be the largest integer such that the integer interval (-m,m) is contained in the set D_n, the set of determinants of order n 0-1 matrices. What function of n are very good bounds for approximating m? Cf determinant spectrum on Will Orrick's maxdet site. Gerhard "Ask Me About Binary Matrices" Paseman, 2012.06.22 –  Gerhard Paseman Jun 22 '12 at 19:08 Here is one which I found at this MO link: $$\frac{24}{7\sqrt{7}} \int_{\pi/3}^{\pi/2} \log \left| \frac{\tan(t)+\sqrt{7}}{\tan(t)-\sqrt{7}}\right|\ dt = \sum_{n\geq 1} \left(\frac n7\right)\frac{1}{n^2},$$ where $\displaystyle\left(\frac n7\right)$ denotes the Legendre symbol. 
Not really my favorite identity, but it has the interesting feature that it is a conjecture! It is a rare example of a conjectured explicit identity between real numbers that can be checked to arbitrary accuracy. This identity has been verified to over 20,000 decimal places. See J. M. Borwein and D. H. Bailey, Mathematics by Experiment: Plausible Reasoning in the 21st Century, A K Peters, Natick, MA, 2004 (pages 90-91). - It was a good idea to split the two conjectures to two answers, but you should have done it the other way around. I venture to guess that most people, like me, originally upvoted this answer because of Sendov's conjecture, not because of an obscure integral equality which I couldn't explain to any high school student I know of. –  Emil Jeřábek Jun 25 '12 at 10:34 @Emil: Emil, The answers were split because a user requested me to do so. Otherwise I would have kept it here itself. –  S.C. Jul 1 '12 at 7:40 Sendov's Conjecture For a polynomial $$f(z) = (z-r_{1}) \cdot (z-r_{2}) \cdots (z-r_{n}) \quad \text{for} \ \ \ \ n \geq 2$$ with all roots $r_{1}, ..., r_{n}$ inside the closed unit disk $|z| \leq 1$, each of the $n$ roots is at a distance no more than $1$ from at least one critical point of $f$. - I always enjoyed telling people about the Inscribed square problem : Does every (Jordan) curve in the plane contain all four vertices of some square? Update: Here is a variation due to Helge Tverberg: Does every (polygonal) curve in the plane outside of the unit circle, contain all four vertices of some square with side length >0.1? This version implies the original problem and lacks disadvantages pointed out by Tim Chow and Henry Cohn. - This is a nice problem but it's only open in the case where the curve is pathologically ugly, in a way that perhaps not "anyone can understand." –  Timothy Chow Jun 22 '12 at 2:05 I do think that anyone can understand what's an injective, continuous map from the circle to the plane.
–  Fernando Muro Jun 22 '12 at 6:44 Actually, I disagree that anyone can (quickly, easily) understand what such a map is for the purposes of this problem, since the maps for which it's not known are of a sort even mathematicians didn't realize existed until well into the 19th century. One can still state the problem, but it's likely to lead to conversations of the following sort. "Wow, so you mean nobody knows in advance if this curve [draws a curve] has a square in it?" "Well, actually we know that case, or really any curve you can draw, but mathematicians have discovered exotic curves for which we don't know the answer." –  Henry Cohn Jun 22 '12 at 13:14 The issue here is that intuitive "definitions" of continuous tend to be wrong. "You can draw it without lifting your pencil" really means at least piecewise smooth. –  Noah Snyder Jun 24 '12 at 3:40 Well, not if you shake your hand fast enough (or with enough brownian motion) –  Feldmann Denis Aug 24 '12 at 22:48 Here are a few others: 1. Let $H_n=\sum_{j=1}^n 1/j$. Then for all $n\geq 1$, $$\sum_{d|n}d\leq H_n+(\log H_n)e^{H_n}.$$ Jeff Lagarias showed that this is equivalent to the Riemann hypothesis! 2. Let $x_0=2$, $x_{n+1}=x_n-\frac{1}{x_n}$ for $n\geq 0$. Then $x_n$ is unbounded. 3. The largest integer that cannot be written in the form $xy+xz+yz$, where $x,y,z$ are positive integers, is 462. It is known that there exists at most one such integer $n>462$, which must be greater than $2\cdot 10^{11}$. See J. Borwein and K.-K. S. Choi, On the representations of $xy+yz+xz$, Experiment. Math. 9 (2000), 153-158; http://projecteuclid.org/Dienst/UI/1.0/Summarize/euclid.em/1046889597. - I'm wondering if I "get" #2. I see an implicit map from $S^1$ to $S^1$ of index 2, so yes, it seems generally hard to understand the dynamical fate of a given starting value. A similar question might ask if the binary expansion of $\sqrt{2}$ contains strings of 0's of arbitrary length. 
But is #2 specifically conjugate to something more familiar? –  David Feldman Jun 23 '12 at 19:17 @Davidac897: I think the conjecture part of #3 is the first sentence: "The largest integer... is 462." If I'm reading the rest correctly, it's known that if the conjecture is false, it's only because of a single counterexample that must be greater than 200 billion. –  Owen Biesel Jul 27 '12 at 21:15 Question #2 was addressed in the paper math.grinnell.edu/~chamberl/papers/mario_digits.pdf The real problem concerns the initial value $x_0=2$. It can be shown that the set of initial values which produce an unbounded sequence $\{x_n\}$ has full measure, so from a probabilistic perspective, one expects the statement in question 2 to hold. –  Marc Chamberland Aug 19 '12 at 18:40 The irrationality of Catalan's constant $G=1-1/3^2+1/5^2-1/7^2+\cdots$. Remarks: Although Catalan's constant is certainly well-known, the irrationality is the tip of the iceberg of a related conjecture of Milnor about the linear independence over the rationals of volumes of certain hyperbolic 3-manifolds (which is a special case of a conjecture of Ramakrishnan). The irrationality of Catalan's constant would imply that the volume of the unique hyperbolic structure on the Whitehead link complement is irrational. To this date, it is not known that any hyperbolic 3-manifold has irrational volume. - At the risk of stretching my own rule, please allow that I could define "ring" for a high school senior. Then I'd proffer this question I heard years ago from Melvin Henriksen: Must a non-commutative ring (with identity) contain a non-zero-divisor aside from the identity? - Here is a link to Henriksen's paper related to this question. google.com/… –  David Feldman Jun 23 '12 at 18:52 I've known ring theory for a while, and it never even occurred to me that that was difficult (let alone possibly true). 
–  David Corwin Jul 22 '12 at 20:35 Mel, to whom I will be eternally grateful for my low Erdos number and much else, was a master of the uniquely mathematical game of one-downsmanship: "You don't know if ..., well I don't even know if ...!!!" –  David Feldman Jul 23 '12 at 0:04 The Kneser–Poulsen conjecture in dimension 3: An arrangement of (possibly overlapping) unit balls in space is tighter than a second arrangement of the same balls if, for all $i$ and $j$, the distance between the centers of ball $i$ and ball $j$ in the first arrangement is less than or equal to the distance between the centers of ball $i$ and ball $j$ in the second arrangement. The conjecture is that a tighter arrangement always has equal or smaller total volume. True in the plane, open in higher dimensions. - Does the series $\sum_{n=1}^{\infty} \frac{1}{n^3 \sin^2 n}$ converge? (Taken from http://math.stackexchange.com/questions/20555/are-there-any-series-whose-convergence-is-unknown where there are more such examples) - and, in my answer at math.SE which you link here, I refer to the mathoverflow question mathoverflow.net/questions/24579. –  George Lowther Jun 24 '12 at 0:35 Let ${^n a}$ denote tetration: ${^0 a}=1, {^{n+1} a}=a^{({^n a})}$. • It is unknown if ${^5 e}$ is an integer. • It is unknown if there is a non-integer rational $q$ and a positive integer $n$ such that ${^n q}$ is an integer. • It is unknown if the positive root of the equation ${^4 x}=2$ is rational (ditto for all equations of the form ${^n x}=2$ with integer $n>3$) • It is unknown if the positive root of the equation ${^3 x}=2$ is algebraic. - Here is another easy to state problem which is 140 years old but not very famous. Consider the potential of finitely many positive charges: $$u(x)=\sum_{j=1}^n\frac{a_j}{|x-x_j|},\quad x,x_j\in R^3,\quad a_j>0$$ How many equilibrium points can this potential have? Equilibrium points are solutions of $\nabla u(x)=0$. First conjecture: it is always finite.
Second conjecture: when finite, it is at most $(n-1)^2$. This estimate is stated by Maxwell in his Treatise on Electricity and Magnetism, vol. I, section 113, as something known. The editor (J. J. Thomson) wrote a footnote that he "could not find any place where this result is proved". Nobody has been able to find this place to this day. This is even unknown in the simplest case when all $a_j=1$ and $n=3$. - Schinzel-Sierpinski Conjecture Melvyn Nathanson, in his book Elementary Methods in Number Theory (Chapter 8: Prime Numbers) states the following: • A conjecture of Schinzel and Sierpinski asserts that every positive rational number $x$ can be represented as a quotient of shifted primes, that $x=\frac{p+1}{q+1}$ for primes $p$ and $q$. It is known that the set of shifted primes generates a subgroup of the multiplicative group of rational numbers of index at most $3$. - Proving the Inequality of the Means by fitting boxes into a cube. From Berlekamp, Conway and Guy's Winning Ways for Your Mathematical Plays, Academic Press, New York 1983. See the discussion of this problem on Dror Bar-Natan's webpage for details, pictures, etc. Question: Is it possible to pack $n^n$ rectangular n-dimensional boxes whose sides are $a_1, a_2,\ldots, a_n$ inside one big n-dimensional cube whose side is $a_1+a_2+\cdots+a_n$? - Erdos's problem on the length of lemniscates (it is somewhat famous in certain narrow circles). Let $P$ be a polynomial, and consider the set $E=\{ z:|P(z)|=1\}$ in the complex plane. What is the maximum length of $E$ over all monic polynomials of degree $d$? Erdos conjectured that an extremal $P$ is $P_0(z)=z^d+1$. It is known that the asymptotic of maximal length is $2d+o(d).$ It is known that $P_0$ gives a local maximum. It is also known that for every extremal polynomial, all critical points lie on $E$, so $E$ must be connected. However the conjecture is not established even for $d=3$. After Erdos's death, I offered a $200 prize for the first solution.
(Erdos had offered the same, but I do not know whether one can collect his prize.) - According to en.wikipedia.org/wiki/Paul_Erdős#Erd.C5.91s.27_problems, offers Erdos made will be honored. – Gerry Myerson Jan 29 at 2:18 Thanks, this is nice to know. So you can collect $400 total for this problem. – Alexandre Eremenko Jan 29 at 18:27 I think nobody has posted this problem; if it is a repeat, please tell me and I will delete it. This problem killed me for three weeks, when I was a young student in high school. So, I want to recall it again. Problem: Find all right triangles with rational sides, where the area of the triangle is an integer. I think it is still an open problem, and if somebody can solve it, I will give $100 as a small award. After I searched, I found these two interesting sources. I hope they will be helpful. 1) N. Koblitz, Introduction to elliptic curves and modular forms, volume 97 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1993. 2) Washington, Lawrence C., Elliptic Curves: Number Theory and Cryptography, CRC Press Series On Discrete Mathematics and Its Applications - This is the congruent number problem and leads to the Birch-Swinnerton-Dyer conjecture... math.jussieu.fr/~colmez/congruents.pdf – François Brunault Jun 22 '12 at 23:13 That's presumably the intention, though the problem as stated looks simpler... (The "congruent number problem" amounts to asking which integers are the areas of right triangles all of whose sides are rational.) – Noam D. Elkies Jun 23 '12 at 3:09 The Cerny conjecture says that if X is a collection of mappings on an n element set such that some iterated composition (repetitions allowed) of elements of X is a constant map then there is a composition of at most $(n-1)^2$ mappings from X which is a constant mapping. This comes from automata theory. See http://en.m.wikipedia.org/wiki/Synchronizing_word. -
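A small aside (my own sketch, not from the thread): the Giuga–Agoh criterion quoted earlier is easy to test for small numbers with fast modular exponentiation. Fermat's little theorem forces the sum to be $\equiv -1 \pmod p$ for every prime $p$; the open part of the conjecture is that no composite number satisfies it.

```python
def giuga_sum(n):
    """Compute sum_{i=1}^{n-1} i^(n-1) mod n."""
    return sum(pow(i, n - 1, n) for i in range(1, n)) % n

# Primes give -1 (i.e. n-1) mod n, by Fermat's little theorem ...
for p in (5, 7, 11, 13):
    assert giuga_sum(p) == p - 1
# ... and no composite is known (or, conjecturally, exists) that does.
for n in (6, 9, 15, 21):
    assert giuga_sum(n) != n - 1
print("checked")
```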
https://learn.cemetech.net/index.php?title=TI-BASIC:Poissonpdf
TI-BASIC:Poissonpdf Command Summary Calculates the Poisson probability for a single value Command Syntax poissonpdf(mean, value) Press: 1. 2ND DISTR to access the distribution menu 2. ALPHA B to select poissonpdf(, or use arrows. Press ALPHA C instead of ALPHA B on a TI-84+/SE with OS 2.30 or higher. TI-83/84/+/SE 2 bytes This command is used to calculate Poisson distribution probability. In plainer language, it solves a specific type of often-encountered probability problem, which occurs under the following conditions: 1. A specific event happens at a known average rate (X occurrences per time interval) 2. Each occurrence is independent of the time since the last occurrence 3. We're interested in the probability that the event occurs a specific number of times in a given time. The poissonpdf( command takes two arguments: The mean is the average number of times the event will happen during the time interval we're interested in. The value is the number of times we're interested in the event happening (so the output is the probability that the event happens value times in the interval). For example, consider a point on a city street where an average of 5 cars pass by each minute. What is the probability that in a given minute, 8 cars will drive by? 1. The event is a car passing by, which happens at an average rate of 5 occurrences per time interval (a minute) 2. Each occurrence is independent of the time since the last occurrence (we'll assume this is true, though traffic might imply a correlation here) 3. We're interested in the probability that the event occurs 8 times in the time interval The syntax in this case is: :poissonpdf(5,8 This will give about .065 when you run it, so there's a .065 probability that in a given minute, 8 cars will drive by. Formulas The value of poissonpdf( is given by the formula $\operatorname{poissonpdf}(\lambda,k) = \frac{e^{-\lambda}\lambda^k}{k!}$
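The formula translates directly into a few lines of code. Here is a small Python equivalent of the command (an illustration of the math, not the calculator's actual implementation), reproducing the worked example above:

```python
import math

def poissonpdf(mean, value):
    """Poisson probability of exactly `value` occurrences given `mean`,
    mirroring the calculator command: e^(-lambda) * lambda^k / k!."""
    return math.exp(-mean) * mean**value / math.factorial(value)

# The example from the text: 5 cars per minute on average,
# probability that exactly 8 pass in a given minute -> about .065.
print(round(poissonpdf(5, 8), 3))
```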
https://www.physicsforums.com/threads/capacitance-of-two-tangential-spheres.612236/
# Capacitance of two tangential spheres 1. Jun 7, 2012 ### Ans426 Hi, Question: Consider two conducting spheres with radius R, which are tangent to each other (i.e. they touch right at one point) If C = Q/V, where V is the potential at the surface, find the capacitance of this configuration. -------------------------------------------------------------------------------------------- I came across this question in some sort of old physics exam... I've been giving it some thought for a while but haven't really come up with anything.. Anyone can shed some light on how to do this? (For instance what charge distribution should it have? +ve on one, and -ve on the other? And then integrate for all the charges on the surface?) Thanks. 2. Jun 7, 2012 ### the_emi_guy If they are touching each other then they do not form a capacitor. 3. Jun 7, 2012 ### Ans426 True...I guess that part was confusing me.. The question says that they are tangent to each other, so I guess we'll have to assume that they are infinitesimally close to each other but not touching? But in that case, wouldn't the potential blow up at the point where they are tangent to each other? 4. Jun 8, 2012 5. Jun 8, 2012 ### Dickfore No, it is meant that the combined surface of the two spheres forms one plate of a capacitor, the other being at infinity. Think of it as just one isolated metallic sphere having a capacitance $C = 4 \, \pi \, \varepsilon_0 \, R$. Your problem is really hard. I don't know if it has an answer in a closed form expression. 6. Jun 9, 2012 ### Ans426 Thanks a lot, I wasn't aware that a single sphere could have capacitance too. And yes, the question comes from an advanced level exam.. For a single sphere, it seems that C = Q/V makes sense only because V is uniform over the surface... How about for a non-uniform potential in this case? Would you need to integrate over the surface? Any input is appreciated! Last edited: Jun 9, 2012
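For reference, the single-sphere formula quoted in the thread is easy to evaluate numerically; a small sketch in SI units (my own numbers, just to build intuition for the scale involved):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def sphere_capacitance(radius_m):
    """Capacitance of an isolated conducting sphere: C = 4*pi*eps0*R."""
    return 4 * math.pi * EPS0 * radius_m

# A sphere of radius 1 m holds only about 111 pF against infinity,
# which is why isolated-conductor capacitances are rarely noticed.
print(sphere_capacitance(1.0) * 1e12, "pF")
```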
https://prepinsta.com/tcs-nqt/placement-papers/aptitude-questions/time-speed-distance/quiz-1/
# TCS NQT Speed Time Distance Question Quiz 1

Question 1
Jake and Paul each walk 10 km. Jake's speed is 1.5 kmph faster than Paul's, and Jake reaches the destination 1.5 hrs before Paul. What is Jake's speed?
Options: 4 kmph, 6 kmph, 8 kmph, 2 kmph
Once you attempt the question then PrepInsta explanation will be displayed.

Question 2
A bus leaves Mumbai at 3 pm. It travels for 1.5 hours at 60 km/hr and then halts for 30 minutes. It then travels at an average speed of 50 km/hr for the remaining duration to reach Pune at 6 pm. What is the distance between Mumbai and Pune?
Options: 100 km, 110 km, 120 km, 140 km, 150 km
Hint: it "then" travels at an average speed — this is the tricky statement.

Question 3
In a circular racetrack of length 100 m, three persons A, B and C start together. A and B start in the same direction at speeds of 10 m/s and 8 m/s respectively, while C runs in the opposite direction at 15 m/s. When will all three meet for the first time on the track after the start?
Options: After 100 s, After 50 s, After 150 s, After 200 s, None of these

Question 4
Two cars start from A and B and travel towards each other at speeds of 50 kmph and 60 kmph respectively. At the time of their meeting, the second car has travelled 120 km more than the first. What is the distance between A and B?
Options: 600 km, 1320 km, 720 km, 3120 km

Question 5
Rani and Shakil run a race of 2000 m. First, Rani gives Shakil a start of 200 m and beats him by 1 minute. Next, Rani gives Shakil a start of 6 min and is beaten by 1000 metres.
Find the time in minutes in which Rani and Shakil can run the race separately.
Options: 8 and 10; 10 and 12; 12 and 18; 10 and 18

Question 6
A train traveling at 180 kmph crosses a girl in 10 seconds. What is the length of the train?
Options: 450 m, 520 m, 500 m, 640 m

Question 7
Raj drives slowly along the perimeter of a rectangular park at 24 kmph and completes one full round in 4 minutes. If the ratio of the length to the breadth of the park is 3:2, what are its dimensions?
Options: 480m x 320m, 150m x 100m, 100m x 100m, 450m x 300m
Explanation: total perimeter = 2(3x + 2x) = 10x; speed = 24 × 5/18 = 20/3 m/sec; distance = (20/3) × (4 × 60) = 1600 m, so 10x = 1600 and x = 160. Length = 3x = 480 m, breadth = 2x = 320 m.

Question 8
Two workers, one young and one old, live together and work at the same office. The young man takes 20 minutes to walk to the office; the old man takes 30 minutes for the same distance. When will the young man catch up with the old man, if the old man starts at 10 am and the young man starts at 10:05 am?
Options: 10:10, 10:15, 10:25, 10:20
Explanation: Let the distance be x. Speed of the young man = x/20, speed of the old man = x/30, so the relative speed is x/20 − x/30 = x/60. Distance travelled by the old man in 5 min = (x/30) × 5 = x/6. Time for the young man to close that gap = (x/6) ÷ (x/60) = 10 min, so he catches up with the old man at 10:15 am.
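Question 8 can also be double-checked by brute force; a short sketch (mine, normalizing the unknown office distance to 1 and using exact rational arithmetic):

```python
from fractions import Fraction

# Normalize the office distance to 1. The old man covers 1/30 per minute
# starting at 10:00; the young man covers 1/20 per minute starting at 10:05.
def old_pos(t):                          # t = minutes after 10:00
    return Fraction(t, 30)

def young_pos(t):
    return Fraction(max(t - 5, 0), 20)

t = next(t for t in range(6, 60) if young_pos(t) >= old_pos(t))
print(f"caught up at 10:{t:02d} am")     # 10:15
```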
Question 9
AJ travels part of his journey by taxi, paying Rs 15 per km, and the rest by train, paying Rs 21 per km. If he travels 450 km in total and pays Rs 8130, what distance did he travel by rail?
Options: 180 KMS, 260 KMS, 190 KMS, 230 KMS

Question 10
A boy takes 3 hours to reach home from his college, traveling on a bike at a speed of 30 mph. What should his speed be to cover the same distance in 2 hours?
Options: 35, 45 mph, 43, 32, None of these
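Several of these answers can be verified the same way. For example, Question 1 reduces to a quadratic (treating "Jack" and "Jake" as the same person, as the answer options suggest):

```python
from math import isclose, sqrt

# Each walks 10 km; Jake is 1.5 km/h faster than Paul and arrives 1.5 h earlier:
# 10/p - 10/(p + 1.5) = 1.5  =>  p^2 + 1.5p - 10 = 0.
p = (-1.5 + sqrt(1.5**2 + 4 * 10)) / 2   # positive root: Paul's speed
jake = p + 1.5
assert isclose(10 / p - 10 / jake, 1.5)  # the 1.5 h gap checks out
print(jake, "km/h")                      # 4.0, the listed answer
```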
http://www.contrib.andrew.cmu.edu/~ryanod/?tag=product-probability-space
## §8.3: Orthogonal decomposition In this section we describe a basis-free kind of “Fourier expansion” for functions on general product domains. We will refer to it as the orthogonal decomposition of $f \in L^2(\Omega^n, \pi^{\otimes n})$ though it goes by several other names in the literature: e.g., Hoeffding, Efron–Stein, or ANOVA decomposition. [...] ## §8.1: Fourier bases for product spaces We will now begin to discuss functions on (finite) product probability spaces. [...]
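As a toy illustration of the decomposition sketched here (my own example, not from the book): for a function on $\{0,1\}^2$ under the uniform product measure, the pieces are built from conditional expectations, they sum back to $f$, and they are pairwise orthogonal in $L^2$.

```python
# f on {0,1}^2 with the uniform product measure
f = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 2.0, (1, 1): 8.0}
pts = list(f)

mean = sum(f.values()) / 4                      # f_emptyset = E[f]

def cond_mean(i, v):
    """E[f | x_i = v] under the uniform measure."""
    vals = [f[x] for x in pts if x[i] == v]
    return sum(vals) / len(vals)

f1 = {x: cond_mean(0, x[0]) - mean for x in pts}     # depends only on x_1
f2 = {x: cond_mean(1, x[1]) - mean for x in pts}     # depends only on x_2
f12 = {x: f[x] - mean - f1[x] - f2[x] for x in pts}  # interaction part

# The pieces reconstruct f and are pairwise orthogonal in L^2.
dot = lambda g, h: sum(g[x] * h[x] for x in pts) / 4
assert all(abs(mean + f1[x] + f2[x] + f12[x] - f[x]) < 1e-12 for x in pts)
assert abs(dot(f1, f2)) < 1e-12
assert abs(dot(f1, f12)) < 1e-12 and abs(dot(f2, f12)) < 1e-12
```

Orthogonality of, say, the $x_1$- and $x_2$-parts follows from independence of the coordinates, which is what makes the decomposition work on any product space.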
https://www.physicsforums.com/threads/please-help-solve-this-question.134/
1. Mar 18, 2003

### FOO$

[SOLVED] Please Help Solve This Question [Removed Broken Link]

Last edited by a moderator: Apr 17, 2017

2. Mar 19, 2003

### Mr. Robin Parsons

Be more specific. Do you mean off of the building? The pavement? The janitor's head? A car? A bus? A passing gorilla? What?

3. Mar 19, 2003

### FOO$

Well, I guess there are several questions.

#1 Will he even hit the ground? Or will he have the same fate as the pennies?

#2 If he were to hit the ground, would he bounce off the pavement the way a penny would?

4. Mar 19, 2003

### Staff: Mentor

All of him? Some parts might bounce while others don't....

5. Mar 19, 2003

### FOO$

Hmmm ... and what parts might bounce? Are you implying that he might actually fall apart?

6. Mar 20, 2003

### Mr. Robin Parsons

Likely he will NOT have the same fate as the pennies, but that is dependent upon the wind on the day that he jumps (BTW I am against suicide, so I would counsel him NOT to do it, as any sane person would!): if it is windy enough, he might just hit another building and never make it to the ground. As for bouncing off of the pavement, he probably does bounce, but so little that it would be very difficult to tell unless you were there filming it with a high-speed camera, to catch all of the action for a slow-mo replay that would show the bounce.

7. Mar 20, 2003

### bogdan

Probability to bounce: 0.205391; probability not to bounce: 1 - 0.205391.

8. Mar 22, 2003

### Mr. Robin Parsons

Actually, I have thought about this entire precept. The 'Jebus' thing sounds all too much like a Quebecer saying "Gee Boss" in English with a French accent. It is quite possible that that entire site is an attempt at mockery of that phenomenon of speech. Comically, it is an expression used by some French Canadians as sarcasm, a little juxtapositioning of the interpersonal interchange, trying to make you feel good about yourself, important, in their eyes, but it is a little sarcastic fooly.
How Ironic!
https://cs.stackexchange.com/questions/67036/building-a-hyper-computer/68006
# Building a Hyper Computer

I had an idea for a theoretical super computer. Suppose one were able to optimise (or significantly increase the efficiency of) all algorithms used in most computing tasks (an open-source project on algorithm optimisation, maybe?). The system design is optimised from the barest tools, the "axioms" of the RAM model (algorithms for file access and basic arithmetic operations), all the way up to the most complicated algorithms. Suppose one were then to develop a language that implemented these algorithms most effectively and had the best performance with them (a language designed for the express purpose of getting the best performance possible out of the algorithms, with no constraints based on the target machine). Suppose this was also a high-level language like C++. I hypothesised that one would be able to vastly increase the speed of the system by designing entirely new architecture to effectively run programs built in this language: a sort of hardware-level compiler. (By hardware-level compiler, I mean architecture design that can execute the program's source code through native hardware configurations. The hardware is designed for the language and NOT the other way round, which is what I think happens with modern architectures.) An OS built on this language will be much faster on that hardware, but will lose portability. But gain a lot of speed. (I feel the tradeoff is worth it.) I have 3 questions.

1. Is my proposal feasible?
2. Is my hypothesis correct?
3. What magnitude of speed increase is feasibly possible?

• What makes you think existing architectures are not built to effectively run C++ programs? :) – rici Dec 6 '16 at 23:11
• I don't think they come with hardware-level compiling for C++. It's why we have GCC. :P – Tobi Alafin Dec 6 '16 at 23:27
• But hardware is designed to run well the kind of programs that people write.
And the kind of programs that people write are C++ programs, among others. If AMD's CPUs ran C++ programs significantly better than Intel's CPUs do, a much larger segment of the market would move to AMD. – David Richerby Dec 7 '16 at 0:57
• Compiling is irrelevant; a program is compiled once, often not even on the machine it eventually runs on, and run many, many times. So the cost of compiling is not significant. – rici Dec 7 '16 at 1:05
• Question edited for clarity. – Tobi Alafin Dec 7 '16 at 7:02

You fundamentally misunderstand what a compiler is. A compiler is just a translator: it transforms programs in one language (usually high-level source code) into another language (machine code, assembly, LLVM IR, JavaScript, etc.). A compiler can optimize code and output code specialized for specific hardware, but what's important is that this is completely independent of where the compiler is run. This means there is literally no difference in the quality of the outputted code between a compiler that runs in software and the same compiler running in hardware. A compiler is just a transformation, and whatever implements that transformation doesn't affect the quality of its output.

1. No, it's not feasible, and it's not even well specified. "Supposing one was able to optimise (or significantly increase the efficiency of) all algorithms": what does that even mean? Why do you think such a thing exists? Likewise, a language might be fast at implementing some algorithms and slow at others. And a high-level language like C++ will always have some performance tradeoffs compared to expert-written assembly (although optimizers are crazy good at this). Performance is not an absolute. There isn't necessarily a "most efficient" algorithm for all inputs, and there certainly isn't a "fastest language".
2. Your hypothesis is not correct, for the reasons I said above. Running a compiler on hardware doesn't improve the outputted code.
As for optimizing for specific hardware, this is already possible, and compilers already do this. 3. There's no way to determine this logically. You just have to experiment, especially since speed of code depends heavily on things like cache-misses, pipelining, branch-prediction, etc. These are all hard to reason about on paper. • Question edited for clarity – Tobi Alafin Dec 7 '16 at 7:03 TL;DR: Your approach is valid (looking at the question benevolently: What if parts that can be identified to contribute to experienced data processing performance could be significantly improved) - that is what CS is about since at least Bletchley Park and Colossus, if not Babbage. Your hypotheses have been put forward, and some have been dismissed (along with conclusions like vastly increase the speed). E.g., part of complexity analysis is establishing lower bounds for any algorithm for a given problem (using a specified machine model - RAM for starters, with useful quantum computing hardware feasible RSN). If an algorithm is known to have worst case performance coincide with the lower bound, significance of further improvement is debatable - but see the history of sorting - mechanised radix sorting was popular decades before general purpose computers were created, the topic is hot a century after. Another part is comparing models of computation (sometimes by types of (abstract) machines, e.g. Turing and RAM), leading to insights like the equivalence of many variants (accumulator, register, stack machines; "von Neumann" and Harvard,…) There has been concern about a semantic gap between "high level" programming languages and computing hardware - attempts to close it led to the notion of semantic clash. For ideas, look at FORTH machines (e.g. Novix N4000) and the history of the Intel iAPX 432 (same time frame - coincidence?). 
(Another idea is: it takes intelligence and expert knowledge to improve data processing, so let's create AI and expert systems, set them at improving data processing, and lie back to dreams of machines taking over.) To some extent such machines already exist – FPGA-based supercomputers. FPGAs are programmable hardware which can be tuned to the algorithm being executed. In a similar vein, GPUs are specialized hardware originally designed for graphics, but now also used in other domains, being much more cost-effective than CPUs for some specific purposes. In the past, Burroughs produced several CPUs which were optimized for specific languages such as ALGOL 60.
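The "a compiler is just a translator" point above can be made concrete with a toy compiler from arithmetic expression trees to a stack machine. All names here are illustrative, not from the thread: the output of the translation depends only on its input, never on whether the translator itself runs in software or in hardware.

```python
def compile_expr(node):
    """Translate a nested ('OP', left, right) tree into stack-machine ops."""
    if isinstance(node, (int, float)):
        return [("PUSH", node)]
    op, left, right = node
    return compile_expr(left) + compile_expr(right) + [(op, None)]

def run(ops):
    """A tiny stack machine: the 'hardware' the compiler targets."""
    stack = []
    for op, arg in ops:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[0]

# 2 + 3 * 4 compiles to the same five ops no matter where compile_expr runs.
program = compile_expr(("ADD", 2, ("MUL", 3, 4)))
```

Whether `compile_expr` is executed by an interpreter, a JIT, or a circuit etched in silicon, `program` is the identical list of operations, which is the accepted answer's point: hardware compilation changes where the translation happens, not the quality of its result.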
https://www.alloprof.qc.ca/helpzone/discussion/11935/question
# Help Zone

### Student Question

Secondary IV

I have a hard time understanding why °C is used in the thermal energy equation Q = m · c · ΔT. In many chemistry exercises, we have to use Kelvin. Thank you

Chemistry

## Explanations (1)

• Explanation from Alloprof

Hello,

A temperature variation can be calculated in kelvins or in degrees Celsius. To convince yourself of this, let's take for example two temperatures $$T_1$$ and $$T_2$$ given in °C. To convert them to kelvins, we must add 273.15. The temperature variation $$\Delta T$$ therefore gives:

$$\Delta T = T_1-T_2$$

$$\Delta T = (T_1+273.15)-(T_2+273.15)$$

By distributing the negative sign in the second part of the equation, we obtain:

$$\Delta T = T_1\color{red}{+273.15}\color{black}{-T_2}\color{red}{-273.15}$$

We notice that the 273.15 terms cancel each other out, which brings us back to our starting equation. When the formula includes $$\Delta T$$, you can use either temperatures in kelvins or in degrees Celsius, since we have just shown that:

$$\Delta T = T_1-T_2 = (T_1+273.15)-(T_2+273.15)$$

If you have other questions, do not hesitate to ask them on our forums!
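The cancellation of the 273.15 offsets is easy to check numerically. In this sketch the two sample temperatures are arbitrary choices of mine:

```python
T1_c, T2_c = 80.0, 25.0                    # two temperatures in °C
T1_k, T2_k = T1_c + 273.15, T2_c + 273.15  # the same temperatures in K

delta_c = T1_c - T2_c   # ΔT computed in degrees Celsius
delta_k = T1_k - T2_k   # ΔT computed in kelvins

# The +273.15 offsets cancel, so both differences agree.
assert abs(delta_c - delta_k) < 1e-9
```

So Q = m · c · ΔT gives the same result with either scale, as long as ΔT is a *difference*; only formulas that use an absolute temperature (like the ideal gas law) force you into kelvins.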
https://codereview.stackexchange.com/questions/166654/node-js-error-handling-using-merely-if-else-without-try-catch
# Node.js error handling using merely if else without try catch

I'm confused why you need to use try/catch. I made a simple utils module like this, and use it in my models' callbacks.

In my utils/error.js:

    module.exports.handleError = (err, errMsg, res) => {
      res.json({
        success: false,
        msg: `${errMsg}. ${err}`
      })
    }

Then I have this in my router controller:

    router.get('/tasks', (req, res) => {
      // Task.getTasks here stands in for a model method taking a Node-style callback
      Task.getTasks((err, tasks) => {
        if (err) {
          return handleError(err, 'Failed to get tasks', res)
        }
        res.json({
          success: true,
        })
      })
    })

What can be improved? And what is the drawback of using this approach?

My main problems with your approach are:

1. You have to remember to do this error handling. If you forget, you'll get a silent failure.
2. You have to make every middleware aware of your error handling.

The first one can't really be solved by Express. Due to the way that Express works, you'll always have to do some variant of:

    if (error) {
      return doSomething(error)
    }

I prefer Koa as a result, because Koa uses async/await, which translates to Promises under the hood; errors that occur in a Promise will propagate correctly, regardless of whether you remember to handle them or not.

As for the second point, this kinda invalidates the purpose of middleware. Middleware should only know what it does; you can easily leave error handling till later in the chain by modifying your middleware to pass the error argument to the next middleware in the chain:

    router.get('/tasks', (request, response, next) => {
      if (error) {
        return next(error)
      }
      ....
    })

And then adding the following as the final middleware in Express:

    app.use(function errorHandler(error, request, response, next) {
      return handleError(error, request, response)
    })

As long as all other middlewares pass the error to the next in the chain, this will work 'automagically', and you can swap out your implementation of error handling without any middleware having to change.
As a side note, I'd highly recommend making your application use Promises and async/await (Node 8 is now LTS, which supports this, so there's no reason not to). You can still use Express if you want to, and it would look something like this instead:

    router.get('/tasks', async (req, res, next) => {
      try {
        res.json({
          success: true,
        })
      } catch (error) {
        next(error)
      }
    })

If you used Koa you wouldn't need the try/catch. I'd also recommend not having the success flag: use HTTP status codes to indicate success or failure.

• +1 for the explanation on error handling middleware. Excellent answer. Jul 10 '17 at 20:33

When you use the traditional Node-style error-first callbacks, you often don't need try/catch, except for functions like JSON.parse() or JSON.stringify() that always have to be in try/catch blocks; see those answers for more info why:

But if you prefer using try/catch for all error handling (both for synchronous and asynchronous errors), then you can use async/await. To do that you'll need to use functions that return Promises (including functions declared with the async keyword) instead of functions that take callbacks. Many modules like Bluebird offer a way to promisify your existing functions to achieve that. See those answers for some examples:
http://math.stackexchange.com/questions/331168/memory-and-bits-need-some-help/362100
# Memory and bits. Need some help

Could someone check over my answers to verify I am correct. Say we have a memory consisting of 2048 locations, and each location contains 16 bits.

◦ A) How many bits are required for the address?

Answer: 11 bits.

◦ B) If we use the PC-relative addressing mode, and want to allow control transfer between instructions 20 locations away, how many bits of a branch instruction are needed to specify the PC-relative offset?

Answer: ±20 gives a range of 41 values (-20 through +20, including 0), therefore we need 6 bits.

◦ C) If a control instruction is in location 3, what is the PC-relative offset of address 10? Assume that the control transfer instructions work the same way as in the LC-3.

Answer: The PC is incremented to 4, and 10 - 4 = 6.

## 1 Answer

1. Yes, because $2^{11}=2048$.
2. Yes, because $2^5 < 41 \le 2^6$.
3. Yes (although I don't know what you mean by LC-3). After each instruction, the program counter is automatically incremented. So to go 7 locations from the current location, you'll need to add 6 aside from the 1 automatically added.

That said, you might want to ask questions like these at http://stackoverflow.com/, which I think is more appropriate.
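Parts (A) and (B) both come down to the same ceil(log2(number of values)) rule; a quick sanity check (variable names are mine):

```python
import math

locations = 2048
address_bits = math.ceil(math.log2(locations))     # 2^11 = 2048 exactly

offset_values = 2 * 20 + 1                         # offsets -20 .. +20, zero included
offset_bits = math.ceil(math.log2(offset_values))  # 2^5 = 32 < 41 <= 64 = 2^6
```

Note the +1 for the zero offset: a signed range of ±20 is 41 distinct values, not 40, though 6 bits suffice either way.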
https://www.askiitians.com/forums/Electromagnetic-Induction/a-q-charge-is-distributed-over-two-concentric-sphe_240119.htm
# A charge q is distributed over two concentric spherical shells of radii r and R (R > r) having the same surface charge density. Find the potential at the common centre of the shells.

Arun
25763 Points
2 years ago

By the superposition principle, the potential at the common centre is equal to the algebraic sum of the potentials at the centre due to each sphere. To get the potential of a sphere, we need its radius (given) and the charge on it (which is what we should find now). If the total charge is $Q$, then let's assume the charge on the small sphere is $q_1$ and on the large sphere $q_2$. Thus

$Q = q_1 + q_2$

It is given that the surface charge density is the same, thus:

$\frac{q_1}{4\pi r^2} = \frac{q_2}{4\pi R^2}$

Therefore, $q_1 = \frac{r^2}{R^2}\,q_2$. But $q_1 + q_2 = Q$, therefore $q_2 = \frac{QR^2}{r^2 + R^2}$, and similarly (from the same equation) $q_1 = \frac{Qr^2}{r^2 + R^2}$.

The potential at the common centre is now given as:

$V = \frac{kq_1}{r} + \frac{kq_2}{R}$

Substituting the previously found values, this becomes:

$V = \frac{kQ(r+R)}{r^2 + R^2}$

Regards
Arun

Khimraj
3007 Points
2 years ago

(Here $q$ and $Q$ denote the charges on the inner and outer shells.) The common charge density is $\sigma = \frac{q}{4\pi r^2} = \frac{Q}{4\pi R^2}$, i.e. $q = \frac{Qr^2}{R^2}$. The potential at the centre of the shells is

$V = \frac{kq}{r} + \frac{kQ}{R} = \frac{kQ}{R}\left(\frac{r}{R} + 1\right)$

With $Q = \sigma \cdot 4\pi R^2$, on simplifying,

$V = \frac{\sigma(r+R)}{\epsilon_0}$
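The closed form $kQ(r+R)/(r^2+R^2)$ agrees with direct superposition, as a quick numerical check confirms. The value of the Coulomb constant is rounded, and the sample values of Q, r, R are arbitrary choices of mine:

```python
k = 8.99e9                 # Coulomb constant in N·m²/C² (rounded)
Q, r, R = 2e-6, 0.1, 0.3   # total charge and the two radii, SI units

# Split Q so both shells carry the same surface charge density.
q1 = Q * r**2 / (r**2 + R**2)   # charge on the inner shell
q2 = Q * R**2 / (r**2 + R**2)   # charge on the outer shell

V_super = k * q1 / r + k * q2 / R            # superposition at the centre
V_closed = k * Q * (r + R) / (r**2 + R**2)   # the derived closed form

assert abs(V_super - V_closed) < 1e-6 * V_closed
```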
https://seanarussell.com/site/doj8n5/9c8475-oxidation-state-of-nh3
# Oxidation state of NH3

In NH3 the oxidation state of nitrogen is -3. Except in metal hydrides, which this is not, hydrogen always has an oxidation state of +1; to balance that of the hydrogen, this leaves the nitrogen atom with an oxidation number of -3. Equivalently, nitrogen has 3 extra electrons in three polar covalent bonds, 'donated' from the three bonded hydrogen atoms. Ammonia is a neutral compound: the individual oxidation numbers of the elements that make it up, nitrogen and hydrogen, sum to zero. As a ligand, NH3 likewise has zero charge, so the sum of oxidation states for each NH3 ligand is 0.

The oxidation number is synonymous with the oxidation state: the total number of electrons that an atom either gains or loses in order to form a chemical bond with another atom. Each atom that participates in an oxidation-reduction reaction is assigned an oxidation number that reflects its ability to acquire, donate, or share electrons. Useful rules: the oxidation number of a Group 1 element in a compound is +1; O is -2 in most compounds, but when oxygen is part of a peroxide its oxidation number is -1. Sometimes the oxidation state is also written as a superscripted number to the right of the element symbol (Fe3+).

Worked examples:

- AIEEE 2005: The oxidation state of Cr in [Cr(NH3)4Cl2]+ is (A) 0 (B) +1 (C) +2 (D) +3. Let the oxidation number of Cr be x. NH3 has zero charge and each Cl has one negative charge, so x + 4(0) + 2(-1) = +1, giving x = +3. The oxidation number of Cr in [Cr(NH3)4Cl2]+ is therefore +3.
- In [Co(NH3)6]3+ the six NH3 ligands are neutral, so the charge (and oxidation state) of the cobalt ion must be +3.
- In [Ag(NH3)2]Cl the counter ion is Cl-, so the complex is [Ag(NH3)2]+ and the central Ag is in the +1 state.
- When it comes to oxidation states, it's good to know the normal oxidation states of polyatomic ions, such as sulphate (SO4 2-). Since sulphate has an oxidation state of -2, the oxidation state of [Pd(NH3)4] must equal +2. In another problem, the charge on a Pt complex is 0, and the same bookkeeping gives an oxidation state of +2 for Pt in the complex.
- What is the oxidation state of V in VO4^3-? The sum of the oxidation numbers must equal the overall charge on the particle, -3 in this case: V + 4(-2) = -3, so V = +5. Also, the -ate ending is a big giveaway that this is an anion, so you should end up with a negative overall charge.
- In the nitrate ion NO3-, the three oxygen atoms have a combined oxidation number of -6, and the lone nitrogen has an oxidation number of +5. The oxidation numbers of nitrogen in NH3, HNO3, and NO2 are, respectively, -3, +5, and +4.
- In 3 H2 + N2 → 2 NH3, nitrogen goes from 0 in N2 gas to -3 in NH3; in NH3 oxidation it returns to 0 (for N2 gas) or rises to +2 (for NO gas).
- I2, being a weaker oxidant, oxidises the sulphur of the thiosulphate ion only to the lower average oxidation state of 2.5, in the tetrathionate ion.

Exercises from the same sources: Determine the oxidation state and coordination number of the metal ion in each complex ion: a. [Cr(H2O)6]3+ b. [Co(NH3)3Cl3]- c. [Cu(CN)4]2- d. [Ag(NH3)2]+. A complex is written as NiBr2.6NH3. (a) What is the oxidation state of the Ni atom in this complex? (b) What is the likely coordination number for the complex? (c) If the complex is treated with excess AgNO3(aq), how many moles of AgBr will precipitate per mole of complex? (d) What would be the charge on the complex if all ligands were chloride ions? To answer such questions, we first need to check what type of ligand each one is; compare the hexa-coordinated complexes (i) CoCl3.6NH3, (ii) CoCl3.5NH3, (iii) CoCl3.4NH3.

On the catalysis side: ammonia is a hazardous gas and very toxic to humans. Sustained oscillations of catalyst temperature have been observed in NH3 oxidation on Pt wires and foils in a 1-atm flow reactor; simple and complex oscillations with periods from under 1 second to several minutes were obtained for gas compositions between 20 and 40% NH3 in air. The dependence of NH3 oxidation on the state and dispersion of Pt species in Pt/γ-Al2O3 catalysts has also been investigated: prereduced catalysts containing Pt0 nanoparticles exhibited significantly higher activity than preoxidized ones with the same Pt dispersion. Selective catalytic oxidation of ammonia to nitrogen and water vapor (NH3-SCO) is considered an efficient technique for eliminating hazardous, pungent gaseous NH3, mainly emitted from units for the selective catalytic reduction of NOx with NH3; to develop air purification systems for a living environment, catalysts that can work at room temperature with high selectivity to N2 are required. In one perovskite study, the higher reactivity of the Co-substituted compound was attributed to the involvement of the Co ion in a higher oxidation state, such as Co4+.
76203-5017, United States Keywords: NH 3 oxidation, high pressure, flow reactor, H 2NO + O 2 rate constant, kinetic model Ammonia oxidation experiments were conducted at high pressure (30 bar and 100 bar) under oxidizing and stoichiometric conditions, respectively, and temperatures ranging from 450 to 925 K. The oxidation of ammonia was Break the equation into two half reactions. /Type /Page 10 0 obj /Count 9 KINETICS AND MECHANISM OF NH3 OXIDATION TABLE III Proposed mechanism for NH3 oxidation at T -~ 1300 K and sensitivity results 101 k = AT" e-E/RT (Units are mole, cm3, sec) Sensitivity of Kinetic Parameters Conditions of Figure 1 -d(NH3/ Refer- Reaction log A n E/R 10-3 NH3~)/dt NO~ dNO/dt ence 1. In almost all cases, oxygen atoms have oxidation numbers of -2. d /Resources 84 0 R /Rotate 0 uuid:d17433a5-25ef-440b-8cf1-b724df26a2db To find the oxidation state using Lewis structure, I have to compare the electronegativity of the bonded atoms and assign the bonding electrons to the … Show transcribed image text. Selective catalytic oxidation (SCO) of NH3 to harmless N2 and H2O is an ideal technology for its removal. << Welcome to Sarthaks eConnect: A unique platform where students can interact with teachers/experts/students to get solutions to … NH~ + M = NH2 + H + M 16.68 0 47.3 0.0 0.0 0.0 11 2. /Rotate 0 endobj WHen ammonia is oxidized by oxygen, oxidation number of N in NH 3 increases from -3 to a higher higher oxidation number such as 0 or +2. /Font << The oxidation state, sometimes referred to as oxidation number, describes the degree of … /Parent 7 0 R /Parent 5 0 R /Kids [56 0 R 57 0 R] KINETICS AND MECHANISM OF NH3 OXIDATION TABLE III Proposed mechanism for NH3 oxidation at T -~ 1300 K and sensitivity results 101 k = AT" e-E/RT (Units are mole, cm3, sec) Sensitivity of Kinetic Parameters Conditions of Figure 1 -d(NH3/ Refer- Reaction log A n E/R 10-3 NH3~)/dt NO~ dNO/dt ence 1. Cl has one -ve charge. 
/MediaBox [0 0 595.28 841.89] /Rotate 0 >> /Contents 101 0 R Ammonia in this complex is not an ion, it is a neutral structure covalently bound to the copper atom; thus having a net oxidation number of 0. >> /Rotate 0 /MediaBox [0 0 595.28 841.89] /Pages 3 0 R The oxidation state of Cr in [Cr(NH3)4Cl2]+ is (a) +3 (b) +2 (c) +1 (d) 0. >> 36 0 obj >> How can you get pokemon to miagrate from other games to pokemon diamond? Nov 28,2020 - What is the oxidation state of Cr in [Cr(NH3)4(Cl2)]?? endobj << It means first we need to check what type of ligand it is then only we can state that what's its coordination number can be How to calculate primary valence Given the molecular formula of the hexa-coordinated complexes (i) CoCl 3.6NH 3, (ii) CoCl 3.5NH 3, (iii) CoCl 3.4NH 3. /Parent 19 0 R Does pumpkin pie need to be refrigerated? /CreationDate (D:20160309140943+01'00') V = +5. /Parent 12 0 R /Rotate 0 endobj << Now, we know that [Pd(NH 3) 4]SO 4 is a neutral compound as it has an oxidation state of 0. Answers. Let y be the oxidation state of Ni in [Ni(NH₃)₆]BF₄. << /Contents 127 0 R /MediaBox [0 0 595.28 841.89] /Resources 96 0 R << How to calculate oxidation state Using Lewis diagrams. >> /Kids [25 0 R 26 0 R 27 0 R] )���[5����x!|"��U�Ei�,���m�f��d*7��*�VTr3�ˌ�sV��ə^*mQ��u��{R����2�M}ֱ�&. << /Type /Page /CropBox [0 0 595.28 841.89] charge) on the metal cation center. Example 1: This is the reaction between magnesium and hydrochloric acid or hydrogen chloride gas: Why ammonia + oxygen reaction should be done carefully? /MediaBox [0 0 595.28 841.89] /CropBox [0 0 595.28 841.89] The second reaction is, Oxidation number of S in SO 4 2-=+6. >> endobj Oxidation state NH3 = 0 CO3 = Is it -2 since 2 oxygen is coordinated to the cobalt? 
<< Determining oxidation numbers from the Lewis structure (Figure 1a) is even easier than deducing it … 22 0 obj x��T]o�0}ϯ�Oh��c��SY��Z'�7q���9��/���-MY\�]��{��>y�� �O�F8ĘBR@�uE Rp��{�����(ĩ ^�wq��nm��ƶ� 8���o+/�&B��p��k�"R�2W,�)��*@�0(5J7��"� �>PLq/~���z�)����)�Q{��(���2#�f ����e�90����낧��[�����9s8 You also know that your cationic complex $\ce{[Fe(H2O)4\mu{-}OH\mu{-}NH2Fe(NH3)4]^4+}$ has to have a $4+$ charge due to the two sulphate counterions. /Contents 69 0 R /Resources 82 0 R We do not speak of the oxidation number of a molecule. /Rotate 0 /Kids [5 0 R 6 0 R 7 0 R 8 0 R] /Parent 22 0 R This is because oxygen always has an oxidation number of -2. /F1 62 0 R /Type /Page /Contents 103 0 R endobj >> /MediaBox [0 0 595.28 841.89] /Count 3 endobj >> The oxidation number is placed in parentheses after the name of the element (iron(III)). Question: In Which Compound Is The Oxidation State Of Hydrogen Not +1? endobj >> /Type /Pages /Kids [17 0 R 18 0 R 19 0 R 20 0 R] /Contents 89 0 R /Type /Pages /Parent 3 0 R /Contents 73 0 R /Rotate 0 << Answered by Ramandeep | 14th Mar, 2018, 12:18: PM. >> The oxidation state is the atom's charge after ionic approximation of its bonds. express your answer as an integer. /Parent 7 0 R Hence, [x + (0 X 2) + ( -1 X 2)] = 0 x + 0 -2 = 0 x = 2. The sum of the oxidation states of all the atoms or ions in a neutral compound is zero. >> Now, we know that [Pd(NH 3) 4]SO 4 is a neutral compound as it has an oxidation state of 0. Favorite Answer. These depend sensitively on gas composition, flow velocity, and geometry. /Count 2 Consider the complex ion [Mn(NH 3 ) 2 (H 2 O) 3 (OH)] 2− . You need to work out the overall charge on the complex part first. In NH3 the oxidation state of Nitrogen is -3. /Creator ( TeX output 2016.03.09:1409) >> /Rotate 0 endobj /Parent 24 0 R /Type /Page endobj The oxidation state of an atom is the charge of this atom after ionic approximation of its heteronuclear bonds. 
/MediaBox [0 0 595.28 841.89] Oxidation state of each F = -1 /CropBox [0 0 595.28 841.89] Then set this value equal to the overall net charge of the ion. /Parent 15 0 R 3 0 obj endobj /Parent 9 0 R /Type /Page Oxidation state of sulphate ion, (S O 4 ) = − 2 Let the oxidation state of cobalt, ( C o ) be x. The two nitrogen atoms are in different oxidation states. As we know that sum of oxidation states of all atoms is equal to the overall charge on the compound. How long will the footprints on the moon last? b Give the formula and name of each ligand in the ion. In the second question: [Co(NH3)4Cl2]Cl The complex is [Co(NH3)4Cl2] and the counter ion is Cl. /MediaBox [0 0 595.28 841.89] The [Co(NH3)4Cl2] complex must have a net charge of +1 (since it pairs up with one Cl- ion) and as you correctly stated, the NH3 ligand has no charge; the only charged species in the complex ion are the cobalt ion and the two chloride ions: Co(?) 23 0 obj /Parent 16 0 R /CropBox [0 0 595.28 841.89] /Kids [49 0 R 50 0 R 51 0 R] Answer. /Type /Pages b) Pt(NH3)5F. << 4 0 obj /MediaBox [0 0 595.28 841.89] 29 0 obj 2016-06-07T15:05:10+02:00 Assign an oxidation number of -2 to oxygen (with exceptions). 57 0 obj >> uuid:542e0f12-1297-416a-903f-d4dcc3b8b459 43 0 obj /Count 9 27 0 obj endobj /Parent 6 0 R << The oxidation number is synonymous with the oxidation state. /Rotate 0 endobj stream 6 0 obj 2 0 obj /Contents 81 0 R /CropBox [0 0 595.28 841.89] >> >> endobj On the other hand, in case of [Co(NH 3) 6]Cl 3 complex, the oxidation state of cobalt is +3 . /Rotate 0 /Contents 125 0 R >> /Length 3178 >> >> /Parent 18 0 R In this case, we know the oxidation number for H is +1. Oxidation involves an increase in oxidation state. >> Answer. /Type /Page Ammonia in this complex is not an ion, it is a neutral structure covalently bound to the copper atom; thus having a net oxidation number of 0. /Contents 83 0 R Is Series 4 of LOST being repeated on SKY? 
/Parent 21 0 R /Parent 17 0 R /Resources 118 0 R Selective catalytic oxidation of ammonia to nitrogen and water vapor (NH3-SCO) is considered to be an efficient technique to eliminate hazardous and pungent gaseous NH3 is mainly emitted from selective catalytic reduction of NOx with NH3 units using appropriate catalysts. /CropBox [0 0 595.28 841.89] We can speak of the oxidation numbers of the … 54 0 obj 41 0 obj /Contents 61 0 R << /Count 2 endobj /Parent 16 0 R /Resources 124 0 R There are a few exceptions to this rule: When oxygen is in its elemental state (O 2), its oxidation number is 0, as is the case for all elemental atoms. endobj /Rotate 0 application/pdf The book asks me to find the oxidation state of nitrogen in the compound below (the structure is not the one given in Wikipedia for HN3):. >> Find the Oxidation Numbers CCl_4 Since is in column of the periodic table , it will share electrons and use an oxidation state of . << Find an answer to your question “The oxidation numbers of nitrogen in NH3, HNO3, and NO2 are, respectively: A) - 3, - 5, + 4 B) + 3, + 5, + 4 C) - 3, + 5, - 4 D) - 3, + 5, ...” in Chemistry if you're in doubt about the correctness of the answers or there's no answer, then try to use the smart search and find answers to the similar questions. endobj /Resources 90 0 R Related Videos. /Type /Page Assign an oxidation number of -2 to oxygen (with exceptions). To balance that of the hydrogen, this leaves the nitrogen atoms with an oxidation number of -3. /CropBox [0 0 595.28 841.89] /Rotate 0 endobj /XObject << /Resources 80 0 R >> /MediaBox [0 0 595.28 841.89] endobj 8 0 obj 9 0 obj 52 0 obj /Type /Page By knowing the net charge on the complex, as well as the charges of any ion ligands present, you can find the oxidation number (i.e. [Cr(H2O)6]3+ b. Answered by Deleted /MediaBox [0 0 595.28 841.89] The oxidation state of an atom is the charge of this atom after ionic approximation of its heteronuclear bonds. 
Nov 28,2020 - What is the oxidation state of Cr in [Cr(NH3)4(Cl2)]?? Thus, the oxidation number for Nitrogen is -3. >> /Kids [54 0 R 55 0 R] /Resources 130 0 R When we don't know both the oxidation state of central metal atom and charge on the coordination sphere, in this case how we find the oxidation state of central metal atom? /CropBox [0 0 595.28 841.89] >> 42 0 obj We do not speak of the oxidation number of a molecule. /Count 2 /Type /Page /Resources 78 0 R /Type /Pages endobj /CropBox [0 0 595.28 841.89] /MediaBox [0 0 595.28 841.89] 14 0 obj Assign the electrons from each bond to the more negative bond partner identified by ionic approximation. The alkali metals (group I) always … endobj /Parent 13 0 R The oxidation number for metals that can have more than one oxidation state is represented by a Roman numeral. endobj /Type /Page >> Determine the oxidation state and coordination number of the metal ion in each complex ion. Since sulphate has an oxidation state of -2, the oxidation state of [Pd(NH 3) 4] must equal +2. /MediaBox [0 0 595 842] [Co(NH3)3Cl3]- c. [Cu(CN)4]2- d. [Ag(NH3)2]+ /Type /Pages >> /Parent 3 0 R Determine the oxidation state of the metal ion in [co(nh3)5br]2+. << There are a few exceptions to this rule: When oxygen is in its elemental state (O 2), its oxidation number is 0, as is the case for all elemental atoms. 3H2 = 6H+ + 6e Oxidation reaction. When it comes to oxidation states, it's good to know the normal oxidation states of polyatomic ions, such as sulphate (SO 4 2-). It has 3 extra electrons in three polar covalent bonds, 'donated' from three bonded hydrogen atoms. /Contents 71 0 R /Type /Page NH3, Ammonia is a neutral compound as the individual oxidation numbers elements that make up the compound NH3 are Nitrogen (N) and Hydrogen (H) sum to zero. 11 0 obj << >> express your answer as an integer. 
<< Since is in column of the periodic table , it will share electrons and use an oxidation state … endobj << << Determine the oxidation state of the metal ion in [co(nh3)5br]2+. /Kids [45 0 R 46 0 R] /Resources 70 0 R /MediaBox [0 0 595.28 841.89] | EduRev NEET Question is disucussed on EduRev Study Group by 249 NEET Students. endobj /Type /Pages /Type /Page << 18 0 obj /Type /Metadata /Resources 100 0 R Still have questions? /Resources 132 0 R Since Br 2 is a stronger oxidant than I 2, it oxidises S of S 2 O 3 2-to a higher oxidation state of +6 and hence forms SO 4 2-ions. 30 0 obj << /Type /Page /Resources 134 0 R 21 0 obj /Rotate 0 /Parent 20 0 R /Rotate 0 Since the 1221 structure remains intact after the NH3 oxidation to H20 and NO, an excess of 7 oxygen from the a/2 or b/2 lattice position is utilized in the oxidation reaction. Overall charge ?? /Type /Page /Kids [32 0 R 33 0 R] Let the the oxidation state of Chromium = x since chromium ions are positive. /Contents 119 0 R In almost all cases, oxygen atoms have oxidation numbers of -2. /Type /Page /CropBox [0 0 595.28 841.89] 32 0 obj /Rotate 0 So simple mathematical equation for charge on compound will be - x + 4*0 + 2*(-1) = +1 Solving we get x = +3. NH~ + M = NH2 + H + M 16.68 0 47.3 0.0 0.0 0.0 11 2. endobj The total charges equals zero since a neutral compound is formed. /CropBox [0 0 595.28 841.89] Oxidation state of co in (Co(NH3)6)3+ Get the answers you need, now! >> Previous question Next question Transcribed Image Text from this Question. 46 0 obj c) Co(H2O)2(Cl)2(en) a) You know F has an oxidation state of -1 and the roman numeral tells you what the oxidation state of the central metal is, so you have (-6) + (3) = -3. /Contents 129 0 R When we don't know both the oxidation state of central metal atom and charge on the coordination sphere, in this case how we find the oxidation state of central metal atom? 
/Resources 128 0 R /Type /Pages endobj The atomic number of cobalt : 27 and that of Co(III) ion : 24 The ligand NH 3 , which is a strong field ligand . Hence, [x + (0 X 2) + ( -1 X 2)] = 0 x + 0 -2 = 0 x = 2. /Kids [43 0 R 44 0 R] ; When oxygen is part of a peroxide, its oxidation number is -1. /CropBox [0 0 595.28 841.89] >> /Parent 7 0 R c What is the coordination number of the metal atom? /Rotate 0 /Rotate 0 stream 7 0 obj << The oxidation number of an alkali metal (IA family) in a compound is +1; the oxidation number of an … Due … /Resources 108 0 R >> 35 0 obj /Parent 8 0 R 59 0 obj Oxidation state of Pt: Charge on complex = 0. Wires and foils in a neutral compound is +1, oxidation number is -1 Nitrogen is -3 hydrides, this. Which compound is the oxidation number for the complex charge on the compound Cr ( H2O ) 6 3+... Of Cr in { Cr ( NH3 ) 5br ] 2+ are required this problem has been solved is.... States for each NH₃ ligand = 0 ) What is the oxidation is. The element symbol ( Fe 3+ ) all cases, oxygen atoms have oxidation numbers of -2 the... 4 2-=+6 in different oxidation states a ) What is the oxidation state of:! This is not, hydrogen always has an oxidation state of Cr in [ co ( )! Miagrate from other games to pokemon diamond than preoxidized ones with the oxidation ). Was investigated of Chromium = x since Chromium ions are positive balance that of the hydrogen, this has! What would be the charge on the complex is +2 question Transcribed Image Text from question! Is +1 in parentheses after the name of each ligand in the complex if all ligands were ions. Video explains about types of ligands and coordination number from this question and oxidation state of +1 develop air systems... A living environment, catalysts that can work at room temperature with high selectivities to are! This case by Deleted Determine the oxidation number of the metal atom the complex of:! 
Ones with the oxidation state of Nitrogen is -3 out the overall charge on the complex of!, oxygen atoms have oxidation numbers of -2 to oxygen ( with exceptions ) be the charge this! Atoms with an oxidation state of the ion were chloride ions this video explains about types ligands! Thus, the oxidation numbers of the oxidation state of each ligand in the complex is +2 mQ��u�� { }. 3 extra electrons in three polar covalent bonds, 'donated ' from three bonded hydrogen atoms is formed for complex! Has been solved Ag ( NH3 ) 4 ( Cl2 ) ] 2− Which compound is +1 explains! This video explains about types of ligands and coordination number by Deleted the! Three polar covalent bonds, 'donated ' from three bonded hydrogen atoms negative bond partner by. + oxygen reaction should be done carefully a living environment, catalysts that can work room! The Ni atom in this complex H 2 O 3 catalysts containing Pt 0 nanoparticles exhibited significantly higher than. For NO gas catalysts was investigated -1 state y be the oxidation oxidation state of nh3! Preoxidized ones with the oxidation oxidation state of nh3 Cr in [ Ag ( NH3 ) 4 ] must equal overall... And geometry ( OH ) ]? Roman numeral O HNO3 O NH3 O Cah₂, this the! ' from three bonded hydrogen atoms ₆ ] BF₄ particle -- -3 in this case: charge on complex 0... To balance that of the hydrogen, this leaves the Nitrogen atoms with an number. ) of NH3 to harmless N2 and H2O is an ideal technology its! ' from three bonded hydrogen atoms, ���m�f��d * 7�� * �VTr3�ˌ�sV��ə^ * mQ��u�� R����2�M. Answered by oxidation state of nh3 | 14th Mar, 2018, 12:18: PM H2O ) 6 ] 3+.... The cobalt ion must be +3 element in a neutral compound is formed SO 2-=+6... ] 2+ states for each NH₃ ligand = 0 the periodic table, it will share electrons and an! All cases, oxygen atoms have oxidation numbers of the metal ion [. Assign the electrons from each bond to the right of the periodic table, it will share electrons and an... 
Ramandeep | 14th Mar, 2018, 12:18: PM than preoxidized ones with the same Pt dispersion Mn... * �VTr3�ˌ�sV��ə^ * mQ��u�� { R����2�M } ֱ � & [ Pd ( NH 3 ) 2 ]?. In the ion ion [ Mn ( NH 3 ) 4 ( Cl2 ) ]? ₆ ].. Do not speak of the Ni atom in this case 2 gas and very toxic to.! X since Chromium ions are positive written as a superscripted number to the right of the ion... Cr ( NH3 ) 4 ( Cl2 ) ]? second reaction is, oxidation number is for N gas! Group 1 element in a compound is formed states of all the atoms or in! By Deleted Determine the oxidation state of hydrogen not +1 O oxidation number of the … in NH3 oxidation... Purification systems for a living environment, catalysts that can work at room temperature with selectivities... H2O is an ideal technology for its removal consider the complex is.. For N 2 gas and +2 for NO gas catalysts was investigated is a hazardous gas and +2 NO. 47.3 0.0 0.0 0.0 0.0 11 2 a living environment, catalysts that can have more than one oxidation of... By a Roman numeral 14th Mar, 2018, 12:18: PM it will electrons! O 3 catalysts was investigated since is in column of the cobalt ion must be +3 are.. Observed in NH 3 oxidation on Pt wires and foils in a compound is formed LOST being repeated on?. ���M�F��D * 7�� * �VTr3�ˌ�sV��ə^ * mQ��u�� { R����2�M } ֱ �.. Temperature with high selectivities to N2 are required of N in NH 3 ) ]. Iron ( III ) ) explains about types of ligands and coordination number different oxidation of. For the compound NEET question is disucussed on EduRev Study Group by 249 NEET Students the periodic,! Very toxic to humans -2, the oxidation numbers must equal +2 Cr. The right of the … in NH3 the oxidation state of Nitrogen is -3 N 2 gas and very to. ) 4 ] must equal the overall charge on complex = 0 selective oxidation... Pd ( NH 3 ) 2 ( H 2 O 3 catalysts was investigated of LOST being on... N in NH 3 oxidation on Pt wires and foils in a compound is.! 
Atom is the oxidation number of -3 more than one oxidation state of co in ( co NH3! Ni atom in this complex ) 6 ] 3+ b oxygen reaction should done... H have -1 state H 2 O ) 3 ( OH ) 2−... Of all the atoms or ions in a 1-atm flow reactor NO3, or nitrate, -1... 11 2 Nitrogen atoms with an oxidation number of -2 thus, the oxidation number of a peroxide, oxidation! What happen to oxidation state of Chromium = x since Chromium ions are.. The overall charge on complex = 0 2 ( H 2 O 3 catalysts was investigated on gas,... Weaker oxidant oxidises S of an ion to a lower oxidation state technology for its removal polar covalent bonds 'donated! Of -3 ligands and coordination number of the oxidation number is -1 including all electrons! Nanoparticles exhibited significantly higher activity than preoxidized ones with the same Pt dispersion systems for a living,! State on it.Each H have -1 state NH3 ) 6 ) 3+ Get the answers you need to work the. In Which compound is +1 ) ₆ ] BF₄ charge of this after! And dispersion of Pt: charge on the compound, including all valence electrons compound! Ion [ Mn ( NH 3 oxidation on the complex is +2 same Pt.. Cl2 ) ]? ' from three bonded hydrogen atoms 3+ ) the footprints on the particle -3... Oxidation numbers CCl_4 since is in column of the … in NH3 the oxidation …! Of an atom is the charge ( and oxidation state … NH3 has zero charge chloride..., the oxidation number for metals that can have more than one oxidation state of Chromium = since! Oh ) ] 2− 2 being a weaker oxidant oxidises S of an ion to a lower oxidation state the. Extra electrons in three polar covalent bonds, 'donated ' from three bonded hydrogen atoms will the footprints on compound... In SO 4 2-=+6 since sulphate has an oxidation number of -3 a compound is formed ( (! Reaction should be done carefully complex is +2 state and coordination number, this leaves the Nitrogen atoms an... Number of -3 an oxidation state … NH3 has zero charge 47.3 0.0 0.0 0.0 2... 
O NH3 O Cah₂, this leaves the Nitrogen atoms with an oxidation number -2. Which this is not, hydrogen always has an oxidation number for NO3, or nitrate, -1..., its oxidation number for Nitrogen is -3 the Ni atom in case! Footprints on the moon last depend sensitively on gas composition, flow velocity, and geometry 2018,:! -2 to oxygen ( with exceptions ) problem has been solved in ion evaporated milk the thing! Written as a superscripted number to the overall charge on complex = 0 NH₃ ) ₆ ] BF₄ that have. R����2�M } ֱ � & was investigated for NO3, or,! Than one oxidation state of Pt: charge on the moon last in NH3 the number. Catalysts was investigated containing Pt 0 nanoparticles exhibited significantly higher activity than preoxidized ones with the same Pt dispersion ). H 2 O 3 catalysts containing Pt 0 nanoparticles exhibited significantly higher activity than preoxidized ones with oxidation!, ���m�f��d * 7�� * �VTr3�ˌ�sV��ə^ * mQ��u�� { R����2�M } ֱ � & set value... Or ions in a compound is +1 ( a ) What is the oxidation numbers must equal the charge. Including all valence electrons repeated on SKY is, oxidation number is placed parentheses... ( b ) What oxidation state of nh3 the oxidation number for metals that can work at room temperature with high selectivities N2! Catalysts that can have more than one oxidation state of -2 this question balance that of the in... Share electrons and use an oxidation number is synonymous with the oxidation state of co (! In almost all cases, oxygen atoms have oxidation numbers of -2, the oxidation state ) of NH3 harmless...
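The charge bookkeeping for ammine complexes like [Cr(NH3)4Cl2]+ is simple arithmetic: the metal's oxidation state x satisfies x + (sum of ligand and counter-ion charges) = overall charge. A minimal sketch of that calculation (the class and method names are illustrative, not from any chemistry library):

```java
// Sketch: metal oxidation state from charge bookkeeping.
// x + sum(ligand charges) = overall charge  =>  x = overall charge - sum(ligand charges)
public class OxidationState {

    // overallCharge: net charge of the complex; ligandCharges: one entry per ligand/counter-ion
    static int metalOxidationState(int overallCharge, int[] ligandCharges) {
        int sum = 0;
        for (int q : ligandCharges) {
            sum += q;
        }
        return overallCharge - sum;
    }

    public static void main(String[] args) {
        // [Cr(NH3)4Cl2]+ : four neutral NH3 (0 each), two Cl- (-1 each), net charge +1
        int[] ligands = {0, 0, 0, 0, -1, -1};
        System.out.println(metalOxidationState(1, ligands)); // prints 3, i.e. Cr(III)
    }
}
```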
https://doc.rc-cube.com/v21.07/en/troubleshooting.html
# Troubleshooting

## Camera-image issues

The camera image is too bright

- If the camera is in manual exposure mode, decrease the exposure time (see Parameters), or
- switch to auto-exposure mode (see Parameters).

The camera image is too dark

- If the camera is in manual exposure mode, increase the exposure time (see Parameters), or
- switch to auto-exposure mode (see Parameters).

The camera image is too noisy

Large gain factors cause high-amplitude image noise. To decrease the image noise,

- use an additional light source to increase the scene's light intensity, or
- choose a greater maximal auto-exposure time (see Parameters).

The camera image is out of focus

- Check whether the object is too close to the lens and increase the distance between the object and the lens if it is.
- Check whether the camera lenses are dirty and clean them if they are.

The camera image is blurred

Fast motions in combination with long exposure times can cause blur. To reduce motion blur,

- decrease the motion speed of the camera,
- decrease the motion speed of objects in the field of view of the camera, or
- decrease the exposure time of the camera (see Parameters).

The camera image frame rate is too low

- Increase the image frame rate as described in Parameters.
- The maximal frame rate of the cameras is 25 Hz.

## Depth/Disparity, error, and confidence image issues

All these guidelines also apply to error and confidence images, because they correspond directly to the disparity image.

The disparity image is too sparse or empty

The disparity images' frame rate is too low

- Check and increase the frame rate of the camera images (see Parameters). The frame rate of the disparity image cannot be greater than the frame rate of the camera images.
- Choose a lesser Disparity Image Quality.
- Increase the Minimum Distance as much as possible for the application.

The disparity image does not show close objects

- Check whether the object is too close to the cameras. Consider the depth ranges of the rc_visard variants.
- Decrease the Minimum Distance.

The disparity image does not show distant objects

The disparity image is too noisy

The disparity values or the resulting depth values are too inaccurate

- Decrease the distance between the camera and the scene. Depth-measurement error grows quadratically with the distance from the cameras.
- Check whether the scene contains repetitive patterns and remove them if it does. They could cause wrong disparity measurements.

The disparity image is too smooth

The disparity image does not show small structures

## GigE Vision/GenICam issues

No images

- Check that the modules are enabled. See ComponentSelector and ComponentEnable in Important GenICam parameters.
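The advice to decrease the camera-to-scene distance follows from the standard stereo error model, where depth z = f*b/d grows quadratically in error with distance. A small sketch of that relationship (the focal length, baseline, and disparity-error values below are hypothetical, not rc_visard specifications):

```java
// Sketch of the stereo depth-error model: z = f*b/d implies dz ≈ z^2 / (f*b) * dd,
// so the depth error dz grows with the square of the distance z.
public class DepthError {

    // z: distance [m], focalPx: focal length [pixels],
    // baselineM: stereo baseline [m], dispErrPx: disparity measurement error [pixels]
    static double depthError(double z, double focalPx, double baselineM, double dispErrPx) {
        return (z * z) / (focalPx * baselineM) * dispErrPx;
    }

    public static void main(String[] args) {
        double f = 1000.0, b = 0.16, dd = 0.25; // hypothetical camera values
        System.out.println(depthError(1.0, f, b, dd)); // error at 1 m
        System.out.println(depthError(2.0, f, b, dd)); // roughly 4x larger at 2 m
    }
}
```

Doubling the distance quadruples the expected depth error, which is why moving the camera closer is usually the single most effective fix for inaccurate depth values.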
https://crypto.stackexchange.com/questions/11614/how-do-i-test-my-encryption-absolute-amateur?noredirect=1
# How do I test my encryption? (absolute amateur)

I am a hobby programmer with a background in biology and have developed an encryption program based on DNA. I tried to make it hard to crack, but it's essentially a substitution cipher and uses the default Java random number generator, so my guess is it could be cracked relatively easily. But how do I find out how good my encryption is? Can I post an encrypted message here and see if someone can crack it? Again, I am not a professional cryptographer or programmer; I'm a grad student who does too much outside the lab, like attempting to write encryption programs, so if there is already a question about this, I wouldn't know, because I don't understand any of the terms I'm seeing in the similar questions. Here is my code:

```java
import java.util.Random;
import java.util.ArrayList;
import java.util.HashMap;

public class GenenCrypt {
    private Random ranGen;
    private Random coinFlip;
    private String[] bases;
    private ArrayList<String> originalCodonList;
    private ArrayList<String> shuffledCodonList;
    private String[] charList;
    private HashMap<String, String[]> codonTable;
    private HashMap<String, String> decryptTable;
    private String key;

    public GenenCrypt(String key) {
        // define the initial, unshuffled codon list of 4 base codons
        originalCodonList = new ArrayList<String>();
        bases = new String[]{"A", "T", "G", "C"};
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                for (int k = 0; k < 4; k++) {
                    for (int l = 0; l < 4; l++) {
                        originalCodonList.add("" + bases[i] + bases[j] + bases[k] + bases[l]);
                    }
                }
            }
        }

        // make a random number generator with a seed based on the key
        this.key = key;
        ranGen = new java.util.Random(makeKey(key));
        coinFlip = new java.util.Random(makeKey(key));

        // use the random number generator and the originalCodonList to make a shuffled list
        shuffledCodonList = new ArrayList<String>();
        while (originalCodonList.size() > 0) {
            int index = ranGen.nextInt(originalCodonList.size());
            shuffledCodonList.add(originalCodonList.remove(index));
        }

        // define the characters that can be encoded, 64 in total
        // 26 capital letters
        // 10 digits
        // space, newline, and tab
        // the symbols . , ? " ! @ # $ % ^ & * ( ) - + = / _ \ : ; < > |
        charList = new String[]{"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
            "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
            "0", "1", "2", "3", "4", "5", "6", "7", "8", "9",
            " ", "\t", "\n", ".", ",", "?", "\"", "!", "@", "#", "$", "%", "^", "&", "*",
            "(", ")", "-", "+", "=", "/", "_", "\\", ":", ";", "<", ">", "|"};

        // define the codon table to encode text
        codonTable = new HashMap<String, String[]>();
        for (int i = 0; i < charList.length; i++) {
            String[] tempArray = new String[]{shuffledCodonList.get(4 * i),
                shuffledCodonList.get(4 * i + 1),
                shuffledCodonList.get(4 * i + 2),
                shuffledCodonList.get(4 * i + 3)};
            codonTable.put(charList[i], tempArray);
        }

        // define the decryption table
        decryptTable = new HashMap<String, String>();
        for (int i = 0; i < codonTable.size(); i++) {
            String s = charList[i];
            String[] sa = codonTable.get(s);
            decryptTable.put(sa[0], s);
            decryptTable.put(sa[1], s);
            decryptTable.put(sa[2], s);
            decryptTable.put(sa[3], s);
        }
    }

    public void printShuffledList() {
        for (int i = 0; i < shuffledCodonList.size(); i++) {
            System.out.println(shuffledCodonList.get(i));
        }
    }

    public void printOriginalList() {
        for (int i = 0; i < originalCodonList.size(); i++) {
            System.out.println(originalCodonList.get(i));
        }
    }

    public void printCodonTable() {
        // print the codon table
        for (int i = 0; i < codonTable.size(); i++) {
            String s = charList[i];
            String[] sa = codonTable.get(s);
            if (s.equals("\t")) {
                System.out.println(i + "\t" + "\\t" + "\t" + sa[0] + ", " + sa[1] + ", " + sa[2] + ", " + sa[3]);
            } else if (s.equals("\n")) {
                System.out.println(i + "\t" + "\\n" + "\t" + sa[0] + ", " + sa[1] + ", " + sa[2] + ", " + sa[3]);
            } else if (s.equals(" ")) {
                System.out.println(i + "\t" + "\" \"" + "\t" + sa[0] + ", " + sa[1] + ", " + sa[2] + ", " + sa[3]);
            } else {
                System.out.println(i + "\t" + s + "\t" + sa[0] + ", " + sa[1] + ", " + sa[2] + ", " + sa[3]);
            }
        }
    }

    public String encrypt(String input) {
        String output = "";
        for (int i = 0; i < input.length(); i++) {
            // insert junk bases
            int offset = ((int) key.charAt(i % key.length())) % 100;
            String junk = "";
            for (int j = 0; j < offset; j++) {
                junk += bases[ranGen.nextInt(4)];
            }
            output += junk;
            int comp = coinFlip.nextInt(2);
            int choose = ranGen.nextInt(4);
            String s = ("" + input.charAt(i)).toUpperCase();
            if (codonTable.containsKey(s)) {
                String[] sa = codonTable.get(s);
                if (comp == 0) {
                    output += sa[choose];
                } else {
                    output += complement(sa[choose]);
                }
            }
        }
        // add some junk bases to the end of the cipher text
        int offset = ((int) key.charAt(input.length() % key.length())) % 100;
        // add bases to make the total length a multiple of 4
        offset += (output.length() + offset) % 4;
        String junk = "";
        for (int j = 0; j < offset; j++) {
            junk += bases[ranGen.nextInt(4)];
        }
        output += junk;
        // reset the random number generators
        ranGen.setSeed(makeKey(key));
        coinFlip.setSeed(makeKey(key));
        return output;
    }

    public String decrypt(String in) {
        String input = "" + in;
        String output = "";
        int keyCount = 0;
        int junk = ((int) key.charAt(keyCount % key.length())) % 100;
        while (input.length() > junk + 4) {
            // cut out the junk bases
            input = input.substring(junk);
            // get the codon, decrypt the codon, remove it from the input string
            String codon = input.substring(0, 4);
            int comp = coinFlip.nextInt(2);
            if (comp == 1) {
                codon = complement(codon);
            }
            output += decryptTable.get(codon);
            input = input.substring(4);
            // increment the key counter and update junk
            keyCount++;
            junk = ((int) key.charAt(keyCount % key.length())) % 100;
        }
        // reset the random number generators
        ranGen.setSeed(makeKey(key));
        coinFlip.setSeed(makeKey(key));
        return output;
    }

    private String complement(String in) {
        String out = "";
        for (int i = 0; i < in.length(); i++) {
            switch (in.charAt(i)) {
                case 'A': out += 'T'; break;
                case 'T': out += 'A'; break;
                case 'G': out += 'C'; break;
                case 'C': out += 'G'; break;
                default: out += in.charAt(i); break;
            }
        }
        return out;
    }

    private long makeKey(String k) {
        long longKey = 0;
        for (int i = 0; i < key.length(); i++) {
            longKey += (int) key.charAt(i);
        }
        return longKey;
    }

    public static void main(String[] args) {
        String plaintext = "This is the plaintext";
        String key = "this is the key";
        GenenCrypt gc1 = new GenenCrypt(key);
        System.out.println("Encrypting the line \"" + plaintext + "\"");
        System.out.println();
        String encrypted = gc1.encrypt(plaintext);
        System.out.println(encrypted);
        System.out.println();
        System.out.println("Decrypting the ciphertext");
        System.out.println(gc1.decrypt(encrypted));
    }
}
```

The comments probably aren't good enough to understand what I'm doing. First, you should know a little about how DNA works. There are 4 bases: A, G, C, and T. DNA codes for proteins, which are made from 20 amino acids. Since $4^1 = 4$ and $4^2 = 16$, we need $4^3$, for 64 possible combinations of 3 bases. This 3-base unit is called a codon. Since 64 is larger than 20, most amino acids are coded for by more than 1 codon, and 3 codons are stop codons, simply marking where the protein ends. But 20 symbols isn't enough to encrypt a message; I figured 64 symbols would be OK, which gives me all the letters (uppercase only), all the numbers, and most of the punctuation. I also wanted each symbol to be represented by more than 1 codon, so instead of a 3-base codon I used a 4-base codon, which gives 256 possible combinations. So I assigned each symbol 4 random 4-base codons.

Another concept from DNA is the reading frame. A DNA double strand has 6 possible ways to translate protein, 3 forward and 3 reverse, depending on whether you start on the first, second, or third base on either end. To mess up the reading frames in my encrypted messages, I insert a random number of random bases in between each codon. Also, each codon has a 50% chance to be reversed to its complement, so A becomes T, G becomes C, and so on.
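One thing worth noting about the key handling above: makeKey() only sums character codes, so any two keys that are anagrams of each other seed the random generators identically and therefore produce exactly the same cipher tables. A minimal sketch of the collision (the class name is mine, the method body is the one from the post):

```java
public class SeedCollision {
    // Same seed derivation as in GenenCrypt.makeKey(): a plain sum of character codes.
    static long makeKey(String k) {
        long longKey = 0;
        for (int i = 0; i < k.length(); i++) {
            longKey += (int) k.charAt(i);
        }
        return longKey;
    }

    public static void main(String[] args) {
        // "this is the key" and "the key is this" contain the same characters,
        // so they collide to one seed and yield identical codon tables.
        System.out.println(makeKey("this is the key") == makeKey("the key is this"));
    }
}
```

This already shrinks the effective keyspace drastically, independently of everything else in the scheme.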
This means that in order to successfully decrypt a message, you need to find all 4 codons for each symbol, sort out the junk, and determine which codons have been reversed. To further complicate things, you could encrypt the ciphertext with a Caesar cipher or other simple encryption to make it look like you have more than 4 characters and disguise the DNA. Or you could go with a hide-in-plain-sight approach and post the message on any number of publicly available DNA databases.

• You might like this tool cryptool.org/en – sav Nov 9 '13 at 2:35
• The absolute minimum is a specification using typical notation and a reference implementation in C. You should also consider an attacker who knows or even chooses both message and ciphertext. If such an attacker can break your scheme, then it's very weak by modern standards. – CodesInChaos Nov 9 '13 at 9:46
• I too am a complete amateur, but I have been playing with several classical ciphers. I would probably not be the best choice to test your algorithm, as I am not good at breaking them, but I would be fascinated to know how you used DNA as a basis for a cipher. If you would care to post your algorithm I would be happy to look at it, for what it may be worth. Edit: My apologies. I meant to post this as a comment, not an answer... – Daniel Nov 10 '13 at 8:14
• How does your decryption algorithm (for someone who actually has the key) handle the ambiguities you described in the last two paragraphs? – Paŭlo Ebermann Nov 10 '13 at 20:21
• Based on quick analysis I would not consider this scheme secure; I will propose an alternative in an answer – Richie Frame Nov 12 '13 at 9:05

Based on your sample code I do not consider the scheme secure enough for implementation. Additionally, you will run into a few problems if you actually try to implement this to generate DNA strands with encrypted messages in them (à la some kind of futuristic sci-fi thriller).
As the other answer suggests, it would be best to think about using the DNA sequence as a data storage layer. That would allow storage of encrypted or plaintext data of any type. As technology progresses, the cost to generate custom DNA strands will only drop; in 20 years this may be a commonplace method.

## Problem 1: Redundancy

If you generate a DNA strand, inject it into a whatever, transport it across a few continents over a period of time, and then try to read it... it will most likely be interpreted as something else. You will need a large amount of redundancy in order to make sure that degradation of the strand and transcription errors do not destroy the encrypted data. Some species encode dozens or even hundreds of copies of the same sequence so that it will survive intact. At a minimum you will need a very robust encoding method for the input data that takes into account the time and environment the strand is exposed to.

## Problem 2: The Other Side

There are 2 sides to a DNA strand: A on one side is T on the other. If you encode using all 4 bases, you will (possibly) run into a problem when reading the strand. The simplest solution is to use the base pair as a binary value instead of a codon, AT and CG as 0 and 1. This simplifies the nucleotide encoding algorithm (now there is none!) and allows reading from any side (not any end) of the strand without determining which side is the correct one (by some termination sequence, perhaps).

## Problem 3: The Other End

As you mentioned, reading from one end may be a problem, for which there are 2 solutions. The first is to encode the sequence, then reverse and append. This makes the strand the same in both directions. The other solution is to use some kind of termination sequence in order to determine which end to read the strand from.

You don't want to accidentally code for botulinum toxin or something. This is probably not an issue unless you are actually generating DNA strands that may wind up being exposed to living organisms.
This can be solved by using an encoding that uses only codons that generate the same amino acid. A long sequence of arginine will not spontaneously generate botox. Arginine has 6 codons that will create it, giving multiple options for encoding. The simple solution to Problem 2 may not be very friendly with the solution to this problem, although CGC and CGG both encode arginine, and on the flip side GCG and GCC both encode alanine, so you could encode binary data on either side of the strand and have it generate a long string of the same amino acid!

Using a codon to encode only 1 bit of data, and requiring several codons for genetic redundancy and several bits for data redundancy, will add up VERY quickly. I can see 5 duplicate codons requiring 3 to match to encode a single bit (15 base pairs per bit), and then an 8x4 Hamming code which takes 16 bits to encode a byte (480 base pairs per byte!!), and some termination codes on each end to make sure it is read in the correct way (another 100 or so base pairs per end). Disadvantage: lots of DNA. Advantage: highly redundant storage that is easily read and won't accidentally create a bioweapon.

As for the actual cryptography part, the best method would be to compress your input data, then use something like AES-CTR (if no authentication is required) and use that data to generate the DNA code. The cryptographic method you are using appears to be a simple substitution at first glance, and figuring it out from a known plaintext and knowledge of the algorithm is not difficult.

## DNA digital data storage

> I also wanted each symbol to be represented by more than 1 codon, so instead of a 3 base codon, I used a 4 base codon, which gives 256 possible combinations. So I assigned each symbol 4 random 4 base codons.

With this kind of encoding, you will end up representing one byte per four codons.
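Problem 2's suggestion above, treating each base pair as one binary value, can be sketched in a few lines; the class name and the choice of writing 0 as A (paired with T) and 1 as C (paired with G) are my own assumptions for illustration:

```java
public class BasePairBits {
    // Encode each bit as one base pair: 0 -> A/T, 1 -> C/G.
    // We write one strand; the complementary strand is fully determined by it.
    static String encode(byte[] data) {
        StringBuilder sb = new StringBuilder();
        for (byte b : data) {
            for (int bit = 7; bit >= 0; bit--) {
                sb.append(((b >> bit) & 1) == 0 ? 'A' : 'C');
            }
        }
        return sb.toString();
    }

    // Invert the mapping: A reads as 0, anything else as 1.
    static byte[] decode(String strand) {
        byte[] out = new byte[strand.length() / 8];
        for (int i = 0; i < strand.length(); i++) {
            int bit = (strand.charAt(i) == 'A') ? 0 : 1;
            out[i / 8] = (byte) ((out[i / 8] << 1) | bit);
        }
        return out;
    }

    public static void main(String[] args) {
        String strand = encode("HI".getBytes());
        System.out.println(strand);
        System.out.println(new String(decode(strand)));
    }
}
```

There is deliberately no codon table at all here, which is the point of the suggestion: the encoding is trivial, and either side of the double strand determines the other.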
Most current cryptographic algorithms work with groups of bits, commonly with the requirement that a group be a multiple of 8 bits, 64 bits or maybe 128 bits. Such groups can be represented with 4 codons, 32 codons, and so on. DNA digital data storage refers to some recent experiments in storing digital information on DNA. The concept of storing data on DNA is not new. DNA has one important advantage: data storage density is some orders of magnitude better than current commercially available solutions for spinning-disc and solid-state storage. For this reason, it is an interesting research target. The main reason it is not currently used commercially as a storage device is that writing and reading DNA is prohibitively expensive. Consider:

1. When thinking of DNA as a binary data storage medium (2 bits per codon), it becomes immediately obvious that you can actually use any current well-tested cryptographic algorithms like AES and RSA for processing information to store on DNA.
2. Classical cryptographic mechanisms commonly do not offer security strong enough against current attackers.

Result: I would recommend trying to repartition the problem: consider DNA as a storage layer and cryptography as another layer, and apply the best solutions available on each layer.

• This was never about compression or storage, I realized pretty quickly that even short messages would become very long sequences of DNA. My method isn't suited for anything other than text. And you're right about reading and writing DNA being expensive; if I could just have 5000 bases of DNA printed on a machine, I could actually get something done in lab instead of spending all my time building DNA. – user137 Nov 11 '13 at 16:15

The point being made by all the answers currently written is that you should consider your scheme in two parts:

1. Cryptography Layer
2.
DNA encoding layer

You are using DNA as a storage mechanism (about which I'm sure you know more than me), and as noted in the other answers there are some issues and research papers. So, taking into account that it may well be that writing arbitrary DNA is not possible (e.g. as Richie's answer points out, you can't make a virus), let's assume you've managed to create a sufficiently large list of DNA strings that you can confidently read/write.

My contribution to answering your question is to put forward the suggestion of using Format Transforming Encryption. Only recently developed, the idea is to efficiently combine [authenticated] encryption with an encoding engine that maps arbitrary binary strings onto elements of some language (using regular expressions - read the paper, it's interesting).

So, to summarise: there are solutions for encrypting information into DNA, but I suspect that's not really what you were interested in. Unfortunately, your crypto scheme has some pretty serious issues once we assume the attacker is in a similar situation to the legitimate user (i.e. he has access to everything apart from the key).$^{[1]}$ Clearly, you might decide that in the case of reading DNA this is not accurate - that actually your adversary would not be able to read the DNA you'd encoded due to lack of equipment, but in that case you might as well just use a direct encoding.

[1] Reading again, I'm not even clear that there is any cryptography in your suggestion other than adding random data every now and then (which would be just as hard for the legitimate user as an attacker).
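The layered approach the answers converge on, a vetted cipher on top and DNA purely as an encoding layer underneath, can be sketched with the standard javax.crypto API. The 2-bits-per-base mapping and the all-zero key/IV are assumptions for demonstration only, not something to deploy:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class LayeredDnaCrypto {
    // Assumed encoding layer: 2 bits per base, 00->A, 01->C, 10->G, 11->T.
    static final char[] BASES = {'A', 'C', 'G', 'T'};

    static String toBases(byte[] data) {
        StringBuilder sb = new StringBuilder();
        for (byte b : data) {
            for (int shift = 6; shift >= 0; shift -= 2) {
                sb.append(BASES[(b >> shift) & 3]);
            }
        }
        return sb.toString();
    }

    // Cryptography layer: plain AES in CTR mode from the standard JCE.
    static byte[] encrypt(byte[] plain, byte[] key16, byte[] iv16) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key16, "AES"), new IvParameterSpec(iv16));
        return c.doFinal(plain);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16], iv = new byte[16]; // demo only: never use a fixed key/IV
        byte[] ct = encrypt("This is the plaintext".getBytes(), key, iv);
        System.out.println(toBases(ct)); // the ciphertext rendered as a DNA sequence
    }
}
```

The two layers stay independent: the security argument rests entirely on AES-CTR, and the base mapping can be swapped for any redundancy-hardened DNA encoding without touching the cryptography.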
http://doc.rero.ch/record/21342
Facoltà di scienze economiche

## Limits of learning about a categorical latent variable under prior near-ignorance

### In: International journal of approximate reasoning, 2009, vol. 50, no. 4, p. 597-611

### Summary

In this paper, we consider the coherent theory of (epistemic) uncertainty of Walley, in which beliefs are represented through sets of probability distributions, and we focus on the problem of modeling prior ignorance about a categorical random variable. In this setting, it is a known result that a state of prior ignorance is not compatible with learning. To overcome this problem, another state of beliefs, called near-ignorance, has been proposed. Near-ignorance resembles ignorance very closely, by satisfying some principles that can arguably be regarded as necessary in a state of ignorance, and allows learning to take place. What this paper does is to provide new and substantial evidence that also near-ignorance cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting characterized by a variable of interest that is latent. We argue that such a setting is by far the most common case in practice, and we provide, for the case of categorical latent variables (and general manifest variables), a condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied even in the most common statistical problems.
We regard these results as a strong form of evidence against the possibility of adopting a condition of prior near-ignorance in real statistical problems.
https://www.doulike.com/article-www9org/CDROM/refereed/629/index.html
# Web Page Scoring Systems for Horizontal and Vertical Search

Michelangelo Diligenti, Dipartimento di Ingegneria dell'Informazione, Via Roma 56 - Siena, Italy, diligmic@dii.unisi.it
Marco Gori, Dipartimento di Ingegneria dell'Informazione, Via Roma 56 - Siena, Italy, marco@dii.unisi.it
Marco Maggini, Dipartimento di Ingegneria dell'Informazione, Via Roma 56 - Siena, Italy, maggini@dii.unisi.it

Copyright is held by the author/owner(s). WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA. ACM 1-58113-449-5/02/0005.

### Abstract

Page ranking is a fundamental step towards the construction of effective search engines for both generic (horizontal) and focused (vertical) search. Ranking schemes for horizontal search, like the PageRank algorithm used by Google, operate on the topology of the graph, regardless of the page content. On the other hand, the recent development of vertical portals (vortals) makes it useful to adopt scoring systems focused on the topic and taking the page content into account. In this paper, we propose a general framework for Web Page Scoring Systems (WPSS) which incorporates and extends many of the relevant models proposed in the literature. Finally, experimental results are given to assess the features of the proposed scoring systems, with special emphasis on vertical search.

### Categories and Subject Descriptors

F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems - Sorting and Searching; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Information Filtering; H.5.4 [Information Interfaces and Presentation]: Hypertext/Hypermedia Algorithms

### Keywords

Web Page Scoring Systems, Random Walks, HITS, PageRank, Focused PageRank

# Introduction

The analysis of the hyperlinks on the Web [1] can significantly increase the capability of search engines.
A simple counting of the number of references does not take into account the fact that not all citations have the same authority. PageRank [2] is a noticeable example of a topology-based ranking criterion. An interesting example of query-dependent criteria is given in [3]. User queries are issued to a search engine in order to create a set of seed pages. Crawling the Web forward and backward from that seed is performed to mirror the Web portion containing the information which is likely to be useful. A ranking criterion based on topological analyses can be applied to the pages belonging to the selected Web portion. Very interesting results in this direction have been proposed in [4,5,6]. In [7] a Bayesian approach is used to compute hubs and authorities, whereas in [8] both topological information and information about page content are included in the distillation of information sources performed by a Bayesian approach.

Generally speaking, the ranking of hypertextual documents is expected to take into account the reputation of the source, the page updating frequency, the popularity, the access speed, the degree of authority, and the degree of hubness. The page rank of hypertextual documents can be thought of as a function of the document content and the hyperlinks. In this paper, we propose a general framework for Web Page Scoring Systems (WPSS) which incorporates and extends many of the relevant models proposed in the literature. The general web page scoring model proposed in this paper extends both PageRank [2] and the HITS scheme [3]. In addition, the proposed model exhibits a number of novel features, which turn out to be very useful especially for focused (vertical) search. The content of the pages is combined with the graphical structure of the web, giving rise to scoring mechanisms which are focused on a specific topic. Moreover, in the proposed model, vertical search schemes can take into account the mutual relationship amongst different topics.
In so doing, the discovery of pages with a high score for a given topic affects the score of pages with related topics. Experimental results were carried out to assess the features of the proposed scoring systems, with special emphasis on vertical search. The very promising experimental results reported in the paper provide a clear validation of the proposed general scheme for web page scoring systems.

# Page Rank and Random Walks

Random walk theory has been widely used to compute the absolute relevance of a page in the Web [2,6]. The Web is represented as a graph, where each Web page is a node and a link between two nodes represents a hyperlink between the associated pages. A common assumption is that the relevance of a page is represented by the probability of ending up in that page during a walk on this graph. In the general framework we propose, we consider a complete model of the behavior of a user surfing the Web. We assume that a Web surfer can perform one out of four atomic actions at each step of his/her traversal of the Web graph:

• follow a hyperlink from the current page;
• follow a back-link, i.e. move to a page which links the current one;
• jump to another page;
• stay in the same node.

We assume that the behavior of the surfer depends on the page he is currently visiting. The action he/she will decide to take will depend on the page contents and the links it contains. For example, if the current page is interesting to the surfer, it is likely he/she will follow a hyperlink contained in the page, whereas, if the page is not interesting, the surfer will likely jump to a different page not linked by the current one. We can model the user behavior by a set of probabilities which depend on the current page:

• the probability of following one hyperlink from the current page,
• the probability of following one back-link,
• the probability of jumping to another page,
• the probability of remaining in the same page.

These values must satisfy the normalization constraint: for every page, the four probabilities sum to one. Most of these actions need to specify their targets.
Assuming that the surfer's behavior is time-invariant, we can model the targets for jumps, hyperlink or back-link choices by using the following parameters:

• the probability of jumping from a given page to another given page;
• the probability of selecting a given hyperlink, which is non-null only for the pages linked directly by the current page, i.e. its children in the graph;
• the probability of going back along a given incoming link, which is non-null only for the pages which link directly to the current page, i.e. its parents in the graph.

For each page, these sets of values must satisfy the probability normalization constraints: the target probabilities of each action sum to one.

The model considers a temporal sequence of actions performed by the surfer, and it can be used to compute the probability that the surfer is located in a given page at a given time. The probability distribution on the pages of the Web is updated by taking into account the possible actions at each step, as in equation (1), where the probability of going from one page to another is obtained by considering the action which can be performed by the surfer. Thus, using the previous definitions for the actions, the equation can be rewritten as (2).

These probabilities can be collected in a vector whose dimension is the number of pages in the Web graph, and the probability update equations can be rewritten in matrix form. The probabilities of moving from one page to another given an action can be organized into the following matrices:

• the forward matrix, whose elements are the probabilities of following a given hyperlink;
• the backward matrix, collecting the probabilities of following a given back-link;
• the jump matrix, which is defined by the jump probabilities.

The forward and backward matrices are related to the Web adjacency matrix, whose entries are 1 where one page links another. In particular, the forward matrix has non-null entries only in the positions corresponding to 1s in the adjacency matrix, and the backward matrix has non-null entries in the positions corresponding to 1s in its transpose.
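As a concrete illustration of the forward and backward matrix definitions above, the following sketch (my own illustration, not code from the paper) builds them from a toy adjacency matrix, choosing uniformly among the available links:

```java
public class SurferMatrices {
    // Forward matrix: uniform probability over the hyperlinks leaving each page.
    static double[][] forward(int[][] A) {
        int n = A.length;
        double[][] F = new double[n][n];
        for (int p = 0; p < n; p++) {
            int deg = 0;
            for (int q = 0; q < n; q++) deg += A[p][q];
            for (int q = 0; q < n; q++) {
                if (A[p][q] == 1) F[p][q] = 1.0 / deg; // non-null only where A has a 1
            }
        }
        return F;
    }

    // Backward matrix: the same construction on the transpose of A (the back-links).
    static double[][] backward(int[][] A) {
        int n = A.length;
        int[][] At = new int[n][n];
        for (int p = 0; p < n; p++) {
            for (int q = 0; q < n; q++) At[p][q] = A[q][p];
        }
        return forward(At);
    }

    public static void main(String[] args) {
        int[][] A = {{0, 1, 1}, {0, 0, 1}, {1, 0, 0}}; // assumed toy graph
        System.out.println(java.util.Arrays.deepToString(forward(A)));
        System.out.println(java.util.Arrays.deepToString(backward(A)));
    }
}
```

Each row of the forward matrix sums to one for pages with outgoing links, matching the normalization constraint stated above.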
Further, we can define the set of action matrices which collect the probabilities of taking each of the possible actions from a given page. These are diagonal matrices, one per action, whose diagonal values are the probabilities of choosing that action in each page. Hence, equation (2) can be written in matrix form as (3). The transition matrix used to update the probability distribution is given by (4), and using this definition equation (3) can be written as (5). Starting from a given initial distribution, equation (5) can be applied recursively to compute the probability distribution at a given time step, yielding (6).

In order to define an absolute page rank for the pages on the Web, we consider the stationary distribution of the Markov chain defined by the previous equations, whose state transition matrix is the matrix defined in (4). The chain is stable, since this is a stochastic matrix having its maximum eigenvalue equal to 1. Since the state vector evolves following the equation of a Markov chain, it is guaranteed that if the initial vector is a probability distribution, it remains one at every step. By applying the results on Markov chains (see e.g. [9]), we can prove the following proposition.

Proposition 2.1. The stationary distribution reached by the chain does not depend on the initial state vector.

## Uniform Jump Probabilities

In order to simplify the general model proposed so far, we can introduce some assumptions on the probability matrices. A possible choice is to consider some actions to be independent of the current page. A first hypothesis we investigate is the case of jump probabilities which are independent of the starting page. This choice models a surfer who decides to make random jumps from a given page to any other page with uniform probability; thus the jump matrix has all entries equal to the reciprocal of the number of pages. Moreover, we suppose that also the probability of choosing a jump among the available actions does not depend on the page. Under these two assumptions, equation (3) becomes (7).
The solution of equation (7) can be written in closed form as (8). Using the Frobenius theorem on matrices, a bound on the maximum eigenvalue of the relevant transition matrix can be derived (9). Hence, in equation (8) the first term vanishes as time tends to infinity (10), and the probability distribution converges to the limit in (11); because of the eigenvalue bound shown in (9), it can be proven that this converges to (12). As stated by Proposition 2.1, the limit distribution does not depend on the choice of the initial state vector.

## Multiple State Model

A model based on a single variable may not capture the complex relationships among Web pages when trying to model their importance. Ranking schemes based on multiple variables have been proposed in [3,6], where a pair of variables is used to represent the concepts of hubness and authority of a page. In the probabilistic framework described so far, we can define a multivariable scheme by considering a pool of Web surfers, each described by a single variable. Each surfer is characterized by his/her own way of browsing the Web, modeled by using different parameter values in each state transition equation. By choosing proper values for these parameters we can choose different policies in evaluating the absolute importance of the pages. Moreover, the surfers may interact by accepting suggestions from each other.

In order to model the activity of the different surfers, we use a set of state variables which represent the probability of each surfer visiting each page at a given time. The interaction among the surfers is modeled by a set of parameters which define the probability of one surfer accepting the suggestion of another, thus jumping from the page he/she is visiting to the one visited by the other surfer. This interaction happens before the choice of the actions described previously.
If we hypothesize that the interaction does not depend on the page the surfer is currently visiting, the degree of interaction between two surfers is modeled by a single value which represents the probability of the first surfer jumping to the page visited by the second. For each surfer, these values must satisfy the probability normalization constraint of summing to one. As an example, suppose that there are two surfers, the "novice" and the "expert". The novice blindly trusts the suggestions of the expert, as he/she believes the expert is good at discovering authoritative pages, whereas the novice does not trust at all his/her own capabilities. The complete dependence of the novice on the expert is modeled by letting the novice choose the page visited by the expert with probability equal to 1.

Before taking any action, each surfer repositions himself/herself with a probability given by the suggestions of the other surfers, computed as in (13). Thus, when computing the new probability distribution due to the action taken by a surfer, we consider this repositioned distribution when applying equation (2); the resulting transition function is defined in (14).

When considering several surfers, the score vectors of the surfers can be collected as the columns of a matrix, and the interaction values can be collected into an interaction matrix. Each surfer is described by his/her own forward, backward, jump and action matrices, and hence by his/her own transition matrix. Using these definitions, the set of interacting surfers can be described by rewriting equation (14) as a set of matrix equations (15). When the surfers are independent of each other, the model reduces to the single-surfer model described by equation (6).
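The single-surfer dynamics of equations (5)-(6), specialized to the uniform-jump assumption, can be sketched as follows; the toy graph, the jump probability of 0.15, and the sink handling are assumptions for the example, not values from the paper:

```java
public class RandomSurfer {
    // Iterate the distribution update: with probability jumpProb the surfer jumps
    // uniformly at random, otherwise he/she follows one of the outgoing links.
    static double[] stationary(boolean[][] adj, double jumpProb, int steps) {
        int n = adj.length;
        double[] x = new double[n];
        java.util.Arrays.fill(x, 1.0 / n); // uniform initial distribution
        for (int t = 0; t < steps; t++) {
            double[] next = new double[n];
            for (int p = 0; p < n; p++) {
                int outDeg = 0;
                for (int q = 0; q < n; q++) if (adj[p][q]) outDeg++;
                for (int q = 0; q < n; q++) {
                    next[q] += x[p] * jumpProb / n; // uniform random jump
                    if (outDeg > 0 && adj[p][q]) {
                        next[q] += x[p] * (1 - jumpProb) / outDeg; // follow a link
                    } else if (outDeg == 0) {
                        next[q] += x[p] * (1 - jumpProb) / n; // sink page: jump anywhere
                    }
                }
            }
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        boolean[][] adj = {{false, true, true}, {false, false, true}, {true, false, false}};
        System.out.println(java.util.Arrays.toString(stationary(adj, 0.15, 100)));
    }
}
```

Because the update redistributes all of the probability mass at every step, the entries keep summing to one, which is what makes the stationary distribution interpretable as a page score.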
# Horizontal WPSS

Horizontal WPSSs do not consider any information on the page contents and produce the rank vector using just the topological characteristics of the Web graph. In this section we show how these scoring systems can be described in the proposed probabilistic framework. In particular, we derive the two most popular page scoring systems, PageRank and HITS, as special cases.

## PageRank

The Google search engine employs a ranking scheme based on a random walk model defined by a single state variable. Only two actions are considered: the surfer jumps to a new random page with probability or he/she follows one link from the current page with probability . The Google ranking scheme, called PageRank, can be described in the general probabilistic framework of equation (2) by choosing its parameters as follows. First, the probabilities of following a back-link and of remaining in any page are null for all pages . Then, as stated above, the probability of performing a random jump is for any page , whereas the probability of following a hyperlink contained in the page is also a constant, i.e. . Given that a jump is taken, its target is selected using a uniform probability distribution over all the Web pages, i.e. . Finally, the probability of following the hyperlink from to does not depend on the page , i.e. . In order to meet the normalization constraint, , where is the number of links exiting from page (the page hubness). This assumption makes the surfer "random", since it defines a uniform probability distribution over all the outgoing links.

This last hypothesis cannot be met by pages which do not contain any links to other pages. A page with no out-links is called a sink page, since it would behave just like a score sink in the PageRank propagation scheme. In order to keep the probabilistic interpretation of PageRank, all sink nodes must be removed; the page rank of sinks is then computed from the page ranks of their parents.
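The random-surfer scheme just described can be sketched as a simple power iteration. The damping value d = 0.85 and the toy graph are illustrative assumptions; instead of removing sink pages, this sketch redistributes their mass uniformly at each iteration, which is one common way of preserving the probabilistic interpretation.

```python
def pagerank(out_links, d=0.85, iters=100):
    """out_links[q] lists the pages that page q links to."""
    N = len(out_links)
    x = [1.0 / N] * N
    for _ in range(iters):
        # probability mass currently sitting on sink pages
        sink = sum(x[q] for q in range(N) if not out_links[q])
        # uniform jump term plus uniformly redistributed sink mass
        new = [(1 - d) / N + d * sink / N] * N
        for q, links in enumerate(out_links):
            for p in links:
                # follow one of the h_q out-links uniformly
                new[p] += d * x[q] / len(links)
        x = new
    return x

# Toy graph: pages 0-2 form a cycle, and every page also cites page 3,
# which is a sink.
scores = pagerank([[1, 3], [2, 3], [0, 3], []])
print(round(sum(scores), 6))   # the scores remain a distribution: 1.0
```

Because the jump and sink terms each redistribute exactly the mass they take away, the scores stay a probability distribution and no renormalization is needed.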
Under all these assumptions, equation (2) can be rewritten as (16) Since the probabilistic interpretation is valid, it holds that and equation (16) becomes (17) Since for each page , equation (12) guarantees that Google's PageRank converges to a distribution of page scores that does not depend on the initial distribution.

In order to apply the Google PageRank scheme without removing the sink nodes, we can introduce the following modification to the original equations. Since no links can be followed from a sink node , must be equal to and equal to . Thus, when there are sinks, is defined as (18) In this case the contribution of the jump probabilities does not sum to a constant term as happens in equation (17); instead, the value must be computed at the beginning of each iteration. This is the computational scheme we used in our experiments.

## The HITS Ranking System

The HITS algorithm was proposed to model authoritative documents relying only on the information hidden in the connections among them due to co-citations or Web hyperlinks [3]. In this formulation the Web pages are divided into two classes: pages which are information sources (authorities) and pages which link to information sources (hubs). The HITS algorithm assigns two numbers to each page , the page authority and the page hubness , in order to model the importance of the page. These values are computed by applying iteratively the following equations (19) where indicates the authority of page and its hubness. If is the vector of the authorities at step and is the hubness vector at step , the previous equation can be rewritten in matrix form as (20) where is the adjacency matrix of the Web graph. It is easy to show that as tends to infinity, the direction of the authority vector tends to be parallel to the main eigenvector of the matrix, whereas the hubness vector tends to be parallel to the main eigenvector of the matrix.
The HITS ranking scheme can be represented in the general Web surfer framework, even if some of the assumptions violate the probabilistic interpretation. Since HITS uses two state variables, the hubness and the authority of a page, the corresponding random walk model is a multiple state scheme based on the activity of two surfers. Surfer 1 is associated to the hubness of pages, whereas surfer 2 is associated to the authority of pages. For both surfers the probabilities of remaining in the same page and of jumping to a random page are null. Surfer 1 never follows a link, i.e. , whereas he/she always follows a back-link, i.e. . Because of this, the HITS computation violates the probability normalization constraints, since . Surfer 2 has the opposite behavior with respect to surfer 1: he/she always follows a link, i.e. , and never follows a back-link, i.e. . In this case the normalization constraint is violated for the values of , because .

Under these assumptions, , being the identity matrix, whereas , , , , , are all equal to the null matrix. The interaction between the surfers is described by the matrix: (21) The interpretation of the interactions represented by this matrix is that surfer considers surfer an expert in discovering authorities and always moves to the position suggested by that surfer before acting. On the other hand, surfer considers surfer an expert in finding hubs and then always moves to the position suggested by that surfer before choosing the next action. In this case equation (15) is (22) Using equation (21) and the HITS assumption , we obtain (23) which, redefining and , is equivalent to the HITS computation of equation (20).

The HITS model violates the probabilistic interpretation, and this makes the computation unstable, since the matrix has a principal eigenvalue much larger than . Hence, unlike Google's PageRank, the HITS algorithm needs the scores to be normalized at the end of each iteration.
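The HITS iteration of equations (19)-(20), with the per-iteration normalization that this instability makes necessary, can be sketched as follows (the toy graph is an illustrative assumption):

```python
def hits(out_links, iters=50):
    """out_links[q] lists the pages that page q links to."""
    N = len(out_links)
    auth = [1.0] * N
    hub = [1.0] * N
    for _ in range(iters):
        # authority of p: total hubness of the pages linking to p
        auth = [sum(hub[q] for q in range(N) if p in out_links[q])
                for p in range(N)]
        # hubness of q: total authority of the pages q links to
        hub = [sum(auth[p] for p in out_links[q]) for q in range(N)]
        # normalize, since the raw scores would otherwise diverge
        na, nh = sum(auth) or 1.0, sum(hub) or 1.0
        auth = [a / na for a in auth]
        hub = [h / nh for h in hub]
    return auth, hub

# Pages 0 and 1 both cite page 2: page 2 becomes the authority,
# pages 0 and 1 the hubs.
auth, hub = hits([[2], [2], []])
print(auth.index(max(auth)))   # 2
```

Here the normalization divides by the sum of the scores; normalizing by the Euclidean norm, as in the original formulation, changes only the scale of the result, not the ranking.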
Finally, the HITS scheme suffers from other drawbacks. In particular, large, highly connected communities of Web pages tend to attract the principal eigenvector of , thus pushing to zero the relevance of all other pages. As a result, the page scores decrease rapidly to zero for pages outside those communities. Recently, some heuristics have been proposed to mitigate this problem, even if such behavior cannot in general be avoided because of the properties of the dynamical system associated to the HITS algorithm [10].

## The PageRank-HITS model

PageRank is stable, it has a well defined behavior because of its probabilistic interpretation, and it can be applied to large page collections without canceling the influence of the smallest Web communities. On the other hand, PageRank is sometimes too simple to take into account the complex relationships of Web page citations. HITS is not stable, only the largest Web community influences the ranking, and this prevents the application of HITS to large page collections. On the other hand, the hub and authority model can capture the relationships among Web pages better than PageRank. In this section we show that the proposed probabilistic framework allows us to combine the advantages of both approaches.

We employ two surfers, each one implementing a bidirectional PageRank surfer. We assume that surfer either follows a back-link with probability or jumps to a random page with probability , whereas surfer either follows a forward link with probability or jumps to a random page with probability . As in HITS, the interaction between the surfers is described by the matrix In this case, equation (15) becomes (24) Further, we assume that the parameters and do not depend on the page . Hence, it holds that , , where is the diagonal matrix with element equal to and is the diagonal matrix with element equal to . Then: (25) This page rank is stable, the scores sum up to , and no normalization is required at the end of each iteration.
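A minimal sketch of this surfer pair, under the stated behavior: before acting, each surfer adopts the other's position; surfer 1 then follows a back-link with probability d (chosen uniformly among the parents of the current page) and jumps uniformly otherwise, while surfer 2 does the same with forward links. Uniform link probabilities, the damping value d = 0.85, the toy graph, and the uniform treatment of pages without a usable link (as sinks) are illustrative assumptions, not details from the text.

```python
def pagerank_hits(out_links, d=0.85, iters=100):
    N = len(out_links)
    parents = [[q for q in range(N) if p in out_links[q]]
               for p in range(N)]
    hub = [1.0 / N] * N      # surfer 1 (hubness)
    auth = [1.0 / N] * N     # surfer 2 (authority)
    for _ in range(iters):
        # interaction: surfer 1 starts from surfer 2's distribution
        # and vice versa; pages with no usable link act as sinks
        sink1 = sum(auth[q] for q in range(N) if not parents[q])
        sink2 = sum(hub[q] for q in range(N) if not out_links[q])
        new_hub = [(1 - d) / N + d * sink1 / N] * N
        new_auth = [(1 - d) / N + d * sink2 / N] * N
        for q in range(N):
            for p in parents[q]:       # surfer 1 follows back-links
                new_hub[p] += d * auth[q] / len(parents[q])
            for p in out_links[q]:     # surfer 2 follows forward links
                new_auth[p] += d * hub[q] / len(out_links[q])
        hub, auth = new_hub, new_auth
    return hub, auth

hub, auth = pagerank_hits([[2], [2], []])   # pages 0 and 1 cite page 2
print(round(sum(hub), 6), round(sum(auth), 6))   # 1.0 1.0
```

As claimed in the text, both score vectors remain probability distributions throughout the iteration, so no renormalization step is needed.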
Moreover, the two state variables can capture and process more complex relationships among pages. In particular, setting yields a normalized version of HITS, which has been proposed in [4].

# Vertical WPSS

Horizontal WPSSs exploit the information provided by the Web graph topology, and each model evidences different properties of the graph. For example, the intuitive idea that a highly linked page is an absolute authority can be captured by the PageRank or HITS schemes. However, when applying scoring techniques to focused search, the page contents should be taken into account besides the graph topology. Vertical WPSSs aim at computing a relative ranking of pages when focusing on a specific topic. A vertical WPSS relies on the representation of the page content with a set of features (e.g. a set of keywords) and on a classifier which is used to assess the degree of relevance of the page with respect to the topic of interest.

The general probabilistic framework of WPSSs proposed in this paper can be used to define a vertical approach to page scoring. Several models can be derived which combine the ideas underlying topology-based scoring with the topic relevance measure provided by text classifiers. In particular, a text classifier can be used to compute proper values for the probabilities needed by the random walk model. As shown by the experimental results, vertical WPSSs can produce more accurate results in ranking topic-specific pages.

## Focused PageRank

In the PageRank framework, when choosing to follow a link in a page, each link has the same probability of being followed. Instead of the random surfer model, in the focused domain we can consider the more realistic case of a surfer who follows the links according to the suggestions provided by a page classifier.
If the surfer is located at page and the pages linked by page have scores assigned by a topic classifier, the probability of the surfer following the -th link is defined as (26) Thus the forward matrix depends on the classifier outputs on the target pages. Hence, the modified equation to compute the combined page scores using a PageRank-like scheme is (27) where is computed as in equation (26).

This scoring system removes the assumption of complete randomness of the underlying Web surfer. In this case, the surfer is aware of what he/she is searching for, and will trust the classifier suggestions, following each link with a probability proportional to the score of the page the link leads to. This allows us to derive a topic-specific page rank. For example, the "Microsoft" home page is highly authoritative according to the topic-generic PageRank, whereas it is not highly authoritative when searching for "Perl" language tutorials: even if that page gets many citations, most of these citations will be scarcely related to the target topic and thus not significantly weighted in the computation.

## Double Focused PageRank

The focused PageRank model described previously uses a topic-specific distribution for selecting the link to follow, but the decision on the action to take does not depend on the contents of the current page. A more realistic model should take into account the fact that the decision about the action usually depends on the contents of the current page. For example, let us suppose that two surfers are searching for a "Perl language tutorial", and that the first one is located at the page "http://www.perl.com", while the second is located at the page "http://www.cnn.com". Clearly it is more likely that the first surfer will decide to follow a link from the current page, while the second one will prefer to jump to another page related to the topic he is interested in.
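The link-selection rule of equation (26) can be sketched as follows: from page q the surfer follows the link to page p with probability proportional to the classifier score of p. The graph, the classifier scores, and the damping value d = 0.85 are hypothetical; pages whose out-links all have zero score are treated as sinks whose mass is spread uniformly.

```python
def focused_pagerank(out_links, score, d=0.85, iters=100):
    """score[p]: relevance of page p according to a text classifier."""
    N = len(out_links)
    x = [1.0 / N] * N
    for _ in range(iters):
        # pages whose out-links carry no classifier score act as sinks
        sink = sum(x[q] for q in range(N)
                   if not sum(score[p] for p in out_links[q]))
        new = [(1 - d) / N + d * sink / N] * N
        for q in range(N):
            tot = sum(score[p] for p in out_links[q])
            if tot:
                for p in out_links[q]:
                    # equation (26): follow each link with probability
                    # proportional to the score of the target page
                    new[p] += d * x[q] * score[p] / tot
        x = new
    return x

# Pages 1 and 2 receive identical citations, but page 2 is on-topic.
out_links = [[1, 2], [0], [0]]
score = [0.5, 0.1, 0.9]          # hypothetical classifier outputs
x = focused_pagerank(out_links, score)
print(x[2] > x[1])               # True: the on-topic page ranks higher
```

The action probabilities themselves can additionally be modulated by the relevance of the current page, which is what the double focused model formalizes.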
The behavior of the two surfers in the example above can be modeled by adapting the action probabilities using the contents of the current page, thus modeling a focused choice of the surfer's actions. In particular, the probability of following a hyperlink can be chosen proportional to the degree of relevance of the current page with respect to the target topic, i.e. (28) where is computed by a text classifier. On the other hand, the probability of jumping away from a page decreases proportionally to , i.e. (29) Finally, we assume that the probability of landing on page after a jump is proportional to its relevance , i.e. (30) Such modifications can be integrated into the focused PageRank proposed in section 4.1 to model a focused navigation more accurately. Equation (12) guarantees that the resulting scoring system is stable and that it converges to a score distribution independent of the initial distribution.

# Experimental Results

Figure 1: (a) topic "Linux"; (b) topic "cooking recipes".

Using the focused crawler described in [11], we performed two focused crawling sessions, downloading 150,000 pages in each crawl. During the first session the crawler spidered the Web searching for pages on the topic "Linux". During the second session the crawler gathered pages on "cooking recipes". Each downloaded page was classified to assess its relevance with respect to the specific topic. Considering the hyperlinks contained in each page, two Web subgraphs were created to evaluate the different WPSSs proposed in the previous sections. For the second crawling session, the connectivity map was pruned by removing all links from a page to pages in the same site, in order to reduce the "nepotism" of Web pages. The topological structure of the graphs and the scores of the text classifiers were used to evaluate the following WPSSs:

• the "In-link" surfer.
Such a surfer is located at page with probability , where is the relevance of the page computed by the text classifier;
• the PageRank surfer;
• the Focused PageRank scheme described in section 4.1;
• the Double Focused PageRank scheme described in section 4.2;
• the HITS surfer pool;
• the PageRank-HITS surfer pool.

## The Distribution of Score Among Pages

We analyzed the distribution of page scores produced by the different algorithms proposed in this paper. For all the PageRank surfers (focused or not) we set the parameter equal to . For each ranking function, we normalized the rank using its maximum value. We sorted the pages according to their ranks and then plotted the distribution of the normalized rank values. Figure 1 reports the plots for the two categories, "Linux" and "cooking recipes". In both cases the HITS surfer assigns a score significantly greater than zero only to the small set of pages associated to the main eigenvector of the connectivity matrix of the analyzed portion of the Web. On the other hand, the PageRank surfer is more stable and its score distribution curve is smooth. This is the effect of the homogeneous term in equation (17) and of the stability of the computation provided by the probabilistic interpretation. The Focused PageRank surfer and the Double Focused one still provide a smooth distribution. However, the focused page ranks are more concentrated around the origin. This reflects the fact that the vertical WPSSs are able to discriminate the authorities on the specific topic, whereas the classical PageRank scheme considers pages authoritative regardless of their topic.

Figure 2: We report the 8 top-scoring pages from a portion of the Web focused on the topic "Linux", using either the PageRank surfer or a HITS surfer pool. For the HITS surfer pool we report the pages with the top authority value.
Figure 3: We report the 8 top-scoring pages from a portion of the Web focused on the topic "Linux", using the proposed focused versions of the PageRank surfer.

## Some Qualitative Results

Figures 2 and 3 show the 8 top-scoring pages for four different WPSSs on the database of pages on the topic "Linux", while figures 4 and 5 report the same results for the topic "cooking recipes". As shown in figure 2, all pages distilled by the HITS algorithm come from the same site. In fact, a site with many internal connections may attract the principal eigenvector of the connectivity matrix associated to the considered portion of the Web graph, conquering all the top positions in the page rank and hiding all other resources. Even the elimination of intra-site links does not improve the performance of the HITS algorithm. For example, as shown in the HITS section of figure 4, the Web site "www.allrecipe.com", which is subdivided into a collection of Web sites ("www.seafoodrecipes.com", "www.cookierecipes.com", etc.) strongly connected with each other, occupies all the top positions in the ranking list, hiding all other resources. In [10] the content of pages is considered in order to propagate relevance scores only over the subset of links pointing to pages on a specific topic. In the "cooking recipes" case, however, performance cannot be improved even using page content, since all the considered sites are effectively on the topic "cooking recipes", and there is thus a semantic reason why such sites are connected. We claim that such behavior is intrinsic to the HITS model.

Figure 4: We report the 8 top-scoring pages from a portion of the Web focused on the topic "cooking recipes", using either the PageRank surfer or the HITS surfer pool. For the HITS surfer pool we report the pages with the top authority value.

Figure 5: We report the 8 top-scoring pages from a portion of the Web focused on the topic "cooking recipes", using the proposed focused PageRank surfers.
The PageRank algorithm is not topic dependent. Since some pages are referred to by many Web pages independently of their content, such pages always end up authoritative, regardless of the topic of interest. For example, in figures 2 and 4, pages like "www.yahoo.com", "www.google.com", etc., appear in the top list even if they are not closely related to the specific topic. Figures 3 and 5 show that the "Focused PageRank" WPSS described in section 4.1 can filter many off-topic authoritative pages out of the top list. Finally, the "Double Focused PageRank" WPSS is even more effective in filtering out the off-topic authorities, while pushing the authorities on the relevant topic to the top positions.

## Evaluating the WPSSs

In order to evaluate the proposed WPSSs, we employed a methodology similar to the one presented in [12]. For each WPSS we selected the pages with the highest scores, creating a collection of pages to be evaluated by a pool of humans. Experts on the specific topics independently labelled each page in the collection as "authoritative for the topic" or "not authoritative for the topic". This reliable set of judgments was finally used to measure the percentage of positive (or negative) judgments on the best pages returned by each ranking function. In particular, was varied between and . The topics selected for these experiments were "Linux" and "Golf". As in the previous experiments, 150,000 pages were collected by focused crawling of the Web. Figure 6 reports the percentage of positive judgments on the best pages returned by the five WPSSs for the topics "Linux" and "Golf", respectively. In both cases the HITS algorithm is clearly the worst. Since its performance decreases significantly when applied to the entire collection of documents, it can only be used as a query-dependent ranking scheme [1].
Figure 6: (a) topic "Linux"; (b) topic "Golf".

As previously reported in [12], in spite of its simplicity the In-link algorithm performs similarly to PageRank. In our experiments PageRank outperformed the In-link algorithm on the category "Golf", whereas it was outperformed on the category "Linux"; in both cases, however, the gap is small. The two focused ranking functions clearly outperformed all the non-focused ones, demonstrating that when searching for focused authorities, higher accuracy is obtained by employing a stable computation scheme and by taking into account the page content.

# Conclusions

In this paper, we have proposed a general framework for the definition of Web page scoring systems for horizontal and vertical search engines. The proposed scheme incorporates many relevant scoring models proposed in the literature. Moreover, it contains novel features which look very appropriate especially for vortals. In particular, the topological structure of the Web as well as the content of the Web pages jointly play a crucial role in the construction of the scoring. The experimental results support the effectiveness of the proposal, which emerges clearly especially for vertical search. Finally, it is worth mentioning that the model described in this paper is very well suited to the construction of learning-based WPSSs, which can, in principle, incorporate user information gathered while surfing the Web.

Acknowledgments

We would like to thank Ottavio Calzone who performed some of the experimental evaluations of the scoring systems.

## Bibliography

1. M. Henzinger, "Hyperlink analysis for the Web," IEEE Internet Computing, vol. 1, pp. 45-50, January/February 2001.
2. L. Page, S. Brin, R. Motwani, and T. Winograd, "The PageRank citation ranking: Bringing order to the web," tech. rep., Computer Science Department, Stanford University, 1998.
3. J. Kleinberg, "Authoritative sources in a hyperlinked environment," Report RJ 10076, IBM, May 1997.
4. K. Bharat and M.
Henzinger, "Improved algorithms for topic distillation in a hyperlinked environment," in Proceedings of the 21st ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 104-111, 1998.
5. R. Lempel and S. Moran, "The stochastic approach for link-structure analysis (SALSA) and the TKC effect," in Proceedings of the 9th World Wide Web Conference, 2000.
6. R. Lempel and S. Moran, "SALSA: The stochastic approach for link-structure analysis," ACM Transactions on Information Systems, vol. 19, pp. 131-160, April 2001.
7. D. Cohn and H. Chang, "Learning to probabilistically identify authoritative documents," in Proc. 17th International Conf. on Machine Learning, pp. 167-174, Morgan Kaufmann, San Francisco, CA, 2000.
8. D. Cohn and T. Hofmann, "The missing link: a probabilistic model of document content and hypertext connectivity," in Neural Information Processing Systems, vol. 13, 2001.
9. E. Seneta, Non-negative Matrices and Markov Chains. Springer-Verlag, 1981.
10. M. Joshi, V. Tawde, and S. Chakrabarti, "Enhanced topic distillation using text, markup tags, and hyperlinks," in International ACM Conference on Research and Development in Information Retrieval (SIGIR), August 2001.
11. M. Diligenti, F. Coetzee, S. Lawrence, L. Giles, and M. Gori, "Focused crawling using context graphs," in Proceedings of the International Conference on Very Large Data Bases, Cairo, Egypt, pp. 527-534, September 2000.
12. B. Amento, L. Terveen, and W. Hill, "Does authority mean quality? Predicting expert quality ratings of Web documents," in Proceedings of the 23rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 296-303, 2000.
https://chemistry.meta.stackexchange.com/questions/4358/please-dont-post-links-to-the-mobile-versions-of-the-websites-unless-there-is
Please don't post links to the mobile versions of websites (unless there is a really good reason to do so)

Often I stumble upon so-called mobile links both in the comments and in the answers, i.e. URLs that lead to a mobile version of a website, often of the form https://m.<website_name>.<domain>. 95% of them lead to Wikipedia (https://en.m.wikipedia.org), which to me is unusable in that form on the desktop. Some tools are hidden, fonts and images are bloated, and large margins and tables not fitting the page width don't make me happier either. Every time, I need to go to the address bar and get rid of the m. part in order to use the web page productively. On the other hand, virtually any mobile browser will automatically switch to the mobile version (if there is one) when the "classic" URL is clicked. Also, even on smartphones the mobile version is not always the best viewing option, as most modern smartphones have screen resolutions higher than those of 5-year-old desktops (that's also the case for the gadgets I use in my workflow: a Moto G has a 1920×1080 screen, whereas my old ThinkPad's screen resolution is 1366×768).

• @ChrisH I'm sorry, but I'm still not convinced. Posting mobile links benefits only one person – the one who posts it "as is" by saving a few seconds by not editing – and makes lives of everybody else harder. The m. part consists of only two symbols and it's practically always at the beginning of the address, so that it's perfectly visible even on old mid-2000s Symbian smartphones and can be edited out with ease. The question is not really about excuses for the "normies" doing what they find easiest, but about doing the right thing with long-run benefits. – andselisk Jan 16 '19 at 10:03
https://math.stackexchange.com/questions/371412/differential-calculus-reviewing-and-drawing-graph
# Differential calculus - Reviewing and drawing graph

I have missed math class for a few weeks and I'm quite behind with the new material, so I'm stuck with a problem here. The main problem is that I'm going to have a hard time explaining the problem in English. I have this calculus problem: $y=\frac{x^2-5x+2}{2x-4}$ If I try to translate the question from my language to English, it says: "Analyze the function and submit the graph". What I know is that this problem consists of 8 steps and in the end I must draw a graph for it. I can't start solving this question myself since no one is ready to explain anything to me, and I see my math teacher once a week and she expects a solved problem next week. I'm stuck here.

Not sure if this should be a comment or an answer. The 8 steps when studying a function $f$ are generally:

A - Domain of $f$, continuity, differentiability.
B - Symmetry (odd, even, periodic).
C - y-intercept and x-intercepts, if any.
D - Existence of asymptotes (horizontal, vertical, slant).
E - Local and absolute extrema.
F - Concavity.
G - Graph of the function.

It is quite long to write out the details, and it is worth trying to do it on your own. Tell us if you have difficulties with some part.

I think you are supposed to analyse the given function and plot it. $y=\frac{x^2-5x+2}{2x-4}=\frac{x^2-5x+2}{2(x-2)}=\frac{(x-3)(-4)}{2(x-2)}=\frac{-2(x-3)}{x-2}$, $x \neq 2$ (as the comment below notes, this factorization is incorrect; the roots come from the quadratic formula). Consider different values of x for plotting the graph: $x>3;\ 3>x>2;\ x=0;\ x<2$

• Yeah, analyzing must be the correct word instead of reviewing. My classmates mentioned 8 steps. What is the meaning of that? – Aborted Apr 24 '13 at 15:14
• @Dugi, my soln is wrong, the roots need to be found with the quadratic formula $x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$ – Vikram Apr 25 '13 at 5:52
• Thanks a lot for your help, Vikram. Could you update your answer with a completed solution? As I said, I don't know any way to solve this and seeing it all done once will help me a lot for future problems.
– Aborted Apr 26 '13 at 14:08

I want to try to answer it, just in part. $y=\frac{x^2-5x+2}{2x-4}$. If $y=0$, then the roots of $0=x^2-5x+2$ are $x_1=\frac{5-\sqrt{17}}{2}$ and $x_2=\frac{5+\sqrt{17}}{2}$. The domain is $x\in \mathbb{R}, x\neq2$. This is the graph of $y$; I used Maple 13 for plotting. $y$ is discontinuous at $x=2$, because $\lim_{x\to 2} \frac{x^2-5x+2}{2x-4}$ has the form $\frac{-4}{0}$, so the one-sided limits are infinite.
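To complement the answers above, the asymptote step (step D in the list of 8) can be worked out by polynomial long division:

```latex
\frac{x^{2}-5x+2}{2x-4}
  \;=\; \frac{x-3}{2} \;-\; \frac{2}{x-2},
\qquad x \neq 2 .
```

So the graph has a vertical asymptote at $x=2$ (the remainder term $-\frac{2}{x-2}$ gives $+\infty$ from the left and $-\infty$ from the right) and a slant asymptote $y=\frac{x-3}{2}$ as $x\to\pm\infty$.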
https://acooke.org/cute/ChileanFoo0.html
## Chilean Food (Pebre)

From: "andrew cooke" <andrew@...> Date: Sat, 29 Sep 2007 17:39:45 -0400 (CLT)

Just had a rather good lunch - a pastrami and pebre sandwich with a nice glass of dry SB (Tierra del Fuego reserve 2006). We're slowly shifting how we buy our food (I hope) - Saturday mornings we intend to visit an open-air market (as we used to do when we lived in Las Condes, although this is a different market, down Tobalaba, near the Principio de Gales metro stop). The local supermarket is sliding downhill - it's charging crazy prices for fruit and veg, while the in-store bakery has been reduced to warming up bread made elsewhere. So now we're getting fruit + veg for the week at the market, and bread at a small bakery across on Pedro de Valdivia (La Baguette - it's the only place I've found nearby that bakes its own bread - a bit further away there's another place at the back of the three-legged star towers where Carlos Antunez meets Providencia).

Anyway, I was going to give a recipe for pebre, since I think I make a pretty good pebre (it got approval at Pauli's party last month). But really, it's probably not going to be that useful because (1) I don't have any idea about quantities and (2) I start with a bag of "greens" from the market (which costs about 50p - one dollar). Ignoring those issues, here's what you do:

- buy a bag of the green stuff. This looks to be about 50% (by volume) chopped cilantro (fresh coriander). There might be some parsley in there too. The other 50% is a mix of finely chopped cebollines (spring onions) and ordinary (but mild and/or rinsed) onions.
- add an equal volume of chopped tomatoes. Just the normal round red things, or the italian ones if they're in season. They need to be chopped fairly small (aim for cubes 5mm on a side, say), which means sharpening your knife first, then slicing all three ways.
- slop in some lemon juice, olive oil, some kind of vinegar (apple or wine), salt, and some crushed garlic.
Mix How much you add depends on how juicy the tomatoes were, and how runny you want things - my pebre remains pretty dry, but it's still clearly a "goo" and not a "salad". Volume-wise - I made a batch from one bag with 6 or 7 tomatoes (not very big ones), one large lemon, three cloves of garlic. That would be much less than one onion - perhaps a half - but a lot more cilantro (coriander) than you might expect (it squashes up!). Leave to stand for a while. There's a lot - I put half in the freezer, but don't know what it will be like defrosted. Andrew
2021-09-21 22:23:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22624662518501282, "perplexity": 9642.571898827884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057274.97/warc/CC-MAIN-20210921221605-20210922011605-00052.warc.gz"}
https://www.gradesaver.com/textbooks/math/statistics-probability/elementary-statistics-12th-edition/chapter-5-discrete-probability-distributions-5-5-poisson-probability-distributions-beyond-the-basics-page-235/17
## Elementary Statistics (12th Edition) a) n=12, p=$\frac{1}{6}$; the requirement $n\geq100$ is not satisfied, therefore we cannot use the Poisson distribution as an approximation to the binomial. b) The Poisson distribution gives $\frac{2^3\cdot e^{-2}}{3!}=0.18$, whilst the binomial distribution gives ${12\choose 3}\cdot (\frac{1}{6})^3 \cdot (\frac{5}{6})^9=0.197.$ These two results differ noticeably, so the result obtained from the Poisson distribution is not acceptable here.
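The two probabilities above are easy to verify numerically. A short Python check (the values n = 12, p = 1/6, k = 3 are taken from the exercise):

```python
from math import comb, exp, factorial

n, p, k = 12, 1 / 6, 3          # 12 trials, success probability 1/6, exactly 3 successes
lam = n * p                     # Poisson mean: lambda = np = 2

poisson = lam**k * exp(-lam) / factorial(k)
binomial = comb(n, k) * p**k * (1 - p)**(n - k)

print(round(poisson, 3))        # 0.18
print(round(binomial, 3))       # 0.197
```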
2018-10-19 05:12:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369761347770691, "perplexity": 719.313209486184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512323.79/warc/CC-MAIN-20181019041222-20181019062722-00223.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-1-equations-and-inequalities-1-1-solve-linear-equations-guided-practice-for-examples-1-and-2-page-19/1
Algebra 2 (1st Edition) $x=3$ $4x+9=21$ First we move the 9 to the other side of the equation to isolate the "x" term on one side. Moving the 9 makes it negative. $4x=21-9$ $4x=12$ Then we divide both sides of the equation by 4 to get the value of a single "x": $4x=12 /:4$ $\frac{4}{4}x=\frac{12}{4}$ $x=3$
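The same steps can be mirrored in a line or two of Python as a sanity check (this is my own check, not part of the textbook's method):

```python
# undo each operation in turn: subtract 9 from both sides, then divide by 4
x = (21 - 9) / 4

# the solution must satisfy the original equation 4x + 9 = 21
assert 4 * x + 9 == 21
print(x)  # 3.0
```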
2022-09-27 10:23:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5951799154281616, "perplexity": 153.58749892878183}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00009.warc.gz"}
https://mitpress.mit.edu/books/odd-one
Paperback | $25.95 Trade | £20.95 | 240 pp. | 5.375 x 8 in | 3 figures | February 2008 | ISBN: 9780262740319
eBook | $18.95 Trade | February 2008 | ISBN: 9780262254397

## Overview

Why philosophize about comedy? What is the use of investigating the comical from philosophical and psychoanalytic perspectives? In The Odd One In, Alenka Zupančič considers how philosophy and psychoanalysis can help us understand the movement and the logic involved in the practice of comedy, and how comedy can help philosophy and psychoanalysis recognize some of the crucial mechanisms and vicissitudes of what is called humanity. Comedy by its nature is difficult to pin down with concepts and definitions, but as artistic form and social practice comedy is a mode of tarrying with a foreign object—of including the exception. Philosophy’s relationship to comedy, Zupančič writes, is not exactly a simple story (and indeed includes some elements of comedy). It could begin with the lost book of Aristotle’s Poetics, which discussed comedy and laughter (and was made famous by Umberto Eco’s The Name of the Rose). But Zupančič draws on a whole range of philosophers and exemplars of comedy, from Aristophanes, Molière, Hegel, Freud, and Lacan to George W. Bush and Borat. She distinguishes incisively between comedy and ideologically imposed, “naturalized” cheerfulness. Real, subversive comedy thrives on the short circuits that establish an immediate connection between heterogeneous orders. Zupančič examines the mechanisms and processes by which comedy lets the odd one in.
2017-02-24 19:47:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20168907940387726, "perplexity": 7118.255178760015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00333-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.mathsassignmenthelp.com/numerical-methods/
## Help with numerical methods assignment on Integration Using Matlab

The numerical methods assignment helper was tasked with integrating the equation:

$$\frac{\partial \rho}{\partial t} = M\nabla^2\mu \qquad (1)$$

### Assistance with numerical methods homework on Using the Euler scheme.

In this equation, $\rho(\overrightarrow{r},t)$ is the density field and $\mu(\overrightarrow{r},t)$ is the chemical potential, and the Laplace operator in the 2D case reads:

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \qquad (2)$$

To evaluate this numerically, the numerical methods homework helper used a second-order central-difference scheme:

$$\nabla^2 f(i,j) = \frac{f(i+1,j) - 2f(i,j) + f(i-1,j)}{h^2} + \frac{f(i,j+1) - 2f(i,j) + f(i,j-1)}{h^2} \qquad (3)$$

For $\mu(\rho)$ the following relation was used:

$$\mu = RT\left(1 + \ln\frac{\rho}{1-b\rho} + \frac{b\rho}{1-b\rho}\right) - 2a\rho - k\nabla^2\rho \qquad (4)$$

where $\nabla^2\rho$ is again evaluated with the scheme (3). For the Euler scheme, using (1), we then have:

$$\rho(i,j,t+\Delta t) = \rho(i,j,t) + \Delta t\, M \nabla^2\mu \qquad (5)$$

### Online numerical methods tutors getting the finite-differences

In the finite-difference scheme, the online numerical methods tutor used periodic boundary conditions and the following values for the constants and parameters:

$$\Delta t = 0.01,\ \Delta x = \Delta y = 1.0,\ L_x = 80,\ M = 10,\ a = \tfrac{2}{29},\ b = \tfrac{2}{21},\ R = 1,\ T_c = \tfrac{8a}{27bR},\ T = 0.7\,T_c,\ k = 0.025$$

For the level curve superimposed on a contour plot of the concentration field and on a vector plot of the gradient of the density field, for the values $\rho_{min}(t)$, $\rho(t)$, and for the Helmholtz energy

$$F = \int \left( \rho RT\left(1 + \ln\frac{\rho}{1-b\rho}\right) - a\rho^2 - \frac{k}{2}|\nabla\rho|^2 \right) dV$$

we obtained the following results (contour plots are presented only for some values of t).

Also, in this project we calculated the radial distribution function of the Fourier transform $\Omega(q_x,q_y) = \mathcal{F}(\rho(x,y))$ of the density field as:

$$f(|q|) = \frac{\Omega(q)}{\Omega_{tot}}$$

and beneath, some results at different times are presented. These results gave us the possibility to calculate the average size of drops as:

$$R^{av} = \frac{L_x \int f(q)\,dq}{\int q f(q)\,dq}$$

In the final part of the project we repeated the above calculations for several different initial densities, which gave us the possibility to compare the average drop size as a function of time for initial densities in the following ranges: $\rho = 2.6\text{-}2.8$, $\rho = 3.6\text{-}3.8$, $\rho = 4.6\text{-}4.8$.
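A minimal NumPy sketch of the scheme described above (an illustration under the stated parameters, not the original Matlab code; the function names, the random initial field, and the grid set-up are my own):

```python
import numpy as np

def laplacian(f, h=1.0):
    """Second-order central-difference Laplacian with periodic boundaries."""
    return (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
            + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1) - 4.0 * f) / h**2

def euler_step(rho, dt=0.01, M=10.0, a=2/29, b=2/21, R=1.0, T=None, k=0.025, h=1.0):
    """One explicit Euler update of d(rho)/dt = M * laplacian(mu)."""
    if T is None:
        T = 0.7 * 8 * a / (27 * b * R)   # T = 0.7 * T_c with T_c = 8a/(27bR)
    mu = (R * T * (1 + np.log(rho / (1 - b * rho)) + b * rho / (1 - b * rho))
          - 2 * a * rho - k * laplacian(rho, h))
    return rho + dt * M * laplacian(mu, h)

# small demo: an 80x80 grid with densities around the quoted initial range 2.6-2.8
rng = np.random.default_rng(0)
rho = 2.7 + 0.1 * rng.uniform(-1, 1, size=(80, 80))
rho = euler_step(rho)
```

Because the update is in divergence form and the boundaries are periodic, each step conserves the mean density, which is a useful sanity check on any implementation.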
2020-10-23 11:02:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 18, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7307358384132385, "perplexity": 559.9474712306459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881369.4/warc/CC-MAIN-20201023102435-20201023132435-00172.warc.gz"}
https://www.webtourguide.com/paper/f053002427b8d0812f0e51214311dd85
Counterfactual Regret Minimization (CFR) has found success in settings like poker which have both terminal states and perfect recall. We seek to understand how to relax these requirements. As a first step, we introduce a simple algorithm, local no-regret learning (LONR), which uses a Q-learning-like update rule to allow learning without terminal states or perfect recall. We prove its convergence for the basic case of MDPs (and limited extensions of them) and present empirical results showing that it achieves last iterate convergence in a number of settings, most notably NoSDE games, a class of Markov games specifically designed to be challenging to learn where no prior algorithm is known to achieve convergence to a stationary equilibrium even on average.

### Related content

In recent years, deep off-policy actor-critic algorithms have become a dominant approach to reinforcement learning for continuous control. One of the primary drivers of this improved performance is the use of pessimistic value updates to address function approximation errors, which previously led to disappointing performance. However, a direct consequence of pessimism is reduced exploration, running counter to theoretical support for the efficacy of optimism in the face of uncertainty. So which approach is best?
In this work, we show that the most effective degree of optimism can vary both across tasks and over the course of learning. Inspired by this insight, we introduce a novel deep actor-critic framework, Tactical Optimistic and Pessimistic (TOP) estimation, which switches between optimistic and pessimistic value learning online. This is achieved by formulating the selection as a multi-arm bandit problem. We show in a series of continuous control tasks that TOP outperforms existing methods which rely on a fixed degree of optimism, setting a new state of the art in challenging pixel-based environments. Since our changes are simple to implement, we believe these insights can easily be incorporated into a multitude of off-policy algorithms.

We study constrained reinforcement learning (CRL) from a novel perspective by setting constraints directly on state density functions, rather than the value functions considered by previous works. State density has a clear physical and mathematical interpretation, and is able to express a wide variety of constraints such as resource limits and safety requirements. Density constraints can also avoid the time-consuming process of designing and tuning cost functions required by value function-based constraints to encode system specifications. We leverage the duality between density functions and Q functions to develop an effective algorithm to solve the density constrained RL problem optimally, and the constraints are guaranteed to be satisfied. We prove that the proposed algorithm converges to a near-optimal solution with a bounded error even when the policy update is imperfect. We use a set of comprehensive experiments to demonstrate the advantages of our approach over state-of-the-art CRL methods, with a wide range of density constrained tasks as well as standard CRL benchmarks such as Safety-Gym.

The Q-learning algorithm is known to be affected by the maximization bias, i.e.
the systematic overestimation of action values, an important issue that has recently received renewed attention. Double Q-learning has been proposed as an efficient algorithm to mitigate this bias. However, this comes at the price of an underestimation of action values, in addition to increased memory requirements and a slower convergence. In this paper, we introduce a new way to address the maximization bias in the form of a "self-correcting algorithm" for approximating the maximum of an expected value. Our method balances the overestimation of the single estimator used in conventional Q-learning and the underestimation of the double estimator used in Double Q-learning. Applying this strategy to Q-learning results in Self-correcting Q-learning. We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate. Empirically, it performs better than Double Q-learning in domains with rewards of high variance, and it even attains faster convergence than Q-learning in domains with rewards of zero or low variance. These advantages transfer to a Deep Q Network implementation that we call Self-correcting DQN and which outperforms regular DQN and Double DQN on several tasks in the Atari 2600 domain. The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. 
Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties than the normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was previously unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in $n$-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play is close to an EFCE. Then, we give an efficient no-trigger-regret algorithm. Our algorithm decomposes trigger regret into local subproblems at each decision point for the player, and constructs a global strategy of the player from the local solutions at each decision point.

This paper proposes a model-free Reinforcement Learning (RL) algorithm to synthesise policies for an unknown Markov Decision Process (MDP), such that a linear time property is satisfied. We convert the given property into a Limit Deterministic Buchi Automaton (LDBA), then construct a synchronized MDP between the automaton and the original MDP. According to the resulting LDBA, a reward function is then defined over the state-action pairs of the product MDP. With this reward function, our algorithm synthesises a policy whose traces satisfy the linear time property: as such, the policy synthesis procedure is "constrained" by the given specification.
Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property, at any given state of the MDP - a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches.

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
Deep hierarchical reinforcement learning has gained a lot of attention in recent years due to its ability to produce state-of-the-art results in challenging environments where non-hierarchical frameworks fail to learn useful policies. However, as problem domains become more complex, deep hierarchical reinforcement learning can become inefficient, leading to longer convergence times and poor performance. We introduce the Deep Nested Agent framework, which is a variant of deep hierarchical reinforcement learning where information from the main agent is propagated to the low level $nested$ agent by incorporating this information into the nested agent's state. We demonstrate the effectiveness and performance of the Deep Nested Agent framework by applying it to three scenarios in Minecraft with comparisons to a deep non-hierarchical single agent framework, as well as a deep hierarchical framework.

Policy gradient methods are often applied to reinforcement learning in continuous multiagent games. These methods perform local search in the joint-action space, and as we show, they are susceptible to a game-theoretic pathology known as relative overgeneralization. To resolve this issue, we propose Multiagent Soft Q-learning, which can be seen as the analogue of applying Q-learning to continuous controls. We compare our method to MADDPG, a state-of-the-art approach, and show that our method achieves better coordination in multiagent cooperative tasks, converging to better local optima in the joint action space.

Policy gradient methods are widely used in reinforcement learning algorithms to search for better policies in the parameterized policy space. They do gradient search in the policy space and are known to converge very slowly. Nesterov developed an accelerated gradient search algorithm for convex optimization problems. This has been recently extended for non-convex and also stochastic optimization.
We use Nesterov's acceleration for policy gradient search in the well-known actor-critic algorithm and show the convergence using the ODE method. We tested this algorithm on a scheduling problem. Here an incoming job is scheduled into one of the four queues based on the queue lengths. We see from experimental results that the algorithm using Nesterov's acceleration has significantly better performance compared to the algorithm which does not use acceleration. To the best of our knowledge this is the first time Nesterov's acceleration has been used with the actor-critic algorithm.

In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller "aggregate" Markov decision problem, whose states relate to the features. The optimal cost function of the aggregate problem, a nonlinear function of the features, serves as an architecture for approximation in value space of the optimal cost function or the cost functions of policies of the original problem. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with reinforcement learning based on deep neural networks, which is used to obtain the needed features. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation, than by the linear function of the features provided by deep reinforcement learning, thereby potentially leading to more effective policy improvement.
2022-05-18 19:40:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41359296441078186, "perplexity": 896.151202364111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522309.14/warc/CC-MAIN-20220518183254-20220518213254-00146.warc.gz"}
https://itectec.com/database/thesql-how-to-extend-the-tablespace-when-using-innodb_file_per_table/
# Mysql – How to extend the tablespace when using innodb_file_per_table

innodb MySQL

With innodb_file_per_table off, you can create multiple tablespaces on multiple devices if necessary to manage growth, balance I/O, etc. With the option on, how do you control growth of the files? Do they autoextend? And can you set a maximum and then extend the tablespace for a given table onto another filesystem if necessary?

This is interesting because last month (July 25, 2012, 15 days ago) someone asked a similar question about extending the system tablespace file ibdata1: Database Design - Creating Multiple databases to avoid the headache of limit on table size. Please read that link and see if you would like to extend ibdata1 the way I detailed there.

To be totally honest with you, storing ibdata across separate volumes from the datadir is a bad idea, and spreading .ibd files across volumes is even worse. You will improve things if you go with innodb_file_per_table. Why?

• You only need one ibdata1 file
• No data would reside in ibdata1
• All data and index pages would not spread across multiple tablespaces

Controlling individual tablespace growth is rather simple. You must schedule proper maintenance windows for this: for example, to shrink an InnoDB table named mydb.mytable, simply run

ALTER TABLE mydb.mytable ENGINE=InnoDB;

With innodb_file_per_table on, the table shrinks. With innodb_file_per_table off, ibdata1 just grows rapidly. There is no autoextend feature for .ibd files; that only applies to ibdata1. In light of this, the maximum size of an InnoDB table stored in a .ibd file is OS dependent:

• In ext3, an InnoDB table can go to 2TB
• In ext4, an InnoDB table can go to 16TB

To be more blunt, you should never attempt to spread .ibd files to other volumes. Percona has strongly denounced this: http://www.mysqlperformanceblog.com/2010/12/25/spreading-ibd-files-across-multiple-disks-the-optimization-that-isnt/
2021-12-08 07:24:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4433644413948059, "perplexity": 6809.642535195156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363445.41/warc/CC-MAIN-20211208053135-20211208083135-00075.warc.gz"}
https://www.physicsforums.com/threads/problems-with-integrating-and-deferential-equation.317768/
# Homework Help: Problems with integrating and Deferential equation 1. Jun 3, 2009 ### IRNB 1. The problem statement, all variables and given/known data give the general solution of the following equation x' = tx + 6te-t2 2. Relevant equations for x'+p(t)x=q(t) xeI=$$\int$$q(t)eIdt where I=$$\int$$p(t)dt integration by parts $$\int$$f'g = [fg] - $$\int$$fg' 3. The attempt at a solution x'-tx=6te-2t I=$$\int$$-t dt = -t2$$/$$2 xe-t2$$/$$2dt = $$\int$$6te-2te-t2$$/$$2dt using integration by parts i get $$\int$$f'g = [fg] - $$\int$$fg' f'=e-t2$$/$$2 g=6t $$\int$$6te-2te-t2$$/$$2dt = [$$\frac{6t}{-2-t}$$e-2t-t2$$/$$2] - $$\int$$$$\frac{6}{-2-t}$$e-2t-t2$$/$$2 dt I've tried to integrate the second part of this integral i.e. $$\int$$$$\frac{6}{-2-t}$$e-2t-t2$$/$$2 dt using integration by parts but it seems to be a very difficult integral to solve. I also have my suspicions that this method may go on forever. can anyone help? am i missing some kind of identity that i should know? any help would be appreciated. 2. Jun 3, 2009 ### IRNB i just noticed a typo where i stated what value i used for f' it should read f' = e-2-t2$$/$$2 3. Jun 3, 2009 ### Physics_Math You seem to have stated the question twice, but differently both times. first you have x' = tx + 6texp(-t^2) and then you write x'-tx=6texp(-2t) Which is the correct form? 4. Jun 3, 2009 ### IRNB Hi Physics Math The second one was just me re-arranging the first one to get it into the form of x'+p(t)x=q(t) so that i could apply the relevant formula (under relevant equations). 5. Jun 3, 2009 You didn't just rearrange you changed the argument of the exponent as well. It would also be nice if you would put $$tags around the entire expression and not just the occasional symbol. 6. Jun 3, 2009 ### IRNB oh yes, i see what you mean exp(-t^2) does not equal exp(-2t). I'll give it another go. thanks guys. 7. Jun 3, 2009 ### Cyosis Using the correct argument will make the integration easier. 8. 
Jun 3, 2009 ### IRNB Okay I've had another go and got an answer and want to check whether its correct or not. x'-tx=6te-t2 I = [tex]\int$$-t dt = -t2$$/$$2 xe-t2$$/$$2=$$\int$$6te-t2e-t2$$/$$2 dt xe-t2$$/$$2 = $$\int$$6te-(3/2)t2dt now using integration by parts where f'=e-(3/2)t2 and g=6t and recalling that $$\int$$e-ax2 dx = $$\sqrt{pi/a}$$ i get [$$\sqrt{2pi/3}$$ 6t] - $$\int$$$$\sqrt{2pi/3}$$6 dt = 2[$$\sqrt{2pi/3}$$ 6t] + C so x=(2[$$\sqrt{2pi/3}$$ 6t] + C) / (e-t2$$/$$2) However, in all the other examples i have done the exp term has disappeared by the time i got to the final answer, so i just wanted to check that i have the correct answer here or not. 9. Jun 3, 2009 ### Cyosis This is only true if you're integrating from -infinity to +infinity, which you are not. If you want to use partial integration you will have to resort to the error function. That said you do not want to use partial integration, but instead use a substitution u=-t^2. 10. Jun 3, 2009 ### IRNB Thank you cyosis for all your help. My integration skills are quite weak so please bare with me here. f' = exp(-3t2/2) using u= t2 f' = exp((-3/2)u) f = $$\int$$exp((-3/2)u) (dt/du) du f = (-2/3)exp((-3/2)u) (1/2t) f = (-1/3t)exp((-3/2)t2) g = 6t g'=6 now using integration by parts $$\int$$6t exp((3/2)tt) dt = [(-6t/3t)exp((-3/2)t2)] + $$\int$$(6/3t)exp((-3/2)t2) dt integrating the second term by parts f' = exp(-3t2/2) f = (-1/3t)exp((-3/2)t2) g = 6/3t g' = -6/3t2 $$\int$$(6/3t)exp((-3/2)t2) dt = [(6/9t2)exp((-3/2)t2)] - $$\int$$(-6/9t3)exp((-3/2)t2) dt i seem to be going in circles... it seems there is always going to be an integral that needs to be solved... help 11. Jun 3, 2009 ### Cyosis Why are you using partial differentiation again? The original integral is already cast in a very easy form. Example integral: Using the substitution u=t^2 du=2tdt. \begin{align*} \int t e^{t^2}dt & =\int \frac{1}{2}e^u du\\ & =\frac{1}{2}e^u\\ & =\frac{1}{2}e^{t^2} \end{align*} 12. 
Jun 3, 2009

### IRNB

I have no idea why I didn't see that. It's been a really long day. So now I have:

∫ 6t exp((-3/2)t^2) dt

u = t^2, du = 2t dt

∫ 3 exp((-3/2)u) du = 3(-2/3) exp((-3/2)u) = -2 exp((-3/2)t^2) + C

x exp(-t^2/2) = -2 exp((-3/2)t^2) + C

x = (1/exp(-t^2/2))C - 2 exp(-t^2/2)

Does this look correct?

13. Jun 3, 2009

### IRNB

There is a typo in my answer. It should read x = (1/exp(-(t^2)/2))C - 2 exp(-t^2).

Thank you for your help, Cyosis.

14. Jun 3, 2009

### Cyosis

You're welcome, and your answer is correct.
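The corrected answer, x(t) = C·exp(t^2/2) - 2·exp(-t^2), can be sanity-checked numerically. The sketch below (plain Python; the function names are mine) compares a central-difference derivative of the closed form against the right-hand side t·x + 6t·exp(-t^2) of the original equation:

```python
import math

def x(t, c):
    """General solution from the thread: x(t) = C*exp(t^2/2) - 2*exp(-t^2)."""
    return c * math.exp(t**2 / 2) - 2 * math.exp(-t**2)

def rhs(t, c):
    """Right-hand side of the ODE: x' = t*x + 6*t*exp(-t^2)."""
    return t * x(t, c) + 6 * t * math.exp(-t**2)

def residual(t, c, h=1e-6):
    """|numerical x'(t) - rhs(t)| via a central difference."""
    dxdt = (x(t + h, c) - x(t - h, c)) / (2 * h)
    return abs(dxdt - rhs(t, c))
```

For several choices of C and t the residual comes out far below 1e-5, which is what we expect if the closed form really solves the equation.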
2018-07-19 00:27:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7623748183250427, "perplexity": 3159.1682344675537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590362.13/warc/CC-MAIN-20180718232717-20180719012717-00562.warc.gz"}
https://www.spp2026.de/projects/06/
06 Spectral Analysis of Sub-Riemannian Structures

We aim to study geometric aspects of sub-Riemannian structures and relations to the spectral analysis of induced differential operators (e.g. sub-Laplace operators). The project is divided into three closely related parts.

Existence and construction of sub-Riemannian geometries. We wish to study the existence of sub-Riemannian structures (also under additional conditions such as regular or trivializable) on symmetric spaces of compact and noncompact type, Lie groups and their homogeneous spaces. In particular, the notion and analysis of sub-Riemannian curvature will be of interest, since it is an object which might be detected from (or is related to) the spectral data, as happens in the Riemannian setting.

Spectral analysis of geometrically defined sub-elliptic operators. Specific questions related to the spectral analysis and geometry of (sub)-Laplace operators concern:

• (Sub)-Laplace operators on exotic spheres
• (Sub)-Laplace operator on pseudo-H-type Lie groups and compact quotients
• Path integrals and heat kernels for "higher step" nilpotent Lie groups

Heat kernel for the Laplacian on differential forms and sub-Riemannian limit. We plan to study the (sub)-Laplacian on differential forms and its heat kernel and trace in concrete cases. We plan to consider the behaviour of such objects under taking sub-Riemannian limits. Applications to $$L^2$$-invariants may be within reach.

• Heat kernel of the form Laplacian on nilpotent Lie groups
• Novikov-Shubin invariants

## Publications

Pseudo H-type Lie groups $$G_{r,s}$$ of signature (r,s) are defined via a module action of the Clifford algebra $$C\ell_{r,s}$$ on a vector space $$V \cong \mathbb{R}^{2n}$$. They form a subclass of all 2-step nilpotent Lie groups, and based on their algebraic structure they can be equipped with a left-invariant pseudo-Riemannian metric. Let $$\mathcal{N}_{r,s}$$ denote the Lie algebra corresponding to $$G_{r,s}$$.
A choice of left-invariant vector fields $$X_1, \ldots, X_{2n}$$ which generate a complement of the center of $$\mathcal{N}_{r,s}$$ gives rise to a second order operator

$$\Delta_{r,s} := \big(X_1^2 + \ldots + X_n^2\big) - \big(X_{n+1}^2 + \ldots + X_{2n}^2\big)$$

which we call ultra-hyperbolic. In terms of classical special functions we present families of fundamental solutions of $$\Delta_{r,s}$$ in the case r=0, s>0 and study their properties. In the case r>0 we prove that $$\Delta_{r,s}$$ admits no fundamental solution in the space of tempered distributions. Finally we discuss the local solvability of $$\Delta_{r,s}$$ and the existence of a fundamental solution in the space of Schwartz distributions.

Link to preprint version

Related project(s): 6 Spectral Analysis of Sub-Riemannian Structures

We construct a codimension 3 completely non-holonomic subbundle on the Gromoll–Meyer exotic 7-sphere, based on its realization as a base space of a Sp(2)-principal bundle with structure group Sp(1). The same method can be applied to construct a codimension 3 completely non-holonomic subbundle on the standard 7-sphere (or, more generally, on a (4n+3)-dimensional standard sphere). In the latter case such a construction based on the Hopf bundle is well known. Our method provides a new and simple proof for the standard sphere $$S^7$$.

Journal: Appl. Anal. 96 (2017), 2390–2407.

Link to preprint version

Related project(s): 6 Spectral Analysis of Sub-Riemannian Structures

## Team Members

Prof. Dr. Wolfram Bauer
2019-04-22 22:09:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46595868468284607, "perplexity": 804.1105729379427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582736.31/warc/CC-MAIN-20190422215211-20190423001211-00242.warc.gz"}
http://openstudy.com/updates/55a85fc2e4b0f93dd7c49163
## anonymous one year ago

1. f(x) = 3 - x^2 - 6x
2. g(x) = x^2 - 8x + 2

I have these two problems. I need to find out the domain, range, maximum, and minimum for both of the above. I keep getting the wrong answers when I try it, and some that don't make sense. Please help? Thanks in advance!

1. anonymous

2. SolomonZelman

For a parabola with a positive leading coefficient, an absolute maximum won't exist. (It opens up, and its absolute minimum is the vertex: the y-coordinate is the value of the minimum, and the x-coordinate is where this minimum is located.)

For a parabola with a negative leading coefficient, an absolute minimum won't exist. (It opens down, and its absolute maximum is the vertex: the y-coordinate is the value of the maximum, and the x-coordinate is where this maximum is located.)

To find the vertex of each parabola, you need to complete the square (for each function).

3. SolomonZelman

If you are not familiar with what "perfect square trinomial" means, then I would advise you to review that concept (here, with other people, or watch a video, read a book, get a tutor... idk, that is your responsibility. I won't do it now, because I've got to go pretty soon).

4. jtvatsim @macky342 You still there? Have you made any progress on the question, or still stuck? :)

5. anonymous I'm still stuck unfortunately.

6. jtvatsim K, let's see if we can get somewhere...

7. anonymous math has always been my worst subject. thank you.

8. jtvatsim Well, maybe we'll be able to turn that around. :) I'm taking a look...

9. anonymous Great! Thank you. This is the last question I have in this class before I graduate, so I just want to be done!

10. jtvatsim First thing I'm going to do is rewrite the equations this way: $f(x) = -x^2-6x+3$ and $g(x) = x^2-8x+2$.

11. anonymous okay, now what do i do?

12.
jtvatsim Alright, personally, for me a picture is worth a thousand words, so I'm going to set these up to make it easier to graph a picture. 13. jtvatsim This is a trick that most classes don't teach you, but I'm going to factor an x out of the first two terms of each. The result looks like this: 14. jtvatsim $f(x)=-x(x+6)+3$ and $g(x)=x(x-8)+2$ 15. jtvatsim You probably haven't seen that, but does that make sense so far? 16. anonymous not really :( 17. jtvatsim OK, no worries, all I did is this for the first one $f(x) = -x^2-6x+3 = x(-x-6)+3 = -x(x+6)+3$ So, I just ignored the 3, and factored an x out of the first two terms. Then, I took out the negative sign (I didn't have to but it looks nicer). Please let me know if you have any questions on this part. We may need to review factoring. :) 18. anonymous i am not understanding a thing thats going on. i reviewed the chapter over 5 times. had a tutor try and explain it and my last hope was this website and i feel like theres still no hope :( 19. jtvatsim Gotcha. No worries, there's just a mental block somewhere that we have to sort out. I've had plenty before myself. :) Let's start more basic. 20. anonymous perfect. thanks. 21. jtvatsim I'm going to do a few diagnostic questions (totally different than the current question) to see where your understanding starts. Does this make sense? $x^2 + x = x(x + 1)$ 22. anonymous nope. 23. jtvatsim OK, that's fine. Let's go simpler. :) 24. anonymous the x's is where i get lost. 25. anonymous I'm pretty good at basic math 26. jtvatsim OK, so does this make more sense $3^2 + 3 = 3(3+1)$ 27. anonymous yes 28. jtvatsim OK, so let's see if we can build on that understanding.... Let me think for a moment... 29. Mertsj Do you know that these two equations have graphs that are parabolas? 30. anonymous yes. 31. Mertsj In the first one, the x^2 term has a negative coefficient so it will look like this: 32. Mertsj |dw:1437101892911:dw| 33. anonymous okay. 34. 
Mertsj In the second one, the x^2 term is positive so it will look like this: 35. Mertsj |dw:1437101949338:dw| 36. Mertsj Does the first one have a high point or a low point? 37. anonymous high 38. Mertsj That high point is called a maximum point. It is also the vertex of the parabola. 39. anonymous that makes sense. 40. jtvatsim So, we know the first one has a maximum, but does it have a minimum too? 41. anonymous when i was trying what i knew it didn't. and i didn't know if that was right or not.. theres a space to enter it on my school work but i don't know what to put there if theres no minimum. 42. jtvatsim You are right. There is no minimum for the first graph. You can just put 'no minimum' or if you want to be fancy you can say that the graph's minimum goes to 'negative infinity' 43. anonymous great! so what is the maximum? and i still need the range and the domain :( 44. jtvatsim Good question. Let's see if we can track down the maximum first. Since you aren't comfortable with x's we are going to have to do this by "brute force" and plug in numbers until we see a pattern. :) 45. anonymous oh gosh. okay. 46. jtvatsim No worries, though let's just try some obvious numbers. I'm assuming you are comfortable with plugging in numbers for x? Like f(x) = -x^2 - 6x + 3 so if x = 0, then -0^2 - 6*0 + 3 = 3. 47. jtvatsim Basically, I just transform the x's into whatever number I like. Does that make sense? 48. anonymous kind of? I'm not too sure whats going on. I'm extremely tired right now. I'm a little older, i have a son who's a year and a half and he's sick so sleep and graduation are hard to balance on top of a child. 49. anonymous I'm trying here. let me look again. 50. jtvatsim Sure, I get the picture. Let me see if we can't get over the hump... How about this... You've heard the saying "X marks the spot" right? 51. anonymous yes. 52. jtvatsim In math, it's the same idea, "x" marks the spot where some number should be. 53. anonymous yes yes.. 54. 
jtvatsim So, if I give you a formula, x + 10 you can replace the x (that marks the spot) with a number to see what I really mean. 55. jtvatsim I could mean, 0 + 10, 1 + 10, 2 + 10, 3 + 10, or lots of other things. The point is, I can always turn a "x equation" into just simple math and numbers. I get to choose. 56. anonymous im understanding so far. 57. jtvatsim OK, so if we turn it up a notch, and I give you 5x + x now I have two x's that mark the spot. I can choose to replace these with a number, but it must be the same number. So, 5*1 + 1 or 5*2 + 2 but not, 5*1 + 50 or 5*3 + 100, I must pick the same number since the "x"s are the same. 58. anonymous okay. 59. jtvatsim Alright, so the formula we have has this symbol in it x^2. Do you know what this symbol means? 60. anonymous yes the x has a exponent. 61. jtvatsim Good, and exponents tell us to multiply that number that many times. For example, 2^2 = 2*2 5^2 = 5*5 and so on 62. anonymous got it. 63. jtvatsim OK, so let's see if we can figure out what on earth the original formula is saying. -x^2 - 6x + 3 we have two "x"s (that mark the spot) so we get to pick a number to put in both spots. 64. anonymous ok, can we pick something easy like 2? 65. jtvatsim Sure, let's do that to start. 66. jtvatsim So we have: - 2^2 -6*2 + 3 and we need to figure out what this means. 67. anonymous yes. 68. jtvatsim Alright, order of operations Please Excuse My Dear Aunt Sally, parentheses, exponents, multiplication/division addition/subtraction 69. jtvatsim So first, what is 2^2? 70. anonymous 4 71. jtvatsim Good, so we have -4 - 6*2 + 3 next is multiplication 72. anonymous ok so 6*2 right? 73. jtvatsim that is right! 74. anonymous so 12. 75. jtvatsim Good! -4 -12 + 3 this is easy now 76. jtvatsim "easier" that is... :) the negatives might be a little strange. :) 77. anonymous ok will i miltiply -4 & -12 or what do i do since theres no symbol? 78. anonymous multiply lol 79. anonymous I'm decent when it comes to negatives. 80. 
jtvatsim There's technically subtraction there. 81. anonymous how? 82. jtvatsim Remember it was -x^2 minus 6x plus 3 in the original formula -x^2 - 6x + 3 83. anonymous ok got it 84. jtvatsim So, we then get -4 - 12 + 3 = -16 + 3 = -13 I believe. 85. jtvatsim This gets faster, don't worry. It takes a lot of words to describe the process. :) 86. anonymous mhm... 87. anonymous this is gonna take til 5am. 88. jtvatsim Hopefully not. :) So, believe it or not we have actually found a small piece of the parabola, we took x = 2 and got -13 as the result. 89. anonymous not even your fault, its mine.. 90. jtvatsim This is graphed on the xy-axis like this|dw:1437103634210:dw| 91. anonymous yes 92. jtvatsim OK, well we can keep plugging in numbers until the sun goes down (or comes up in your case), but I'm going to give you a small trick to use. 93. anonymous great. 94. jtvatsim Always begin by using x = 0. After that use x = "the number in front of the x in the formula". Here's what I mean for the second part. 95. jtvatsim We have -x^2 - 6x + 3. We should use x = 0 and x = -6 because the -6 is sitting in front of the x without weird exponents. 96. jtvatsim If we had x^2 + 10x + 3 we would pick x = 0 and x = 10. OK with that trick? 97. anonymous yes 98. jtvatsim OK, so let's get started plug in x = 0 into the formula -x^2 - 6x + 3 I will speed things up for you, you should get -0^2 - 6*0 + 3 = -0 - 0 + 3 = 3. 99. jtvatsim OK with that? 100. jtvatsim Actually scratch that, sorry. But I just found a better way that will take one step for you to use to find the maximum, we still need to plug in to use it though so our practice is not for nothing. 101. anonymous ok 102. jtvatsim The maximum or minimum of a parabola can be found by plugging in a special x. $x = -\frac{b}{2a}$ where b is the number sitting in front of the x without an exponent, and a is the number in front the x with an exponent. So, in -x^2 - 6x + 3, a = -1 and b = -6. For x^2 + 10x + 40, a = 1 and b = 10. 
It's a bit of a magic trick why this works, but it works. 103. anonymous now I'm lost 104. anonymous i fell asleep there for a minute too so that isn't helping 105. jtvatsim K, don't worry. I'm pulling out all the tricks I have here, Let me explain it one step at a time. 106. anonymous ok 107. jtvatsim We are looking at -x^2 - 6x + 3 right? What number is in front of the x^2? Well, there is -1 there because there is a negative sign and 1 is always in front of any number. 108. anonymous yes 109. jtvatsim And what number is in front of the "x"? We already said that this was -6. 110. anonymous yup 111. jtvatsim The magic recipe says we want an x that is equal to -b/2a. b is the number in front of the x a is the number in front of the x^2. 112. anonymous yes 113. jtvatsim So, we found that a = -1 (in front of x^2) and b = -6 (in front of x). The magic recipe is -b/2a = -(-6)/2(-1) 114. jtvatsim This looks horrible, but if we just multiply the negatives in front we get -(-6)/2(-1) = 6/2(-1) and the 2 times -1 is -2 =6/-2 then we get = -3. This is the magic x. 115. anonymous i kind of understand, lets just go with it. 116. jtvatsim K, We now plug in x = -3 into the original formula -x^2 - 6x + 3. This will give us the maximum that we were looking for (for the last 5 hours) -(-3)^2 -6(-3) + 3 = -9 +18 + 3 = 9 + 3 = 12. 117. anonymous I'm sorry. its been such a struggle with me. i know it. 118. jtvatsim No, no, I'm sorry it's taken me so long to figure out the best way to help you. It's totally fine. :) 119. anonymous i really appreciate it.. but now what? 120. jtvatsim OK, well we celebrate that we finally have the maximum is is 12. We know that the graph has no minimum, so the range gets as large as 12 and as small as negative infinity. We write this as $Range \ is \ (-\infty, 12]$ 121. anonymous lol how do i do the infinity symbol? 122. jtvatsim Is this an online assignment? maybe just type "-infinity" in words I'm sure the teacher will be fine with that (hopefully). 
:) 123. anonymous yes. he will fix it. he's nice. 124. anonymous what exactly is the domain? 125. jtvatsim Cool. For the domain, this has to do with what types of numbers can you plug in for x. There are problems with formulas like 10 divided by x, because 10 divided by 0 doesn't make sense. However, with our formula, we are find and can plug in whatever number we want. The domain is "all real numbers" or (-infinity, +infinity) 126. jtvatsim *fine not find. :) 127. anonymous great!! 128. anonymous the only problem is theres still one more equation lol 129. jtvatsim Now, just for that last one... It's the same thing, just with different numbers. Let's slay this dragon once and for all. >:) 130. anonymous yay! 131. jtvatsim Alright, so now, x^2 - 8x + 2. What number in front of x^2? That's a = 1. What number in front of x? That's b = -8. 132. anonymous yup 133. jtvatsim What is the magic recipe? -b/2a That gives us -(-8)/2(1) = 8/2 = 4. The magic x is 4. (Remember that this second graph has a minimum? This magic recipe will always find the maximum OR the minimum and it knows which one is which! Crazy, but true). 134. anonymous but doesn't the graph have no maximum? 135. jtvatsim That's right. So this magic x will give us the minimum that we want. Pretty cool of it. The reason this works is because of Calculus, but let's not go there today... :) 136. anonymous thank god i don't need anymore math to graduate. lol 137. jtvatsim lol! Anyways, we now take x = 4, and replace this in our original formula x^2 - 8x + 2 4^2 - 8(4) + 2 = 16 - 32 + 2 = -16 + 2 = -14 if I'm not mistaken. 138. anonymous uh huh uh huh 139. jtvatsim lol 140. anonymous -14 is the minimum correct? 141. jtvatsim Yep! So, the minimum is -14. There is no maximum (or its +infinity). The range is [-14, +infinity). The domain is fine so "all real numbers" or (-infinity,+infinity). 142. jtvatsim I'm going to graph this on my graphing calculator just to confirm. :) 143. anonymous great!! 
i really appreciate all of your help!! 144. jtvatsim You are very welcome! I'm glad the dragon is dead now. lol :) Here's the second graph, and we are right. 145. anonymous yay!! 146. jtvatsim and the first one, is...... right again! Yay! 147. anonymous great your amazing!! i appreciate it so much!! especially the patience you had!! 148. jtvatsim No problem! I'm glad to help! Great effort on your end as well, you stuck it out and survived to tell the tale. :) 149. jtvatsim Go get some rest, your brain is probably deep fried. :) 150. anonymous Thanks again! :D 151. jtvatsim Take care! :D
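The "magic recipe" used throughout this thread generalizes: for any quadratic ax^2 + bx + c, the vertex sits at x = -b/(2a). A few lines of Python (the helper name is mine) confirm both results found above:

```python
def vertex(a, b, c):
    """Vertex of y = a*x^2 + b*x + c: the maximum if a < 0, the minimum if a > 0."""
    xv = -b / (2 * a)            # the "magic x" from the thread
    return xv, a * xv**2 + b * xv + c

# f(x) = -x^2 - 6x + 3: a = -1, b = -6, c = 3
print(vertex(-1, -6, 3))   # maximum of 12 at x = -3
# g(x) = x^2 - 8x + 2: a = 1, b = -8, c = 2
print(vertex(1, -8, 2))    # minimum of -14 at x = 4
```

Both quadratics have domain (-infinity, +infinity); the ranges (-infinity, 12] and [-14, +infinity) follow directly from these vertex values.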
2016-10-24 20:17:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6401242017745972, "perplexity": 1209.031986427977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00418-ip-10-171-6-4.ec2.internal.warc.gz"}
https://ncatlab.org/nlab/show/arrow+%28in+computer+science%29
# Contents

## Idea

A generalization of the concept of monad (in computer science).

## References

• John Hughes, section 2 of Generalising Monads to Arrows, Science of Computer Programming (Elsevier) 37 (1-3): 67–111. (2000) (pdf)

A comparison of monads with applicative functors (also known as idioms) and with arrows (in computer science) is in

• Exequiel Rivas, Relating Idioms, Arrows and Monads from Monoidal Adjunctions (arXiv:1807.04084)

Last revised on November 8, 2018 at 06:50:42. See the history of this page for a list of all contributions to it.
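For intuition, the simplest arrow is the plain-function arrow: it supports lifting a function, sequential composition, and running a computation on one component of a pair while passing the other through. A rough Python stand-in (class and method names are mine; the canonical interface is Haskell's Control.Arrow, with arr, >>>, and first):

```python
class FnArrow:
    """A pure-function arrow: wraps a function and supports composition and first."""
    def __init__(self, f):
        self.f = f
    def __call__(self, x):
        return self.f(x)
    def then(self, other):
        # analogue of Haskell's (>>>): run self, then feed the result to other
        return FnArrow(lambda x: other.f(self.f(x)))
    def first(self):
        # run self on the first slot of a pair, pass the second slot through
        return FnArrow(lambda p: (self.f(p[0]), p[1]))

inc = FnArrow(lambda n: n + 1)
dbl = FnArrow(lambda n: 2 * n)
print(inc.then(dbl)(3))          # (3 + 1) * 2 = 8
print(inc.first()((3, "tag")))   # (4, 'tag')
```

General arrows replace the plain function type with any type supporting these operations, which is the sense in which they generalize monadic computation.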
2019-10-22 21:53:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679000735282898, "perplexity": 6127.918061691101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987824701.89/warc/CC-MAIN-20191022205851-20191022233351-00053.warc.gz"}
https://stats.stackexchange.com/questions/479551/meaning-of-phrases-median-vs-mean-income-of-the-average-person-in-my-city-vs
# Meaning of phrases (median vs mean): Income of the average person in my city vs. The average income of people in my city

Hello Cross Validated, I have been discussing things like the average income of a person in my city, and have come to think there is a subtle but important difference between the phrases "income of the average person in my city" and "the average income of people in my city", in that the first would be the median income in the city, while the latter would be the mean income. I would guess that this difference of phrasing is often ignored and might be confusing (especially to the average person), but is it technically correct? Thanks!

• How would you classify someone as "average" in order to then find out their income? – Dave Jul 29 '20 at 2:18
• My instinct is to line them up and pick the middle one, though I guess that's a bit of a tautology? Jul 29 '20 at 2:27
• Line them up according to what? Height? Last name alphabetical order? Alphabetically by height? (That last one is a joke.) – Dave Jul 29 '20 at 2:31
• @Dave Since we are discussing income, I assume we would line them up by income. Are you suggesting it is incorrect to refer to the average person at all? Jul 29 '20 at 3:30

In ordinary (non-technical) English the word average can be used for various concepts relating to a typical value of a list of numbers: maybe the "most common" income, the middle income in @Dave's line-up, or the number you get when you add all the incomes and divide by the number of incomes. If two people disagree about "the average," they may need to discuss what each of them really has in mind.

In statistical terminology the first of these is called the mode (most common), the second is called the median (middle entry in a sorted list), and the third is called the (arithmetic) mean. [There are also 'geometric' and 'harmonic' means, but let's leave them for later.]

Sometimes mode, median and mean are all the same number (or nearly the same number).
If we have a list of heights of 25 randomly chosen college students (measured to the nearest inch), we might get:

57 61 62 64 64 65 66 66 66 66 66 66 68 68 68 69 70 70 71 71 71 71 71 71 72

For this sample: the mode is 66, the median is 68, and the mean is 67.2.

However, for the data 1, 2, 3, 3, 3, 4, 6, 7, 10, 15, 23, the mode is 3, the median is 4 and the mean is 77/11 = 7.

A billing agency might have 300 clerks making \$40,000 a year, 10 supervisors making \$80,000, and a CEO making \$800,000. Then the mode and median are both \$40,000, and the mean is \$43,730. The mode and median may seem more typical, but only the mean is directly related to the total annual payroll.

The harmonic mean is the reciprocal of the arithmetic mean of reciprocals of the numbers. It can be useful for finding average MPG (miles per gallon) for driving a car. I live on a hill. If I drive a mile down the hill at 30 MPG, a mile on flat roads at 25 MPG and back home for a mile up the hill at 20 MPG, my average gas mileage is not 25 MPG. I use 1/30 gal. downhill, 1/25 gal. on flat roads, and 1/20 gal. uphill, for a total of 0.1233 gal., or an average of 0.1233/3 = 0.0411 gal./mi. So my average MPG is 1/0.0411 = 24.3 MPG. [Where liters and kilometers are used, a customary measure of fuel efficiency is the number of liters to go 100 km, and the arithmetic mean works fine for that.]

The geometric mean is used, among other things, to compute average interest rates or stock portfolio returns. I will give links for that: Investopedia and Wikipedia.

• Thanks @BruceET, that was helpful. Since there are multiple technical meanings of the word average, is there then no single specific technical meaning of "the average person" or "average income"? Jul 29 '20 at 16:37
• Right. But the default supposition would be that 'average' is 'arithmetic mean', subject to revision with more information or context. Jul 29 '20 at 17:58
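The worked examples in this answer are easy to reproduce with Python's standard statistics module:

```python
import statistics as st

data = [1, 2, 3, 3, 3, 4, 6, 7, 10, 15, 23]
print(st.mode(data), st.median(data), st.mean(data))  # mode 3, median 4, mean 7

# the billing-agency payroll: the mean is pulled up by the CEO's salary
payroll = [40_000] * 300 + [80_000] * 10 + [800_000]
print(st.median(payroll))        # 40000
print(round(st.mean(payroll)))   # 43730

# harmonic mean for the MPG example (one mile at each rate)
print(round(st.harmonic_mean([30, 25, 20]), 1))  # 24.3
```

The median of the payroll is unchanged if the CEO's salary is doubled, while the mean jumps; that insensitivity to extremes is why income statistics are usually reported as medians.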
2022-01-26 08:19:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6051506996154785, "perplexity": 770.7093743047863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00704.warc.gz"}
https://codereview.stackexchange.com/questions/23900/am-i-coding-java-in-c
# Am I coding Java in C#?

Note: This ended up being much longer than I was expecting. I have a number of side questions that relate to specific parts of the code, in case you don't want to slog through all this mess.

## Background

I have had some experience in Java, but recently decided to learn C#. I would like to know if my first program is idiomatic. I got the idea for this program out of a Java textbook I had lying around. Here's what it says:

Write a program that evaluates expressions typed in by the user. The expressions can use real numbers, variables, arithmetic operators, parentheses, and standard functions (sin, cos, tan, abs, sqrt, and log). A line of input must contain exactly one such expression. If extra data is found on a line after an expression has been read, it is considered an error. A variable name must consist of letters. Names are case-sensitive. The program should accept commands of two types from the user. For a command of the form print <expression>, the expression is evaluated and the value is output. For a command of the form let <variable> = <expression>, the expression is evaluated and the value is assigned to the variable. If a variable is used in an expression before it has been assigned a value, an error occurs.

I learned about using text mode in C#, so I went a little crazy with it.

## Structure

My program is probably over 1000 lines long, so I'm not going to post the whole thing. Here's the basic breakdown of all the classes:

• BraceMatcher.cs Contains a class that validates that the braces in an expression match.
• CommandHandler.cs Contains a class that handles the commands sent from the user. A private nested exception class called MalformedAssignmentException deals with variable assignment commands that are malformed.
• ConsoleFormatter.cs Contains a class that formats the Console IO for text mode.
• IExpression.cs Contains an interface that represents a mathematical expression.
The only method it defines is double Evaluate().

• TermExpression.cs Contains a class that implements IExpression for addition and subtraction operations.
• FactorExpression.cs Contains a class that implements IExpression for multiplication, division, and modulo operations.
• FunctionExpression.cs Contains a class that implements IExpression for the standard functions.
• NumericExpression.cs Contains a class that implements IExpression as a simple wrapper around a real number.
• ExpressionReader.cs Contains a class that reads expressions and returns a simplified form. The main method exposed is public IExpression Read(), which will return a NumericExpression when called from outside the class.
• HelpCommandInfo.cs Contains a class that formats the information relating to commands given in the help menu. A public nested struct named CommandPair associates a command example with its corresponding explanation.
• InvalidExpressionException.cs Contains a class that represents an exception thrown when the user enters an invalid expression.
• TextReaderExtensions.cs Contains a static class that extends TextReader by providing two additional methods: public static string ReadLetters(this TextReader source), which reads only a sequence of letters from the stream, and public static void SkipBlanks(this TextReader source), which reads and ignores whitespace.
• Program.cs - The main program.

## Example Code

Because my code is so large, I will be showing bits and pieces of files.

### IExpression.cs

namespace SimpleInterpreter
{
    /// <summary>
    /// Represents a mathematical expression.
    /// </summary>
    interface IExpression
    {
        /// <summary>
        /// Returns the value of the expression.
        /// </summary>
        /// <returns>The value of the expression.</returns>
        double Evaluate();
    }
}

### TermExpression.cs

namespace SimpleInterpreter
{
    using System;
    using System.Collections.Generic;

    /// <summary>
    /// An expression that operates on two terms.
    /// </summary>
    class TermExpression : IExpression
    {
        private static readonly Dictionary<char, Operator> Operators =
            new Dictionary<char, Operator>()
            {
                { '+', Operator.Addition },
                { '-', Operator.Subtraction }
            };

        private enum Operator
        {
            Addition,
            Subtraction
        }

        /// <summary>
        /// Creates a new TermExpression that operates on the specified inner
        /// expressions.
        /// </summary>
        /// <param name="x">The first term.</param>
        /// <param name="operatorSymbol">
        /// The symbol that represents the operator this expression uses.
        /// </param>
        /// <param name="y">The second term.</param>
        /// <exception cref="ArgumentException">
        /// If <paramref name="operatorSymbol"/> refers to an invalid operator.
        /// </exception>
        public TermExpression(IExpression x, char operatorSymbol, IExpression y)
        {
            this.x = x;
            this.y = y;
            try
            {
                termOperator = Operators[operatorSymbol];
            }
            catch (KeyNotFoundException)
            {
                throw new ArgumentException(
                    String.Format("Invalid operator: {0}", operatorSymbol));
            }
        }

        public double Evaluate()
        {
            switch (termOperator)
            {
                case Operator.Addition:
                    return x.Evaluate() + y.Evaluate();
                case Operator.Subtraction:
                    return x.Evaluate() - y.Evaluate();
                default:
                    throw new InvalidOperationException(
                        "Should not be reached.");
            }
        }

        public override string ToString()
        {
            return Convert.ToString(Evaluate());
        }
    }
}

### TextReaderExtensions.cs

/// <summary>
/// Provides a set of static methods for specialized reading.
/// </summary>
static class TextReaderExtensions
{
    /// <summary>
    /// Reads a sequence of letters.
    /// </summary>
    public static string ReadLetters(this TextReader source)
    {
        var wordBuilder = new StringBuilder();
        var i = source.Peek();
        if (i == -1)
        {
            return wordBuilder.ToString();
        }
        if (!Char.IsLetter(Convert.ToChar(i)))
        {
            // A leading non-letter is consumed and returned on its own.
            return Convert.ToString(Convert.ToChar(source.Read()));
        }
        while (i != -1 && Char.IsLetter(Convert.ToChar(i)))
        {
            wordBuilder.Append(Convert.ToChar(source.Read()));
            i = source.Peek();
        }
        return wordBuilder.ToString();
    }

    // snip
}

### InvalidExpressionException.cs

/// <summary>
/// An exception that is thrown when the user enters an invalid expression.
/// </summary> [Serializable()] class InvalidExpressionException : Exception { public InvalidExpressionException() : base() { } public InvalidExpressionException(string message) : base(message) { } public InvalidExpressionException(string message, Exception inner) : base(message, inner) { } protected InvalidExpressionException( SerializationInfo info, StreamingContext context) : base(info, context) { } } ### HelpCommandInfo.cs /// <summary> /// Formats the information relating to commands given in the help menu. /// </summary> class HelpCommandInfo : IEnumerable<HelpCommandInfo.CommandPair> { /// <summary> /// Associates a command to its description. /// </summary> public struct CommandPair { /// <summary> /// Gets the command. /// </summary> /// <value>The command.</value> public string Command { get { return command; } } /// <summary> /// Gets the formatted description of the command. /// </summary> /// <value>The formatted description of the command.</value> public List<string> Description { get { return description; } } /// <summary> /// Creates a new CommandPair that associates the specified command /// to its specified description. 
        /// </summary>
        /// <param name="command">The command.</param>
        /// <param name="description">The description.</param>
        public CommandPair(string command, List<string> description)
        {
            this.command = command;
            this.description = description;
        }
    }

    // snip
}

### ExpressionReader.cs

double ReadNumber()
{
    var numberBuilder = new StringBuilder();
    var hasDecimal = false;
    while (true)
    {
        var i = source.Peek();  // source is the reader the expression comes from
        if (i == -1)
        {
            return Convert.ToDouble(numberBuilder.ToString());
        }
        var ch = Convert.ToChar(i);
        var isUnaryMinus = ch == '-' && numberBuilder.Length == 0;
        var isDecimalPoint = ch == '.';
        if (isDecimalPoint)
        {
            if (hasDecimal)
            {
                throw new FormatException(
                    "Number cannot have multiple decimal points.");
            }
            hasDecimal = true;
        }
        if (!Char.IsDigit(ch) && !isDecimalPoint && !isUnaryMinus)
        {
            return Convert.ToDouble(numberBuilder.ToString());
        }
        numberBuilder.Append(ch);
        source.Read();
    }
}

### ConsoleFormatter.cs

/// <summary>
/// Creates a new ConsoleFormatter that resizes the Console to the
/// specified width, uses the specified color for the background of
/// Console output, and initially outputs the specified introduction.
/// </summary>
/// <param name="intro">The introduction.</param>
/// <param name="consoleWidth">
/// The width to resize the Console to.
/// </param>
/// <param name="outputBackground">
/// The color of the background for Console output.
/// </param>
/// <exception cref="ArgumentNullException">
/// If <paramref name="intro"/> is null.
/// </exception> public ConsoleFormatter(string intro, int consoleWidth=75, ConsoleColor outputBackground=ConsoleColor.DarkCyan, ConsoleColor foreground=ConsoleColor.White) { if (intro == null) { throw new ArgumentNullException("intro"); } this.consoleWidth = consoleWidth; ValidateConsoleWidth(); this.outputBackground = outputBackground; Console.ForegroundColor = foreground; Console.WindowWidth = consoleWidth; Console.CursorVisible = false; WriteCentered(intro); inputX = Console.CursorLeft; inputY = Console.CursorTop; } ### BraceMatcher.cs static readonly Dictionary<char, char> Braces = new Dictionary<char, char>() { { '(', ')' }, { '[', ']' }, { '<', '>' }, { '{', '}' } }; /// <summary> /// Returns true if the specified character is a left brace. /// </summary> /// <param name="brace"> /// The character tested for whether it is a left brace. /// </param> /// <returns> /// <c>true</c> if <paramref name="brace"/> is a left brace. /// </returns> public static bool IsLeftBrace(char brace) { return Braces.ContainsKey(brace); } /// <summary> /// Returns true if the specified character is a right brace. /// </summary> /// <param name="brace"> /// The character tested for whether it is a right brace. /// </param> /// <returns> /// <c>true</c> if <paramref name="brace"/> is a right brace. /// </returns> public static bool IsRightBrace(char brace) { return Braces.ContainsValue(brace); } public bool IsEmpty { get { return braceMatcher.Count == 0; } } ### CommandHandler.cs readonly ConsoleFormatter formatter; delegate void CommandHandlerFunc(); /// <summary> /// Creates a new CommandHandler that uses the specified /// ConsoleFormatter to format its output. /// </summary> /// <param name="formatter">Formats the output.</param> /// <exception cref="ArgumentNullException"> /// If <paramref name="formatter"/> is null. 
/// </exception> public CommandHandler(ConsoleFormatter formatter) { if (formatter == null) { throw new ArgumentNullException("formatter"); } this.formatter = formatter; helpCommandInfo = new Dictionary<string, string>() { { "let <name> = <expression>", "Assigns the value of the expression " + "into the variable name" }, { "print <expression>", "Outputs the value of the expression" }, { "help <command>", "Prints a help message that explains " + "how to use the command" }, { "quit", "Ends the program" } }; specificHelpCommandInfo = new Dictionary<string, string>() { { "let", "Format:\tlet <name> = <expression>\n" + "The 'let' command assigns an expression to\n" + "a variable. Valid names must have only letters.\n" + "Names are case-sensitive letters and must not be one\n" + "of the standard functions (sin, cos, tan, abs, sqrt,\n" + "log) or a command for this program (let, print, help,\n" + "quit). You can store mathematical expressions in the\n" + "variable you create. You can combine real numbers,\n" + "arithmetic operators, parenthetical expressions, the\n" + "built-in functions, and even other variables within\n" + "many times as you want. The mathematical constants\n" + "'e' and 'pi' have already been defined for you." }, { "print", "Format:\tprint <expression>\n" + "The 'print' command outputs the value of the\n" + "expression you enter. You may use combinations of\n" + "real numbers, arithmetic operators, parenthetical\n" + "expressions, standard functions (sin, cos, tan, abs,\n" + "sqrt, log), and variables you have already defined\n" + "using the 'let' command." 
    };
    commandHandler = new Dictionary<string, CommandHandlerFunc>()
    {
        { "print", WriteExpression },
        { "help", WriteHelp },
        { "quit", Quit }
    };
}

void ReadVariable()
{
    // snip
    try
    {
        formatter.WriteOutput(
            String.Format("{0} set to {1}", name, value));
    }
    catch (Exception ex)
    {
        if (!(ex is MalformedAssignmentException) &&
            !(ex is InvalidExpressionException))
        {
            throw;
        }
        formatter.WriteOutput(ex.Message);
    }
}

### Program.cs

/// <summary>
/// The main program for the Simple Interpreter.
/// </summary>
class Program
{
    static void Main(string[] args)
    {
        var formatter = new ConsoleFormatter(
            intro: "\nWelcome to my Simple Interpreter!\n" +
                "Please enter a command (or enter \"help\" for help).\n");
        var commandHandler = new CommandHandler(formatter);
        string command;
        do
        {
            formatter.ColorInputBackground(ConsoleColor.DarkBlue);
            Console.In.SkipBlanks();
            command = Console.In.ReadLetters();
        } while (commandHandler.Execute(command));
        formatter.MakeOutputInvisible();
    }
}

## Questions

My main question is: Where does my code stray from idiomatic C#?

1. In ExpressionReader.cs, I wrote a method double ReadNumber() which manually checks for numeric input. Usually I frown at this, but I couldn't find a method where I could just get the next numbers in the stream. Is there one that I'm missing?
2. I used default parameters in the constructor for the ConsoleFormatter.cs and call it in Program.cs. It seems to me like a cleaner alternative to the Builder Pattern. Am I on the right track here?
3. In BraceMatcher.cs, I made a property IsEmpty that doesn't have an analog in a member variable. I haven't seen this anywhere else. Is that bad practice?
4. In CommandHandler.cs, I use a delegate void CommandHandlerFunc() purely for the purpose of putting functions in a Dictionary. I saw delegates used in conjunction with events elsewhere, but I didn't understand it very well. Is it common to use delegates by themselves without events?
5. Java 7 has a very handy multi-catch exception feature.
I was trying a workaround for it in the void ReadVariable() method in CommandHandler.cs. Is there a better way to do that?

• You should look into automatic properties. You can write things like public string Command { get; private set; } and not need backing fields. Mar 15, 2013 at 15:16
• @Bobson I don't like setters. I also saw this, – Eva Mar 16, 2013 at 2:01
• I never understood why, but Mark only talks about fully public properties in that article. Properties with private setters aren't as bad - but you can't make them readonly. On a different note, the mostly missing access modifiers are mildly irritating to me; you don't see that much in C#. That can also make it trickier for you, since I seem to remember that Java's defaults are different from those in C#. Mar 16, 2013 at 2:58
• @Eva - I disagree with that article, but to each his own. Mar 18, 2013 at 13:21
• @Eva I read that article as being about encapsulation. The difference is between {get;set;} and {get;private set;}. I don't think the author was saying auto-property setters are bad but that setting fields (from a property in this case) from outside is not good. The suggestion made by @Bobson will generate a backing field for you automagically. The only thing you really lose (assuming JIT in-lining) is readonly which is a shame but you will move on with your life, I promise. Also if you want you can apply Contract.Invariant to a property using Microsoft Code Contracts. Mar 22, 2013 at 21:48

First off, for somebody who is just learning C# you use it better than some of the people I work with.

1. You are correct, there is no native way to read numbers from the Console in C#. You could look at decimal.TryParse which tries parsing the input. The code would look something like this:

var input = Console.ReadLine();
var inputAsNumber = 0d;
if (!decimal.TryParse(input, out inputAsNumber))
{
    // throw favorite exception
}

2. There is nothing wrong with default parameters.
I like them better than having multiple constructors. I too think it's much cleaner.

3. There is nothing wrong with an IsEmpty method. C# has one for strings: string.IsNullOrEmpty(string).

4. Using delegates this way is perfectly acceptable. I would look into Action and Action<paramTypes[]> though. I think they are a little more common.

5. Your exception handling could be improved. You can catch multiple exceptions from one call:

try
{
    formatter.WriteOutput(
        String.Format("{0} set to {1}", name, value));
}
catch (MalformedAssignmentException)
{
    throw;
}
catch (InvalidExpressionException)
{
    throw;
}
catch (Exception ex)
{
    formatter.WriteOutput(ex.Message);
}

• As for default parameters, be careful with those, especially if you expose those calls as some kind of public API - default parameter values will be compiled into the calling assembly. That means if you have a library that uses default parameters and later change those values, it will not be enough to exchange the DLL file - all calling assemblies will have to be recompiled against the new version to pick up the updated default values. This is something to keep in mind when using default parameters. (As a matter of fact, I never use them at all, not least because of this.) Mar 15, 2013 at 15:18
• This: "for somebody who is just learning C# you use it better than some of the people I work with". I was very impressed how good your code was and the fact you were using language features not in Java, like extension methods and delegates. Mar 16, 2013 at 3:31
• Thanks! Action is quite handy. I have been putting it in other places where I have delegates in Dictionaries. – Eva Mar 16, 2013 at 17:21
• var inputAsNumber = 0d; I think decimal inputAsNumber; is better, because it ensures you won't use the default value. Mar 17, 2013 at 15:55
• > First off, for somebody who is just learning C# you use it better than some of the people I work with. @Eva For sure it looks good!
Mar 22, 2013 at 21:49

Have you considered subclassing TermExpression for the various operations? This way you don't have to worry, in Evaluate, if the state of the object is incorrect (which can't be, but code changes...). Building on the above: instead of using a Dictionary<char, Operator> and catching KeyNotFoundException you could use a switch to build the correct subclass of TermExpression. This way you only switch once, while building, and then every subclass can just do math without having to check who it is every time.

Another thing I'm not completely convinced with is your use of Dictionary everywhere. They're usually useful for bigger collections than two or three elements. Oh, and remember that ContainsValue is O(n). Although n is fixed in your code, bear that in mind.

As Jeff said, the code is clean and idiomatic.

• I actually use dictionaries quite often for small "decision tables" as well - it's easily understood and easily extended. Mar 16, 2013 at 14:39
• You're right. Dictionary has been my Swiss army knife whenever I have key-value pairs or just a list of pairs I want to keep together. What should I use instead of Dictionary? – Eva Mar 16, 2013 at 17:20
• Dictionaries are usually good, but consider using a Good Old Switch, or polymorphism. This isn't a rule in either direction though, it depends on readability/speed requirements Mar 18, 2013 at 21:43
http://agenturastrom.cz/gorillaz-all-moeg/b717a6-growing-geometrically-meaning
Pro 19 For the human population, current growth rate is 1.18%, so r = 0.0117. Geometric definition, of or relating to geometry or to the principles of geometry. Meaning of geometrically in English The valley was geometrically divided into vine and olive plantations. The spirals are mathematically described as Fibbonaci spirals and these are formed due to specific angels in which new organs are formed at the border region of the shoot apical meristem of the plant. As you can see from the image below, the orthogonal projection of $\vec A$ on $\vec B$ has length $|\vec A|\,\cos\theta$. It consists of two things: an reference point called the origin, and a number of base vectors defining the principal axes of the system. graphically speaking, over time (usually), the curve increases its upward trajectory on the y-axis until a small increase in time (or whatever x-value) results in an almost vertical increase in whatever you're measuring. (b) Assuming the same geometric growth rate, calculate the population size after eight years. FUnderstand the meaning of a population’s carrying capacity. Another word for geometrical. Contrast arithmetic growth, exponential growth. Linear Growth: If the stock continues to increase by $0.20 per month, each share will be worth So, your investment will be worth You can use a spreadsheet to graph the linear growth. Using simple shapes such as circles, triangles and lines in a decorative object. Mind & Meaning How to Create a Breakthrough in Any Area of Your Life Manage Your Strategies, Your Story, and Your State Posted by: Tony Robbins Fulfilling your dreams and your ability to thrive in the areas of your life that matter most can be simplified by breakthroughs, those moments in time when the impossible becomes possible. 5 out of 10 ecology textbooks on my shelves make this distinction: geometric models are for populations with discrete pulses of births, while exponential models are for populations with continuous births. 
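The growth-rate figures quoted above (1.18 % per year, and r = 0.0117) are the same rate in two conventions: 1.18 % is the discrete (geometric) yearly multiplier λ = 1.0118, and r = ln λ ≈ 0.0117 is the equivalent continuous (exponential) rate. A minimal sketch of the conversion, using the page's 2010 world-population figure of 6.8 billion (treat these numbers as the page's own, not independently checked):

```python
import math

lam = 1.0118            # discrete yearly multiplier: 1.18% growth per year
r = math.log(lam)       # equivalent continuous (exponential) rate
print(round(r, 4))      # rounds to 0.0117, matching the figure in the text

N0 = 6.8e9              # world population in 2010, as quoted in the page
# One year of growth computed both ways agrees by construction:
discrete = N0 * lam             # geometric model: one pulse per year
continuous = N0 * math.exp(r)   # exponential model: continuous compounding
print(round(discrete / 1e9, 2), round(continuous / 1e9, 2))
```

Both models give about 6.88 billion after one year, matching the page's one-year figure; the two conventions only diverge when a rate fitted in one is plugged into the other without the ln conversion.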
Tragedy of the Commons (historical) He said the disease was increasing "geometrically" on the continent, and that at current prices a total of US250 billion would have to be invested in Africa every year merely to fight it. FTrack the changing rate of a population growing geometrically at a fixed rate. Exponential Growth: If the stock continues to grow at a rate of 4.6% in 4 years each share will be worth So, your investment will be worth about You can use a spreadsheet to graph the exponential growth. Quick Reference A pattern of growth that increases at a geometric rate over a specified time period, such as 2, 4, 8, 16 (in which each value is double the previous one). The year after Business Mastery, LHBCo went from producing 3,000 barrels to 15,400 – a staggering 413.3% increase in production – and was named the fastest growing regional brewery in the nation by the Brewers Association, an incredible feat to accomplish for such a young brewery.$\begingroup$Because you use "exponential distribution" in a non-standard way here, your answer is likely to be misunderstood until you edit it to explain what you mean by exponential distribution. * The architect used geometric techniques to design her home. The present value of growing perpetuity is a way to get the current value of an infinite series of cash flows that grow at a proportionate rate. * Bacteria exhibit geometric increase in numbers when the environment is not limiting. Find more ways to say geometrical, along with related words, antonyms and example phrases at Thesaurus.com, the world's most trusted free thesaurus. How to use geometric in a sentence. In exponential growth, the population grows proportional to the size of the population, so as the population gets larger, the same percent growth will yield a larger numeric growth. Both personal and professional relationships can fall into predictable patterns, with blind spots that can grow geometrically and do serious damage. 
It occurs when the instantaneous rate of change (that is, the derivative) of a quantity with respect to time is proportional to the quantity itself. increasing or decreasing in a geometric progression. If 2010 is time t = 0 and N (0) = 6.8 billion, population size in one year N (1) = 6.8 × e 0.0117, or 6.88 billion. A geometric series is the sum of the numbers in a geometric progression. From: geometric growth in A Dictionary of Environment and Conservation » Coming Boom in Travel, Personal Care, and Restaurants. 11th grade was a long time ago. Taking a … (a) Assuming that the population is growing geometrically, what will the harp seal population be in two years? Flat Organizational Structure Fail #3: You miss or become overwhelmed by problems Without hierarchy, eventually, any leader will be overwhelmed. This zombie idea needs to die. Linear growth Exponential growth is a specific way that a quantity may increase over time. Which Business Categories Will Grow Most in 2021? Creating a coordinate systems. On the one hand, you point out that, Pareto like, hierarchies create inequality. 4 years ago. Mathematically, the growth rate is the intrinsic rate of natural increase, a constant called r, for this population of size N. r is the birth rate b minus the death rate d of the population. Learn term:populations grow = geometrically with free interactive flashcards. While the general best practice is no more than 10 direct reports, in some cases, leaders have tried to have as many as 100 people all reporting up to a single person.. With no structure, the quantity of problems becomes a major issue. Put simply, it is the present value of a series of payment which grows (or declines) at a constant rate each period. another word for exponential. 
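Among the finance fragments on this page is the present value of a growing perpetuity. The standard closed form (not stated in the text itself, so treat it as an outside assumption) is PV = C / (r - g), where C is the first payment, r the discount rate, and g the per-period growth rate, valid only when r > g:

```python
def growing_perpetuity_pv(C, r, g):
    """Present value of payments C, C(1+g), C(1+g)^2, ... discounted at rate r."""
    if r <= g:
        raise ValueError("discount rate must exceed growth rate")
    return C / (r - g)

# A $100 first payment growing 3% per year, discounted at 8%:
pv = growing_perpetuity_pv(100, 0.08, 0.03)
print(round(pv, 6))   # 100 / 0.05, i.e. 2000.0
```

The r > g guard matters: at r = g the geometric series of discounted payments no longer converges and the perpetuity has no finite value.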
geometrically: 1 adv with respect to geometry “this shape is geometrically interesting” adv in a geometric fashion “it grew geometrically ” Antonyms: linearly in a linear manner Population, as Malthus said, naturally tends to grow "geometrically," or, as we would now say, exponentially. A coordinate system is a way of dividing space. geometrically laid-out streets He thought the girl's geometrically cut and excessively … Growing perpetuity can also be referred to as an increasing or graduating perpetuity. That happens naturally. Because the births and deaths at each time point do not change over time, the growth rate of the population in this image is constant. Whenever you have exponential growth, whatever it is that's growing will double its presence/population in a given amount of time. geometric definition: 1. Dec. 16, 2020 – Making a range of assumptions about recovery to interest levels prior to the pandemic, inquiries to Personal Care and Travel and Lodging franchises are likely to grow geometrically in 2021. Solution (a) In year 1, the population change = 950 seals (births) − 150 seals (deaths) = 800 seals Initial population N(0) = 2000 seals, I'd explain it, but all I remember is geometric growth is faster than linear growth but slower than exponential growth. If a population is growing geometrically or exponentially, a plot of the natural logarithm of population size versus time will result in a straight line. Geometric definition is - of, relating to, or according to the methods or principles of geometry. Eq 1 is a very important equation. Almost all of the pictures contain spirals of leaves or flowers. Let's say you start with a …$\endgroup\$ – whuber ♦ Oct 6 '14 at 17:53 fantastical; Anecdotal evidence would indicate that statistically, empirically, or in most of the cases, the 1st form is directly adjectival, whereas, Think for a moment about what you have pointed out and what you suggest as a “solution”. 
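The harp-seal exercise scattered through this page can be worked end to end. Reading the quoted solution fragment (2000 seals initially, 950 births and 150 deaths in year 1) as the intended data, the yearly growth factor is λ = 2800/2000 = 1.4, and geometric growth gives N(t) = N(0) · λ^t:

```python
N0 = 2000                 # initial harp seal population (from the quoted solution)
births, deaths = 950, 150
lam = (N0 + births - deaths) / N0   # per-year geometric growth factor, 1.4

def N(t):
    return N0 * lam ** t

print(round(N(2)))   # part (a): population after two years
print(round(N(8)))   # part (b): population after eight years
```

Under these assumptions part (a) gives 3920 seals after two years and part (b) about 29,516 after eight, which is the point of the exercise: a fixed geometric factor compounds quickly.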
Choose from 431 different sets of term:populations grow = geometrically flashcards on Quizlet. See more. FUnderstand the difference between the static reserve of a nonrenewable resource and its exponential reserve, and calculate the … The exponential growth equation Learn more. Today we’re going to take a look at a very useful system of representing and working with monomials and polynomials: algebra tiles. Here for example, we have for alpha greater than 0, first of all on the top, the case where the magnitude of alpha is greater than 1, so that the sequence is exponentially or geometrically growing. For example: + + + = + × + × + ×. geometrically, electrically; Some words skip the 1st form altogether, so that these words are not used or rarely used. whimsic, theoretic; Some words tend to discourage the use of the 2nd form. Adjective (en adjective) Of, or relating to geometry. It basically defines what coordinates and coordinate systems mean. I think his understanding of the mathematical terms might be greater than yours. A geometric pattern or arrangement is made up of shapes such as squares, triangles, or…. They’re very intuitive and easy to use, and we’re currently incorporating them into our platform so that our students have this resource available when working with polynomials in their daily Smartick sessions. It is both wrong and enourmously confusing to students. Use of the 2nd form geometric increase in numbers when the environment is not limiting words skip the 1st altogether... Or relating to, or according to the methods or principles of geometry olive plantations from 431 different sets term. 1.18 %, so r = 0.0117 leaves or flowers Some words skip the 1st form altogether, that... But slower than exponential growth such as circles, triangles and lines in a decorative.... On the one hand, you point out that, Pareto like, create. The architect used geometric techniques to design her home coming Boom in,... 
https://brilliant.org/problems/one-solution-means-a-lot/
# One solution means a lot!

Algebra Level 4

$\log_a(x^2 - x + 2) > \log_a(-x^2 + 2x + 3)$

If $$x = \frac{4}{9}$$ satisfies the above inequality, then find the sum of all integer solutions of $$x$$ to the inequality.

###### Try my set.
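The given data point pins down the base: at x = 4/9 the left argument is about 1.75 and the right about 3.69, so the inequality can only hold there if log_a is decreasing, i.e. 0 < a < 1. Any base in that interval then yields the same solution set, which a brute-force sketch can enumerate (note this reveals the answer):

```python
import math

a = 0.5  # any base in (0, 1) works; the point x = 4/9 forces the base below 1

def satisfies(x):
    lhs_arg = x * x - x + 2
    rhs_arg = -x * x + 2 * x + 3
    if lhs_arg <= 0 or rhs_arg <= 0:   # both logarithms must be defined
        return False
    return math.log(lhs_arg, a) > math.log(rhs_arg, a)

assert satisfies(4 / 9)                # the given data point
ints = [x for x in range(-10, 11) if satisfies(x)]
print(ints, sum(ints))
```

The check finds the integer solutions 0 and 1 (the strict inequality 2x^2 - 3x - 1 < 0 intersected with the domain -1 < x < 3), so the requested sum is 1.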
2018-03-25 05:25:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8951415419578552, "perplexity": 2267.7596266189325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651820.82/warc/CC-MAIN-20180325044627-20180325064627-00365.warc.gz"}
http://farside.ph.utexas.edu/teaching/389/Quantum/node45.html
# Exercises 1. Demonstrate directly from the fundamental commutation relations for angular momentum, (4.11), that (a) , (b) , and (c) . 2. Demonstrate from Equations (4.74)-(4.79) that where , are conventional spherical angles. In addition, show that 3. A system is in the state . Evaluate , , , and . 4. Derive Equations (4.108) and (4.109) from Equation (4.107). 5. Find the eigenvalues and eigenfunctions (in terms of the angles and ) of . Express the eigenfunctions in terms of the spherical harmonics. 6. Consider a beam of particles with . A measurement of yields the result . What values will be obtained by a subsequent measurement of , and with what probabilities? Repeat the calculation for the cases in which the measurement of yields the results 0 and . 7. The Hamiltonian for an axially symmetric rotator is given by where and are the moments of inertia about the -axis (which corresponds to the symmetry axis), and about an axis lying in the - plane, respectively. What are the eigenvalues of ? [53] 8. The expectation value of in any stationary state is a constant. Calculate for a Hamiltonian of the form Hence, show that in a stationary state. This is another form of the Virial theorem. (See Exercise 9.) [53] 9. Use the Virial theorem of the previous exercise to prove that for an energy eigenstate of the hydrogen atom whose principal quantum number is . 10. Suppose that a particle's Hamiltonian is Show that and . [Hint: Use the Schrödinger representation.] Hence, deduce that [Hint: Use the Heisenberg picture.] Demonstrate that if , where , then 11. Let where is a non-negative integer. Show that 12. Demonstrate that the first few properly normalized radial wavefunctions of the hydrogen atom take the form: 13. Demonstrate that for the hydrogen ground state. In addition, show that 14. Show that the most probable value of in the hydrogen ground state is . 15. 
Demonstrate that where denotes a properly normalized energy eigenket of the hydrogen atom corresponding to the standard quantum numbers , , and . 16. Let denote the expectation value of for an energy eigenstate of the hydrogen atom characterized by the standard quantum numbers , , and . 1. Demonstrate that where and is a well-behaved solution of the differential equation 2. Integrating by parts, show that and as well as 3. Demonstrate from the governing differential equation for that 4. Combine the final result of part (b) with the governing differential equation to prove that 5. Combine the results of parts (c) and (d) to show that Hence, derive Kramers' relation: 6. Use Kramers' relation to prove that 17. Let , where is a properly normalized radial hydrogen wavefunction corresponding to the conventional quantum numbers and , and is the Bohr radius. 1. Demonstrate that 2. Show that in the limit . 3. Demonstrate that 4. Hence, deduce that for . Richard Fitzpatrick 2016-01-22
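As a sketch of the kind of manipulation exercise 1 calls for, assuming the fundamental commutation relations (4.11) are the standard ones, $[L_x, L_y] = i\hbar L_z$ and cyclic permutations, one identity of this type is $[L^2, L_z] = 0$:

```latex
\begin{aligned}
[L^2, L_z] &= [L_x^2 + L_y^2 + L_z^2,\, L_z] \\
           &= L_x[L_x, L_z] + [L_x, L_z]L_x + L_y[L_y, L_z] + [L_y, L_z]L_y \\
           &= -i\hbar\,(L_x L_y + L_y L_x) + i\hbar\,(L_y L_x + L_x L_y) = 0,
\end{aligned}
```

using $[L_x, L_z] = -i\hbar L_y$ and $[L_y, L_z] = i\hbar L_x$.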
2018-02-24 04:02:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8663278818130493, "perplexity": 857.4525582962301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815318.53/warc/CC-MAIN-20180224033332-20180224053332-00099.warc.gz"}
https://lyness.io/matching-on-boolean-values-using-switch-statements
# Matching on boolean values using switch statements All Turing-complete programming languages provide some way to branch conditionally into different sections of code. This is true in the most basic programming languages (i.e. assembly language): In most higher-level languages (e.g. the JavaScript below), this is implemented in the form of an if..then..else statement: In fact, even an instruction set with a single instruction (OISC) will implement conditional branching in some way! However, consider the following slightly messy code: Although this makes programmatic sense, and after a few seconds we can tell what is going on, we allow ourselves a bit of syntactic sugar when we want to branch out on a few different paths — based on the result of one variable — using a switch statement. So, we could equivalently write the above as: This is the normal use for a switch statement — it allows the program to branch into different sub-routines based on the system's evaluation of a variable. However, consider the following code: This code could (will) start to get messy if we start introducing more branches or additional instructions in each branch. Thankfully, the switch statement can help us again. The subject of a switch statement — the a variable in the switch example above — does not need to be a single variable declaration. It can instead be any logical expression with a [truthy (or falsey) value](http://james.padolsey.com/javascript/truthy-falsey/), against which each of the cases can be evaluated for comparison. For example, the above snippet could be re-written as: This implementation of a switch statement — switching on a boolean value and matching one case (or multiple cases, if you remove the break statements) — is a little-known paradigm and one that I’ve found hugely useful. Now I just have to work out who thought it was a good idea that Python not have switch statements at all!
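A minimal, self-contained sketch of this boolean-switch pattern (the function and its thresholds are illustrative, not the original post's snippets): switch on `true`, let each case hold a condition, and the first case whose expression is strictly equal to `true` is taken.

```javascript
// Boolean-switch pattern: each case is an arbitrary condition, and the
// switch subject `true` is compared against each case with strict
// equality, so the first true condition wins.
function classify(a) {
  switch (true) {
    case a < 5:
      return "small";
    case a < 10:
      return "medium";
    default:
      return "large";
  }
}

console.log(classify(7)); // "medium": a < 5 is false, a < 10 is true
```

Removing the `return`/`break` statements would let a matching case fall through into the next one, which is how multiple cases can match.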
2018-11-13 04:35:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3958747982978821, "perplexity": 792.0322700347479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741219.9/warc/CC-MAIN-20181113041552-20181113063552-00155.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/mcrf.2017002
# American Institute of Mathematical Sciences March 2017, 7(1): 21-40. doi: 10.3934/mcrf.2017002 ## Construction of Gevrey functions with compact support using the Bray-Mandelbrojt iterative process and applications to the moment method in control theory Ceremade, Université Paris-Dauphine & CNRS, UMR 7534, PSL, 75016 Paris, France * Corresponding author: Pierre Lissy Received December 2015 Revised October 2016 Published December 2016 In this paper, we construct some interesting Gevrey functions of order $α$ for every $α>1$ with compact support by a clever use of the Bray-Mandelbrojt iterative process. We then apply these results to the moment method, which will enable us to derive some upper bounds for the cost of fast boundary controls for a class of linear equations of parabolic or dispersive type that partially improve the existing results proved in [P. Lissy, On the Cost of Fast Controls for Some Families of Dispersive or Parabolic Equations in One Space Dimension, SIAM J. Control Optim., 52(4), 2651-2676]. However this construction fails to improve the results of [G. Tenenbaum and M. Tucsnak, New blow-up rates of fast controls for the Schrödinger and heat equations, Journal of Differential Equations, 243 (2007), 70-100] in the precise case of the usual heat and Schrödinger equation. Citation: Pierre Lissy. Construction of Gevrey functions with compact support using the Bray-Mandelbrojt iterative process and applications to the moment method in control theory. Mathematical Control & Related Fields, 2017, 7 (1) : 21-40. doi: 10.3934/mcrf.2017002 ##### References: [1] F. Ammar-Khodja, A. Benabdallah, M. Gonzalez-Burgos and L. 
de Teresa, The Kalman condition for the boundary controllability of coupled parabolic systems. Bounds on biorthogonal families to complex matrix exponentials, J. Math. Pures Appl., 96 (2011), 555-590.  doi: 10.1016/j.matpur.2011.06.005.  Google Scholar [2] J. -M. Coron, Control and Nonlinearity, Volume 136 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, 2007.  Google Scholar [3] H. O. Fattorini and D. L. Russell, Exact controllability theorems for linear parabolic equations in one space dimension, Arch. Ration. Mech. Anal., 43 (1971), 272-292.   Google Scholar [4] E. Güichal, A lower bound of the norm of the control operator for the heat equation, J. Math. Anal. Appl., 110 (1985), 519-527.  doi: 10.1016/0022-247X(85)90313-0.  Google Scholar [5] X. Guo and M. Xu, Some physical applications of fractional Schrödinger equation J. Math. Phys., 47 (2006), 082104, 9pp. doi: 10.1063/1.2235026.  Google Scholar [6] L. Hörmander, The Analysis of Linear Partial Differential Operators, Ⅰ. Distribution Theory and Fourier Analysis. Classics in Mathematics. Springer-Verlag, Berlin, 2003. ⅹ+440 pp.  Google Scholar [7] L. Ho and D. Russell, Admissible input elements for systems in Hilbert space and a Carleson measure criterion, SIAM J. Control Optim., 21 (1983), 614-640.  doi: 10.1137/0321037.  Google Scholar [8] A. E. Ingham, Some trigonometrical inequalities with applications to the theory of series, Math. Z., 41 (1936), 367-379.  doi: 10.1007/BF01180426.  Google Scholar [9] P. Lissy, A link between the cost of fast controls for the 1-D heat equation and the uniform controllability of a 1-D transport-diffusion equation, C. R. Math. Acad. Sci., Paris, 350 (2012), 591-595. doi: 10.1016/j.crma.2012.06.004.  Google Scholar [10] P. 
Lissy, An application of a conjecture due to Ervedoza and Zuazua concerning the observability of the heat equation in small time to a conjecture due to Coron and Guerrero concerning the uniform controllability of a convection-diffusion equation in the vanishing viscosity limit, Systems and Control Letters, 69 (2014), 98-102.  doi: 10.1016/j.sysconle.2014.04.011.  Google Scholar [11] P. Lissy, On the cost of fast controls for some families of dispersive or parabolic equations in one space dimension, SIAM J. Control Optim., 52 (2014), 2651-2676.  doi: 10.1137/140951746.  Google Scholar [12] P. Lissy, Explicit lower bounds for the cost of fast controls for some 1-D parabolic or dispersive equations, and a new lower bound concerning the uniform controllability of the 1-D transport-diffusion equation, J. Differential Equations, 259 (2015), 5331-5352.  doi: 10.1016/j.jde.2015.06.031.  Google Scholar [13] S. Mandelbrojt, Analytic functions and classes of infinitely differentiable functions, Rice Inst. Pamphlet, 29 (1942), 142 pp.  Google Scholar [14] R. Metzler and J. Klafter, The restaurant at the end of the random walk: Recent developments in the description of anomalous transport by fractional dynamics, J. Phys. A, 37 (2004), R161-R208.  doi: 10.1088/0305-4470/37/31/R01.  Google Scholar [15] L. Miller, How Violent are Fast Controls for Schrödinger and Plate Vibrations?, Arch. Ration. Mech. Anal., 172 (2004), 429-456.  doi: 10.1007/s00205-004-0312-y.  Google Scholar [16] L. Miller, Geometric bounds on the growth rate of null-controllability cost for the heat equation in small time, J. Differential Equations, 204 (2004), 202-226.  doi: 10.1016/j.jde.2004.05.007.  Google Scholar [17] L. Miller, On the controllability of anomalous diffusions generated by the fractional Laplacian, Mathematics of Control, Signals and Systems, 18 (2006), 260-271.  doi: 10.1007/s00498-006-0003-3.  Google Scholar [18] R. M. 
Redheffer, Completeness of sets of complex exponentials, Advances in Math., 24 (1977), 1-62. Google Scholar [19] L. Rodino, Linear Partial Differential Operators in Gevrey Spaces, World Scientific Publishing Co., Inc., River Edge, NJ, 1993. doi: 10.1142/9789814360036. Google Scholar [20] W. Rudin, Real and Complex Analysis, Third edition. McGraw-Hill Book Co., New York, 1987. Google Scholar [21] T. Seidman, Two results on exact boundary control of parabolic equations, Appl. Math. Optim., 11 (1984), 145-152. doi: 10.1007/BF01442174. Google Scholar [22] T. Seidman, S. A. Avdonin and S. A. Ivanov, The "window problem" for series of complex exponentials, J. Fourier Anal. Appl., 6 (2000), 233-254. doi: 10.1007/BF02511154. Google Scholar [23] G. Tenenbaum and M. Tucsnak, New blow-up rates of fast controls for the Schrödinger and heat equations, Journal of Differential Equations, 243 (2007), 70-100. doi: 10.1016/j.jde.2007.06.019. Google Scholar Figures: Difference between $C_S(\alpha)$ and the upper bound of [11] with respect to $\alpha$. Difference between $C_S(\alpha)$ and the lower bound of [12] with respect to $\alpha$. Difference between $C_H(\alpha)$ and the upper bound of [11] with respect to $\alpha$. Difference between $C_H(\alpha)$ and the lower bound of [12] with respect to $\alpha$.
2020-02-24 06:31:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7755900025367737, "perplexity": 2837.2383408918827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145897.19/warc/CC-MAIN-20200224040929-20200224070929-00409.warc.gz"}
https://www.physicsforums.com/threads/set-notation.163519/
# Set Notation 1. Mar 31, 2007 ### danago Hi. I'm just wondering, is there any notation I can use to refer to elements ONLY in a certain set. Usually, given sets A, B and C, I could refer to elements of only set A as $$A \cap \overline B \cap \overline C$$, but is there some notation that specifically refers to elements ONLY in a certain set? 2. Apr 1, 2007 ### ZioX I can't think of anything offhand. The question, as posed, is silly. Care to put it in context? 3. Apr 1, 2007 ### f(x) $$\in A$$ .............?? 4. Apr 1, 2007 ### HallsofIvy Staff Emeritus Set difference is the usual notation: A\B is the set of elements of A that are not in B. A\(B∪C) is the set of elements of A that are not in B or C.
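HallsofIvy's set-difference notation translates directly into code; a small JavaScript sketch (the sets are made-up examples) computes A\(B∪C), the elements of A that are in neither B nor C:

```javascript
// A \ (B ∪ C): keep the elements of A that are absent from both B and C.
const A = new Set([1, 2, 3, 4, 5]);
const B = new Set([2, 4]);
const C = new Set([4, 5, 6]);

const onlyA = [...A].filter(x => !B.has(x) && !C.has(x));

console.log(onlyA); // [1, 3]
```

This is exactly the quantity danago wrote as $$A \cap \overline B \cap \overline C$$: intersecting with a complement is the same as filtering out membership.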
2017-04-26 00:13:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6450663805007935, "perplexity": 980.6866324891646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121000.17/warc/CC-MAIN-20170423031201-00545-ip-10-145-167-34.ec2.internal.warc.gz"}
https://tribology.asmedigitalcollection.asme.org/fluidsengineering/article-abstract/129/5/659/443969/Conditionally-Sampled-Turbulent-and-Nonturbulent?redirectedFrom=fulltext
Conditionally-sampled boundary layer data for an accelerating transitional boundary layer have been analyzed to calculate the entropy generation rate in the transition region. By weighting the nondimensional dissipation coefficient for the laminar-conditioned data and the turbulent-conditioned data with the intermittency factor $\gamma$, the average entropy generation rate in the transition region can be determined and hence be compared to the time-averaged data and correlations for steady laminar and turbulent flows. It is demonstrated that this method provides, for the first time, an accurate and detailed picture of the entropy generation rate during transition. The data used in this paper have been taken from detailed boundary layer measurements available in the literature. This paper provides, using an intermittency-weighted approach, a methodology for predicting entropy generation in a transitional boundary layer.
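A hedged sketch of the intermittency weighting described in the abstract; the function name and the sample values are illustrative, not taken from the paper:

```javascript
// Intermittency-weighted average of the nondimensional dissipation
// coefficient: gamma = 0 is fully laminar flow, gamma = 1 fully
// turbulent, and the transition region blends the two conditioned values.
function dissipationCoefficient(gamma, cdLaminar, cdTurbulent) {
  return (1 - gamma) * cdLaminar + gamma * cdTurbulent;
}

console.log(dissipationCoefficient(0, 2, 8));   // 2 (laminar limit)
console.log(dissipationCoefficient(1, 2, 8));   // 8 (turbulent limit)
console.log(dissipationCoefficient(0.5, 2, 8)); // 5 (midway through transition)
```

The same weighting, integrated across the transition region, is what lets the paper compare transitional entropy generation against the steady laminar and turbulent correlations.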
2022-12-01 07:40:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6132285594940186, "perplexity": 12149.856661753278}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710801.42/warc/CC-MAIN-20221201053355-20221201083355-00819.warc.gz"}
http://openstudy.com/updates/55f95c53e4b08f8c5b46bd18
## anonymous, one year ago: What is the next term for the given arithmetic sequence? -3, -2.25, -1.5, -0.75, ...

1. Nnesha: First you should find the common difference. An arithmetic sequence is one where you add (or subtract) the same value to get the next term, so to find the common difference you subtract the 1st term from the 2nd term, or the 2nd term from the 3rd: $\large\rm d=a_2-a_1 = a_3-a_2 = a_4-a_3$

2. Nnesha: Here $a_2$ is the 2nd term and $a_1$ is the first term.

3. anonymous: Basically, you subtract each term from the term that follows it.
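A quick numeric sketch of the rule above (plain Python; the function name is my own):

```python
def next_term(seq):
    """Return the next term of an arithmetic sequence,
    using the common difference d = a2 - a1."""
    d = seq[1] - seq[0]
    # sanity check: an arithmetic sequence has a constant difference
    assert all(b - a == d for a, b in zip(seq, seq[1:]))
    return seq[-1] + d

print(next_term([-3, -2.25, -1.5, -0.75]))  # 0.0
```

Here d = -2.25 - (-3) = 0.75, so the next term is -0.75 + 0.75 = 0.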
https://quant.stackexchange.com/questions/32711/should-price-impact-be-the-same-for-positive-negative-implied-volatility-shocks
# Should price impact be the same for positive/negative implied volatility shocks?

I am using a vendor system to stress a portfolio which contains (among others) derivatives with implied volatility exposure. The issue is that when using a 1000 bps implied volatility stress upwards and downwards, the results are really close in both cases (with opposite signs, obviously). Is this expected?

It may help you to notice that, for a bump in implied volatility $\delta \sigma$, the impact on the price of the derivative $V$ is given by: $$\delta V = \underbrace{\frac{\partial V}{\partial \sigma}}_{\text{Vega}} \delta \sigma + \frac{1}{2} \underbrace{\frac{\partial^2 V}{\partial \sigma^2}}_{\text{Volga, Vomma}} (\delta \sigma)^2 + o((\delta \sigma)^2)$$ Hence, the positive ($\delta V^P$) and negative ($\delta V^N$) price impacts for respective bumps $\delta \sigma^P= \vert\delta\sigma\vert$ and $\delta\sigma^N = - \vert\delta\sigma\vert$ are: $$\delta V^P = \frac{\partial V}{\partial \sigma} \mid \delta \sigma \mid + \frac{1}{2} \frac{\partial^2 V}{\partial \sigma^2} (\delta \sigma)^2$$ $$\delta V^N = -\frac{\partial V}{\partial \sigma} \mid \delta \sigma \mid + \frac{1}{2} \frac{\partial^2 V}{\partial \sigma^2} (\delta \sigma)^2$$ hence $$\delta V^P = -\delta V^N + \frac{\partial^2 V}{\partial \sigma^2} (\delta \sigma)^2 + o((\delta \sigma)^3)$$ and when there is no Volga (also called Vomma): $$\delta V^P = - \delta V^N$$ For illustration purposes, here is the Volga curve of a vanilla option of time to maturity $\tau$ as a function of forward moneyness $m=K/F(0,\tau)$. Observe how an ATM option has no Volga and how this changes as you move away from the money.

• Short answer: because $\sigma$ is the return volatility, not the price volatility. Long(er) answer: the price of a European option writes $V(T,\theta) = \int_{0}^{+\infty} h(S,\theta) q(T,S) dS$ where $T$ is the maturity, $\theta$ some contract parameters (e.g. strike for call/put), and $q(T,S) = d\Bbb{Q}(S_T \leq S)/dS$ the distribution of $S_T$ under the risk-neutral measure. Under BS, $q(T,S)$ is fully characterised by its first 2 moments, the mean $F(0,T)$ (forward price) and the variance $F^2(0,T)(e^{\sigma^2 T}-1)$. Thus you see that the dependence on $\sigma$ is non-symmetric. – Quantuple Feb 27 '17 at 12:57
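To see the asymmetry numerically, here is a small Black-Scholes sketch (plain Python, stdlib only; the strike, maturity and bump size are arbitrary illustrative values, not taken from the question):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, K, T, r, sigma, bump = 100.0, 150.0, 1.0, 0.0, 0.20, 0.10
base = bs_call(S, K, T, r, sigma)
up   = bs_call(S, K, T, r, sigma + bump) - base   # impact of +bump
down = bs_call(S, K, T, r, sigma - bump) - base   # impact of -bump
print(up, down, up + down)
```

For this OTM strike, `up + down` is clearly positive: the first-order (Vega) contributions cancel in the sum, and what remains is the Volga term.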
http://mathoverflow.net/questions/37540/reference-for-weak-semigroup/109075
# reference for weak*-semigroup

Let $X$ be a dual Banach space (there exists a Banach space $Y$ such that $X=Y'$). A weak* semigroup on $X$ is a semigroup $(T_t)_{t\geq 0}$ on $X$ such that, for all $x\in X$, we have $T_tx\xrightarrow[t \to 0^+]{}x$ in the weak* topology. I know a lot of books about $C_0$-semigroups but not about weak* semigroups. Do you know a good place to read about weak* semigroups and their generators? A book would be perfect.

- In your first line, presumably you mean convergence in the w*-topology as $t$ tends to 0, for each $x$? – Yemon Choi Sep 2 '10 at 21:36
- Even that wouldn't make sense, unless $X$ is the dual of something. – Nate Eldredge Sep 2 '10 at 22:34
- Thank you very much for your answers. However, I forgot a part of the statement. I am sorry. – BigBill Sep 3 '10 at 6:44
- If the $T_t$ are all weak$^*$ continuous, you are just looking at the dual semigroup to a strongly continuous semigroup on the predual. – Bill Johnson Sep 3 '10 at 18:41

Echoing the remark of @Bill Johnson, one possibility is van Neerven's book on adjoint semigroups.

I don't quite follow your notation, but I'll answer what I think you might be asking. A $C_0$ or strongly continuous semigroup of operators $T_t$ on a Banach space $X$ is one such that $T_t x \to x$ in norm as $t \to 0$, i.e. $||T_t x - x||_X \to 0$. In other words, $T_t \to I$ in the strong operator topology. A weakly continuous semigroup $T_t$ has $T_t x \to x$ weakly as $t \to 0$, i.e. $f(T_t x) \to f(x)$ for each $f \in X^*$. In other words, $T_t \to I$ in the weak operator topology. In fact, these two conditions are equivalent. This appears as Theorem 1.6 of K.-J. Engel and R. Nagel, A Short Course on Operator Semigroups. So this is why you never hear anyone talking about weakly continuous semigroups.

- Thank you very much for your answer. However, I forgot a part of the statement. I am sorry. – BigBill Sep 3 '10 at 6:45
https://www.reddit.com/r/haskell/comments/qcrhj/level_0_a_snake_clone_using_sdl_with_a_nice/
[–] 11 points (1 child)

    (World _ _ _ _ _ _ _ _ _ _ [GetMap] _) -> do

Don't do this. You can pattern-match with {} for named getters. Also, instead of Bools in https://github.com/mikeplus64/Level-0/blob/master/src/Types.hs -- why not define your own data-types, so instead of True/False appearing in use cases, you can see the meaning behind the boolean?

[–][S] 2 points (0 children)

Somehow I didn't know about pattern matching with records before, thanks.
http://etna.mcs.kent.edu/volumes/2001-2010/vol36/abstract.php?vol=36&pages=113-125
## On an unsymmetric eigenvalue problem governing free vibrations of fluid-solid structures

Markus Stammberger and Heinrich Voss

### Abstract

In this paper we consider an unsymmetric eigenvalue problem occurring in fluid-solid vibrations. We present some properties of this eigenvalue problem and a Rayleigh functional which allows for a min-max characterization. With this Rayleigh functional the one-sided Rayleigh functional iteration converges cubically, and a Jacobi-Davidson-type method improves the local and global convergence properties.

Full Text (PDF) [204 KB]

### Key words

eigenvalue, variational characterization, minmax principle, fluid-solid interaction, Rayleigh quotient iteration, Jacobi-Davidson method

### AMS subject classifications

65F15

### ETNA articles which cite this article

Vol. 40 (2013), pp. 82-93. Aleksandra Kostić and Heinrich Voss: On Sylvester's law of inertia for nonlinear eigenvalue problems
https://www.studysmarter.us/textbooks/math/linear-algebra-with-applications-5th/orthogonality-and-least-squares/q4e-find-the-angle-between-each-of-the-pairs-of-vectors-and-/
Q4E

### Linear Algebra With Applications

Book edition: 5th. Author(s): Otto Bretscher. 442 pages. ISBN 9780321796974. Found on page 215.

# Find the angle $\theta$ between each of the pairs of vectors $\vec{u}$ and $\vec{v}$ in exercises 4 through 6.

4. $\vec{u}=\begin{bmatrix}1\\ 1\end{bmatrix},\ \vec{v}=\begin{bmatrix}7\\ 11\end{bmatrix}.$

The angle $\theta$ between $\vec{u}$ and $\vec{v}$ is about $12.53°$.

## Step 1: Angle between two vectors

Consider two nonzero vectors $\vec{x}$ and $\vec{y}$ in $\mathbb{R}^n$. The angle between these vectors is defined as:

$\theta = \arccos\dfrac{\vec{x}\cdot\vec{y}}{\lVert\vec{x}\rVert\,\lVert\vec{y}\rVert}$

## Step 2: Substitute the values into the angle formula

$\theta = \arccos\dfrac{\vec{u}\cdot\vec{v}}{\lVert\vec{u}\rVert\,\lVert\vec{v}\rVert} = \arccos\dfrac{1\cdot 7+1\cdot 11}{\sqrt{1^2+1^2}\,\sqrt{7^2+11^2}} = \arccos\dfrac{18}{\sqrt{2}\,\sqrt{170}} = \arccos\dfrac{18}{\sqrt{340}} \approx \arccos(0.9762) \approx 12.53°$

Hence, the value of $\theta$ is about $12.53°$.
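The arithmetic can be double-checked in a few lines of plain Python (the function name is mine):

```python
from math import acos, degrees, sqrt

def angle_deg(u, v):
    """Angle between two vectors in degrees, via arccos(u.v / (|u||v|))."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: sqrt(sum(a * a for a in w))
    return degrees(acos(dot / (norm(u) * norm(v))))

print(angle_deg([1, 1], [7, 11]))  # ≈ 12.53
```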
https://spmaddmaths.blog.onlinetuition.com.my/2020/06/the-sum-of-first-n-terms-of-arithmetic.html
# 5.2.3 Sum of the First n Terms of an Arithmetic Progression

(F) Sum of the first $n$ terms of an arithmetic progression:

$S_n=\frac{n}{2}\left[2a+(n-1)d\right] \qquad S_n=\frac{n}{2}(a+l)$

a = first term
d = common difference
n = the number of terms
l = the last term
$S_n$ = the sum of the first n terms

Example: Calculate the sum of each of the following arithmetic progressions.
(a) -11, -8, -5, … up to the first 15 terms.
(b) 8, 10½, 13, … up to the first 13 terms.
(c) 5, 7, 9, …, 75 [Smart TIPS: The last term is given, so you can find the number of terms, n]

Solution:

(a) $-11,-8,-5,\dots$ Find $S_{15}$.
$a=-11$, $d=-8-(-11)=3$
$S_{15}=\frac{15}{2}\left[2(-11)+14(3)\right]=150$

(b) $8,10\frac{1}{2},13,\dots$ Find $S_{13}$.
$a=8$, $d=10\frac{1}{2}-8=\frac{5}{2}$
$S_{13}=\frac{13}{2}\left[2(8)+12\left(\frac{5}{2}\right)\right]=299$

(c) $5,7,9,\dots,75$ (the last term is $l=75$).
$a=5$, $d=7-5=2$
First find $n$: $T_n=75 \Rightarrow a+(n-1)d=75 \Rightarrow 5+(n-1)(2)=75 \Rightarrow n-1=35 \Rightarrow n=36$
$S_{36}=\frac{36}{2}(5+75)=1440$
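The three sums can be verified with a short Python check:

```python
def arith_sum(a, d, n):
    """Sum of the first n terms: S_n = n/2 * (2a + (n-1)d)."""
    return n * (2 * a + (n - 1) * d) / 2

print(arith_sum(-11, 3, 15))   # 150.0
print(arith_sum(8, 2.5, 13))   # 299.0
n = (75 - 5) // 2 + 1          # solve a + (n-1)d = 75 for n
print(n, arith_sum(5, 2, n))   # 36 1440.0
```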
http://math.stackexchange.com/questions/144748/branch-points-of-riemann-surfaces
# Branch Points of Riemann Surfaces Can a Riemann surface of a complex-valued function have three branch points? I've been learning about Riemann surfaces from Brown's complex analysis book and the exposition isn't too general, so if the answer is yes I'd appreciate not just an example but some of the intuition behind how many branch points a given Riemann surface can have. - Consider the algebraic curve $X$ in $\mathbb{C}^2$ defined by the zeroes of the polynomial $p(z,w)=w^3-z(z^2-1)$. This can be made into a Riemann surface as a consequence of the Implicit Function Theorem, as you probably know. Now define $f:X\rightarrow\mathbb{C}$ by $f(z,w)=z$. Then $f$ has degree 3. However the points $z=0,\pm 1$ have only a single preimage in $X$. Hence they are branch points with branching order 3. In terms of more general theory I think it makes more sense once you've done some algebraic geometry (which I don't know that much about yet, sadly)! However, heuristically it does seem appropriate that projection maps from algebraic curves defined by cubics should have 3 branch points. More generally you can see how to construct maps with $n$ branch points.
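A quick numeric illustration of the branching (plain Python; the helper name is mine): for the projection $f(z,w)=z$ on the curve $w^3=z(z^2-1)$, a point $z$ has as many preimages as the equation $w^3=z(z^2-1)$ has distinct roots in $w$.

```python
import cmath

def preimages(z, tol=1e-9):
    """Distinct solutions w of w**3 = z*(z**2 - 1)."""
    c = z * (z**2 - 1)
    if abs(c) < tol:
        return [0.0]                      # triple root at w = 0
    r = abs(c) ** (1 / 3)
    theta = cmath.phase(c)
    # the three cube roots of c
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3) for k in range(3)]

for z in (0, 1, -1, 2):
    print(z, len(preimages(z)))   # z = 0, ±1 have 1 preimage; generic z has 3
```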
https://zenodo.org/record/3759811/export/schemaorg_jsonld
Dataset Open Access

# Regionalized Cultural Access and Participation (Books And Libraries) And Science Attitudes Variables (2013)

Daniel Antal

### JSON-LD (schema.org) Export

{
  "description": "<p>This dataset was created from the microdata of the Eurobarometer 79.2 survey using the development version of the eurobarometer package.</p>\n\n<p>The read a book variable is a weighted sum of the responses that chose from QB1 How many times in the last twelve months have you read a book? any answer apart from &quot;not in the last 12 months.&quot;</p>\n\n<p>The library access variable is a weighted sum of the responses that chose from QB1 How many times in the last twelve months have you visited a public library? any answer apart from &quot;not in the last 12 months.&quot;</p>\n\n<p>The limited library access variable is a weighted sum of the responses that chose from the question block<br>\nQB2 And for each of the following activities, please tell me why you haven&rsquo;t done it or haven&rsquo;t done it more often in the last 12 months? ... Visited a public library the answer option Limited or poor quality of this activity in the place where you live. In this case, the number of respondents is rather low and this is not a very reliable statistic on the regional level.</p>\n\n<p>The supports open access variable is a weighted sum of yes answer options to the QD 17 Do you think that the results of publicly funded research should be made available online free of charge? question.</p>\n\n<p>The internet access variable is a weighted sum of responses to the answer option for D46 Which of the following do you have? - An Internet connection at home.</p>\n\n<p>The student variable is a weighted sum of responses to the answer option for D15 What is your current occupation? - student.</p>",
  "creator": [
    {
      "@id": "https://orcid.org/0000-0001-7513-6760",
      "@type": "Person",
      "name": "Daniel Antal"
    }
  ],
  "url": "https://zenodo.org/record/3759811",
  "datePublished": "2020-04-21",
  "keywords": [
    "Eurobarometer",
    "Books",
    "Libraries"
  ],
  "@context": "https://schema.org/",
  "distribution": [
    {
      "contentUrl": "https://zenodo.org/api/files/deb02959-b46b-4df5-a9fb-56f2f0086a26/books_library_eurobarometer_79_2.csv",
      "encodingFormat": "csv"
    }
  ],
  "identifier": "https://doi.org/10.5281/zenodo.3759811",
  "@id": "https://doi.org/10.5281/zenodo.3759811",
  "@type": "Dataset",
  "name": "Regionalized Cultural Access and Participation (Books And Libraries) And Science Attitudes Variables (2013)"
}
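A minimal sketch of consuming such a JSON-LD record programmatically (plain Python; the snippet embeds a trimmed copy of the record rather than fetching it over the network):

```python
import json

record = json.loads("""
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "Regionalized Cultural Access and Participation (Books And Libraries) And Science Attitudes Variables (2013)",
  "identifier": "https://doi.org/10.5281/zenodo.3759811",
  "distribution": [
    {"contentUrl": "https://zenodo.org/api/files/deb02959-b46b-4df5-a9fb-56f2f0086a26/books_library_eurobarometer_79_2.csv",
     "encodingFormat": "csv"}
  ]
}
""")

# Pull out the DOI and the first downloadable file.
doi = record["identifier"]
csv_url = record["distribution"][0]["contentUrl"]
print(doi)
print(csv_url.rsplit("/", 1)[-1])   # books_library_eurobarometer_79_2.csv
```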
https://math.stackexchange.com/questions/4250320/asymptotic-expansions-with-compactly-supported-terms-and-smoothing-operators
# Asymptotic expansions with compactly supported terms and smoothing operators

Suppose that we have a pseudodifferential operator $A\in \Psi^m(\mathbb{R}^n)$ with symbol $a\in S^m(\mathbb{R}^n\times\mathbb{R}^n)$, and $a$ has an asymptotic expansion $a\sim \sum\limits_{j=0}^\infty a_j.$ Suppose further that each $a_j$ is compactly supported, with the supports non-increasing in $j$. Is $a\in S^{-\infty}(\mathbb{R}^n\times\mathbb{R}^n)$? I know that compactly supported symbols are smoothing (but the expansion need not preserve the support structure) and so are symbols where each term in the expansion is zero. I'm not sure if assuming that the supports form a non-increasing chain helps. Here's an attempt which does not use the precise support structure and feels wrong: Fix $k\in\mathbb{R}.$ Then, we can write $a=\sum\limits_{j=0}^{k-1}a_j+r_k,$ with $r_k\in S^k(\mathbb{R}^n\times\mathbb{R}^n).$ Next, $$|\partial_\xi^\alpha\partial_x^\beta a(x,\xi)|\leq \sum\limits_{j=0}^{k-1} |\partial_\xi^\alpha\partial_x^\beta a_j(x,\xi)|+|\partial_\xi^\alpha\partial_x^\beta r_k(x,\xi)|\leq C_{\alpha,\beta,k}\langle\xi\rangle^{k-|\alpha|},$$ using that $r_k\in S^k(\mathbb{R}^n\times\mathbb{R}^n)$ and that each $a_j$ is compactly supported. There is an issue in this line of logic, though (I believe): the constant depends on $k$ since it depends on where I truncate the asymptotic expansion. Is this argument flawed? If so, is the claim true? You have one slight error that may be confusing you, namely that $r_k\in S^{m-k}$, not $S^k$, so the $r_k$ are getting better, not worse. In particular, you actually don't really need to do any work; it follows just from the fact that $S^l$ is closed under summation. More specifically, to show that $a\in S^{-\infty}$, it suffices to show that $a\in S^{m-N}$ for all $N\in \mathbb{N}$.
Then we have by the definition of asymptotic summation that $$a-\sum_{i=0}^{N-1}a_i=r_N\in S^{m-N}.$$ On the other hand $$a_i \in S^{-\infty}\subseteq S^{m-N}$$ by assumption, so that $$\sum_{i=0}^{N-1}a_i\in S^{m-N}$$ as well. Thus $$a\in S^{m-N},$$ which completes the proof. In particular, you actually don't need to assume anything about the joint support of the $a_i$. On the other hand, if you assume that $a=\sum_{i=1}^{\infty}a_i$ literally, instead of asymptotically, you do need your support condition. This follows from the fact that any member of $S^m$ with compact support is in $S^{-\infty}$. • Indeed, I was just about to edit the first mistake! Sep 14, 2021 at 21:50 • Ah, that's quite obvious actually. Do you know, by chance, whether with the given support properties we can glean anything about the support of the full symbol? Sep 14, 2021 at 21:52 • @user900940 You can always add a symbol in $S^{-\infty}$ without affecting the asymptotic expansion, which can be chosen to have arbitrary support, so sadly I don't think you can get any information on the support of $a$. – pax Sep 14, 2021 at 21:55 • Of course, thank you! Sep 14, 2021 at 21:58
https://aminsaied.wordpress.com/2012/07/30/1-eulers-formula-and-the-five-platonic-solids/
# Euler's formula and the five Platonic solids

Euler's Formula

Euler's formula is a statement about convex polyhedra, that is, solids whose surface consists of polygons, called faces, such that any side of a face lies on precisely one other face, and such that for any two points on the solid, the straight line connecting them lies entirely within the solid. Each convex polyhedron carries with it certain data, for example the number of faces F, the number of edges E, and the number of vertices V. Euler's formula gives a relationship between these data, namely

V-E+F=2

for all convex polyhedra. That this specific alternating sum should remain constant no matter which convex polyhedron you feed it is not at all obvious. We will try to prove it by induction, the obvious question being: induction on what? To answer that we first make a cute observation: we can transfer the data of our polyhedron into data about a connected graph in a simple way. To see how, let's look at the example of a cube. If you imagine peering through one of the faces like a window and tracing out what you see, you might end up with something like this:

The important observation is that this is a connected plane graph which has the same number of edges and vertices as the cube, and one fewer face (we lose the face that we were peering through). We can see that in fact this is the case for all convex polyhedra. In this way we translate the data $(V,E,F)$ about our polyhedron into data $(v,e,f)$ about a connected graph, where $V=v, E=e, F=f+1$. Therefore proving Euler's formula is equivalent to proving the very similar statement that

v-e+f=1

for any connected plane graph. We prove this by induction.

Proof: For one edge there is only one possible connected plane graph, and for two edges there is also only one possibility; in both cases one can verify the desired formula: in case $e=1$ we have $v-e+f=2-1+0=1$, and in case $e=2$ we have $v-e+f=3-2+0=1$.
With three edges there are multiple possible graphs, but we don't need to worry about that. Let's prove the inductive step: suppose that the formula holds whenever there are $n$ edges, and suppose we have a graph $\Gamma$ with $n+1$ edges, $\nu$ vertices and $\mu$ faces.

If $\mu >0$, delete an edge from a face to obtain a new graph $\Gamma'$ with $(v,e,f) = (\nu, n, \mu-1)$. Since $\Gamma'$ has $n$ edges, the inductive hypothesis gives $\nu - n + (\mu -1) = 1$, that is, $\nu - (n+1) + \mu = 1$, and this is exactly what we needed to show for $\Gamma$.

If instead $\mu = 0$, then $\Gamma$ must have an end vertex (a vertex joined by only one edge). Deleting an end vertex and its edge gives a new graph $\Gamma''$ with $\nu-1$ vertices and $n$ edges, so by the inductive hypothesis $(\nu-1)-n+0=1$, hence for $\Gamma$ we get $\nu - (n+1) + 0 = 1$, which is again just what we wanted to show. This completes the proof of Euler's formula. $\Box$

The Five Platonic Solids

We say a polyhedron is regular if it is made up of one kind of regular polygon such that each vertex has the same number of edges. This allows us to define the face degree $p$, which is the number of sides each face has, and the vertex degree $q$, which is the number of edges meeting at each vertex. So we have the data $V, E, F, p, q$ associated to a regular polyhedron. We can now use Euler's formula to prove the remarkable result that there are only 5 regular polyhedra, the so-called Platonic solids. They are the tetrahedron, cube, octahedron, dodecahedron and icosahedron. We record the data in the table below:

Tetrahedron: V=4, E=6, F=4, p=3, q=3
Cube: V=8, E=12, F=6, p=4, q=3
Octahedron: V=6, E=12, F=8, p=3, q=4
Dodecahedron: V=20, E=30, F=12, p=5, q=3
Icosahedron: V=12, E=30, F=20, p=3, q=5

Theorem: These are the only regular polyhedra.

Proof: First observe the following two relations:

$pF=2E$

$qV = 2E$

For the first, if we count all the edges that each face has, and do this for every face, we count each edge exactly twice.
Similarly for the second relation: if we count the edges meeting at each vertex, and do so for all vertices, we will again count each edge twice. We can now substitute $V = \frac{2E}{q}$ and $F=\frac{2E}{p}$ into Euler’s formula $V-E+F=2$ to obtain

$\frac{2E}{q}-E+\frac{2E}{p} = 2,$

which becomes

$\frac{1}{q} + \frac{1}{p} = \frac{1}{2}+\frac{1}{E}.$

Since $E>0$ the left-hand side must exceed $\frac12$, and since each face has at least 3 sides and at least 3 edges meet at each vertex, $p, q \geq 3$. From this equation we can deduce that the only possibilities for the pair $(p,q) \in \mathbb{N}^2$ are $(3,3), (3,4), (4,3), (3,5)$ and $(5,3)$. It is geometrically clear that these 5 pairs lead to the 5 Platonic solids listed above, and to no others!  $\Box$

## Dual Polyhedra

We can describe the Platonic solids by the coordinates of their vertices (here $\phi$ denotes the golden ratio):

Tetrahedron: $(1,1,1)$, $(1,-1,-1)$, $(-1,-1,1)$, $(-1,1,-1)$

Cube: $(1,1,1)$, $(1,1,-1)$, $(-1,1,1)$, $(1,-1,1)$, $(1,-1,-1)$, $(-1,1,-1)$, $(-1,-1,1)$, $(-1,-1,-1)$

Octahedron: $(1,0,0)$, $(0,0,1)$, $(0,1,0)$, $(-1,0,0)$, $(0,-1,0)$, $(0,0,-1)$

Dodecahedron: $(0, \pm\phi^{-1}, \pm\phi)$, $(\pm\phi^{-1}, \pm\phi, 0)$, $(\pm\phi, 0, \pm\phi^{-1})$, $(\pm1, \pm1, \pm1)$

Icosahedron: $(1,0,\phi)$, $(1,0,-\phi)$, $(-1,0,\phi)$, $(-1,0,-\phi)$, $(0,\phi,1)$, $(0,\phi,-1)$, $(0,-\phi,1)$, $(0,-\phi,-1)$, $(\phi,1,0)$, $(\phi,-1,0)$, $(-\phi,1,0)$, $(-\phi,-1,0)$

Given a Platonic solid, putting a vertex at the midpoint of each face gives the vertices of the dual polyhedron. It should not be too much of a leap to believe that this dual is itself a Platonic solid. Well, we have just classified the Platonic solids, and there are only five of them, so taking the dual of one doesn’t give a ‘new’ shape; rather it gives one of the five we already have. Given the data $(V,E,F,p,q)$ of a Platonic solid, say $P$, what can we say about the associated data of its dual, $P'$? Well, each face in $P$ gives a vertex in $P'$, by definition of the dual construction.
Again, it is geometrically clear that each vertex corresponds to a face in the dual, so we get $F \leftrightarrow V$, the number of edges remains the same, and $p \leftrightarrow q$. This implies that $(P')'=P$. Indeed, looking at the table above we get the following dual pairs: cube $\leftrightarrow$ octahedron, dodecahedron $\leftrightarrow$ icosahedron, tetrahedron $\leftrightarrow$ tetrahedron. It can be fun to picture a dual $P'$ sitting inside the original shape $P$, then its dual $P''=P$ sitting inside $P'$, and so on, getting smaller and smaller indefinitely! Having this image in mind, one sees that dual shapes share the same symmetry group.
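As a quick sanity check, the table and the classification argument can both be verified numerically (a Python sketch; the tabulated values are the standard data for the five solids):

```python
from fractions import Fraction

# (V, E, F, p, q) for the five Platonic solids, as in the table above.
solids = {
    "tetrahedron":  (4, 6, 4, 3, 3),
    "cube":         (8, 12, 6, 4, 3),
    "octahedron":   (6, 12, 8, 3, 4),
    "dodecahedron": (20, 30, 12, 5, 3),
    "icosahedron":  (12, 30, 20, 3, 5),
}

for name, (V, E, F, p, q) in solids.items():
    assert V - E + F == 2      # Euler's formula
    assert p * F == 2 * E      # every edge lies on exactly two faces
    assert q * V == 2 * E      # every edge joins exactly two vertices

# Enumerate the pairs (p, q) with p, q >= 3 allowed by 1/p + 1/q > 1/2.
pairs = [(p, q) for p in range(3, 20) for q in range(3, 20)
         if Fraction(1, p) + Fraction(1, q) > Fraction(1, 2)]
print(pairs)   # [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]
```

The enumeration confirms that only the five pairs survive the inequality $\frac{1}{p} + \frac{1}{q} > \frac{1}{2}$.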
# Causal Discovery in the Presence of Measurement Error: Identifiability Conditions

Measurement error in the observed values of the variables can greatly change the output of various causal discovery methods. This problem has received much attention in multiple fields, but it is not clear to what extent the causal model for the measurement-error-free variables can be identified in the presence of measurement error with unknown variance. In this paper, we study precise sufficient identifiability conditions for the measurement-error-free causal model and show what information of the causal model can be recovered from observed data. In particular, we present two different sets of identifiability conditions, based on the second-order statistics and higher-order statistics of the data, respectively. The former was inspired by the relationship between the generating model of the measurement-error-contaminated data and the factor analysis model, and the latter makes use of the identifiability result of the over-complete independent component analysis problem.
## 1 Introduction

Understanding and using causal relations among variables of interest has been a fundamental problem in various fields, including biology, neuroscience, and social sciences. Since interventions or controlled randomized experiments are usually expensive or even impossible to conduct, discovering causal information from observational data, known as causal discovery (Spirtes et al., 2001; Pearl, 2000), has been an important task and received much attention in computer science, statistics, and philosophy. Roughly speaking, methods for causal discovery are categorized into constraint-based ones, such as the PC algorithm (Spirtes et al., 2001), and score-based ones, such as Greedy Equivalence Search (GES) (Chickering, 2002).

Causal discovery algorithms aim to find the causal relations among the observed variables. However, in many cases the measured variables are not identical to the variables we intend to measure. For instance, the measured brain signals may contain error introduced by the instruments, and in social sciences many variables are not directly measurable and one usually resorts to proxies (e.g., for “regional security" in a particular area). In this paper, we assume that the observed variables $X_i$, $i = 1, \ldots, n$, are generated from the underlying measurement-noise-free variables $\tilde{X}_i$ with additional random measurement errors $E_i$:

$$X_i = \tilde{X}_i + E_i. \qquad (1)$$
Here we assume that the measurement errors $E_i$ are independent from the $\tilde{X}_i$ and have non-zero variances. We call this model the CAusal Model with Measurement Error (CAMME). Generally speaking, because of the presence of measurement errors, the d-separation patterns among the $X_i$ are different from those among the underlying variables $\tilde{X}_i$. This generating process has been called the random measurement error model in (Scheines & Ramsey, 2017). According to the causal Markov condition (Spirtes et al., 2001; Pearl, 2000), the observed variables $X_i$ and the underlying variables $\tilde{X}_i$ may have different conditional independence/dependence relations and, as a consequence, the output of constraint-based approaches to causal discovery is sensitive to such error, as demonstrated in (Scheines & Ramsey, 2017). Furthermore, because of the measurement error, the structural equation models according to which the measurement-error-free variables $\tilde{X}_i$ are generated usually do not hold for the observed variables $X_i$. (In fact, the $X_i$ follow error-in-variables models, for which the identifiability of the underlying causal relation is not clear.) Hence, approaches based on structural equation models, such as the linear, non-Gaussian, acyclic model (LiNGAM (Shimizu et al., 2006)), will generally fail to find the correct causal direction and causal model.

In this paper, we aim to estimate the causal model underlying the measurement-error-free variables $\tilde{X}_i$ from their observed values contaminated by random measurement error. We assume linearity of the causal model and causal sufficiency relative to $\tilde{X} = (\tilde{X}_1, \ldots, \tilde{X}_n)^\intercal$. We particularly focus on the case where the causal structure for $\tilde{X}$ is represented by a Directed Acyclic Graph (DAG), although this condition can be weakened.
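To make the generating process concrete, here is a minimal simulation of a linear CAMME (a sketch; the chain structure, coefficients, and noise scales are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Causal adjacency matrix B for a chain  X~1 -> X~2 -> X~3
# (hypothetical coefficients chosen for illustration).
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.8, 0.0]])

N = 100_000
E_tilde = rng.normal(size=(n, N))                   # noise of the error-free model
X_tilde = np.linalg.inv(np.eye(n) - B) @ E_tilde    # solves X~ = B X~ + E~
E = 0.5 * rng.normal(size=(n, N))                   # independent measurement error
X = X_tilde + E                                     # observed values, as in (1)
```

Each observed variable simply inherits extra independent variance from its measurement error, which is exactly what perturbs the (conditional) independence relations studied next.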
In order to develop principled causal discovery methods to recover the causal model for $\tilde{X}$ from observed values of $X = (X_1, \ldots, X_n)^\intercal$, we have to address theoretical issues including

• whether the causal model of interest is completely or partially identifiable from the contaminated observations,

• what are the precise identifiability conditions, and

• what information in the measured data is essential for estimating the identifiable causal knowledge.

We make an attempt to answer the above questions on both theoretical and methodological sides. One of the main difficulties in dealing with causal discovery in the presence of measurement error is that the variances of the measurement errors are unknown. If they were known, one could readily calculate the covariance matrix of the measurement-error-free variables $\tilde{X}$ and apply traditional causal discovery methods such as the PC (Spirtes et al., 2001) or GES (Chickering, 2002) algorithm.

It is worth noting that there exist causal discovery methods to deal with confounders, i.e., hidden direct common causes, such as the Fast Causal Inference (FCI) algorithm (Spirtes et al., 2001). However, they cannot estimate the causal structure over the latent variables, which is what we aim to recover in this paper. (Silva et al., 2006) and (Kummerfeld et al., ) have provided algorithms for recovering latent variables and their causal relations when each latent variable has multiple measured effects. Their problem is different from the measurement error setting we consider, where clustering for latent common causes is not required and each measured variable is the direct effect of a single "true" variable. Furthermore, as shown in the next section, their models can be seen as special cases of our setting.

## 2 Effect of Measurement Error on Conditional Independence / Dependence

We use an example to demonstrate how measurement error changes the (conditional) independence and dependence relationships in the data.
More precisely, we will see how the (conditional) independence and dependence relations between the observed variables $X_i$ are different from those between the measurement-error-free variables $\tilde{X}_i$. Suppose we observe $X_1$, $X_2$, and $X_3$, which are generated from measurement-error-free variables $\tilde{X}_1$, $\tilde{X}_2$, and $\tilde{X}_3$ according to the structure given in Figure 1. Clearly $\tilde{X}_1$ is dependent on $\tilde{X}_2$, while $\tilde{X}_1$ and $\tilde{X}_3$ are conditionally independent given $\tilde{X}_2$.

One may consider general settings for the variances of the measurement errors. For simplicity, here let us assume that there is only measurement error in $X_2$, i.e., $X_2 = \tilde{X}_2 + E_2$, $X_1 = \tilde{X}_1$, and $X_3 = \tilde{X}_3$. Let $\tilde{\rho}_{12}$ be the correlation coefficient between $\tilde{X}_1$ and $\tilde{X}_2$ and $\tilde{\rho}_{13\cdot 2}$ be the partial correlation coefficient between $\tilde{X}_1$ and $\tilde{X}_3$ given $\tilde{X}_2$, which is zero. Let $\rho_{12}$ and $\rho_{13\cdot 2}$ be the corresponding correlation coefficient and partial correlation coefficient in the presence of measurement error. We also let $\tilde{\rho}_{12} = \tilde{\rho}_{23}$ to make the result simpler. So we have $\tilde{\rho}_{12} = \tilde{\rho}_{23} = \tilde{\rho}$, and for the structure in Figure 1 also $\tilde{\rho}_{13} = \tilde{\rho}_{12}\tilde{\rho}_{23} = \tilde{\rho}^2$. Let $\gamma^2 = \mathrm{Var}(E_2)/\mathrm{Var}(\tilde{X}_2)$. For the data with measurement error,

$$\rho_{12} = \frac{\mathrm{Cov}(X_1, X_2)}{\mathrm{Var}^{1/2}(X_1)\,\mathrm{Var}^{1/2}(X_2)} = \frac{\mathrm{Cov}(\tilde{X}_1, \tilde{X}_2)}{\mathrm{Var}^{1/2}(\tilde{X}_1)\,\big(\mathrm{Var}(\tilde{X}_2) + \mathrm{Var}(E_2)\big)^{1/2}} = \frac{\tilde{\rho}}{(1+\gamma^2)^{1/2}};$$

$$\rho_{13\cdot 2} = \frac{\rho_{13} - \rho_{12}\rho_{23}}{(1-\rho_{12}^2)^{1/2}(1-\rho_{23}^2)^{1/2}} = \frac{\tilde{\rho}_{13} - \frac{\tilde{\rho}_{12}\tilde{\rho}_{23}}{1+\gamma^2}}{\Big(1-\frac{\tilde{\rho}^2}{1+\gamma^2}\Big)^{1/2}\Big(1-\frac{\tilde{\rho}^2}{1+\gamma^2}\Big)^{1/2}} = \frac{\gamma^2\tilde{\rho}^2}{1+\gamma^2-\tilde{\rho}^2}.$$

As the variance of the measurement error in $X_2$ increases, $\gamma^2$ becomes larger, and $\rho_{12}$ decreases and finally goes to zero; in contrast, $\rho_{13\cdot 2}$, which is zero for the measurement-error-free variables, is increasing and finally converges to $\tilde{\rho}^2$. See Figure 2 for an illustration. In other words, in this example, as the variance of the measurement error in $X_2$ increases, $X_1$ and $X_2$ become more and more independent, while $X_1$ and $X_3$ become conditionally more and more dependent given $X_2$. However, for the measurement-error-free variables, $\tilde{X}_1$ and $\tilde{X}_2$ are dependent and $\tilde{X}_1$ and $\tilde{X}_3$ are conditionally independent given $\tilde{X}_2$. Hence, the structure given by constraint-based approaches to causal discovery on the observed variables can be very different from the causal structure over measurement-error-free variables.
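The effect described above can be reproduced in a few lines (a sketch; the chain structure, the coefficient 0.7, and the error scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Measurement-error-free chain X~1 -> X~2 -> X~3 with
# rho~_12 = rho~_23 = 0.7 and standardized variables.
X1t = rng.normal(size=N)
X2t = 0.7 * X1t + np.sqrt(1 - 0.7**2) * rng.normal(size=N)
X3t = 0.7 * X2t + np.sqrt(1 - 0.7**2) * rng.normal(size=N)

def partial_corr(a, b, c):
    # Partial correlation of a and b given c, via linear residuals.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print(partial_corr(X1t, X3t, X2t))   # ~0: X1 _||_ X3 | X2 without error

X2 = X2t + 2.0 * rng.normal(size=N)  # measurement error on X2 (gamma^2 = 4)
print(partial_corr(X1t, X3t, X2))    # clearly non-zero: dependence appears
```

With $\tilde{\rho} = 0.7$ and $\gamma^2 = 4$, the expression for $\rho_{13\cdot 2}$ above predicts $\gamma^2\tilde{\rho}^2/(1+\gamma^2-\tilde{\rho}^2) \approx 0.43$, matching what the simulation returns.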
One might apply other types of methods instead of the constraint-based ones for causal discovery from data with measurement error. In fact, as the measurement-error-free variables are not observable, $\tilde{X}_2$ in Figure 1 is actually a confounder for the observed variables. As a consequence, generally speaking, due to the effect of the confounders, the independent noise assumption underlying functional causal model-based approaches, such as the method based on the linear, non-Gaussian, acyclic model (Shimizu et al., 2006), will not hold for the observed variables any more. Figure 3 gives an illustration of this. Figure 3(a) shows the scatter plot of $X_3$ vs. $X_2$ and the regression line from $X_2$ to $X_3$, where $\tilde{X}_2$, the noise in $\tilde{X}_3$, and the measurement error $E_2$ are all uniformly distributed. As seen from Figure 3(b), the residual of regressing $X_3$ on $X_2$ is not independent from $X_2$, although the residual of regressing $\tilde{X}_3$ on $\tilde{X}_2$ is independent from $\tilde{X}_2$. As a result, the functional causal model-based approaches to causal discovery may also fail to find the causal structure of the measurement-error-free variables from their contaminated observations.

## 3 Canonical Representation of Causal Models with Measurement Error

Let $\tilde{G}$ be the acyclic causal model over $\tilde{X}$. Here we call it the measurement-error-free causal model. Let $B$ be the corresponding causal adjacency matrix for $\tilde{X}$, in which $B_{ij}$ is the coefficient of the direct causal influence from $\tilde{X}_j$ to $\tilde{X}_i$ and $B_{ii} = 0$. We have

$$\tilde{X} = B\tilde{X} + \tilde{E}, \qquad (2)$$

where the components of $\tilde{E}$, the noise terms $\tilde{E}_i$, have non-zero, finite variances. Then $\tilde{X}$ is actually a linear transformation of the error terms in $\tilde{E}$, because (2) implies

$$\tilde{X} = (I-B)^{-1}\tilde{E} \triangleq A\tilde{E}. \qquad (3)$$

Now let us consider two types of nodes of $\tilde{G}$, namely, leaf nodes (i.e., those that do not influence any other node) and non-leaf nodes. Accordingly, the noise terms in their structural equation models also have distinct behaviors: if $\tilde{X}_i$ is a leaf node, then $\tilde{E}_i$ influences only $\tilde{X}_i$, not any other variable; otherwise $\tilde{E}_i$ influences $\tilde{X}_i$ and at least one other variable $\tilde{X}_j$, $j \neq i$.
Consequently, we can decompose the noise vector $\tilde{E}$ into two groups: $\tilde{E}^L$ consists of the noise terms that influence only leaf nodes, and $\tilde{E}^{NL}$ contains the remaining noise terms. Equation (3) can be rewritten as

$$\tilde{X} = A^{NL}\tilde{E}^{NL} + A^{L}\tilde{E}^{L} = \tilde{X}^* + A^{L}\tilde{E}^{L}, \qquad (4)$$

where $\tilde{X}^* \triangleq A^{NL}\tilde{E}^{NL}$, and $A^{NL}$ and $A^{L}$ are $n \times (n-l)$ and $n \times l$ matrices, respectively, with $l$ the number of leaf nodes. Here both $A^{L}$ and $A^{NL}$ have specific structures. All entries of $A^{L}$ are 0 or 1; for each column of $A^{L}$, there is only one non-zero entry. In contrast, each column of $A^{NL}$ has at least two non-zero entries, representing the influences from the corresponding non-leaf noise term.

Further consider the generating process of the observed variables $X$. Combining (1) and (4) gives

$$X = \tilde{X}^* + A^{L}\tilde{E}^{L} + E = A^{NL}\tilde{E}^{NL} + (A^{L}\tilde{E}^{L} + E) = A^{NL}\tilde{E}^{NL} + E^* \qquad (5)$$

$$= \begin{bmatrix} A^{NL} & I \end{bmatrix}\cdot\begin{bmatrix} \tilde{E}^{NL} \\ E^* \end{bmatrix}, \qquad (6)$$

where $E^* \triangleq A^{L}\tilde{E}^{L} + E$ and $I$ denotes the $n \times n$ identity matrix. To make it more explicit, here is how $\tilde{X}^*_i$ and $E^*_i$ are related to the original CAMME process:

$$\tilde{X}^*_i = \begin{cases} \tilde{X}_i, & \text{if } \tilde{X}_i \text{ is not a leaf node in } \tilde{G}; \\ \tilde{X}_i - \tilde{E}_i, & \text{otherwise}; \end{cases} \quad\text{and} \qquad (7)$$

$$E^*_i = \begin{cases} E_i, & \text{if } \tilde{X}_i \text{ is not a leaf node in } \tilde{G}; \\ E_i + \tilde{E}_i, & \text{otherwise}. \end{cases}$$

Clearly the $E^*_i$ are independent across $i$ and, as we shall see in Section 4, the information shared by different $\tilde{X}^*_i$ is still captured by $A^{NL}$.

###### Proposition 1.

For each CAMME specified by (2) and (1), there always exists an observationally equivalent representation in the form of (5) or (6).

The proof was actually given in the construction procedure of the representation (5) or (6) from the original CAMME. We call the representation (5) or (6) the canonical representation of the underlying CAMME (CR-CAMME).

##### Example Set 1

Consider the following example with three observed variables $X_1$, $X_2$, $X_3$, generated from $\tilde{X}_1$, $\tilde{X}_2$, $\tilde{X}_3$ with causal relations $\tilde{X}_1 \rightarrow \tilde{X}_2 \leftarrow \tilde{X}_3$. That is,

$$B = \begin{bmatrix} 0 & 0 & 0 \\ a & 0 & b \\ 0 & 0 & 0 \end{bmatrix}, \quad\text{and according to (3)},\quad A = \begin{bmatrix} 1 & 0 & 0 \\ a & 1 & b \\ 0 & 0 & 1 \end{bmatrix}.$$

Therefore,

$$X = \tilde{X} + E = \tilde{X}^* + E^* = \begin{bmatrix} 1 & 0 \\ a & b \\ 0 & 1 \end{bmatrix}\cdot\begin{bmatrix} \tilde{E}_1 \\ \tilde{E}_3 \end{bmatrix} + \begin{bmatrix} E_1 \\ \tilde{E}_2 + E_2 \\ E_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ a & b & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \end{bmatrix}\cdot\begin{bmatrix} \tilde{E}_1 \\ \tilde{E}_3 \\ E_1 \\ \tilde{E}_2 + E_2 \\ E_3 \end{bmatrix}.$$

In causal discovery from observations in the presence of measurement error, we aim to recover information of the measurement-error-free causal model $\tilde{G}$. Let us define a new graphical model, $\tilde{G}^*$.
It is obtained by replacing the variables $\tilde{X}_i$ in $\tilde{G}$ with the variables $\tilde{X}^*_i$. In other words, it has the same causal structure and causal parameters (given by the $B$ matrix) as $\tilde{G}$, but its nodes correspond to the variables $\tilde{X}^*_i$. If we manage to estimate the structure of and the involved causal parameters in $\tilde{G}^*$, then $\tilde{G}$, the causal model of interest, is recovered. Compared with $\tilde{G}$, $\tilde{G}^*$ involves some deterministic causal relations, because each leaf node is a deterministic function of its parents (the noise in leaf nodes has been removed; see (7)). We defined the graphical model $\tilde{G}^*$ because we cannot fully estimate the distribution of the measurement-error-free variables $\tilde{X}$, but might be able to estimate that of $\tilde{X}^*$, under proper assumptions. In what follows, most of the time we assume

• (A0) The causal Markov condition holds for $\tilde{G}^*$, and (A1) the distribution of $\tilde{X}^*$ is non-deterministically faithful w.r.t. $\tilde{G}^*$, in the sense that if there exists $Z$, a subset of $\{\tilde{X}^*_k\}$, such that neither of $\tilde{X}^*_i$ and $\tilde{X}^*_j$ is a deterministic function of $Z$ and $\tilde{X}^*_i \perp\!\!\!\perp \tilde{X}^*_j \mid Z$ holds, then $\tilde{X}^*_i$ and $\tilde{X}^*_j$ are d-separated by $Z$ in $\tilde{G}^*$.

This non-deterministic-faithfulness assumption excludes a particular type of parameter coupling in the causal model for $\tilde{X}^*$. In Figure 4 we give a causal model in which the causal coefficients are carefully chosen so that this assumption is violated: the coupling of the coefficients produces conditional independence relations among the constructed variables that are not given by the causal Markov condition on $\tilde{G}^*$. We note that this non-deterministic faithfulness is defined for the distribution of the constructed variables $\tilde{X}^*$, not the measurement-error-free variables $\tilde{X}$. (Bear in mind their relationship given in (7).) This assumption is generally stronger than the faithfulness assumption for the distribution of $\tilde{X}$. In particular, in the causal model given in Figure 4, the distribution of $\tilde{X}$ is still faithful w.r.t. $\tilde{G}$. Below we call the conditional independence relationship between $\tilde{X}^*_i$ and $\tilde{X}^*_j$ given $Z$, where neither of $\tilde{X}^*_i$ and $\tilde{X}^*_j$ is a deterministic function of $Z$, non-deterministic conditional independence. Now we have two concerns.
One is whether essential information of the CR-CAMME is identifiable from observed values of $X$. We are interested in finding the causal model for (or a particular type of dependence structures in) $\tilde{X}^*$. The CR-CAMME of $X$, given by (5) or (6), has two terms, $A^{NL}\tilde{E}^{NL}$ and $E^*$. The latter is independent across all variables, and the former preserves major information of the dependence structure in $\tilde{X}$. Such essential information of the CR-CAMME may be the covariance matrix of $\tilde{X}^*$ or the matrix $A^{NL}$, as discussed in the next sections. In the extreme case, if such information is not identifiable at all, then it is hopeless to find the underlying causal structure of $\tilde{X}$.

The other is what information of the original CAMME, in particular the causal model over the measurement-error-free variables, can be estimated from the above identifiable information of the CR-CAMME. Although the transformation from the original CAMME to a CR-CAMME is straightforward, without further knowledge there does not necessarily exist a unique CAMME corresponding to a given CR-CAMME: first, the CR-CAMME does not tell us which nodes are leaf nodes in $\tilde{G}$; second, even if $\tilde{X}_i$ is known to be a leaf node, it is impossible to separate the measurement error $E_i$ from the noise $\tilde{E}_i$. Fortunately, we are not interested in everything about the original CAMME, but only the causal graph $\tilde{G}$ and the corresponding causal influences $B$. Accordingly, in the next sections we will explore what information of the CR-CAMME is identifiable from the observations of $X$ and how to further reconstruct the necessary information of the original CAMME.

In the measurement error model (1) we assumed that each observed variable $X_i$ is generated from its own latent variable $\tilde{X}_i$. We note that in case multiple observed variables are generated from a single latent variable, or a single observed variable is generated by multiple latent variables (see, e.g., (Silva et al., 2006)), we can still use the CR-CAMME to represent the process. In the former case, certain rows of $A^{NL}$ are identical.
For instance, if $X_1$ and $X_2$ are generated as noisy observations of the same latent variable, then in (5) the first two rows of $A^{NL}$ are identical. (More generally, if one allows different coefficients to generate them from the latent variable, the two rows are proportional to each other.) Then let us consider an example in the latter case. Suppose the observed variable $X_3$ is generated by latent variables $\tilde{X}_1$ and $\tilde{X}_2$, for each of which there is also an observable counterpart. Write the causal model as $X_3 = c\tilde{X}_1 + d\tilde{X}_2 + E_3$ and introduce the latent variable $\tilde{X}_3 \triangleq c\tilde{X}_1 + d\tilde{X}_2$, and then we have $X_3 = \tilde{X}_3 + E_3$. The CR-CAMME formulation then follows.

## 4 Identifiability with Second Order Statistics

The CR-CAMME (5) has the form of the factor analysis model (FA) (Everitt, 1984), which has been a fundamental tool in data analysis. In its general form, FA assumes that the observable random vector $X$ was generated by

$$X = Lf + N, \qquad (8)$$

where the factors $f$ satisfy $E[f] = 0$ and $E[ff^\intercal] = I$, and the noise terms, as components of $N$, are mutually independent and also independent from $f$. Denote by $\Psi$ the covariance matrix of $N$, which is diagonal. The unknowns in (8) are the loading matrix $L$ and the covariance matrix $\Psi$. Factor analysis only exploits the second-order statistics, i.e., it assumes that all variables are jointly Gaussian. Clearly $L$ in FA is not identifiable; it suffers from at least the right orthogonal transformation indeterminacy. However, under suitable conditions, some essential information of FA is generically identifiable, as given in the following lemma.

###### Lemma 2.

For the factor analysis model, when the number of factors $q \leq \phi(n) \triangleq \frac{2n+1-(8n+1)^{1/2}}{2}$, the model is generically globally identifiable, in the sense that for randomly generated $(L, \Psi)$ in (8), it is only with measure 0 that there exists another representation $(L', \Psi')$ with $\Psi' \neq \Psi$ such that $L'$ and $\Psi'$ generate the same covariance matrix for $X$ as $L$ and $\Psi$.

This was formulated as a conjecture by (Shapiro, 1985), and was later proven by (Bekker & ten Berge, 1997).
This lemma immediately gives rise to the following generic identifiability of the variances of measurement errors.¹

¹ We note that this “generic identifiability" is slightly weaker than what we want: we want to show that for certain $(L, \Psi)$ the model is necessarily identifiable. Giving this proof is non-trivial and is a line of our future research.

###### Proposition 3.

The variances of the error terms $E^*_i$ and the covariance matrix of $\tilde{X}^*$ in the CR-CAMME (5) are generically identifiable when the sample size $N \rightarrow \infty$ and the following assumption on the number of leaf nodes holds:

• The number of leaf variables $l$ satisfies

$$\frac{l}{n} > c(n) \triangleq \frac{(8n+1)^{1/2}-1}{2n}. \qquad (9)$$

Clearly $c(n)$ is decreasing in $n$ and $c(n) \rightarrow 0$ as $n \rightarrow \infty$. To give a sense of how restrictive the above condition is, Fig. 5 shows how $c(n)$ changes with $n$. In particular, when $n = 10$, $c(n) = 0.4$, and condition (9) implies the number of leaf nodes is at least 5; when $n = 45$, $c(n) = 0.2$, and condition (9) implies $l \geq 10$. Roughly speaking, as $n$ increases, it is more likely for condition (9) to hold. Note that the condition given in Proposition 3 is sufficient but not necessary for the identifiability of the noise variances and the covariance matrix of the non-leaf hidden variables (Bekker & ten Berge, 1997).

Now we know that under certain conditions, the covariance matrices of $\tilde{X}^*$ and $E^*$ in the CR-CAMME (5) are (asymptotically) identifiable from observed data with measurement error. Can we recover the measurement-error-free causal model from them?

### 4.1 Gaussian CAMME with the Same Variance For Measurement Errors

In many problems the variances of the measurement errors in different variables are roughly the same, because the same instrument is used and the variables are measured in similar ways. For instance, this might approximately be the case for functional magnetic resonance imaging (fMRI) recordings. In fact, if we make the following assumption on the measurement error, the underlying causal graph can be estimated at least up to the equivalence class, as shown in the following corollary.
• (A2) The measurement errors in all observed variables have the same variance.

###### Proposition 4.

Suppose assumptions A0, A1, and A2 hold. Then as $N \rightarrow \infty$, $\tilde{G}$ can be estimated up to its equivalence class and, moreover, the leaf nodes of $\tilde{G}$ are identifiable.

Proofs are given in the Appendix. The proof of this corollary inspires a procedure to estimate the information of $\tilde{G}$ from contaminated observations in this case, which is denoted by FA+EquVar. It consists of four steps. (1) Apply FA to the data with a given number of leaf nodes and estimate the variances of $E^*$ as well as the covariance matrix of $\tilde{X}^*$.² (2) The smallest values of the variances of $E^*$ correspond to non-leaf nodes, and the remaining nodes correspond to leaf nodes. (3) Apply a causal discovery method, such as the PC algorithm, to the sub-matrix of the estimated covariance matrix of $\tilde{X}^*$ corresponding to non-leaf nodes and find the causal structure over the non-leaf nodes. (4) For each leaf node $\tilde{X}^*_i$, find the subset of non-leaf nodes that determines $\tilde{X}^*_i$, draw directed edges from those nodes to $\tilde{X}^*_i$, and further perform orientation propagation.

² Here we suppose the number of leaf nodes is given. In practice one may use model selection methods, such as BIC, to find this number.

### 4.2 Gaussian CAMME: General Case

Now let us consider the general case where we do not have constraint A2 on the measurement error. Generally speaking, after performing FA on the data, the task is to discover causal relations among $\tilde{X}^*$ by analyzing their estimated covariance matrix, which is, unfortunately, singular, with rank $n - l$. Then there must exist deterministic relations among $\tilde{X}^*$, and we have to deal with such relations in causal discovery. Here suppose we simply apply the Deterministic PC (DPC) algorithm (Glymour, 2007; Luo, 2006) to tackle this problem.
DPC is almost identical to PC; the only difference is that when testing for the conditional independence relationship $\tilde{X}^*_i \perp\!\!\!\perp \tilde{X}^*_j \mid Z$, if $\tilde{X}^*_i$ or $\tilde{X}^*_j$ is a deterministic function of $Z$, one ignores this test (or, equivalently, does not remove the edge between $\tilde{X}^*_i$ and $\tilde{X}^*_j$). We denote by FA+DPC this procedure for causal discovery from data with measurement error. Under some conditions on the underlying causal model $\tilde{G}$, it can be estimated up to its equivalence class, as given in the following proposition. Here we use $PA_i$ to denote the set of parents (direct causes) of $\tilde{X}^*_i$ in $\tilde{G}^*$.

###### Proposition 5.

Suppose Assumptions A0 and A1 hold. As $N \rightarrow \infty$, compared to $\tilde{G}$, the graph produced by the above DPC procedure does not contain any missing edge. In particular, the edges between all non-leaf nodes are correctly identified. Furthermore, the whole graph of $\tilde{G}$ is identifiable up to its equivalence class if the following assumption further holds:

• (A3) For each pair of leaf nodes $\tilde{X}^*_i$ and $\tilde{X}^*_j$, there exist $\tilde{X}^*_k \in PA_i$ and $\tilde{X}^*_{k'} \in PA_j$ that are d-separated in $\tilde{G}^*$ by a variable set $Z$, which may be the empty set. Moreover, for each leaf node $\tilde{X}^*_i$ and each non-leaf node $\tilde{X}^*_j$ which are not adjacent, there exists $\tilde{X}^*_k \in PA_i$ which is d-separated from $\tilde{X}^*_j$ in $\tilde{G}^*$ by a variable set $Z'$, which may be the empty set.

##### Example Set 2 and Discussion

Suppose assumption A0 holds.

• The causal graph given in Figure 6(a) follows assumptions A1 and A3. According to Proposition 5, the equivalence class of this causal DAG can be asymptotically estimated from observations with measurement error.

• Assumptions A0, A1, and A3 are sufficient conditions for $\tilde{G}$ to be recovered up to its equivalence class and they, especially A3, may not be necessary. For instance, consider the causal graph given in Figure 6(b), for which assumption A3 does not hold. If assumption A2 holds, this graph can be uniquely estimated from contaminated data. Other constraints may also guarantee the identifiability of the underlying graph.
For example, suppose all coefficients in the causal model are smaller than one in absolute value; then $\tilde{G}$ can also be uniquely estimated from noisy data. Relaxation of assumption A3 which still guarantees that $\tilde{G}$ is identifiable up to its equivalence class is a future line of research.

• The causal graphs shown in Figure 6(c) do not follow A1, so generally speaking they are not identifiable from contaminated observations with second-order statistics. This is also the case for the graph shown in Figure 6(d).

## 5 Identifiability with Higher Order Statistics

The method based on second-order statistics exploits FA and deterministic causal discovery, both of which are computationally relatively efficient. However, if the number of leaf nodes is so small that the condition in Proposition 3 is violated (roughly speaking, usually this does not happen when $n$ is big, say bigger than 50, but is likely to be the case when $n$ is very small, say smaller than 10), the underlying causal model is not guaranteed to be identifiable from contaminated observations. Another issue is that with second-order statistics, the causal model for $\tilde{X}$ is usually not uniquely identifiable; in the best case it can be recovered up to its equivalence class (and leaf nodes). To tackle these issues, below we show that we can benefit from higher-order statistics of the noise terms. In this section we further make the following assumption on the distributions of the noise terms:

• (A4) All the noise terms $\tilde{E}_i$ and measurement errors $E_i$ are non-Gaussian.

We note that under the above assumption, $A^{NL}$ in (6) can be estimated up to the permutation and scaling indeterminacies (including the sign indeterminacy) of the columns, as given in the following lemma.

###### Lemma 6.

Suppose assumption A4 holds. Given $X$ which is generated according to (6), $A^{NL}$ is identifiable up to permutation and scaling of columns as the sample size $N \rightarrow \infty$.

###### Proof.

This lemma is implied by Theorem 10.3.1 in (Kagan et al., 1973) or Theorem 1 in (Eriksson & Koivunen, 2004).
∎

### 5.1 Non-Gaussian CAMME with the Same Variance For Measurement Errors

We first note that under certain assumptions the underlying graph $\tilde{G}$ is fully identifiable, as shown in the following proposition.

###### Proposition 7.

Suppose the assumptions in Proposition 4 hold, and further suppose assumption A4 holds. Then as $N \rightarrow \infty$, the underlying causal graph $\tilde{G}$ is fully identifiable from observed values of $X$.

### 5.2 Non-Gaussian CAMME: More General Cases

In the general case, what information of the causal structure can we recover? Can we apply existing methods for causal discovery based on LiNGAM, such as ICA-LiNGAM (Shimizu et al., 2006) and Direct-LiNGAM (Shimizu et al., 2011), to recover it? LiNGAM assumes that the system is non-deterministic: each variable is generated as a linear combination of its direct causes plus a non-degenerate noise term. As a consequence, the linear transformation from the vector of observed variables to the vector of independent noise terms is a square matrix; ICA-LiNGAM applies certain operations to this matrix to find the causal model, and Direct-LiNGAM estimates the causal ordering by enforcing the property that the residual of regressing the effect on the root cause is always independent from the root cause.

In our case, $A^{NL}$, the essential part of the mixing matrix in (6), is $n \times (n-l)$, where $l$ is the number of leaf nodes. In other words, for some of the variables $\tilde{X}^*_i$, the causal relations are deterministic. (In fact, if $\tilde{X}^*_i$ is a leaf node in $\tilde{G}^*$, $\tilde{X}^*_i$ is a deterministic function of its direct causes.) As a consequence, unfortunately, the above causal analysis methods based on LiNGAM, including ICA-LiNGAM and Direct-LiNGAM, do not apply. We will see how to recover information of $\tilde{G}^*$ by analyzing the estimated $A^{NL}$. We will show that some group structure and the group-wise causal ordering in $\tilde{G}^*$ can always be recovered. Before presenting the results, let us define the following recursive group decomposition according to the causal structure $\tilde{G}^*$.

###### Definition 8 (Recursive group decomposition).
Consider the causal model over the measurement-error-free variables. Put all leaf nodes which share the same direct-and-only-direct node in the same group; further incorporate the corresponding direct-and-only-direct node in the same group. Here we say a node is the "direct-and-only-direct" node of a leaf node if and only if it is a direct cause of that leaf node and there is no other directed path from it to the leaf node. Each of the remaining nodes, which are not a direct-and-only-direct node of any leaf node, forms a separate group. We call the set of all such groups, ordered according to the causal ordering of the non-leaf nodes in the DAG, a recursive group decomposition of the causal model.

##### Example Set 3

As seen from the process of recursive group decomposition, each non-leaf node is in one and only one recursive group, and it is possible for multiple leaf nodes to be in the same group. Therefore, the total number of recursive groups equals the number of non-leaf nodes. For example, for the graph given in Figure 6(a), each leaf node is grouped with its direct-and-only-direct node, and for the graph in Figure 6(b), there is only one group. For both graphs given in Figure 6(c), the recursive group decompositions coincide. Note that the causal ordering and the recursive group decomposition of given variables according to the graphical model may not be unique. For instance, if the graph has only two variables which are not adjacent, both orderings of the two (singleton) groups are correct. Similarly, consider a graph over three variables in which two non-adjacent variables are both causes of the third; then both orderings of the first two groups give valid recursive group decompositions.

We first present a procedure to construct the recursive group decomposition and the causal ordering among the groups from the estimated mixing matrix. We will further show that the recovered recursive group decomposition is always asymptotically correct under assumption A4.

#### 5.2.1 Construction and Identifiability of Recursive Group Decomposition

First of all, Lemma 6 tells us that the mixing matrix in (6) is identifiable up to permutation and scaling of its columns.
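The recursive group decomposition of Definition 8 can be computed with a small graph routine. The sketch below is a toy illustration in pure Python on a hypothetical 4-node DAG (the paper's Figure 6 graphs are not reproduced here); it groups each leaf node with its direct-and-only-direct node and orders the groups by the causal ordering of the non-leaf nodes:

```python
# Hypothetical DAG: 1 -> 2 -> 3 -> 4 and 2 -> 4.
# Node 4 is the only leaf; node 3 is its direct-and-only-direct node
# (2 is also a direct cause of 4, but the path 2 -> 3 -> 4 disqualifies it).
parents = {1: [], 2: [1], 3: [2], 4: [2, 3]}
children = {v: [u for u, ps in parents.items() if v in ps] for v in parents}
leaves = [v for v in parents if not children[v]]

def has_other_path(src, dst, skip_edge):
    """Is there a directed path src -> dst that avoids the single edge skip_edge?"""
    stack, seen = [src], set()
    while stack:
        u = stack.pop()
        for w in children[u]:
            if (u, w) == skip_edge:
                continue
            if w == dst:
                return True
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def direct_and_only_direct(leaf):
    """The parent that directly causes `leaf` with no other directed path to it."""
    cands = [p for p in parents[leaf]
             if not has_other_path(p, leaf, skip_edge=(p, leaf))]
    return cands[0] if len(cands) == 1 else None

groups = {}
for leaf in leaves:
    groups.setdefault(direct_and_only_direct(leaf), []).append(leaf)

# Order the groups by the causal ordering of the non-leaf nodes (here simply 1, 2, 3).
decomposition = [sorted([v] + groups.get(v, [])) for v in (1, 2, 3)]
print(decomposition)  # -> [[1], [2], [3, 4]]
```

As expected, the number of groups equals the number of non-leaf nodes, and the leaf node 4 shares a group with its direct-and-only-direct node 3.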
Let us start with the asymptotic case, where the columns of the estimated mixing matrix are a permuted and rescaled version of the columns of the true one. In what follows the permutation and rescaling of the columns do not change the result, so below we just work with the true mixing matrix $A^{NL}$ instead of its estimate. The measurement-error-free variables $\tilde{X}^*$ follow the causal DAG and are causally sufficient, although some of them (corresponding to leaf nodes) are determined by their direct causes. Let us find the causal ordering of $\tilde{X}^*$. If there are no deterministic relations and the values of $\tilde{X}^*$ are given, the causal ordering can be estimated by recursively performing regression and checking independence between the regression residual and the predictor (Shimizu et al., 2011). Specifically, if one regresses all the remaining variables on the root cause, the residuals are always independent from the predictor (the root cause). After detecting a root cause, the residuals of regressing all the other variables on the discovered root cause are still causally sufficient and follow a DAG. One can repeat the above procedure to find a new root cause over such regression residuals, until no variable is left.

However, in our case we have access to $A^{NL}$ but not the values of $\tilde{X}^*$. Fortunately, the independence between regression residuals and the predictor can still be checked by analyzing $A^{NL}$. Recall that $\tilde{X}^* = A^{NL}\tilde{E}^{NL}$, where the components of $\tilde{E}^{NL}$ are independent. Without loss of generality, here we assume that all components of $\tilde{E}^{NL}$ are standardized, i.e., they have a zero mean and unit variance. Denote by $A^{NL}_{i\cdot}$ the $i$th row of $A^{NL}$. We have $E[\tilde{X}^*_i \tilde{X}^*_j] = A^{NL}_{i\cdot} A^{NL\intercal}_{j\cdot}$ and $E[\tilde{X}^{*2}_i] = \|A^{NL}_{i\cdot}\|^2$. The regression model for $\tilde{X}^*_j$ on $\tilde{X}^*_i$ is

$$\tilde{X}^*_j = \frac{E[\tilde{X}^*_j \tilde{X}^*_i]}{E[\tilde{X}^{*2}_i]}\,\tilde{X}^*_i + R_{j\leftarrow i} = \frac{A^{NL}_{j\cdot} A^{NL\intercal}_{i\cdot}}{\|A^{NL}_{i\cdot}\|^2}\,\tilde{X}^*_i + R_{j\leftarrow i}.$$

Here the residual can be written as

$$R_{j\leftarrow i} = \underbrace{\Big(A^{NL}_{j\cdot} - \frac{A^{NL}_{j\cdot} A^{NL\intercal}_{i\cdot}}{\|A^{NL}_{i\cdot}\|^2}\, A^{NL}_{i\cdot}\Big)}_{\triangleq\, \alpha_{j\leftarrow i}} \tilde{E}^{NL}.$$
(10)

If for all $j \neq i$, the residual $R_{j\leftarrow i}$ is either zero or independent from $\tilde{X}^*_i$, we consider $\tilde{X}^*_i$ as the current root cause and put it and all the other variables which are deterministically related to it in the first group, which is a root cause group. Now the problem is whether we can check for independence between nonzero residuals and the predictor $\tilde{X}^*_i$. Interestingly, the answer is yes, as stated in the following proposition.

###### Proposition 9.

Suppose assumption A4 holds. For variables generated by (5), the regression residual $R_{j\leftarrow i}$ given in (10) is independent from the variable $\tilde{X}^*_i$ if and only if

$$\|\alpha_{j\leftarrow i} \circ A^{NL}_{i\cdot}\|_1 = 0, \quad (11)$$

where $\circ$ denotes the entrywise product. So we can check for independence as if the values of $\tilde{X}^*$ were given. Consequently, we can find the root cause group.

We then consider the residuals of regressing all the remaining variables on the discovered root cause as a new set of variables. Note that like the variables $\tilde{X}^*$, these variables are also linear mixtures of $\tilde{E}^{NL}$. Repeating the above procedure on this new set of variables will give the second root cause and its recursive group. Applying this procedure repeatedly until no variable is left finally discovers all recursive groups following the causal ordering. The constructed recursive group decomposition is asymptotically correct, as stated in the following proposition.

###### Proposition 10. (Identifiable recursive group decomposition)

Let the observed data be generated by the CAMME, with the corresponding measurement-error-free variables generated by the causal DAG, and suppose assumptions A0 and A4 hold. The recursive group decomposition constructed by the above procedure is asymptotically correct, in the sense that as the sample size goes to infinity, if one non-leaf node is a cause of another non-leaf node, then the recursive group containing the former precedes the recursive group containing the latter. However, the causal ordering among the nodes within the same recursive group may not be identifiable.

The result of Proposition 10 applies to any DAG structure.
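Condition (11) lets us test residual–predictor independence purely from the rows of the mixing matrix, without ever observing the measurement-error-free variables. A minimal numpy sketch (with a hypothetical 3-variable chain, not the paper's code) illustrates how the root cause is found:

```python
import numpy as np

# Hypothetical rows of A^{NL} for a 3-variable model:
# X1 = E1 (root), X2 = 2*X1 + E2, X3 = 0.5*X2 (deterministic leaf of X2).
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [1.0, 0.5]])

def residual_coef(A, j, i):
    """alpha_{j<-i}: coefficient vector of the residual of regressing X*_j on X*_i."""
    return A[j] - (A[j] @ A[i]) / (A[i] @ A[i]) * A[i]

def independent(A, j, i, tol=1e-9):
    """Condition (11): the residual is independent of X*_i iff ||alpha o A_i||_1 = 0."""
    return np.abs(residual_coef(A, j, i) * A[i]).sum() < tol

# X1 qualifies as root cause: residuals of X2 and X3 on X1 are independent of X1.
assert independent(A, 1, 0) and independent(A, 2, 0)
# X2 does not: the residual of X1 regressed on X2 remains dependent on X2.
assert not independent(A, 0, 1)
```

Here the residual coefficient vector of X2 on X1 is (0, 1), whose entrywise product with the row (1, 0) of X1 vanishes, so the check succeeds exactly as Proposition 9 predicts.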
Clearly, the identifiability can be naturally improved if additional assumptions on the causal structure hold. In particular, to recover information of the causal model, it is essential to answer the following questions.

• Can we determine which nodes in a recursive group are leaf nodes?
• Can we find the causal edges into a particular node as well as their causal coefficients?

Below we will show that under rather mild assumptions, the answers to both questions are yes.

#### 5.2.2 Identifying Leaf Nodes and Individual Causal Edges

If for each recursive group we can determine which variable is the non-leaf node, the causal ordering among the variables is then fully known. The causal structure as well as the causal model can then be readily estimated by regression: for a leaf node, its direct causes are those non-leaf nodes that determine it; for a non-leaf node, we can regress it on all non-leaf nodes that precede it according to the causal ordering, and those predictors with non-zero linear coefficients are its parents. (Equivalently, its parents are the nodes that causally precede it and are in its Markov blanket.) Now the problem is whether it is possible to find out which variable in a given recursive group is a leaf node; if all leaf nodes are found, then the remaining one is the (only) non-leaf node. We may find leaf nodes by "looking backward" and "looking forward"; the former makes use of the parents of the variables in the considered group, and the latter exploits the fact that leaf nodes do not have any child.

###### Proposition 11. (Leaf node determination by "looking backward")

Suppose the observed data were generated by the CAMME where assumptions A0 and A4 hold. (In this non-Gaussian case, implied by assumption A4, the result reported in this proposition may still hold if one avoids the non-deterministic faithfulness assumption and assumes a weaker condition; however, for simplicity of the proof we currently still assume non-deterministic faithfulness.)
Let the sample size go to infinity. Then if assumption A5 holds, the leaf node is correctly identified from the observed values (more specifically, from the estimated mixing matrix or the distribution of the observed variables); alternatively, if assumption A6 holds, the two leaf nodes are correctly identified from the observed values.

• (A5) According to the DAG, a leaf node in the considered recursive group has a parent which is not a parent of the non-leaf node in the group.
• (A6) According to the DAG, two leaf nodes in the considered recursive group are non-deterministically conditionally independent given some subset of the remaining nodes.

##### Example Set 4

Suppose assumptions A0 and A4 hold.

• For the graph in Figure 6(a), assumption A6 holds for the two leaf nodes in the recursive group: they are non-deterministically conditionally independent given the non-leaf node; so both of them are identified to be leaf nodes from the estimated mixing matrix or the distribution of the observed variables, and the remaining node can be determined as the non-leaf node. (In addition, assumption A5 holds for one of the leaf nodes, allowing us to identify this leaf node directly.)
• For both
https://stellargraph.readthedocs.io/en/stable/demos/embeddings/graphsage-unsupervised-sampler-embeddings.html
# Node representation learning with GraphSAGE and UnsupervisedSampler

Stellargraph Unsupervised GraphSAGE is an implementation of the GraphSAGE method outlined in the paper: Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017.

This notebook is a short demo of how Stellargraph Unsupervised GraphSAGE can be used to learn embeddings of the nodes representing papers in the CORA citation network. Furthermore, this notebook demonstrates the use of the learnt embeddings in a downstream node classification task (classifying papers by subject). Note that the node embeddings can also be used in other graph machine learning tasks, such as link prediction, community detection, etc.

## Unsupervised GraphSAGE

A high-level explanation of the unsupervised GraphSAGE method of graph representation learning is as follows.

Objective: Given a graph, learn embeddings of the nodes using only the graph structure and the node features, without using any known node class labels (hence "unsupervised"; for semi-supervised learning of node embeddings, see this demo).

Unsupervised GraphSAGE model: In the Unsupervised GraphSAGE model, node embeddings are learnt by solving a simple classification task: given a large set of "positive" (target, context) node pairs generated from random walks performed on the graph (i.e., node pairs that co-occur within a certain context window in random walks), and an equally large set of "negative" node pairs that are randomly selected from the graph according to a certain distribution, learn a binary classifier that predicts whether arbitrary node pairs are likely to co-occur in a random walk performed on the graph.
Through learning this simple binary node-pair-classification task, the model automatically learns an inductive mapping from attributes of nodes and their neighbors to node embeddings in a high-dimensional vector space, which preserves structural and feature similarities of the nodes. Unlike embeddings obtained by algorithms such as Node2Vec, this mapping is inductive: given a new node (with attributes) and its links to other nodes in the graph (which was unseen during model training), we can evaluate its embeddings without having to re-train the model. In our implementation of Unsupervised GraphSAGE, the training set of node pairs is composed of an equal number of positive and negative (target, context) pairs from the graph. The positive (target, context) pairs are the node pairs co-occurring on random walks over the graph whereas the negative node pairs are sampled randomly from a global node degree distribution of the graph. The architecture of the node pair classifier is the following. Input node pairs (with node features) are fed, together with the graph structure, into a pair of identical GraphSAGE encoders, producing a pair of node embeddings. These embeddings are then fed into a node pair classification layer, which applies a binary operator to those node embeddings (e.g., concatenating them), and passes the resulting node pair embeddings through a linear transform followed by a binary activation (e.g., sigmoid), thus predicting a binary label for the node pair. The entire model is trained end-to-end by minimizing the loss function of choice (e.g., binary cross-entropy between predicted node pair labels and true link labels) using stochastic gradient descent (SGD) updates of the model parameters, with minibatches of ‘training’ links generated on demand and fed into the model. Node embeddings obtained from the encoder part of the trained classifier can be used in various downstream tasks. In this demo, we show how these can be used for predicting node labels. 
[3]:

import networkx as nx
import pandas as pd
import numpy as np
import os
import random

import stellargraph as sg
from stellargraph.data import EdgeSplitter
from stellargraph.data import UniformRandomWalk
from stellargraph.data import UnsupervisedSampler
from stellargraph.mapper import GraphSAGELinkGenerator
from stellargraph.layer import GraphSAGE, link_classification
from sklearn.model_selection import train_test_split

from tensorflow import keras
from sklearn import preprocessing, feature_extraction, model_selection
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression
from sklearn.metrics import accuracy_score

from stellargraph import globalvar
from stellargraph import datasets
from IPython.display import display, HTML

[4]:

dataset = datasets.Cora()
display(HTML(dataset.description))
G, node_subjects = dataset.load()

The Cora dataset consists of 2708 scientific publications classified into one of seven classes. The citation network consists of 5429 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 1433 unique words.

[5]:

print(G.info())

StellarGraph: Undirected multigraph
 Nodes: 2708, Edges: 5429

 Node types:
  paper: [2708]
    Edge types: paper-cites->paper

 Edge types:
    paper-cites->paper: [5429]

## Unsupervised GraphSAGE with on-demand sampling

The Unsupervised GraphSAGE requires a training sample that can either be provided as a list of (target, context) node pairs, or as an UnsupervisedSampler instance that takes care of generating positive and negative samples of node pairs on demand. In this demo we discuss the latter technique.

### UnsupervisedSampler

The UnsupervisedSampler class takes in a Stellargraph graph instance. The generator method in the UnsupervisedSampler is responsible for generating an equal number of positive and negative node pair samples from the graph for training. The samples are generated by performing uniform random walks over the graph, using a UniformRandomWalk object.
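Conceptually, the sampler pairs nodes that co-occur on a walk (positives) and draws one negative per positive from the graph's degree distribution. The following sketch is a simplified illustration on a toy 4-node graph, not the library's internals (the actual implementation pairs nodes within a context window over each walk):

```python
import random

random.seed(0)
# Toy undirected graph as adjacency lists (not the Cora graph).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
degrees = {v: len(ns) for v, ns in adj.items()}

def sample_pairs(length=5, number_of_walks=1):
    """Positive pairs from uniform random walks; negatives from the degree distribution."""
    pop = list(degrees)
    weights = [degrees[v] for v in pop]
    pairs, labels = [], []
    for root in adj:                      # default root nodes: all nodes of the graph
        for _ in range(number_of_walks):
            walk = [root]
            for _ in range(length - 1):   # uniform random walk of the given length
                walk.append(random.choice(adj[walk[-1]]))
            for context in walk[1:]:
                pairs.append((walk[0], context))              # positive pair, label 1
                labels.append(1)
                neg = random.choices(pop, weights=weights)[0]
                pairs.append((walk[0], neg))                  # negative pair, label 0
                labels.append(0)
    return pairs, labels

pairs, labels = sample_pairs()
# Equal numbers of positive and negative samples, as the sampler guarantees.
assert len(pairs) == len(labels) and sum(labels) * 2 == len(labels)
```

With 4 root nodes, 1 walk each, and walk length 5, this yields 16 positive and 16 negative (target, context) pairs.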
Positive (target, context) node pairs are extracted from the walks, and for each positive pair a corresponding negative pair (target, node) is generated by randomly sampling the node from the degree distribution of the graph. Once batch_size samples have been accumulated, the generator yields a list of positive and negative node pairs along with their respective 1/0 labels.

In the current implementation, we use uniform random walks to explore the graph structure. The length and number of walks, as well as the root nodes for starting the walks, can be user-specified. The default list of root nodes is all nodes of the graph, the default number_of_walks is 1 (at least one walk per root node), and the default length of walks is 2 (we need at least one node beyond the root node on the walk as a potential positive context).

1. Specify the other optional parameter values: root nodes, the number of walks to take per node, the length of each walk, and random seed.

[6]:

nodes = list(G.nodes())
number_of_walks = 1
length = 5

2. Create the UnsupervisedSampler instance with the relevant parameters passed to it.

[7]:

unsupervised_samples = UnsupervisedSampler(
    G, nodes=nodes, length=length, number_of_walks=number_of_walks
)

The graph G together with the unsupervised sampler will be used to generate samples.

3. Create a node pair generator: Next, create the node pair generator for sampling and streaming the training data to the model. The node pair generator essentially "maps" pairs of nodes (target, context) to the input of GraphSAGE: it either takes minibatches of node pairs, or an UnsupervisedSampler instance which generates the minibatches of node pairs on demand.
The generator samples 2-hop subgraphs with (target, context) head nodes extracted from those pairs, and feeds them, together with the corresponding binary labels indicating which pairs represent positive or negative samples, to the input layer of the node pair classifier with the GraphSAGE node encoder, for SGD updates of the model parameters.

Specify:

1. The minibatch size (number of node pairs per minibatch).
2. The number of epochs for training the model.
3. The sizes of 1- and 2-hop neighbor samples for GraphSAGE.

Note that the length of the num_samples list defines the number of layers/iterations in the GraphSAGE encoder. In this example, we are defining a 2-layer GraphSAGE encoder.

[8]:

batch_size = 50
epochs = 4
num_samples = [10, 5]

In the following we show the working of the node pair generator with the UnsupervisedSampler, which will generate samples on demand.

[9]:

generator = GraphSAGELinkGenerator(G, batch_size, num_samples)
train_gen = generator.flow(unsupervised_samples)

Build the model: a 2-layer GraphSAGE encoder acting as node representation learner, with a link classification layer on concatenated (citing-paper, cited-paper) node embeddings.

GraphSAGE part of the model, with hidden layer sizes of 50 for both GraphSAGE layers, a bias term, and no dropout. (Dropout can be switched on by specifying a positive dropout rate, 0 < dropout < 1.) Note that the length of the layer_sizes list must be equal to the length of num_samples, as len(num_samples) defines the number of hops (layers) in the GraphSAGE encoder.
[10]:

layer_sizes = [50, 50]
graphsage = GraphSAGE(
    layer_sizes=layer_sizes, generator=generator, bias=True, dropout=0.0, normalize="l2"
)

[11]:

# Build the model and expose input and output sockets of graphsage, for node pair inputs:
x_inp, x_out = graphsage.in_out_tensors()

Final node pair classification layer that takes a pair of node embeddings produced by the graphsage encoder, applies a binary operator to them to produce the corresponding node pair embedding (ip for inner product; other options for the binary operator can be seen by running a cell with ?link_classification in it), and passes it through a dense layer:

[12]:

prediction = link_classification(
    output_dim=1, output_act="sigmoid", edge_embedding_method="ip"
)(x_out)

link_classification: using 'ip' method to combine node embeddings into edge embeddings

Stack the GraphSAGE encoder and prediction layer into a Keras model, and specify the loss.

[13]:

model = keras.Model(inputs=x_inp, outputs=prediction)
model.compile(
    loss=keras.losses.binary_crossentropy,
    metrics=[keras.metrics.binary_accuracy],
)

4. Train the model.

[14]:

history = model.fit(
    train_gen,
    epochs=epochs,
    verbose=1,
    use_multiprocessing=False,
    workers=4,
    shuffle=True,
)

Epoch 1/4
434/434 [==============================] - 35s 80ms/step - loss: 0.5668 - binary_accuracy: 0.7413
Epoch 2/4
434/434 [==============================] - 33s 77ms/step - loss: 0.5404 - binary_accuracy: 0.7739
Epoch 3/4
434/434 [==============================] - 34s 78ms/step - loss: 0.5378 - binary_accuracy: 0.7823
Epoch 4/4
434/434 [==============================] - 34s 78ms/step - loss: 0.5383 - binary_accuracy: 0.7815

Note that multiprocessing is switched off, since with a large training set of node pairs, multiprocessing can considerably slow down the training process with the data being transferred between various processes.
Also, multiple workers can be used with Keras version 2.2.4 and above, which speeds up the training process considerably due to multi-threading.

## Extracting node embeddings

Now that the node pair classifier is trained, we can use its node encoder part as a node embedding evaluator. Below we evaluate node embeddings as activations of the output of the GraphSAGE layer stack, and visualise them, coloring nodes by their subject label.

[15]:

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from stellargraph.mapper import GraphSAGENodeGenerator
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Building a new node-based model

The (src, dst) node pair classifier model has two identical node encoders: one for the source nodes and one for the destination nodes in the node pairs passed to the model. We can use either of the two identical encoders to evaluate node embeddings. Below we create an embedding model by defining a new Keras model with x_inp_src (the even-indexed elements of x_inp) and x_out_src (the first element of x_out) as input and output, respectively. Note that this model's weights are the same as those of the corresponding node encoder in the previously trained node pair classifier.

[16]:

x_inp_src = x_inp[0::2]
x_out_src = x_out[0]
embedding_model = keras.Model(inputs=x_inp_src, outputs=x_out_src)

We also need a node generator to feed graph nodes to embedding_model. We want to evaluate node embeddings for all nodes in the graph:

[17]:

node_ids = node_subjects.index
node_gen = GraphSAGENodeGenerator(G, batch_size, num_samples).flow(node_ids)

We now use node_gen to feed all nodes into the embedding model and extract their embeddings:

[18]:

node_embeddings = embedding_model.predict(node_gen, workers=4, verbose=1)

55/55 [==============================] - 1s 19ms/step

### Visualize the node embeddings

Next we visualize the node embeddings in 2D using t-SNE.
Colors of the nodes depict their true classes (subject, in the case of the Cora dataset).

[19]:

node_subject = node_subjects.astype("category").cat.codes

X = node_embeddings
if X.shape[1] > 2:
    transform = TSNE  # PCA
    trans = transform(n_components=2)
    emb_transformed = pd.DataFrame(trans.fit_transform(X), index=node_ids)
    emb_transformed["label"] = node_subject
else:
    emb_transformed = pd.DataFrame(X, index=node_ids)
    emb_transformed = emb_transformed.rename(columns={"0": 0, "1": 1})
    emb_transformed["label"] = node_subject

[20]:

alpha = 0.7

fig, ax = plt.subplots(figsize=(7, 7))
ax.scatter(
    emb_transformed[0],
    emb_transformed[1],
    c=emb_transformed["label"].astype("category"),
    cmap="jet",
    alpha=alpha,
)
ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$")
plt.title(
    "{} visualization of GraphSAGE embeddings for cora dataset".format(transform.__name__)
)
plt.show()

The observation that same-colored nodes are concentrated together in the embedding space is indicative of the similarity of embeddings of papers on the same topics. We emphasize here again that the node embeddings are learnt in an unsupervised way, without using the true class labels.

The node embeddings calculated using unsupervised GraphSAGE can be used as node feature vectors in a downstream task such as node classification. In this example, we will use the node embeddings to train a simple Logistic Regression classifier to predict paper subjects in the Cora dataset.

[21]:

# X will hold the 50 input features (node embeddings)
X = node_embeddings
# y holds the corresponding target values
y = np.array(node_subject)

### Data Splitting

We split the data into train and test sets. We use 5% of the data for training and the remaining 95% as a hold-out test set.

[22]:

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.05, test_size=None, stratify=y
)

### Classifier Training

We train a Logistic Regression classifier on the training data.
[23]:

clf = LogisticRegression(verbose=0, solver="lbfgs", multi_class="auto")
clf.fit(X_train, y_train)

[23]:

LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=100,
                   multi_class='auto', n_jobs=None, penalty='l2',
                   random_state=None, solver='lbfgs', tol=0.0001, verbose=0,
                   warm_start=False)

Predict the hold-out test set.

[24]:

y_pred = clf.predict(X_test)

Calculate the accuracy of the classifier on the test set.

[25]:

accuracy_score(y_test, y_pred)

[25]:

0.7427127866303925

The obtained accuracy is pretty decent, better than that obtained with node2vec embeddings, which ignore node attributes and take only the graph structure into account (see this demo).

Predicted classes:

[26]:

pd.Series(y_pred).value_counts()

[26]:

2    831
1    428
6    406
3    356
0    334
4    195
5     23
dtype: int64

True classes:

[27]:

pd.Series(y).value_counts()

[27]:

2    818
3    426
1    418
6    351
0    298
4    217
5    180
dtype: int64

### Uses for unsupervised graph representation learning

1. Unsupervised GraphSAGE learns embeddings of unlabeled graph nodes. This is highly useful, as most real-world data is typically either unlabeled or has noisy, unreliable, or sparse labels. In such scenarios, unsupervised techniques that learn low-dimensional, meaningful representations of the nodes in a graph by leveraging the graph structure and the features of the nodes are useful.
2. Moreover, GraphSAGE is an inductive technique that allows us to obtain embeddings of unseen nodes without the need to re-train the embedding model. That is, instead of training individual embeddings for each node (as in algorithms such as node2vec that learn a look-up table of node embeddings), GraphSAGE learns a function that generates embeddings by sampling and aggregating attributes from each node's local neighborhood, and combining those with the node's own attributes.
https://skyciv.com/docs/skyciv-foundation/piles/geotechnical-capacity-of-piles/
Geotechnical Capacity of Piles

How to calculate the ultimate load-carrying capacity of a single pile, with examples.

Evaluating the ultimate load-carrying capacity of a single pile is an important aspect of pile design that can sometimes be complicated. To easily understand the load transfer mechanism of a single pile, imagine a concrete pile of length L with diameter D, as shown in Figure 1.

Figure 1: Load transfer mechanism for piles

The load Q applied on the pile shall be transferred directly to the soil. Part of this load will be resisted by the side or skin friction developed along the shaft (Qs), and the rest will be resisted by the soil below the tip of the pile (Qp). Therefore, the ultimate load-carrying capacity (Qu) of a pile shall be given by equation (1). Numerous studies and methods are available to estimate the values of Qp and Qs.

$${Q}_{u} = {Q}_{p} + {Q}_{s}$$    (1)

Qu = Ultimate load-carrying capacity
Qp = End-bearing capacity
Qs = Skin-frictional resistance

End-bearing Capacity

Ultimate end-bearing capacity is theoretically the maximum load per unit area which can be supported by the soil without failure. The equation of Karl von Terzaghi, the father of soil mechanics, is one of the first and most widely used theories for evaluating the ultimate bearing capacity of foundations.
Terzaghi's equation for the ultimate bearing capacity can be expressed as:

$${q}_{u} = (c × {N}_{c}) + (q × {N}_{q}) + (\frac{1}{2} × γ × B × {N}_{γ})$$    (2)

qu = Ultimate end-bearing capacity
c = Cohesion of soil
q = Effective soil pressure
γ = Soil unit weight
B = Cross-sectional depth or diameter
Nc, Nq, Nγ = Bearing capacity factors

Since qu is in terms of load per unit area, or pressure, multiplying it by the cross-sectional area of the pile will result in the end-bearing load capacity (Qp) of the pile. The resulting value of the last term of equation (2) is negligible due to the relatively small pile width, hence it may be dropped from the equation. Thus, the ultimate end-bearing load capacity of the pile can be expressed as shown in equation (3). This modified version of Terzaghi's equation is used in SkyCiv's pile foundation module.

$${Q}_{p} = {A}_{p} × [(c × {N}_{c}) + (q × {N}_{q})]$$    (3)

Ap = Cross-sectional area of pile

Bearing capacity factors Nc and Nq are non-dimensional, empirically derived, and are functions of the soil friction angle (Φ). Many researchers have provided different techniques to compute the bearing factors. Table 1 summarizes the values of Nq according to the Naval Facilities Engineering Command (NAVFAC DM 7.2, 1984). The value of Nc is approximately equal to 9 for piles in clayey soils.

Table 1: Nq values from NAVFAC DM 7.2

| Friction Angle (Φ) | 26 | 28 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Driven Piles | 10 | 15 | 21 | 24 | 29 | 35 | 42 | 50 | 62 | 77 | 86 | 120 | 145 |
| Bored Piles | 5 | 8 | 10 | 12 | 14 | 17 | 21 | 25 | 30 | 38 | 43 | 60 | 72 |

Skin-frictional Resistance Capacity

Skin-frictional resistance of piles is developed along the length of the pile.
Generally, the frictional resistance of a pile may be written as:

$${Q}_{s} = ∑ (p × ΔL × f)$$    (4)

p = Perimeter of the pile
ΔL = Incremental pile length over which p and f are taken
f = Unit frictional resistance at any depth

Estimating the value of the unit frictional resistance f requires several important factors to be considered, such as the nature of the pile installation and the soil classification. Various techniques are available to evaluate its value. Equations (5) and (6) show how to compute the unit frictional resistance of piles in sandy and clayey soils, respectively. Tables 2 and 3 present the effective earth pressure coefficients (K) and soil-pile frictional angles (δ') recommended by NAVFAC DM7.2.

For sandy soils:

$$f = K × σ' × tan(δ')$$    (5)

K = Effective earth pressure coefficient
σ' = Effective vertical stress at the depth under consideration
δ' = Soil-pile frictional angle

For clayey soils:

$$f = α × c$$    (6)

α = Adhesion factor (Table 4)

Table 2: Soil-Pile Frictional Angle Values (NAVFAC DM7.2, 1984)

| Pile Type | δ' |
|---|---|
| Steel Pile | 20º |
| Timber Pile | 3/4 × Φ |
| Concrete Pile | 3/4 × Φ |

Table 3: Lateral Earth Pressure Coefficient (K) Values (NAVFAC DM7.2, 1984)

| Pile Type | Compression Pile | Tension Pile |
|---|---|---|
| Driven H-piles | 0.5–1.0 | 0.3–0.5 |
| Driven displacement piles (round, rectangular) | 1.0–1.5 | 0.6–1.0 |
| Driven displacement piles (tapered) | 1.5–2.0 | 1.0–1.3 |
| Driven jetted piles | 0.4–0.9 | 0.3–0.6 |
| Bored piles (<24″ diameter) | 0.7 | 0.4 |

Table 4: Adhesion Factor Values (Terzaghi, Peck, and Mesri, 1996)

| c/pa | α |
|---|---|
| ≤ 0.1 | 1.00 |
| 0.2 | 0.92 |
| 0.3 | 0.82 |
| 0.4 | 0.74 |
| 0.6 | 0.62 |
| 0.8 | 0.54 |
| 1.0 | 0.48 |
| 1.2 | 0.42 |
| 1.4 | 0.40 |
| 1.6 | 0.38 |
| 1.8 | 0.36 |
| 2.0 | 0.35 |
| 2.4 | 0.34 |
| 2.8 | 0.34 |

Note: pa = atmospheric pressure ≈ 100 kN/m²

Calculating the capacity of piles in sand

Example 1: A 12-meter long concrete pile with a diameter of 500 mm is driven into multiple sand layers with no groundwater present. Compute the ultimate load-carrying capacity (Qu) of the pile.
Details:

| Section | |
| --- | --- |
| Diameter | 500 mm |
| Length | 12 m |

| Layer 1 Soil Properties | |
| --- | --- |
| Thickness | 5 m |
| Unit Weight | 17.3 kN/m³ |
| Friction Angle | 30 degrees |
| Cohesion | 0 kPa |
| Groundwater Table | Not present |

| Layer 2 Soil Properties | |
| --- | --- |
| Thickness | 7 m |
| Unit Weight | 16.9 kN/m³ |
| Friction Angle | 32 degrees |
| Cohesion | 0 kPa |
| Groundwater Table | Not present |

Step 1: Compute the end-bearing load capacity (Qp).

At the tip of the pile:

Ap = (π/4) × D² = (π/4) × 0.5²
Ap = 0.196 m²

c = 0 kPa
Φ = 32º
Nq = 29 (from Table 1)

Effective soil pressure (q):

q = (γ1 × t1) + (γ2 × t2) = (17.3 kN/m³ × 5 m) + (16.9 kN/m³ × 7 m)
q = 204.8 kPa

Use Equation (3) for the end-bearing load capacity:

Qp = Ap × [(c × Nc) + (q × Nq)]
Qp = 0.196 m² × (204.8 kPa × 29)
Qp = 1,164.083 kN

Step 2: Compute the skin-frictional resistance (Qs).

Using Equations (4) and (5), calculate the skin friction per soil layer.

Qs = ∑ (p × ΔL × f)

p = π × D = π × 0.5 m
p = 1.571 m

Layer 1:

ΔL = 5 m
f1 = K × σ’1 × tan(δ’)
K = 1.25 (Table 3)
δ’ = 3/4 × 30º = 22.50º
σ’1 = γ1 × (0.5 × t1) = 17.3 kN/m³ × (0.5 × 5 m)
σ’1 = 43.25 kN/m²
f1 = 1.25 × 43.25 kN/m² × tan(22.50º)
f1 = 22.393 kN/m²
Qs1 = p × ΔL × f1 = 1.571 m × 5 m × 22.393 kN/m²
Qs1 = 175.897 kN

Layer 2:

ΔL = 7 m
f2 = K × σ’2 × tan(δ’)
K = 1.25 (Table 3)
δ’ = 3/4 × 32º = 24º
σ’2 = (γ1 × t1) + [γ2 × (0.5 × t2)] = (17.3 kN/m³ × 5 m) + [16.9 kN/m³ × (0.5 × 7 m)]
σ’2 = 145.65 kN/m²
f2 = 1.25 × 145.65 kN/m² × tan(24º)
f2 = 81.059 kN/m²
Qs2 = p × ΔL × f2 = 1.571 m × 7 m × 81.059 kN/m²
Qs2 = 891.406 kN

Total skin-frictional resistance:

Qs = Qs1 + Qs2 = 175.897 kN + 891.406 kN
Qs = 1,067.303 kN

Step 3: Compute the ultimate load-carrying capacity (Qu).

Qu = Qp + Qs = 1,164.083 kN + 1,067.303 kN
Qu = 2,231.386 kN

Calculating the capacity of piles in clay

Example 2: Consider a 406 mm diameter concrete pile with a length of 30 m embedded in layered saturated clay. Compute the ultimate load-carrying capacity (Qu) of the pile.
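The whole worked example for sand can be reproduced with a short script. This is our sketch following the same three steps; small differences from the hand calculation come from rounding Ap and p in the hand work:

```python
import math

D = 0.5                                   # pile diameter, m
layers = [(5.0, 17.3, 30.0),              # (thickness m, unit weight kN/m3, friction angle deg)
          (7.0, 16.9, 32.0)]
K, Nq = 1.25, 29                          # Tables 3 and 1 (driven displacement pile)

Ap = math.pi / 4 * D ** 2                 # tip area, m2
p = math.pi * D                           # perimeter, m

# Step 1: end bearing (c = 0 for sand), effective pressure q taken at the pile tip.
q_tip = sum(t * gamma for t, gamma, _ in layers)
Qp = Ap * q_tip * Nq

# Step 2: skin friction, effective stress evaluated at mid-depth of each layer.
Qs, sigma_top = 0.0, 0.0
for t, gamma, phi in layers:
    sigma_mid = sigma_top + gamma * 0.5 * t
    delta = 0.75 * phi                    # concrete pile, Table 2
    f = K * sigma_mid * math.tan(math.radians(delta))
    Qs += p * t * f
    sigma_top += gamma * t

# Step 3: ultimate capacity, ~2233 kN (vs 2,231.4 kN by hand).
Qu = Qp + Qs
```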
Details:

| Section | |
| --- | --- |
| Diameter | 406 mm |
| Length | 30 m |

| Layer 1 Soil Properties | |
| --- | --- |
| Thickness | 10 m |
| Unit Weight | 8 kN/m³ |
| Friction Angle | 0º |
| Cohesion | 30 kPa |
| Groundwater Table | 5 m |

| Layer 2 Soil Properties | |
| --- | --- |
| Thickness | 20 m |
| Unit Weight | 19.6 kN/m³ |
| Friction Angle | 0º |
| Cohesion | 100 kPa |
| Groundwater Table | Fully submerged |

Step 1: Compute the end-bearing load capacity (Qp).

At the tip of the pile:

Ap = (π/4) × D² = (π/4) × 0.406²
Ap = 0.129 m²

c = 100 kPa
Nc = 9 (typical value for clay)

Qp = (c × Nc) × Ap = (100 kPa × 9) × 0.129 m²
Qp = 116.1 kN

Step 2: Compute the skin-frictional resistance (Qs).

Using Equations (4) and (6), calculate the skin friction per soil layer.

Qs = ∑ (p × ΔL × f)

p = π × D = π × 0.406 m
p = 1.275 m

Layer 1:

ΔL = 10 m
α1 = 0.82 (Table 4)
c1 = 30 kPa
f1 = α1 × c1 = 0.82 × 30 kPa
f1 = 24.6 kN/m²
Qs1 = p × ΔL × f1 = 1.275 m × 10 m × 24.6 kN/m²
Qs1 = 313.65 kN

Layer 2:

ΔL = 20 m
α2 = 0.48 (Table 4)
c2 = 100 kPa
f2 = α2 × c2 = 0.48 × 100 kPa
f2 = 48 kN/m²
Qs2 = p × ΔL × f2 = 1.275 m × 20 m × 48 kN/m²
Qs2 = 1,224 kN

Total skin-frictional resistance:

Qs = Qs1 + Qs2 = 313.65 kN + 1,224 kN
Qs = 1,537.65 kN

Step 3: Compute the ultimate load-carrying capacity (Qu).

Qu = Qp + Qs = 116.1 kN + 1,537.65 kN
Qu = 1,653.75 kN

References:

• Das, B.M. (2007). Principles of Foundation Engineering (7th Edition). Global Engineering.
• Rajapakse, R. (2016). Pile Design and Construction Rule of Thumb (2nd Edition). Elsevier Inc.
• Tomlinson, M.J. (2004). Pile Design and Construction Practice (4th Edition). E & FN Spon.
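A similar sketch (again ours, not SkyCiv's code) checks the clay example using the α-method values from Table 4:

```python
import math

D, c_tip, Nc = 0.406, 100.0, 9            # pile diameter (m), tip cohesion (kPa), clay bearing factor
layers = [(10.0, 0.82, 30.0),             # (thickness m, alpha from Table 4, cohesion kPa)
          (20.0, 0.48, 100.0)]

Ap = math.pi / 4 * D ** 2                 # tip area, m2
p = math.pi * D                           # perimeter, m

Qp = Ap * c_tip * Nc                      # end bearing, q*Nq term omitted as in the example
Qs = sum(p * t * alpha * c for t, alpha, c in layers)   # skin friction  [Eqs. 4 and 6]
Qu = Qp + Qs                              # ~1655 kN (vs 1,653.75 kN by hand)
```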
https://www.illustrativemathematics.org/content-standards/RP/6/A/standards
## Select a standard

6.RP.A.1 Understand the concept of a ratio and use ratio language to describe a ratio relationship between...

6.RP.A.2 Understand the concept of a unit rate $a/b$ associated with a ratio $a:b$ with $b \neq 0$, and...

6.RP.A.3 Use ratio and rate reasoning to solve real-world and mathematical problems, e.g., by reasoning...
http://cs.stackexchange.com/questions/7371/a-question-about-parallel-algorithm-complexity
A question about parallel algorithm complexity

Suppose in a parallel algorithm we say: "This algorithm is done in $O(1)$ time using $O(n\log n)$ work, with $n$-exponential probability, or alternatively, in $O(\log n)$ time using $O(n)$ work, with $n$-exponential probability." Can we then implement this algorithm on a quad-core computer (with just 4 threads) for $n=100{,}000$? The other question is: what is the "$n$-exponential probability" in this sentence? Thanks.

- $n$-exponential probability probably means that the algorithm could fail, but this happens with probability $c^n$ for some $c < 1$. – Yuval Filmus Dec 13 '12 at 12:13
- As for the other question, $n$ could be either the number of processors or some complexity measure of the input. In the former case, to implement an algorithm with $n=10^5$ you will need $10^5$ cores. Do you have any particular algorithm in mind? – Yuval Filmus Dec 13 '12 at 12:14
- Big O in general does not tell you about real-world suitability. – sdcvvc Dec 13 '12 at 14:48

1 Answer

You are probably in the realm of asynchronous parallel computations, where units of work are performed by processors at their own pace and communication is performed explicitly. This model is a good approximation to many real-life parallel computers such as PC clusters or multicore CPUs. You have an algorithm that can be represented as $O(n \log n)$ units of work each taking constant time, or as $O(n)$ units of work each taking $O(\log n)$ time. Here $n$ is a parameter that characterizes the size of the problem. The units of work can be executed on a parallel computer with a fixed number of sequential processing elements (e.g. processor cores). It depends on the algorithm whether the work units can complete while other work units have not started, or whether the computations will have to interleave. In a practical computer, interleaving can be achieved through pre-emption and context switches.

- Thanks, all. 
I asked the first question to find out whether this algorithm is suitable for the real world. If there is another implementable (real-world suitable) algorithm that does not have $O(1)$ running time, which one is better? In other words, I want to know how to compare this one with others. Does $O(1)$ running time indicate a good algorithm? Again, thanks. – Shahmohamamdi Dec 13 '12 at 17:19
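One way to make the question concrete for a fixed machine is Brent's scheduling bound, $T_p \le W/p + T_\infty$, where $W$ is total work and $T_\infty$ the span (time with unlimited processors). Treating the stated asymptotic bounds as exact unit-cost counts (constants and the probabilistic caveat ignored, so this is only a back-of-envelope sketch), a quick estimate for $n = 100{,}000$ and $p = 4$ shows the work-optimal variant wins on a quad-core:

```python
import math

def brent_upper_bound(work, span, p):
    """Brent's bound: p processors finish 'work' unit-cost operations
    with critical-path length 'span' in at most work/p + span steps."""
    return work / p + span

n, p = 100_000, 4
t_fast = brent_upper_bound(n * math.log2(n), 1, p)   # O(1)-time, O(n log n)-work variant
t_lean = brent_upper_bound(n, math.log2(n), p)       # O(log n)-time, O(n)-work variant
# With only 4 cores the work term dominates, so the O(n)-work
# variant is preferable despite its larger span.
```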
http://science.univ.kiev.ua/en/researchgroups/research.php?ELEMENT_ID=2535
# Yuriy Kozachenko

POSITION

Professor of the Department of Probability, Statistics and Actuarial Mathematics

WORK EXPERIENCE

- 1964–1967: Postgraduate Student, Institute of Mathematics of the National Academy of Sciences of Ukraine, Kyiv (Ukraine)
- 1967–1976: Lecturer, Taras Shevchenko National University of Kyiv, Kyiv (Ukraine)
- 1976–1987: Assistant Professor, Taras Shevchenko National University of Kyiv, Kyiv (Ukraine)
- 1987–1998: Professor, Taras Shevchenko National University of Kyiv, Kyiv (Ukraine)
- 1998–2003: Head of the Department of Probability Theory and Mathematical Statistics, Taras Shevchenko National University of Kyiv, Kyiv (Ukraine)
- 2003–Present: Full Professor, Taras Shevchenko National University of Kyiv, Kyiv (Ukraine)

EDUCATION AND TRAINING

- 1963: BSc + MSc, Taras Shevchenko National University of Kyiv, Kyiv (Ukraine)
- 1968: PhD, Taras Shevchenko National University of Kyiv, Kyiv (Ukraine)
- 1985: DSc, Taras Shevchenko National University of Kyiv, Kyiv (Ukraine)

# Stochastic processes and fields with values in functional spaces, simulation of random processes and fields, applied statistics

Research Fields: Mathematics

## Previous and Current Research

• Analytical properties of stochastic processes; distribution estimation of functionals from random processes
• Random processes in Orlicz spaces
• Pre-Gaussian and sub-Gaussian random processes
• Cauchy problem for mathematical physics equations with random initial conditions
• Simulation of random processes
• Statistics of random processes
• Wavelet expansions of random processes

Our research is concentrated on the development of approximations for paths of random processes with given accuracy and reliability, and the estimation of functionals of these paths. The team consists of 10 researchers: 1 DSc, 6 PhD candidates and 3 PhD students. We maintain close collaboration with the universities of Rome and Melbourne. 
We were the first to investigate convergence of wavelet expansions of Gaussian and phi-sub-Gaussian processes and to consider new expansions of random processes in series with uncorrelated or independent values. This enables us not only to simulate a process with given accuracy and reliability, but also to approximate the process by intervals of these series with given accuracy. Now we study conditions and rate of convergence for wavelet expansions of random processes from Orlicz and other special spaces. We investigate convergence of the Kotelnikov-Shannon approximations in C(T) for Gaussian stationary processes with bounded spectrum and in L_p(T) for processes with unbounded spectrum.

## Future Projects and Goals

• Wavelet expansions of random processes in Orlicz spaces and the Kotelnikov-Shannon approximations
• Simulation of Gaussian, phi-sub-Gaussian and other random fields with given accuracy and reliability in the spaces C(T), C^1(T) and L_p(T)
• Simulation of random processes and fields with stochastic differential equations with fractional operators
• Analytic properties of random fields from special spaces of random variables
• Monte Carlo Methods for calculation of integral functionals
• Analysis of generalized models of random processes connected with differential equations
• Equation of thermal conductivity with random initial conditions and random boundary conditions

## Selected Publications

Kozachenko Y., Olenko A., Polosmak O. Convergence in $L_p([0,T])$ of wavelet expansions of $\phi$-Sub-Gaussian random processes. Methodology and Computing in Applied Probability. – 2015. – Vol. 17 (1). – P. 139-153.

Kozachenko Y., Troshki N.V. Accuracy and reliability of a model of Gaussian random processes in C(T) space. International Journal of Statistics and Management System. – 2015. – Vol. 10 (1-2). – P. 1-15.

Kozachenko Y., Troshki V. A criterion for testing hypotheses about the covariance function of a stationary Gaussian stochastic process. 
Modern Stochastics: Theory and Application. – 2014. – Vol. 1 (2). – P. 139-149.

Kozachenko Y.V., Slyvka-Tylyshchak A.I. The Cauchy problem for the heat equation with a random right side. Random Operators and Stochastic Equations. – 2014. – Vol. 22 (1). – P. 53-64.

Kozachenko Y., Olenko A., Polosmak O. Uniform convergence of compactly supported wavelet expansions of Gaussian random processes. Communications in Statistics - Theory and Methods. – 2014. – Vol. 43 (10-12). – P. 2549-2562.

Yamnenko R., Kozachenko Y., Bushmitch D. Generalized sub-Gaussian fractional Brownian motion queueing model. Queueing Systems. – 2014. – Vol. 77 (1). – P. 75-96.

Kozachenko Y., Sergiienko M. Estimates of distributions for some functionals of stochastic processes from an Orlicz space. Random Operators and Stochastic Equations. – 2014. – Vol. 22 (2). – P. 65-72.

Kozachenko Y.V., Sergiienko M.P. The criterion of hypothesis testing on the covariance function of a Gaussian stochastic process. Monte Carlo Methods and Applications. – 2014. – Vol. 20 (2). – P. 137-144.

Giuliano Antonini R., Hu T.-C., Kozachenko Y., Volodin A. An application of $\phi$-subgaussian technique to Fourier analysis. Journal of Mathematical Analysis and Applications. – 2013. – Vol. 408. – P. 114-124.

Kozachenko Y., Olenko A., Polosmak O. On convergence of general wavelet decompositions of nonstationary stochastic processes. Electronic Journal of Probability. – 2013. – Vol. 18. – Article 69. – 21 p.

Kozachenko Yu.V., Mlavets′ Yu.Yu. The Banach spaces $F_{\psi}(Ω)$ of random variables. Theory of Probability and Mathematical Statistics. – 2013. – Vol. 86. – P. 105-121.

Kozachenko Y., Pashko A. Accuracy of simulations of the Gaussian random processes with continuous spectrum. Computer Modeling and New Technologies. – 2014. – Vol. 18 (3). – P. 7-12.

Kozachenko Y., Pogoriliak O. Simulation of Cox processes driven by random Gaussian field. Methodology and Computing in Applied Probability. – 2011. – Vol. 13 (3). – P. 511-521. 
Kozachenko Yu., Sottinen T., Vasylyk O. Lipschitz conditions for $Sub_{\phi}(\Omega)$-processes and applications to weakly self-similar processes with stationary increments. Theory of Probability and Mathematical Statistics. – 2011. – Vol. 82. – P. 57-73.

## Contacts

Homepage: http://probability.univ.kiev.ua/index.php?page=userinfo&person=yvk&lan=en

yvk@univ.kiev.ua
https://meangreenmath.com/2016/11/16/what-i-learned-by-reading-gamma-exploring-eulers-constant-by-julian-havil-index/
# What I Learned by Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post.

When I was researching for my series of posts on conditional convergence, especially examples related to the constant $\gamma$, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I don’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before. In this series, I’d like to compile some of my favorites along with the page numbers in the book — while giving the book a very high recommendation.

Part 1: The smallest value of $n$ so that $1 + \frac{1}{2} + \dots + \frac{1}{n} > 100$ (page 23).

Part 2: Except for a couple of select values of $m$, the sum $\frac{1}{m} + \frac{1}{m+1} + \dots + \frac{1}{n}$ is never an integer (pages 24-25).

Part 3: The sum of the reciprocals of the twin primes converges (page 30).

Part 4: Euler somehow calculated $\zeta(26)$ without a calculator (page 41).

Part 5: The integral called the Sophomore’s Dream (page 44).

Part 6: St. Augustine’s thoughts on mathematicians — in context, astrologers (page 65).

Part 7: The probability that two randomly selected integers have no common factors is $6/\pi^2$ (page 68).

Part 8: The series for quickly computing $\gamma$ to high precision (page 89).

Part 9: An observation about the formulas for $1^k + 2^k + \dots + n^k$ (page 81).

Part 10: A lower bound for the gap between successive primes (page 115).

Part 11: Two generalizations of $\gamma$ (page 117). 
Part 12: Relating the harmonic series to meteorological records (page 125). Part 13: The crossing-the-desert problem (page 127). Part 14: The worm-on-a-rope problem (page 133). Part 15: An amazingly nasty formula for the $n$th prime number (page 168). Part 16: A heuristic argument for the form of the prime number theorem (page 172). Part 17: Oops. Part 18: The Riemann Hypothesis can be stated in a form that can be understood by high school students (page 207).
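The claim in Part 1 is easy to probe numerically. Direct summation is hopeless for a target of 100 (the answer has about 43 digits), but it works for small targets, and the asymptotic $H_n \approx \ln n + \gamma$ explains the size of the real answer. A quick sketch:

```python
import math

def smallest_n_exceeding(target):
    """Smallest n with H_n = 1 + 1/2 + ... + 1/n > target, by direct summation."""
    h, n = 0.0, 0
    while h <= target:
        n += 1
        h += 1.0 / n
    return n

# Feasible for small targets:
print(smallest_n_exceeding(10))            # 12367

# For target 100, invert H_n ~ ln(n) + gamma instead of summing:
gamma = 0.5772156649015329
print(math.exp(100 - gamma))               # ~1.5e43
```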
https://clarissewiki.com/5.0/reference/technical/TextureCurvature.html
# TextureCurvature (Curvature)

## Description

Utility texture allowing to output the information of the current fragment.

## Public Attributes

| Type | Name | Visual Hint | Description |
| --- | --- | --- | --- |
| long | output | VISUAL_HINT_DEFAULT | Set which information should be outputted from the texture. The sampled curvature computes a quantity akin to mean curvature, but with values between -1 and 1. It is computed through ray-casting to get a smoother result than the crude value obtained from other modes. |
| double | radius | VISUAL_HINT_DISTANCE | Defines the radius within which ray-casting occurs. |
| double | bias | VISUAL_HINT_DISTANCE | Defines the bias, or offset, applied to the position of origin of the ray-casting along the normal. |
| long | sample_count | VISUAL_HINT_SAMPLE_PER_PIXEL | Samples per shaded point for this ray-casting curvature computation. Note that unlike the sampling settings on materials and lights, this one is not going to be decimated based on camera samples or the splitting factor of the secondary rays. |
| reference (Group) | geometry | VISUAL_HINT_GROUP | Defines against which geometry the ray-casting will occur, defaulting to the visible geometry as defined at the Layer 3D level. |
| long | intersection_mode | VISUAL_HINT_DEFAULT | Within the set of geometries defined right above, indicates which ones should participate in the curvature computation. |
| long | sidedness | VISUAL_HINT_DEFAULT | Sidedness of the surface used for this computation. 'Single' keeps the original normal no matter what, whereas 'double' may flip it to be oriented towards the incoming direction. |
| long | display | VISUAL_HINT_DEFAULT | Chooses between an easier-to-interpret color-coded display mode, or the raw scalar value that can get negative. In color-coded mode, green is positive and red negative (resp. convex and concave regions). |
| double | gain | VISUAL_HINT_DEFAULT | Multiplies the output for easier readability. |
| double | offset | VISUAL_HINT_DEFAULT | Offsets the output (in raw display mode only, and after the gain has been applied) for easier readability. |

## Inherited Public Attributes

| Type | Name | Visual Hint | Description |
| --- | --- | --- | --- |
| bool | pass_through | VISUAL_HINT_DEFAULT | If checked, the current texture is not evaluated and the value of the attribute selected in Pass Through Attribute is directly forwarded. |
| string | master_input | VISUAL_HINT_TAG | Name of the attribute that will be used as output if Pass Through is enabled. |
| bool | invert | VISUAL_HINT_DEFAULT | If checked, the texture is inverted. |
| double | opacity | VISUAL_HINT_PERCENTAGE | Set the opacity of the texture. |

## CID

    class "TextureCurvature" "Texture" {
        #version 1.02
        icon "../icons/object_icons/texture_curvature.iconrc"
        category "/Texture/Utility"
        doc "Utility texture allowing to output the information of the current fragment."
        attribute_group "curvature" {
            long "output" {
                doc "Set which information should be outputted from the texture. The sampled curvature computes a quantity akin to mean curvature, but with values between -1 and 1. It is computed through ray-casting to get a smoother result than the crude value obtained from other modes."
                preset "Gaussian Curvature" "0"
                preset "Mean Curvature" "1"
                preset "Sampled Curvature" "2"
                value 0
            }
            distance "radius" {
                doc "Defines the radius within which ray-casting occurs."
                texturable yes
                animatable yes
                slider yes
                numeric_range yes 0.0 10000
                ui_range yes 0.0 10
                value 0.1
            }
            distance "bias" {
                doc "Defines the bias, or offset, applied to the position of origin of the ray-casting along the normal."
                texturable yes
                animatable yes
                slider yes
                numeric_range yes 0.0 10000
                ui_range yes 0.0 0.001
                value 0.0001
            }
            sample_per_pixel "sample_count" {
                doc "Samples per shaded point for this ray-casting curvature computation. Note that unlike the sampling settings on materials and lights, this one is not going to be decimated based on camera samples or the splitting factor of the secondary rays."
                texturable yes
                animatable yes
                numeric_range yes 1 4096
                ui_range yes 1 256
                value 16
            }
        }
        attribute_group "geometry" {
            collapsed yes
            ui_weight 1000
            group "geometry" {
                doc "Defines against which geometry the ray-casting will occur, defaulting to the visible geometry as defined at the Layer 3D level."
                filter "SceneObject"
                null_label "Use Layer 3D"
                dg_cyclic yes
                value ""
            }
            long "intersection_mode" {
                doc "Within the set of geometries defined right above, indicates which ones should participate in the curvature computation."
                preset "All" "0"
                preset "Self only" "1"
                preset "Other only" "2"
                value 0
            }
            long "sidedness" {
                doc "Sidedness of the surface used for this computation. \'Single\' keeps the original normal no matter what, whereas \'double\' may flip it to be oriented towards the incoming direction."
                preset "Single" "0"
                preset "Double" "1"
                value 0
            }
        }
        attribute_group "output" {
            long "display" {
                doc "Chooses between an easier to interpret color-coded display mode, or the raw scalar value that can get negative. In color-coded mode, green is positive and red negative (resp. convex and concave regions)."
                preset "Raw Value" "1"
                preset "Color Coded" "0"
                value 0
            }
            double "gain" {
                doc "Multiplies the output for easier readability."
                texturable yes
                animatable yes
                slider yes
                numeric_range yes 0.0 1000000
                ui_range yes 0.0 1
                value 1
            }
            double "offset" {
                doc "Offsets the output (in raw display mode only, and after the gain has been applied) for easier readability."
                texturable yes
                animatable yes
                slider yes
                ui_range yes 0.0 1
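The display stage described above (gain, offset, raw vs. color-coded) amounts to simple post-processing of the scalar curvature sample. A minimal sketch of that mapping (our illustration only, not Clarisse's actual implementation):

```python
def display_curvature(value, gain=1.0, offset=0.0, color_coded=True):
    """Map a raw curvature sample to the texture's displayed output.

    Color-coded mode: green for positive (convex), red for negative (concave).
    Raw mode: gain * value + offset (offset applies in raw mode only).
    Illustrative sketch, not Clarisse's code.
    """
    v = value * gain
    if color_coded:
        return (max(-v, 0.0), max(v, 0.0), 0.0)  # (R, G, B)
    return v + offset
```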
https://www.physics.uoguelph.ca/course-outlines/advanced-physics-laboratory-phys4500-12
# Advanced Physics Laboratory (PHYS*4500)

Code and section: PHYS*4500*01

Term: Fall 2018

Instructor: Christian Schultz-Nielsen

## Section 1: Instructional Support

### Section 1.1: Course Instructor

| Course Instructor | Office Location | Email |
| --- | --- | --- |
| Christian Schultz-Nielsen | MacNaughton 431 | cschultz@uoguelph.ca |

### Section 1.2: Graduate Teaching Assistant

| Teaching Assistant | Office Location | Email |
| --- | --- | --- |
| Jeff De Vlugt | MacNaughton 406 | jdevlugt@uoguelph.ca |

### Section 1.3: Laboratory Technicians

| Laboratory Technician | Office Location | Email |
| --- | --- | --- |
| Dave Urbshas | MacNaughton 104 | durbshas@uoguelph.ca |

## Section 2: Learning Resources

### Section 2.1: Course Website

Course material, news, announcements, and grades will be regularly posted to the PHYS*4500 Courselink site. You are responsible for checking the site regularly. Please ensure that your grades are recorded correctly and notify the course instructor of any discrepancies.

### Section 2.2: Required Course Text

None.

### Section 2.3: Recommended Course References

• A.C. Melissinos and J. Napolitano, Experiments in Modern Physics (2nd Edition), Academic Press, 2003. (University of Guelph Library Call #: QC33.M52 2003)
• J.R. Taylor, An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (2nd Edition), University Science Books, 1997. (University of Guelph Library Call #: QC39.T4 1997)
• D.W. Preston and E.R. Dietz, The Art of Experimental Physics, Wiley & Sons, 1991. (University of Guelph Library Call #: QC33.P74 1991)

Students will typically make extensive use of various textbooks from current and previous physics courses. Refer to specific lab outlines for more detailed references.

### Section 2.4: Communication and Email Policy

Laboratory sessions are your primary opportunity to ask questions about the course. The course instructor is available to provide help in his office during designated office hours (Mondays and Wednesdays, 1:30 – 2:20 PM). 
If you wish to obtain help from the course instructor at another time, please email to make an appointment or see them before or after labs to arrange a mutually convenient time. As per university regulations, all students are required to check their <uoguelph.ca> e-mail account regularly: email is the official route of communication between the University of Guelph and its students.

## Section 3: Assessment

### Section 3.1: Final Grade Breakdown

| Assessment Tool | Weight |
| --- | --- |
| Lab Notebook (equal weighting for each of the 5 experiments) | 30% |
| Formal Lab – Outline (2 outlines, equally weighted) | 5% |
| Formal Lab – Science Paper (2 reports, equally weighted) | 35% |
| Formal Lab – Poster First Draft | 2.5% |
| Formal Lab – Poster Presentation | 7.5% |
| Group Project – Essay | 10% |
| Group Project – Oral Presentation | 10% |

All assessments submitted late without legitimate cause (see Section 3.3) will be penalized 10% per late day, to a maximum of 50%. After five days, the late work will no longer be accepted and the student will receive a grade of 0 for that assessment.

#### Section 3.1.1: Lab Notebooks

Notebooks will be evaluated based on the criteria described below, and will be available for pick-up on the following Monday during scheduled lab time. Students may continue to use their lab notebooks from PHYS*3510. Students should be working in their lab notebooks as they perform the experiment. Students will be assessed using the following criteria:

1. Materials & Methods (8)
   • briefly describe what was done as it is done – you should be able to reproduce the procedure from the notebook without the lab outline!
   • logging experimental conditions
   • data recording
   • dates, run times, file names, etc.
2. Results & Analysis (10)
   • raw data (where applicable) and quality of that data
   • graphs and brief discussions of the data
   • questions asked in the lab outline, including derivations
3. 
Clarity (2) • notebook should be legible • anybody should be able to navigate through your lab notebook Please note that your lab notebook does not require a detailed motivation/introduction section for each experiment. A summary of the key points is generally sufficient, however questions in the lab outline should be addressed and derivations should be completed. Much of this work can be done before you begin your experiment! If you are completing your notebook properly, you should only need to generate graphs, perform some calculations, and provide a very brief discussion of the data after the experiment. #### Section 3.1.2: Formal Lab – Outlines Each student will hand in two outlines for their formal lab reports (see Section 3.1.3). Outlines are commonly used while preparing scientific documents and generally streamline the process of writing scientific papers. Following the guidelines given in PHYS*2180 and on the PHYS*3510/4500 Courselink page, outlines should demonstrate the intended flow of the document and indicate which equations, tables and/or graphs, and figures need to be included in the final paper. Outlines will be submitted via Courselink Dropbox one week before the science papers are submitted. #### Section 3.1.3: Formal Lab – Science Paper Each student will hand in two written formal lab reports, written in the style of a scientific paper. Formal lab reports are due in the Courselink Dropbox by midnight on the due dates given in the course timetable (see Section 5.1). Evaluation of the science papers will be based on students’ ability to properly motivate the experiment that was performed, to interpret and discuss their experimental data while using proper scientific writing styles, and to properly discuss experimental limitations within accepted error analysis frameworks. Spelling and grammar will be assessed in these reports. In general, your science papers should not exceed 6-8 pages (1.5 line spacing) for most experiments. 
The merit of the scientific arguments made in PHYS*4500 science papers will be assessed more heavily than in previous laboratory courses, and students are expected to address experimental uncertainties more rigorously. You cannot submit science papers for experiments that have been submitted as posters!

#### Section 3.1.4: Formal Lab – Poster (First Draft)

Each student will produce a scientific poster (48” wide by 36” high) summarizing the results of one of their experiments. This poster will be submitted electronically as a PDF document via Dropbox. The poster draft will be assessed by a Teaching Assistant, and useful feedback will be provided before the final posters are printed. Students are encouraged to browse the scientific posters found throughout the MacNaughton building for guidance. A good principle while designing your poster is to maintain a balance of roughly 30% text, 30% visuals, and 30% empty space. You cannot submit a poster for experiments that have been submitted as science papers.

#### Section 3.1.5: Formal Lab – Poster (Presentation)

Incorporating feedback received on the submitted draft, each student will print their poster (this typically costs $30–$40) and present it to their peers in a PHYS*3510/4500 Poster Session scheduled on Monday, November 26th from 2:30 – 5:20 PM in a room that will be announced on Courselink. Attendance at the poster session is mandatory for all students, so plan your extracurricular activities and jobs accordingly. Students will be divided into two groups, presenters and evaluators. For the first 90 minutes, the presenters will present their posters in 5 minutes or less (with up to 2 minutes of questions afterwards) to their evaluators, and will be assessed using a provided rubric. After 90 minutes, the presenter and evaluator groups will switch roles.
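The late-submission rule stated in Section 3.1 (10% per late day, capped at 50%, with work refused after five days) is mechanical to apply. As a sketch only — assuming the penalty is taken as a fraction of the earned grade, and using hypothetical helper names that are not part of any official grading system — the calculation looks like:

```python
def late_penalty(days_late: int) -> float:
    """Fraction of the earned grade lost to lateness (Section 3.1).

    Assumed interpretation: 10% per late day, capped at 50%; after five
    days the work is no longer accepted (treated here as losing it all).
    """
    if days_late <= 0:
        return 0.0
    if days_late > 5:
        return 1.0
    return min(0.10 * days_late, 0.50)


def adjusted_grade(raw_percent: float, days_late: int) -> float:
    # e.g. an 80% paper submitted two days late keeps 80% of its value
    return raw_percent * (1.0 - late_penalty(days_late))


print(adjusted_grade(80, 2))  # 64.0
```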
#### Section 3.1.6: Group Project – Essay

During the first 5 weeks of the semester, students will work in groups of three, randomly assigned by the course instructor. Each group will submit a collaborative essay describing an experimental effort at the forefront of physics, with great examples including Nobel Prize winning research. This essay will provide an overview of the relevant physics and describe at least one relevant research paper. Suitable topics include:

- gravitational wave observatories (likely Nobel Prize in the near future)
- neutrino observatories (Nobel Prize – 2002 and 2015)
- invention of blue light-emitting diodes (Nobel Prize – 2014) (difficult theory)
- CERN Large Hadron Collider and the Higgs boson (Nobel Prize – 2013) (very difficult theory)
- quantum particle tracking/quantum computing (Nobel Prize – 2012)
- discovery of the accelerating expansion of the universe (Nobel Prize – 2011)
- experiments with the two-dimensional material graphene (can also include more recent experiments with silicene) (Nobel Prize – 2010)
- invention of the CCD sensor (Nobel Prize – 2009)
- giant magnetoresistance (Nobel Prize – 2007) (very difficult theory)
- discovery of the blackbody form and anisotropy of the cosmic microwave background radiation (Nobel Prize – 2006)
- laser-based precision spectroscopy (Nobel Prize – 2005)
- achievement of Bose-Einstein condensation (Nobel Prize – 2001)
- laser cooling and trapping of atoms (Nobel Prize – 1996)

Students who wish to discuss a different project or experiment can do so if they receive permission from the instructor. Student topics must be unique to avoid overlap with other groups in the class. Students enrolled in PHYS*4001/2 are prohibited from choosing topics associated with their senior projects to avoid getting double credit for the same academic work, and students should avoid choosing essay topics that are closely related to previous summer research projects.
One essay per group will be submitted via Dropbox as a PDF document. Do not leave your essay to the last minute: the middle of the semester is typically very busy, whereas the workload in the first few weeks of the semester is relatively light.

#### Section 3.1.7: Group Project – Oral Presentation

Each group will present their chosen topic to their peers. The presentations will be no longer than 20 minutes, with 5 minutes for questions. All students are expected to attend the full 3 hours of the presentation session. The presentations will be held during scheduled class time, so students must arrange their extracurricular activities and jobs accordingly.

### Section 3.2: Time Conflicts Between Courses

Sometimes students will have a time conflict between a midterm exam in another course and either a lecture or a lab in this course. The University has a very clear policy to cover this situation: the regularly scheduled lecture or lab holds priority. In other words, it is the responsibility of the faculty member who has scheduled the midterm exam to make special arrangements with students who have conflicts.

### Section 3.3: Course Grading Policies

#### Section 3.3.1: Missed Assessments

If you are unable to meet an in-course requirement due to medical, psychological, or compassionate reasons, please email the course instructor or TA. See the undergraduate calendar for information on Regulations and Procedures for Academic Consideration.

#### Section 3.3.2: Accommodation of Religious Obligations

If you are unable to meet an in-course requirement due to religious obligations, please email the course instructor within two weeks of the start of the semester to make alternate arrangements. See the undergraduate calendar for information on regulations and procedures for Academic Accommodation of Religious Obligations.

#### Section 3.3.3: Mark Adjustments

If you have questions about any grade, please inquire promptly after the material has been returned to you.
Students are ultimately responsible for ensuring that the grades on all submitted material were entered properly in Courselink – check the entered grades frequently throughout the semester and report any discrepancies to your teaching assistant or course instructor.

## Section 4: Aims and Course Objectives

### Section 4.1: Calendar Description

This is a modular course for students in any physics-related major in which techniques of nuclear, solid state and molecular physics will be studied.

### Section 4.2: Course Aims

This course allows students to perform important experiments that illustrate topics discussed in third and fourth year physics courses. The students will obtain experience using modern laboratory instruments and practice methods of data acquisition and analysis. The students’ scientific communication skills and ability to search the scientific literature will be developed.

### Section 4.3: Learning Objectives

At the successful completion of this course, the student will have:

- mastered the use of various experimental physics tools, including multimeters, oscilloscopes and multichannel analyzers.
- become autonomous in an experimental physics setting.
- mastered the analysis of experimental data, using accepted error analysis methodologies, to verify theoretical predictions.
- mastered proper scientific lab notebook protocols, allowing them to recreate experiments and/or write technical documents using only their notes and data.
- demonstrated mastery of laboratory and radiation safety protocols, including proper handling of sealed gamma-ray emitting sources.
- demonstrated mastery of the written and verbal skills required to disseminate experimental results to a variety of audiences via scientific papers, posters, and oral presentations.
- identified and synthesized relevant scientific literature to present a coherent scientific argument at a level appropriate to their peers and the more general population.
- demonstrated mastery at incorporating theoretical knowledge developed in other physics courses and the scientific literature to draw appropriate inferences and conclusions from experimental results and suggest appropriate improvements to the design of the performed experiments.

### Section 4.4: Instructor’s Role and Responsibility to Students

The instructor’s role is to aid students in their performance of various experiments and provide guidance as students develop their mastery of the underlying physical concepts associated with these experiments. Every student has the right to participate and contribute in the laboratory and other course activities. If a student feels that there is something preventing their full contribution, they must notify the course instructor or teaching assistants as soon as possible. We cannot address problems that we are not aware of! The instructor will ensure that the learning environment is free from harassment of any form. Offensive or inappropriate (homophobic, racist, sexist, etc.) comments are strictly prohibited. Offending students will be required to leave the lab or class, and a mark of zero will be given for any assessments arising from that course activity. More serious cases will also be forwarded to the University of Guelph Judicial Committee, where the maximum penalty is suspension or expulsion from the University of Guelph. For more details, students should consult the University of Guelph’s current Policy on Non-Academic Misconduct.

### Section 4.5: Students’ Learning Responsibility

Students are expected to take advantage of the assigned laboratory hours, as these are the only hours where students are guaranteed access to the course instructor and teaching assistant. All students are expected to attend the assigned sessions. Students who do (or may) fall behind due to illness, work, or extra-curricular activities (including varsity sports, student leadership activities, etc.) are advised to keep the instructor informed such that extra resources or accommodation can be provided, if appropriate. Students are expected to complete their lab notebooks, formal lab reports and term projects in a timely fashion. Students are provided with deadlines for course materials at the beginning of the semester and are expected to work towards those deadlines accordingly. Extensions will not be granted except in exceptional medical or compassionate circumstances. Manage your time accordingly – being busy with other coursework is not an acceptable reason to receive an extension.

### Section 4.6: Relationship With Other Courses & Labs

#### Section 4.6.1: Prerequisite Courses

Students must have completed PHYS*3510. Some labs will draw upon physics concepts discussed in previous courses, most notably PHYS*2180 and PHYS*3510. Science communication skills developed in PHYS*2180 and PHYS*3510 will be reinforced.

#### Section 4.6.2: Co-requisite Courses

None.

#### Section 4.6.3: Follow-on Courses

Many experiments in PHYS*4500 complement lecture material in other fourth year courses, most notably PHYS*4120, PHYS*4130, PHYS*4150, PHYS*3170, PHYS*4170, and PHYS*4070. As such, course notes and textbooks for these courses are excellent resources for many of the experiments conducted in PHYS*4500. Lab notebook and scientific presentation (both verbal and written) skills will complement those developed in PHYS*4001/4002 and PHYS*4300.
## Section 5: Teaching and Learning Activities

### Section 5.1: Timetable

| Week | Dates | Course Activities | Assessments Due |
| --- | --- | --- | --- |
| 0 | Sep 03 – Sep 07 | No classes scheduled | — |
| 1 | Sep 10 – Sep 14 | Preliminary meeting (all students) for first 15 minutes; Group A Experiment #1 | — |
| 2 | Sep 17 – Sep 21 | Group B Experiment #1 | Group A Lab Notebook #1 (Wed Sep 19 at 16:30) |
| 3 | Sep 24 – Sep 28 | Group A Experiment #2 | Group B Lab Notebook #1 (Wed Sep 26 at 16:30) |
| 4 | Oct 01 – Oct 05 | Group B Experiment #2 | Group A Lab Notebook #2 (Wed Oct 03 at 16:30) |
| 5 | Oct 08 – Oct 12 | No experiments scheduled; Group A and B Oral Presentations (Wed, Oct 10) | Group B Lab Notebook #2 (Fri Oct 05 at 16:30); Group A and B Group Essays (Fri Oct 12 at 23:59) |
| 6 | Oct 15 – Oct 19 | Group A Experiment #3 | Group A Outline #1 (Fri Oct 19 at 23:59) |
| 7 | Oct 22 – Oct 26 | Group B Experiment #3 | Group A Lab Notebook #3 (Wed Oct 24 at 16:30); Group A Formal Paper #1 (Fri Oct 26 at 23:59); Group B Outline #1 (Fri Oct 26 at 23:59) |
| 8 | Oct 29 – Nov 02 | Group A Experiment #4 | Group B Lab Notebook #3 (Wed Oct 31 at 16:30); Group B Formal Paper #1 (Fri Nov 02 at 23:59); Group A Poster Draft (Wed Oct 31 at 16:30) |
| 9 | Nov 05 – Nov 09 | Group B Experiment #4 | Group A Lab Notebook #4 (Wed Nov 07 at 16:30); Group B Poster Draft (Wed Nov 07 at 16:30) |
| 10 | Nov 12 – Nov 16 | Group A Experiment #5 | Group B Lab Notebook #4 (Wed Nov 14 at 16:30); Group A Outline #2 (Fri Nov 16 at 23:59) |
| 11 | Nov 19 – Nov 23 | Group B Experiment #5 | Group A Lab Notebook #5 (Wed Nov 21 at 16:30); Group A Formal Paper #2 (Fri Nov 23 at 23:59); Group B Outline #2 (Fri Nov 23 at 23:59) |
| 12 | Nov 26 – Nov 30 | Poster Presentations (Monday Nov 26, 14:30 – 17:30) | Group B Lab Notebook #5 (Wed Nov 28 at 14:30); Group B Formal Paper #2 (Fri Nov 30 at 23:59) |

### Section 5.2: Experiment Scheduling

Students will be asked to split into two equal groups, Group A and Group B.
Those in Group A will begin experiments in Week 2 and will have one week to complete the data collection for that experiment. Students in Group B will then have access to the equipment in Week 3, for one week. The two groups will alternate in this fashion throughout the semester, with Group A doing experiments during the even weeks and Group B doing experiments during the odd weeks. All experiments should be completed by Week 11. Students are required to complete the experiments during the assigned lab periods. Students requiring additional time may sign out keys to MacNaughton 417 from the course instructor (see Section 6.3) on the rare occasions that an experiment cannot be completed in the allotted 6 hours of lab time.

Each student will be required to do 5 of the labs listed below:

**Modern Physics**

1. Electron Spin Resonance
2. Zeeman Effect
3. Millikan Oil Drop Experiment
4. X-Ray Fluorescence: Moseley’s Law

**Nuclear Physics**

1. Gamma-Ray Spectroscopy Using a NaI(Tl) Detector
2. High-Resolution Gamma-Ray Spectroscopy
3. The Speed of Photons: Galileo’s Technique Modernized

**Solid State Physics**

1. X-Ray Diffraction (ask for permission – currently under repair)
2. The Hall Effect and Semiconductor Band Gap (ask for permission – currently under repair)

**Thermodynamics and Statistical Physics**

1. Noise Fundamentals

**Waves and Optics**

1. The Velocity of Sound: The Debye-Sears Experiment
2. The Transmission Line
3. Fourier Optics
4. Physics of Ultrasound

### Section 5.3: Signing Up for Experiments

Students can sign up for experiments using the Google Sheets link provided on Courselink. Please do not sign up for experiments outside of your assigned weeks unless all the groups for that week have already signed up for an experiment. Experiments are assigned on a first-come, first-served basis.

### Section 5.4: Other Important Dates

Friday November 2nd is the fortieth class day, the last day to drop one-semester courses.
## Section 6: Lab Safety

### Section 6.1: Department of Physics Laboratory Safety Policy

The Department of Physics is committed to ensuring a safe working and learning environment for all students, staff and faculty. As a student in a laboratory course, you are responsible for taking all reasonable safety precautions and following the lab safety rules specific to the lab you are working in. In addition, students are responsible for reporting all safety issues to the graduate teaching assistant or course instructor as soon as possible. Students are not required to work in an environment that they deem to be unsafe. If you have any concerns whatsoever, please consult your teaching assistant or course instructors!

In this course, students may be exposed to the following potential hazards:

- $\gamma$-radiation and x-ray sources
- intense light, including laser light and strobe lights
- voltages and currents that can be harmful if proper precautions are not taken
- compressed gases
- cryogenic liquids: liquid nitrogen and liquid helium

All experiments have been designed such that students have minimal (but not zero!) risk if proper laboratory protocols are followed. At all times, students must be aware of the risks of their experiment and the positioning of their fellow students and behave accordingly.

### Section 6.2: Food and Drink in the Laboratory

As with all laboratories on the University of Guelph campus, ALL food and drink is strictly prohibited in the laboratory. This applies to all faculty, staff, and students. In the PHYS*4500 laboratory, this rule is strictly enforced as a criterion for lab certification with the Radiation Safety Office at the University of Guelph. Students must not, under any circumstances, bring any food or drink into the laboratory. If students have water bottles or food in their backpacks, these must be left at the front of the room and not be accessed within the room at any time.
### Section 6.3: After-Hours Access to the Laboratory

Students who need to work on their experiment outside normal course hours may sign out a key to MacNaughton 417 from the course instructor, on a case-by-case basis. Students must ensure that they are never in the laboratory alone, and must obey all safety rules. Should a course instructor, teaching assistant or lab supervisor come across students with food or drink in the laboratory, the offenders will be removed from the lab and receive a mark of 0 on that experiment.

## Section 7: Academic Misconduct and Collaboration

### Section 7.1: Collaboration

Collaboration and communication are essential for progress and advancement; much of modern society is built upon these skills. Students are encouraged to collaborate and discuss course concepts! However, all material submitted for grading must be each student's own work. Plagiarism is a form of academic misconduct, and will not be tolerated. A good guideline when it comes to crossing the line from collaboration to academic misconduct is that a student must never look at another student’s written work. For students seeking help from their peers, ask conceptual questions as opposed to, “How do you derive Equation 4?” For students helping their peers, never give the answer explicitly, but explain your reasoning.

### Section 7.2: Academic Misconduct

The University of Guelph is committed to upholding the highest standards of academic integrity, and it is the responsibility of all members of the University community – faculty, staff, and students – to be aware of what constitutes academic misconduct and to do as much as possible to prevent academic offences from occurring. University of Guelph students have the responsibility of abiding by the University's policy on academic misconduct regardless of their location of study; faculty, staff and students have the responsibility of supporting an environment that discourages misconduct.
Students need to remain aware that instructors have access to and the right to use electronic and other means of detection. Please note: whether or not a student intended to commit academic misconduct is not relevant for a finding of guilt. Hurried or careless submission of assignments does not excuse students from responsibility for verifying the academic integrity of their work before submitting it. Students who are in any doubt as to whether an action on their part could be construed as an academic offence should consult with a faculty member or faculty advisor. The Academic Misconduct Policy is detailed in the Undergraduate Calendar.

### Section 7.3: Turnitin

In this course, your instructor will be using Turnitin, integrated with the CourseLink Dropbox tool, to detect possible plagiarism, unauthorized collaboration or copying as part of the ongoing efforts to maintain academic integrity at the University of Guelph. All submitted assignments will be included as source documents in the Turnitin.com reference database solely for the purpose of detecting plagiarism of such papers. Use of the Turnitin.com service is subject to the Usage Policy posted on the Turnitin.com site. A major benefit of using Turnitin is that students will be able to educate and empower themselves in preventing academic misconduct. In this course, you may screen your own assignments through Turnitin as many times as you wish before the due date. You will be able to see and print reports that show you exactly where you have properly and improperly referenced the outside sources and materials in your assignment.

## Section 8: Accessibility

### Section 8.1: Accessibility

The University of Guelph is committed to creating a barrier-free environment. Providing services for students is a shared responsibility among students, faculty and administrators.
This relationship is based on respect of individual rights, the dignity of the individual and the University community's shared commitment to an open and supportive learning environment. Students requiring service or accommodation, whether due to an identified, ongoing disability or a short-term disability, should contact the University of Guelph’s Accessibility Services as soon as possible. For more information, contact Accessibility Services at 519-824-4120 ext. 56208, email accessibility@uoguelph.ca, or visit their website: https://wellness.uoguelph.ca/accessibility/

### Section 8.2: Electronic Recording of Classes

The electronic recording of classes is expressly forbidden without the prior consent of the instructor. This prohibition extends to all components of courses, including, but not limited to, lectures, tutorials, and lab instruction, whether conducted by the instructor or teaching assistant, or other designated person. When recordings are permitted they are solely for the use of the authorized student and may not be reproduced, or transmitted to others, without the express written consent of the instructor.

### Section 8.3: Posting Course Materials on Websites

Posting any course materials online, including lecture notes or experiment outlines, is strictly prohibited. These materials are copyright of the course instructors, Department of Physics, and University of Guelph.

## Section 9: Course Evaluation

### Section 9.1: Course Evaluation

The Department of Physics requires student assessment of all courses taught by the Department. These assessments provide essential feedback to faculty on their teaching by identifying both strengths and possible areas of improvement. In addition, annual student assessment of teaching provides part of the information used by the Department’s Tenure and Promotion Committee in evaluating the faculty member's contribution in the area of teaching.
The Department's teaching evaluation questionnaire invites student response both through numerically quantifiable data, and written student comments. In conformity with University of Guelph Faculty Policy, the Department’s Tenure and Promotions Committee only considers comments signed by students (choosing "I agree" in question 14). Your instructor will see all signed and unsigned comments after final grades are submitted. Written student comments may also be used in support of a nomination for internal and external teaching awards. Note: No information will be passed on to the instructor until after the final grades have been submitted.
https://homework.zookal.com/questions-and-answers/complete-the-following-statement-triangles-abcabc-and-defdef-are-1-684577774
# Question

###### Question details

Complete the following statement: Triangles ABC and DEF are ___[1]____ because ___[2]____. Choose exactly one answer choice for [1] and exactly one answer choice for [2].

Choices for [1]:

- [1]: similar
- [1]: not similar

Choices for [2]:

- [2]: there is not enough information given to state that the corresponding angles are congruent
- [2]: the corresponding angles are congruent, regardless of the side measures
- [2]: the corresponding angles are not congruent
- [2]: the corresponding sides are not proportional
- [2]: the corresponding sides are proportional and the corresponding angles are congruent
- [2]: the corresponding sides are proportional, regardless of the angle measures
- [2]: there is not enough information given to state that the corresponding sides are proportional
http://math.stackexchange.com/questions/329964/intuition-behind-the-frattini-subgroup
# Intuition behind the Frattini subgroup I am trying to get a better feel for what the Frattini subgroup really is, intuitively. Let $G$ be a group and denote its Frattini subgroup by $\Phi(G)$. I know that $\Phi(G)$ is the intersection of the maximal subgroups of $G$, and I know that it is the set of 'non-generators' (Isaacs calls them 'useless' elements) of $G$, i.e. elements $u$ for which if $\langle X \cup \{u\} \rangle =G$, then $\langle X \rangle = G$, or equivalently, if $\langle X \rangle \ne G$, then $\langle X \cup \{u\} \rangle \ne G$, where $X \subseteq G$ is a subset of $G$, and $u \in \Phi(G)$. Since $\Phi(G)$ is the set of these elements, it would help to better understand what exactly these elements are. Is it true that such an element $u \in \Phi(G)$ is necessarily a product of elements in $X \subseteq G$ ($u$ and $X$ as above)? If not, what is an example where it isn't? Finally, where exactly does the connection lie between these 'non-generators' and (the intersection of) maximal subgroups? How do we see that they must lie in a maximal subgroup, and conversely that if an element lies in all maximal subgroups then it must be a 'non-generator'? Thanks for the help, as always. - What is $X$ in the third paragraph? –  user641 Mar 14 '13 at 3:17 Let me plug this related question of mine (and Jack Schmidt's excellent answer). –  Alexander Gruber Mar 14 '13 at 7:28 @SteveD: $X$ and $u$ are as they are explained at the end of the second paragraph. –  Alex Petzke Mar 14 '13 at 13:49 Let me stress that what you replied does not answer my question. In the second paragraph, $X$ can be any set that generates $G$. So of course any element of $G$ (including the ones in the Frattini subgroup) are a product of elements of $X$. So that question seems to hold no meaning. –  user641 Mar 14 '13 at 23:16 @SteveD: Well, then it may not have meaning. It was just my attempt to make some observations on something I knew very little about. 
–  Alex Petzke Mar 15 '13 at 1:29 The equivalence between the two conditions is purely formal, it holds in every category of algebraic structures. If $u$ is a non-generator and $H$ is a maximal subgroup of $G$ (by which we mean of course a maximal proper subgroup), then $\langle H \rangle = H \neq G$, hence $\langle H,u \rangle \neq G$, which implies $H = \langle H,u \rangle$ since $H$ is maximal, and therefore $u \in H$. Hence, $u$ lies in every maximal subgroup. If $u$ is not a non-generator, choose some $X \subseteq G$ with $\langle X \rangle \neq G$ but $\langle X,u \rangle = G$. By Zorn's Lemma there is a subgroup $H$ which is maximal with the property that it contains $X$, but does not contain $u$. In fact, $\langle X \rangle$ is such a subgroup, and if $\cal C$ is a non-empty chain of such subgroups, then one can easily check that $\cup \cal C$ is a subgroup with this property. Observe that $H$ is maximal: If $K$ is a subgroup containing $H$ properly, we must have $u \in K$ and $X \subseteq K$, hence $K=G$. Hence, $H$ is a maximal subgroup not containing $u$. Remark: Not every subgroup of a group can be enlarged to a maximal subgroup. In fact, there are groups (such as $\mathbb{Q}$) with no maximal subgroups at all. Therefore the proof is somewhat clumsy, but it works. More generally, if $G$ is any algebraic structure, then the intersection of all proper substructures of $G$ is called the radical of $G$, and by the proof above it coincides with the set of all non-generators of $G$. If $G$ is a group, we get the Frattini subgroup. If $G$ is a left module over a ring $R$, we get its radical, which in the particular case of $G=R$ is known as the Jacobson radical. So the Frattini subgroup is really just a special case of a more general construction, whose special cases one might be familiar with. Probably the best way to get familiar with the Frattini subgroup is to learn some of its nice properties. It is always a characteristic subgroup. 
If $G$ is a finite group, then $\Phi(G)$ is nilpotent. If $G$ is a finite $p$-group, then $\Phi(G)$ is the smallest normal subgroup whose quotient is elementary abelian. In this situation, Burnside's Basis Theorem states that a subset generates $G$ if and only if its image generates the $\mathbb{F}_p$-vector space $G/\Phi(G)$, which reduces the former condition to linear algebra. - +1 for the vector space interpretation of $G/\Phi(G)$. To add to that: we then ascertain that the frattini quotient of a nilpotent group $G$ is the product of elementary abelian groups corresponding to each prime dividing $|G|$. –  Alexander Gruber Mar 14 '13 at 7:36 That helps, thanks! –  Alex Petzke Mar 15 '13 at 19:44 Take the cyclic group $\,G:=\langle\,x\,\rangle\,$ of order $\,p^2\,\,\,,\,\,p\,$ a prime. Then $\,\Phi(G)=\langle\,x^p\,\rangle\,$ (it's easy to check this taking $\,G\,$ as a vector space over $\,\Bbb F_p=\Bbb Z/p\Bbb Z\,$ of dimension $\,2\,$). We know that the generators of $\,G\,$ are the elements $\,x^i\,$ , with $\,(i,p)=1\iff p\nmid i\,$ , and thus all the elements of the form $\,x^{kp}\,\,,\,\,k\in\Bbb Z\,$ , are the ones that cannot generated $\,G\,$, i.e. the non-generators. Finally, if you know the proof of the relation between the Frattini subgroup and the set of non-generators, there you can see that an element that belongs to all the maximal subgroups of $\,G\,$ has to be a non-generator as otherwise it together with some other subset would generate the whole group without being possible to drop this element from the whole generating set, and from here one can construct a maximal subgroup that won't contain that element (Zorn Lemma's calling in the general, non-finite case)... -
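For a hands-on check of the cyclic examples above, here is a small brute-force computation (a sketch, not library code; it relies on the fact that every subgroup of $\mathbb{Z}/n\mathbb{Z}$ is $d\mathbb{Z}/n\mathbb{Z}$ for some divisor $d$ of $n$):

```python
def subgroups(n):
    # every subgroup of Z/nZ is dZ/nZ for some divisor d of n
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def frattini(n):
    """Frattini subgroup of Z/nZ: intersect all maximal proper subgroups."""
    proper = [H for H in subgroups(n) if len(H) < n]
    # maximal = proper subgroups not strictly contained in another proper subgroup
    maximal = [H for H in proper if not any(H < K for K in proper)]
    phi = set(range(n))
    for H in maximal:
        phi &= H
    return sorted(phi)

# For the cyclic group of order p^2, this recovers <x^p> as claimed:
print(frattini(4))   # [0, 2]
print(frattini(9))   # [0, 3, 6]
# More generally, Phi(Z/nZ) is generated by the product of the primes dividing n:
print(frattini(12))  # [0, 6]
```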
https://www.calculus-online.com/exercise/6382
# Indefinite Integral – A multiplication of polynomials – Exercise 6382

Exercise: Evaluate the integral

$$\int (x^2-1)(x+2)\, dx$$

Solution: Expand the product first, $(x^2-1)(x+2) = x^3+2x^2-x-2$, then integrate term by term.

Final answer:

$$\int (x^2-1)(x+2)\, dx =\frac{x^4}{4}+\frac{2x^3}{3}-\frac{x^2}{2}-2x+c$$
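The answer can be double-checked with exact rational arithmetic: multiply the two polynomials coefficient-wise, integrate term by term, and compare with the coefficients of the antiderivative above (plain Python, coefficient lists written from the constant term up):

```python
from fractions import Fraction

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_integrate(a):
    """Antiderivative with constant of integration 0."""
    return [Fraction(0)] + [Fraction(c, k + 1) for k, c in enumerate(a)]

integrand = poly_mul([-1, 0, 1], [2, 1])   # (x^2 - 1)(x + 2) = x^3 + 2x^2 - x - 2
antiderivative = poly_integrate(integrand)
# coefficients 0, -2, -1/2, 2/3, 1/4  ->  -2x - x^2/2 + 2x^3/3 + x^4/4, as above
```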
https://code-research.eu/en/in-a-certain-town-the-longest-day-of-the-year-which-is-in-june-lasts-fourteen-hours-the-shortest.9619765.html
In a certain town, the longest day of the year, which is in June, lasts fourteen hours. The shortest day of the year, which is in December, lasts ten hours. Twice per year, in March and September, the day is the same length as the night, or twelve hours. Length of day varies sinusoidally through the year. Write an equation for h(m), the length of the day in hours, as a function of the cosine of m, the number of months since January.

From the information given we can create a few data points: (6,14), (3,12), (9,12) and (12,10), where the x values are the months since January and the y values are the number of hours of daylight.

The answer will be in the form: $y=A\cos(Bx+C)+D$

To get the amplitude $|A|$, take half the range: $\frac{14-10}{2}=2$. Since the day length is at its minimum around December/January (the start of the cycle) rather than its maximum, the cosine must be flipped, so $A$ is negative. Therefore $A=-2$.

$D$ is 12, since it is the midline value between 10 and 14. $C$ is zero.

To obtain the value of B, note that the period is 12 months: $\frac{2 \pi }{B}=12$. When solved, we have that $B= \frac{ \pi }{6}$.

Therefore the final equation is: $y=-2\cos\left(\frac{ \pi m}{6}\right)+12$
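A quick numerical check of the final equation $h(m) = -2\cos(\pi m/6) + 12$ against the four data points (a minimal Python sketch):

```python
import math

def h(m):
    """Day length in hours, where m is the number of months since January."""
    return -2 * math.cos(m * math.pi / 6) + 12

# (month, expected hours): June max, March/September equinoxes, December min
for m, expected in [(6, 14), (3, 12), (9, 12), (12, 10)]:
    assert math.isclose(h(m), expected), (m, h(m))
print("all four data points match")
```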
http://www.oalib.com/relative/3150217
Physics, 2008, DOI: 10.1140/epjb/e2008-00405-5 Abstract: Can one understand the statistics of wins and losses of baseball teams? Are their consecutive-game winning and losing streaks self-reinforcing or can they be described statistically? We apply the Bradley-Terry model, which incorporates the heterogeneity of team strengths in a minimalist way, to answer these questions. Excellent agreement is found between the predictions of the Bradley-Terry model and the rank dependence of the average number of team wins and losses in major-league baseball over the past century when the distribution of team strengths is taken to be uniformly distributed over a finite range. Using this uniform strength distribution, we also find very good agreement between model predictions and the observed distribution of consecutive-game team winning and losing streaks over the last half-century; however, the agreement is less good for the previous half-century. The behavior of the last half-century supports the hypothesis that long streaks are primarily statistical in origin with little self-reinforcing component. The data further show that the past half-century of baseball has been more competitive than the preceding half-century. PLOS ONE, 2012, DOI: 10.1371/journal.pone.0011663 Abstract: Research in competitive games has exclusively focused on how opponent models are developed through previous outcomes and how peoples' decisions relate to normative predictions. Little is known about how rapid impressions of opponents operate and influence behavior in competitive economic situations, although such subjective impressions have been shown to influence cooperative decision-making.
This study investigates whether an opponent's face influences players' wagering decisions in a zero-sum game with hidden information. Participants made risky choices in a simplified poker task while being presented opponents whose faces differentially correlated with subjective impressions of trust. Surprisingly, we find that threatening face information has little influence on wagering behavior, but faces relaying positive emotional characteristics impact peoples' decisions. Thus, people took significantly longer and made more mistakes against emotionally positive opponents. Differences in reaction times and percent correct were greatest around the optimal decision boundary, indicating that face information is predominantly used when making decisions during medium-value gambles. Mistakes against emotionally positive opponents resulted from increased folding rates, suggesting that participants may have believed that these opponents were betting with hands of greater value than other opponents. According to these results, the best “poker face” for bluffing may not be a neutral face, but rather a face that contains emotional correlates of trustworthiness. Moreover, it suggests that rapid impressions of an opponent play an important role in competitive games, especially when people have little or no experience with an opponent. PLOS ONE , 2012, DOI: 10.1371/journal.pone.0051367 Abstract: Data analysis is used to test the hypothesis that “hitting is contagious”. A statistical model is described to study the effect of a hot hitter upon his teammates’ batting during a consecutive game hitting streak. Box score data for entire seasons comprising streaks of length games, including a total observations were compiled. Treatment and control sample groups () were constructed from core lineups of players on the streaking batter’s team. 
The percentile method bootstrap was used to calculate confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance, as compared against the control. Mean for the treatment group was found to be to percentage points higher during hot streaks (mean difference increased points), while the batting heat index introduced here was observed to increase by points. For each performance statistic, the null hypothesis was rejected at the significance level. We conclude that the evidence suggests the potential existence of a “statistical contagion effect”. Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. Physics , 2012, DOI: 10.1142/S0129183112500635 Abstract: One-directional traffic on two-lanes is modeled in the framework of a spring-block type model. A fraction $q$ of the cars are allowed to change lanes, following simple dynamical rules, while the other cars keep their initial lane. The advance of cars, starting from equivalent positions and following the two driving strategies is studied and compared. As a function of the parameter $q$ the winning probability and the average gain in the advancement for the lane-changing strategy is computed. An interesting phase-transition like behavior is revealed and conclusions are drawn regarding the conditions when the lane changing strategy is the better option for the drivers. 
Mathematics , 2015, Abstract: We introduce and develop a class of \textit{Cantor-winning} sets that share the same amenable properties as the classical winning sets associated to Schmidt's $(\alpha,\beta)$-game: these include maximal Hausdorff dimension, invariance under countable intersections with other Cantor-winning sets and invariance under bi-Lipschitz homeomorphisms. It is then demonstrated that a wide variety of badly approximable sets appearing naturally in the theory of Diophantine approximation fit nicely into our framework. As applications of this phenomenon we answer several previously open questions, including some related to the Mixed Littlewood conjecture and the $\times2, \times3$ problem. Konstantinos Drakakis Mathematics , 2005, Abstract: We prove an interesting fact about Lottery: the winning 6 numbers (out of 49) in the game of the Lottery contain two consecutive numbers with a surprisingly high probability (almost 50%). Computer Science , 2009, DOI: 10.1016/j.tcs.2012.11.033 Abstract: We introduce a new simple game, which is referred to as the complementary weighted multiple majority game (C-WMMG for short). C-WMMG models a basic cooperation rule, the complementary cooperation rule, and can be taken as a sister model of the famous weighted majority game (WMG for short). In this paper, we concentrate on the two dimensional C-WMMG. An interesting property of this case is that there are at most $n+1$ minimal winning coalitions (MWC for short), and they can be enumerated in time $O(n\log n)$, where $n$ is the number of players. This property guarantees that the two dimensional C-WMMG is more handleable than WMG. In particular, we prove that the main power indices, i.e. the Shapley-Shubik index, the Penrose-Banzhaf index, the Holler-Packel index, and the Deegan-Packel index, are all polynomially computable. 
To make a comparison with WMG, we know that it may have exponentially many MWCs, and none of the four power indices is polynomially computable (unless P=NP). Still for the two dimensional case, we show that local monotonicity holds for all of the four power indices. In WMG, this property is possessed by the Shapley-Shubik index and the Penrose-Banzhaf index, but not by the Holler-Packel index or the Deegan-Packel index. Since our model fits very well the cooperation and competition in team sports, we hope that it can be potentially applied in measuring the values of players in team sports, say help people give more objective ranking of NBA players and select MVPs, and consequently bring new insights into contest theory and the more general field of sports economics. It may also provide some interesting enlightenments into the design of non-additive voting mechanisms. Last but not least, the threshold version of C-WMMG is a generalization of WMG, and natural variants of it are closely related with the famous airport game and the stable marriage/roommates problem. Trent McCotter Statistics , 2009, Abstract: There have been more hitting streaks in Major League Baseball than we would expect. All batting lines of MLB hitters from 1957-2006 were randomly permuted 10,000 times and the number of hitting streaks of each length from 2 to 100 was measured. The average count of each length streak was then compared to the corresponding total from real-life, when the games were in chronological order. The number of streaks in real-life was significantly higher than over the random permutations. Non-starts (such as pinch-hitting appearances) were removed since these may be unduly reducing the number of streaks in the permutations; the number of streaks in the permutations increased but was still significantly lower than real-life totals. Possible explanations are given for why more streaks have appeared in real-life than we would expect, including possibly the hot hand idea. 
Contact at trentm@email.unc.edu Computer Science, 2008, Abstract: The Penrose-Banzhaf index and the Shapley-Shubik index are the best-known and the most used tools to measure political power of voters in simple voting games. Most methods to calculate these power indices are based on counting winning coalitions, in particular those coalitions a voter is decisive for. We present a new combinatorial formula to calculate both indices solely using the set of minimal winning coalitions. JAN OLBRECHT Journal of Human Sport and Exercise, 2011, Abstract: Swimming performance in triathlon gradually becomes of overriding importance in view of the final positioning in a race. It is important to end up swimming in the leading group(s) and to consider the impact of the swim stage on the 2 remaining sports disciplines in order to outbalance the athlete's effort and to be able to keep racing for a good position until the end of the race. Unlike cycling and running where the performance mainly depends on conditioning, the performance in swimming is a subtle combination of conditioning and technical abilities. Even elite swimmers may lose a lot of performance if their outstanding conditioning is not coupled with an excellent swimming technique. Triathletes very often suffer from a lack of technique and despite the wetsuit, which partially outbalances this shortcoming, they spend a lot of energy in the swim stage without reaping any success, energy which is then not on hand anymore for the rest of the race. Therefore, swimming technique should be the groundwork in the multi-year planning AND should be focussed on in each training session during the whole career of the triathlete. Monitoring the combination of time/stroke rate/stroke length is thus a must. Periodisation in triathlon is much more complex than in "single" sports.
Not only the sports-specific weaknesses/strengths of the athlete but also the intrinsic interaction between cycling, running and swimming on training effects and his swim-technical qualities will rule the periodisation. Additionally, the level of technique will also set the volume, intensity and form of training exercises. Simple to complex tests can help to make the right choice. This makes triathlon an exciting sport, not only for the athlete but also for the coach and supporting teams. This article will summarise some practical implications on periodisation and on swimming training in triathlon.
https://tex.stackexchange.com/questions/289539/how-does-thanks-work-in-latex-article-class
# How does \thanks work in LaTeX article class?

I am trying to figure out how it happens that \thanks in the standard LaTeX article class (with no options specified) is typeset with an asterisk for the footnote mark. I got lost trying to trace the definitions and redefinitions of \@thanks and \footnotemark and \c@footnote through article.cls and latex.ltx. I also tried grepping all the LaTeX base files for \\ast, thinking that this must be used somewhere to redefine the footnote mark, but I couldn't find anything. (This question came up because I would like to convert the \thanks footnote to an endnote for journal submission, but my question is about the mechanics of this command.)

MWE:

```latex
\documentclass{article}
\begin{document}
\title{Example}
\author{Name\thanks{Affiliation, e-mail}}
\maketitle
\end{document}
```

Here's what I have been able to figure out. The definition of \maketitle in article.cls:

```latex
\newcommand\maketitle{\par
  \begingroup
    \renewcommand\thefootnote{\@fnsymbol\c@footnote}%
    \def\@makefnmark{\rlap{\@textsuperscript{\normalfont\@thefnmark}}}%
    \long\def\@makefntext##1{\parindent 1em\noindent
            \hb@xt@1.8em{%
                \hss\@textsuperscript{\normalfont\@thefnmark}}##1}%
    \if@twocolumn
      \ifnum \col@number=\@ne
        \@maketitle
      \else
        \twocolumn[\@maketitle]%
      \fi
    \else
      \newpage
      \global\@topnum\z@ % Prevents figures from going at top of page.
      \@maketitle
    \fi
    \thispagestyle{plain}\@thanks
  \endgroup
  \setcounter{footnote}{0}%
  \global\let\thanks\relax
  \global\let\maketitle\relax
  \global\let\@maketitle\relax
  \global\let\@thanks\@empty
  \global\let\@author\@empty
  \global\let\@date\@empty
  \global\let\@title\@empty
  \global\let\title\relax
  \global\let\author\relax
  \global\let\date\relax
  \global\let\and\relax
}
```

I see that the footnote mark is redefined in some way, and that \@thanks is called at the end. But \thanks and \@thanks are not defined in this file.
Here are the definitions of \thanks and \@thanks in latex.ltx:

```latex
\def\thanks#1{\footnotemark
    \protected@xdef\@thanks{\@thanks
        \protect\footnotetext[\the\c@footnote]{#1}}%
}
\let\@thanks\@empty
```

It seems like \thanks redefines \@thanks to be \@thanks plus a footnote containing its argument. It looks like \the\c@footnote would produce the value of the footnote counter and typeset a numeral, not an asterisk. The article class calls \@thefnmark in \maketitle but never redefines it to an asterisk (that I can tell).

- In case anyone wants an MWE: `\documentclass{article} \begin{document} \title{} \thanks{A star was born} \maketitle \end{document}` – bers Jan 26 '16 at 18:47
- This is the command: `\thanks: macro:#1->\footnotemark \protected@xdef \@thanks {\@thanks \protect \footnotetext [\the \c@footnote ]{#1}}` – Sigur Jan 26 '16 at 18:53
- What you get with \protected@xdef is a "cumulative" definition: \@thanks gets what it contained before along with the text of the new footnote. – egreg Jan 26 '16 at 19:10

Let's look at some definitions:

- \thanks, defined in latex.ltx:

  ```latex
  \def\thanks#1{\footnotemark
      \protected@xdef\@thanks{\@thanks
          \protect\footnotetext[\the\c@footnote]{#1}}%
  }
  ```

  This macro sets a footnote mark via \footnotemark. Then it adds the footnote text to \@thanks using \protected@xdef\@thanks{\@thanks <new footnote text>}.

- \maketitle in article.cls:

  ```latex
  \newcommand\maketitle{\par
    \begingroup
      \renewcommand\thefootnote{\@fnsymbol\c@footnote}%
      \def\@makefnmark{\rlap{\@textsuperscript{\normalfont\@thefnmark}}}%
      \long\def\@makefntext##1{\parindent 1em\noindent
              \hb@xt@1.8em{%
                  \hss\@textsuperscript{\normalfont\@thefnmark}}##1}%
      \if@twocolumn
        \ifnum \col@number=\@ne
          \@maketitle
        \else
          \twocolumn[\@maketitle]%
        \fi
      \else
        \newpage
        \global\@topnum\z@ % Prevents figures from going at top of page.
        \@maketitle
      \fi
      \thispagestyle{plain}\@thanks
    \endgroup
    \setcounter{footnote}{0}%
    \global\let\thanks\relax
    \global\let\maketitle\relax
    \global\let\@maketitle\relax
    \global\let\@thanks\@empty
    \global\let\@author\@empty
    \global\let\@date\@empty
    \global\let\@title\@empty
    \global\let\title\relax
    \global\let\author\relax
    \global\let\date\relax
    \global\let\and\relax
  }
  ```

Inside \maketitle, a number of redefinitions of the footnote mechanism occur within a group \begingroup...\endgroup.

Firstly, the footnote numbering is changed to \@fnsymbol - the regular *, †, ‡, §, ¶, ‖, **, ††, ‡‡, (error) sequence:

```latex
\def\@fnsymbol#1{\ensuremath{\ifcase#1\or *\or \dagger\or \ddagger\or
   \mathsection\or \mathparagraph\or \|\or **\or \dagger\dagger
   \or \ddagger\ddagger \else\@ctrerr\fi}}
```

Secondly, an update is made to the way the footnote mark and text are set - that is, the macros \@makefnmark and \@makefntext are updated. These updates (redefinitions) are very minimal in the sense that they are very similar to the original definitions used outside of \maketitle:

```latex
% From latex.ltx
\def\@makefnmark{\hbox{\@textsuperscript{\normalfont\@thefnmark}}}
% From article.cls
\newcommand\@makefntext[1]{%
    \parindent 1em%
    \noindent
    \hb@xt@1.8em{\hss\@makefnmark}#1}
```

The only major difference is that the footnote mark is set using \rlap. This is because authors in the \author list are typically separated by commas, and this redefinition allows the footnote mark to overlap the punctuation.

Thirdly, the actual title is set using \@maketitle:

```latex
\def\@maketitle{%
  \newpage
  \null
  \vskip 2em%
  \begin{center}%
    \let \footnote \thanks
    {\LARGE \@title \par}%
    \vskip 1.5em%
    {\large
      \lineskip .5em%
      \begin{tabular}[t]{c}%
        \@author
      \end{tabular}\par}%
    \vskip 1em%
    {\large \@date}%
  \end{center}%
  \par
  \vskip 1.5em}
```

The setting of the \thanks marks (footnote marks), as well as the collection into \@thanks, occurs with the setting of \@author.
Finally, at the end of \maketitle, \@thanks is called, which "releases" the accumulated footnote texts collected with \@author. Since \@thanks carries content globally (due to \protected@xdef), it's cleared at the end of \maketitle (outside the \begingroup...\endgroup scope), together with other macros.

Now let's look at an example:

```latex
\documentclass{article}
\usepackage[paperheight=20\baselineskip]{geometry}% Just for this example
\title{Some title}
\author{Author1\thanks{Abc}, Author2\thanks{Def} and Author3\thanks{Ghi}}
\begin{document}
\maketitle
\end{document}
```

Note how the footnote mark overlaps the punctuation after Author1.

Adding

```latex
\usepackage{etoolbox}
\makeatletter
% \patchcmd{<cmd>}{<search>}{<replace>}{<success>}{<failure>}
\patchcmd{\@maketitle}{\@author}{\@author\show\@thanks}{}{}
\makeatother
```

to the preamble, we can see the .log outputs the sequence of accumulated footnote texts stemming from the \thanks inside \@author:

```
> \@thanks=macro:
->\protect \footnotetext [1]{Abc}\protect \footnotetext [2]{Def}\protect \footnotetext [3]{Ghi}.
\@maketitle ...ular}[t]{c}\@author \show \@thanks
                                                  \end {tabular}\par }\vskip...
l.11 \maketitle
```

These are set immediately at the end of \maketitle as mentioned above.

- This is beautiful: thank you! The key piece of information in your explanation for me was that \maketitle redefines \thefootnote to include \@fnsymbol, thus typesetting one of a series of symbols starting with * instead of a numeral. – musarithmia Jan 26 '16 at 19:56
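As for the original motivation (turning the \thanks footnote into an endnote), one possible sketch is to make \thanks accumulate its arguments instead of issuing \footnotetext, and flush them wherever the journal wants them. Note this is my own hedged suggestion, not anything from article.cls: the macro \thanksnotes and the itemize layout are invented here, and since no \footnotemark is issued, no marker appears in the author line.

```latex
\documentclass{article}
\makeatletter
\newcommand\thanksnotes{}% accumulator for the collected \thanks texts
% Replace \thanks *before* \maketitle runs: instead of a footnote,
% each argument is appended globally to \thanksnotes.
\renewcommand\thanks[1]{\g@addto@macro\thanksnotes{\item #1}}
\makeatother

\title{Example}
\author{Name\thanks{Affiliation, e-mail}}

\begin{document}
\maketitle
% ... body ...
\section*{Notes}% endnotes, typeset wherever the journal requires
\begin{itemize}\thanksnotes\end{itemize}
\end{document}
```

The \g@addto@macro is global, so the collected texts survive the \begingroup...\endgroup inside \maketitle; reattaching visible marks to the author line is left out of this sketch.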
https://socratic.org/questions/how-do-you-simplify-3-5-9-4
# How do you simplify (3/5) / (9/4)?

Mar 1, 2018

See below.

#### Explanation:

https://mathchat.me/2008/11/19/dividing-fractions-from-annoying-to-fun/

This person shows how to solve complex fractions in a beautiful and easy-to-understand way.

For a quick calculation: dividing by a fraction is the same as multiplying by its reciprocal, so you can simply multiply $3 \cdot 4 = 12$ for the numerator, and $5 \cdot 9 = 45$ for the denominator.

$= \frac{12}{45} = \frac{4}{15}$
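The same computation with exact rational arithmetic, to confirm the result:

```python
from fractions import Fraction

# dividing by 9/4 is multiplying by its reciprocal 4/9: (3*4)/(5*9) = 12/45
result = Fraction(3, 5) / Fraction(9, 4)
print(result)  # 4/15 (Fraction reduces 12/45 automatically)
```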
https://chemistry.stackexchange.com/questions/97703/difference-between-elimination-reactions-and-oxidation-reactions
# Difference between elimination reactions and oxidation reactions [closed]

I know that oxidation reactions involve the loss of hydrogen. But is the mechanism the same?

Closed as too broad by Waylander, aventurin, Mithoron, airhuff, Avnish Kabaj Jun 2 '18 at 15:45

- That rather depends on what oxidation or reduction conditions you are considering – Waylander May 31 '18 at 13:17
- sorry, I'm not sure what you mean – Y-MinG Jun 1 '18 at 1:12

Whether the oxidation of an alcohol or the reduction of a carbonyl group follows a certain mechanism depends on the reagents/conditions. For example:

The sodium borohydride reduction of a ketone follows a nucleophilic addition mechanism:

$$\ce{R2C=O ->[1)\ \ce{NaBH4},\ \ce{CH3OH}][2)\ \ce{H3O+}] R2CHOH}$$

However, some metal-catalyzed hydrogenation reactions are considered to be (nearly) concerted additions; the key step of the Noyori reduction, for example, proceeds through such a concerted transition state.

$$\ce{R2C=O ->[\ce{H2}][\ce{RuCl2en2/BINAP}] R2CHOH}$$

Many oxidations of alcohols follow something of an elimination mechanism. For example, the key step of the Swern oxidation looks like an elimination:

$$\ce{R2CHOH ->[1)\ \ce{DMSO, (COCl)2}][2)\ \ce{Et3N}] R2C=O}$$

There is a second important mechanism: hydride transfer oxidation. This is the prevailing mechanism biochemically, where the alcohol is used as a hydride source to simultaneously reduce another compound:

$$\ce{R2CHOH ->[\ce{NAD+}][\ce{base}] R2C=O}$$
https://domino.mpi-inf.mpg.de/internet/reports.nsf/c125634c000710d0c12560400034f45a/8e728a20612085d4c12560400056c5b5?OpenDocument
max planck institut informatik

# MPI-I-92-149

## Fast deterministic processor allocation

### Hagerup, Torben

MPI-I-92-149. November 1992, 11 pages. Status: available.

Abstract: Interval allocation has been suggested as a possible formalization for the PRAM of the (vaguely defined) processor allocation problem, which is of fundamental importance in parallel computing. The interval allocation problem is, given $n$ nonnegative integers $x_1,\ldots,x_n$, to allocate $n$ nonoverlapping subarrays of sizes $x_1,\ldots,x_n$ from within a base array of $O(\sum_{j=1}^n x_j)$ cells. We show that interval allocation problems of size $n$ can be solved in $O((\log\log n)^3)$ time with optimal speedup on a deterministic CRCW PRAM. In addition to a general solution to the processor allocation problem, this implies an improved deterministic algorithm for the problem of approximate summation. For both interval allocation and approximate summation, the fastest previous deterministic algorithms have running times of $\Theta({{\log n}/{\log\log n}})$. We also describe an application to the problem of computing the connected components of an undirected graph.

MPI-I-92-149.pdf (11152 KBytes)

URL to this document: http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1992-149

BibTeX:

@TECHREPORT{Hagerup92,
AUTHOR = {Hagerup, Torben},
TITLE = {Fast deterministic processor allocation},
TYPE = {Research Report},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
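For intuition only: sequentially, interval allocation reduces to an exclusive prefix sum over the request sizes, as in the trivial sketch below. The report's contribution is computing such offsets fast in parallel on a deterministic CRCW PRAM; this Python version is not the paper's algorithm.

```python
def interval_allocate(sizes):
    """Assign each request i a start offset so that the blocks are disjoint.

    Sequentially this is just an exclusive prefix sum; here the base array
    has exactly sum(sizes) cells (the PRAM setting allows O(sum) slack).
    """
    offsets, total = [], 0
    for x in sizes:
        offsets.append(total)   # block i starts where the previous blocks end
        total += x
    return offsets, total

print(interval_allocate([3, 0, 2, 5]))  # ([0, 3, 3, 5], 10)
```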
2019-10-16 05:35:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.674791693687439, "perplexity": 11608.965308294706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986664662.15/warc/CC-MAIN-20191016041344-20191016064844-00213.warc.gz"}
https://www.tubesandmore.com/products/luthier-parts-tools?sort=price_high_to_low&filters=3481a3483c3478a3481c3387a3478
# Luthier Parts & Tools

Saddle - Tusq, Acoustic, Martin Style, 3/32” ($11.60)

The PQ-9100-00 is a great option to upgrade or replace the saddle on your Martin acoustic guitar. It is pre-shaped for a quick and simple install. Dimensions:

• Length - 2.908 in. (73.87mm)
• Width - 0.100 in. (2.56mm)
• Height - 0.383 in. (9.73mm)

Saddle - Tusq, Acoustic, Taylor Style, Compensated, ⅛” ($11.60, On Backorder)

The PQ-9200-C0 is a great option to replace or upgrade the saddle on your Taylor acoustic as well as many other popular guitars. It is compensated to help improve the playability of your guitar. Dimensions:

• Length - 2.8 in. (71.12mm)
• Width - 0.22 in. (3.1mm)
• Height - 0.392 in. (9.99mm)

Saddle - Tusq, Acoustic, Compensated ($11.60)

The PQ-9280-C0 is a great option to replace or upgrade the saddle on your acoustic guitar. It is our most popular saddle and will work with many other guitars. It is 1/8" thick and is compensated to help improve the playability of your guitar. Dimensions:

• Length - 2.88 in. (73.15mm)
• Width - 0.128 in. (3.25mm)
• Height - 0.43 in. (10.92mm)

Saddle - Tusq, Acoustic, Shaped Blank, ⅛” ($11.60)

The PQ-9000-00 is a blank TUSQ acoustic saddle with a slight radius on top. It is a great option for someone who wants to custom shape a new saddle. A lot of the work is already done for you. Dimensions:

• Length - 3.02 in. (76.72mm)
• Width - 0.124 in. (3.16mm)
• Height - 0.45 in. (11.42mm)

Bridge Saddle - Bone, Acoustic, Oversized, 83mm x 12mm x 6mm ($6.95)

High quality bone saddle blank with extra length, width, and height. These blanks leave the shaping, slotting, and polishing to you so you can control your own custom fit and appearance. Measures 83 mm x 12 mm x 6 mm. Sold individually.
2021-08-01 04:44:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29723432660102844, "perplexity": 13376.621858804063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00194.warc.gz"}
https://www.physicsforums.com/threads/angular-deceleration-of-the-earth.1047866/
# Angular deceleration of the Earth

**Zoubayr:**
Homework Statement: The rotation of the Earth is slowing down. In 1977, the Earth took 1.01 s longer to complete 365 rotations than in 1900. What was the average angular deceleration of the Earth in the time interval from 1900 to 1977?
Relevant Equations: w = w_o + αt

365 rotations - 365 days
365 days - 31536000 s

Apart from that I do not know how to continue the question.

**Homework Helper, Gold Member:**
You are given w_o, w and t. Can you recognize what their numerical values are from what is given? Make a list. You may assume that over the period of one year, w does not change significantly.

**Mentor:**
How many seconds in 77 years? What are the angular velocities of the Earth in 1900 and 1977? (Remember that there are ##2\pi## radians in each daily rotation.) And what is the difference in those two angular velocities divided by the total time? (That would be the average angular deceleration over that time period.) What units is your answer in?

**Gold Member:**
> Homework Statement: The rotation of the Earth is slowing down. In 1977, the Earth took 1.01 s longer to complete 365 rotations than in 1900. What was the average angular deceleration of the Earth in the time interval from 1900 to 1977? ...

What is the definition of the average acceleration? If you understand that you will see how to get this.

**Zoubayr:**
> How many seconds in 77 years? What are the angular velocities of the Earth in 1900 and 1977? ...

In 77 yrs there are approx 2.43 x 10^9 s. How to find the angular velocity given that the frequency f is not given?
**PeroK (Homework Helper, Gold Member):**
Can you get the frequency from the period?

**Gold Member:**
This problem is simpler if you stick to the variables given, angular velocities at two different times and the basic definition of average angular acceleration. In fact, you only care about the difference of the two angular velocities over the time period of concern.

**Zoubayr:**
> Can you get the frequency from the period?

w = (2π)/T with T being 2.43 x 10^9 s?

**Homework Helper, Gold Member:**
> w = (2π)/T with T being 2.43 x 10^9 s?

What is 2.43×10^9 s in years?

**Zoubayr:**
> What is 2.43×10^9 s in years?

77 yrs

**Homework Helper, Gold Member, 2022 Award:**
> In 77 yrs there are approx 2.43 x 10^9 s. How to find the angular velocity given that the frequency f is not given?

Should a physics student be expected to know how long is a day on Earth? Or, is that data not generally common knowledge?

**Homework Helper, Gold Member:**
> 77 yrs

That 77 yrs is the time interval from 1900 to 1977. What do the symbols w and w_o that you used in your relevant equation represent and how are they related to the 77 yrs?

**Zoubayr:**
> Should a physics student be expected to know how long is a day on Earth? ...

A day on earth is approx 24 hrs.

**Homework Helper, Gold Member:**
> a day on earth is approx 24 hrs

I agree. How is that related to w and w_o?

**Gold Member:**
One does not actually need to know the exact angular velocities ##\omega## or ##\omega_0## or the equivalent periods to do this problem, only the difference.

**Zoubayr:**
> I agree. How is that related to w and w_o?

w = (2π)/T with T being 24 hrs?

**Zoubayr:**
> One does not actually need to know the exact angular velocities ##\omega## or ##\omega_0## or the equivalent periods to do this problem, only the difference.

How to find the difference?

**Gold Member:**
> how to find the difference?
If you assume the change in angular velocity over the period of 1 year is negligible (the years in which they made their measurements), you have that:
$$\omega_{77} = \omega_{00} - \alpha \Delta T$$
Put each ##\omega## (measured over a period of a year) in terms of the angle turned in a year ##\theta## and the times ##t, t+\Delta t## it took to turn ##\theta##.

**Zoubayr:**
> If you assume the change in angular velocity over the period of 1 year is negligible ...

And delta t being 1.01 s. Replacing the equation and making alpha sof will give the answer?

**Gold Member:**
> And delta t being 1.01 s. Replacing the equation and making alpha sof will give the answer?

1) Does the earth make a full revolution in an hour?
2) They measured the revolutions over a year. Does the earth make one revolution in a year?

**Zoubayr:**
> 1) Does the earth make a full revolution in an hour? 2) They measured the revolutions over a year. Does the earth make one revolution in a year?

And delta t being 1.01 s. Replacing the equation and making alpha sof will give the answer?

**Gold Member:**
> how to find the difference?

Sorry, I was looking at the periods difference, not angular velocity. I think you do need the value at each end, not just the difference. But they give enough to figure them.

**Gold Member:**
> And delta t being 1.01 s. Replacing the equation and making alpha sof will give the answer?

You know what, I may be leading you on a wild goose chase. You can't solve for ##\alpha## that way. Sorry.

**Gold Member:**
> Sorry, I was looking at the periods difference, not angular velocity. I think you do need the value at each end, not just the difference. But they give enough to figure them.

I don't know...
I can't seem to find ##\Delta \omega## from the information given. I think we still need ##\omega## at one of the end points?

**Gold Member:**
> You know what, I may be leading you on a wild goose chase. You can't solve for ##\alpha## that way. Sorry.

My fault. Sorry.

**Gold Member:**
> My fault. Sorry.

I don't think it's your fault. I can't seem to deduce the answer from the given data?

**Gold Member:**
Not enough info. That's my current position.

**Gold Member:**
> Not enough info. That's my current position.

I assume the angular velocity in 1900 is in units of revolutions or rotations, 365 revs/yr, and in 1977 it is 365 rev/(1 yr - 1.01 s). Then just figure the average acceleration from 1900 to 1977.

**Gold Member:**
> I assume the angular velocity in 1900 is in units of revolutions or rotations, 365 revs/yr, and in 1977 it is 365 rev/(1 yr - 1.01 s). ...

But I think we need the time it took to make 365 revolutions the year in 1900 or 1977?

**Homework Helper, Gold Member:**
> Not enough info. That's my current position.

The goal is to find the angular acceleration using the equation ##\alpha=\dfrac{\Delta\omega}{\Delta t}##. We know that
1. The time over which the period changes is ##\Delta t## = 77 years = 2.4×10^9 s.
2. The change in period over that time is ##\Delta T=T_{\text{1977}}-T_{\text{1900}}= 1~## s.
We need to find the change in frequency ##\Delta \omega## corresponding to the change in period ##\Delta T##.
$$\Delta \omega=\Delta \left(\frac {2\pi}{T}\right)= 2\pi\left(\frac{1}{T+\Delta T}-\frac{1}{T} \right)=~?$$
Of course, it can also be done by using differentials.

**Gold Member:**
> The goal is to find the angular acceleration using the equation ##\alpha=\dfrac{\Delta\omega}{\Delta t}##. We know that 1. The time over which the period changes is ##\Delta t## = 77 years = 2.4×10^9 s. 2. The change in period over that time is ##\Delta T=T_{\text{1977}}-T_{\text{1900}}= 1~## s.
> We need to find the change in frequency ##\Delta \omega## corresponding to the change in period ##\Delta T##. $$\Delta \omega=\Delta \left(\frac {2\pi}{T}\right)= 2\pi\left(\frac{1}{T+\Delta T}-\frac{1}{T} \right)=~?$$ Of course, it can also be done by using differentials.

My interpretation is:
$$\omega_{77} = \omega_{00} - \alpha \Delta T$$
Where ##\Delta T## is the time in seconds between the year 1977 and 1900. Let ##t## be the time to make 365 revolutions ##\theta## in 1900. It follows that:
$$\frac{2 \pi \theta}{ t + \Delta t} = \frac{2 \pi \theta}{ t } - \alpha \Delta T$$
So we need some more info, because I count two variables ##t, \alpha##, and one equation?

**Gold Member:**
> My interpretation is: ... So we need some more info, because I count two variables ##t, \alpha##, and one equation?

You can safely take, and I think this was presumed, that t = 1 year. But a small variation of a few moments will not change the final answer within the accuracy of the 1.01 s change in time over 77 years.

**Gold Member:**
> You can safely take, and I think this was presumed, that t = 1 year. ...

Yeah, that's quite reasonable (I still feel like one of the time measurements should just be stated).

**Gold Member:**
@Zoubayr Sometimes it's hard to tell how close "close enough" is to the desired outcome in physics problems (this probably wasn't one of those times). Let my ignorance be a lesson to you.

**Zoubayr:**
How to do the problem then? I am struggling to understand.
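For reference, the differential approach suggested in the thread gives a concrete number. A short script (assuming, as the posters do, a 365-day year and an 86400 s day; only the 1.01 s figure comes from the problem statement):

```python
import math

# Assumptions of this sketch: 1 day = 86400 s and 1 year = 365 days.
T = 86400.0              # rotation period in 1900 (s)
dT = 1.01 / 365          # period increase per rotation by 1977 (s)

# omega = 2*pi/T, so for a small period change dT,
# d_omega ≈ -2*pi*dT / T**2
d_omega = -2 * math.pi * dT / T**2

dt = 77 * 365 * 86400    # seconds from 1900 to 1977
alpha = d_omega / dt     # average angular acceleration (rad/s^2)

print(alpha)             # about -9.6e-22 rad/s^2, i.e. a deceleration
```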
2023-02-09 02:09:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7889252305030823, "perplexity": 1395.4261958991412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501066.53/warc/CC-MAIN-20230209014102-20230209044102-00300.warc.gz"}
https://www.physicsforums.com/threads/how-does-tourmaline-produce-negative-ion.534256/
# How does tourmaline produce negative ions?

1. Sep 27, 2011

### lnsanity

How does tourmaline produce negative ions? Where do those negative ions come from? I know that negative ions are simply atoms with one extra electron, so where do they come from? I also know that volcanic ash also produces negative ions; same question there.

2. Sep 28, 2011

### Staff: Mentor

If you don't get any answers here try the general or classical forums, as this isn't a purely quantum physics question.
2017-11-25 03:06:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8453700542449951, "perplexity": 3789.0431713209528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809229.69/warc/CC-MAIN-20171125013040-20171125033040-00711.warc.gz"}
https://kar.kent.ac.uk/30580/
# Relating and Visualising CSP, VCR and Structural Traces

Brown, Neil C.C. and Smith, Marc L. (2009) Relating and Visualising CSP, VCR and Structural Traces. In: Welch, Peter H. and Roebbers, Herman W. and Broenink, Jan F. and Barnes, Frederick R.M. and Ritson, Carl G. and Sampson, Adam T. and Stiles, Gardiner S. and Vinter, Brian, eds. Communicating Process Architectures 2009. Concurrent Systems Engineering. IOS Press, Amsterdam, Netherlands, pp. 182-196. ISBN 978-1-60750-065-0. E-ISBN 978-1-60750-513-6. (doi:10.3233/978-1-60750-065-0-89) (KAR id:30580)

PDF (Publisher pdf, English, 232kB)

Official URL: http://dx.doi.org/10.3233/978-1-60750-065-0-89

## Abstract

As well as being a useful tool for formal reasoning, a trace can provide insight into a concurrent program's behaviour, especially for the purposes of run-time analysis and debugging. Long-running programs tend to produce large traces which can be difficult to comprehend and visualise. We examine the relationship between three types of traces (CSP, VCR and Structural), establish an ordering and describe methods for conversion between the trace types. Structural traces preserve the structure of composition and reveal the repetition of individual processes, and are thus well-suited to visualisation. We introduce the Starving Philosophers to motivate the value of structural traces for reasoning about behaviour not easily predicted from a program's specification. A remaining challenge is to integrate structural traces into a more formal setting, such as the Unifying Theories of Programming – however, structural traces do provide a useful framework for analysing large systems.
Item Type: Book section
DOI: 10.3233/978-1-60750-065-0-89
Keywords: determinacy analysis, Craig interpolants
Subjects: Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Funders: [UNSPECIFIED] WoTUG
Depositing User: Neil Brown
Date Deposited: 21 Sep 2012 09:49 UTC
Last Modified: 16 Nov 2021 10:08 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/30580 (The current URI for this page, for reference purposes)
2022-11-27 02:07:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8514658212661743, "perplexity": 9227.708247442768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710155.67/warc/CC-MAIN-20221127005113-20221127035113-00792.warc.gz"}
https://uk.mathworks.com/help/radar/ug/radar-architecture-part-1.html
# Radar Architecture: Part 1 – System components and requirements allocation This example is the first part of a two-part series on using Simulink to design and test a radar system given a set of requirements. It starts by introducing a set of performance requirements that must be satisfied by the final design. A radar system architecture is then developed using Simulink System Composer. The example next demonstrates how to connect the radar requirements to the architecture and a corresponding design. Finally, a functioning model of a radar system is created by providing concrete implementations to the components of the architecture. The second example in the series discusses testing of the developed model and verification of the requirements. It shows how to use Simulink Test to set up test suites and run Monte Carlo simulations to verify the linked requirements. Part 2 also explores a scenario when the stated requirements have been revised. It demonstrates how to trace the changes in the requirements to the corresponding components of the design and make modifications to the implementation and tests. ### Performance Requirements Radar system design typically begins with a set of requirements. The real-world radar systems must satisfy dozens or hundreds of requirements. In this example we consider an X-band radar system that must satisfy the following two performance requirements: • R1: The radar must detect a Swerling 1 Case target with a radar cross section (RCS) of 1 m${}^{2}$ at the range of 6000 m with a probability of detection of 0.9 and the probability of false alarm of 1e-6; • R2: When returns are detected from two Swerling 1 Case targets separated in range by 70 m, with the same azimuth and elevation, the radar must resolve the two targets and generate two unique target reports 80 percent of the time. ### Virtual Test Bed As the first step we set up a virtual test bed for a radar system that will be used to implement and test our design. 
We will demonstrate that this test bed is useful for tracing the performance requirements to the individual components of the system, making iterative design changes, and testing and verifying the system's performance. We start by creating a general top-level architecture model using Simulink System Composer. We then show in more detail an architecture of a radar sensor component and the part of the test bed that simulates the environment and the radar targets.

#### Top-Level Architecture

The architecture model specifies only the conceptual components of the system, their interfaces, and links between them. The components of the architecture model are not required to have a concrete implementation. As will be shown further in this example, Simulink System Composer allows for defining specific Simulink behavior for some of the components while leaving other components specified only at the architecture level. Such a modular design is convenient and flexible since the behavior of the individual components can be modified or completely changed without the need to make any changes to other parts of the system. In addition to the `Radar Sensor` component that models the actual radar sensor, the test bed also includes:

• `Power Substation` - Supplies power to the radar sensor;
• `Control Center` - Passes control commands to the radar sensor through `Communications Link` and receives the radar data back;
• `Targets and Environment` - Models the radar waveform propagation through the environment and the interaction of the waveform with the targets.

`Radar Sensor` is connected to `Targets and Environment` through a set of ports marked `Tx`, `Rx`, and `TargetsPos`. `Tx` and `Rx` links are used to pass the radar waveform to and from `Targets and Environment`. `TargetsPos` is used to pass the information about the targets' positions to `Radar Sensor` in order to simulate the transmitted and received waveform in the directions of the targets.
Open the top-level architecture.

`open_system('slexRadarArchitectureExample')`

Each component in an architecture model can be further decomposed into subcomponents. As a next step we define an architecture for the radar sensor. When `Radar Sensor` is decomposed, the `Power`, `Tx`, `Rx`, `CmdRx`, and `DataTx` ports defined at the top level become available as external ports.

Open the `Radar Sensor` component.

`open_system("slexRadarArchitectureExample/Radar Sensor");`

We define the following components to create an architecture model of a radar sensor:

• `Resource Scheduler` is responsible for allocating the system resources within a dwell. It receives control commands from `Control Center` through the external `CmdRx` port. To indicate the flow of the control signals in the radar sensor architecture, `Resource Scheduler` is also linked to every component inside `Radar Sensor`.
• `Waveform Generator` produces samples of the radar waveform.
• `Transmit Array` passes the transmitted waveform to `Targets and Environment` through the external `Tx` port.
• `Receiver Array` receives back the reflected waveform from `Targets and Environment` through the external `Rx` port.
• `Signal Processor` performs beamforming, matched filtering, and pulse integration and passes the detections to `Data Processor`.
• `Data Processor` creates radar reports or radar tracks and passes them back to `Control Center`.

Notice that this architecture model of a radar sensor is very general. It does not make any assumptions about the type of the transmitted waveform, the shape or size of the antenna array, or the implementation of the signal and data processing chains. The same architecture can be used to implement a large variety of different radar sensors. Further in this example we will implement only a subset of the above-listed components, leaving out `Resource Scheduler` and `Data Processor`.
#### Targets and Environment

`Targets and Environment` can be decomposed into two subcomponents:

• `Targets` outputs the targets' positions and velocities.
• `Propagation` models the propagation of the plane wave emitted by `Transmit Array` through the environment, reflection from the radar targets, and propagation back to `Receive Array`.

Open the `Targets and Environment` component.

`open_system("slexRadarArchitectureExample/Targets and Environment");`

#### Requirements Traceability

Simulink Requirements is a tool that provides a way to link the requirements to the components of the architecture responsible for implementing the corresponding functionality. When either the requirements or the model change, Simulink Requirements provides a convenient way to trace the changes to the corresponding tests and verify that the model's performance and the requirements are always in agreement.

Requirements Manager can be launched through the Apps tab. Requirements Editor can then be accessed by navigating to the Requirements tab and selecting Requirements Editor. To create a new set of requirements for the model, click on New Requirement Set. For this example, we create a requirements set and add R1 and R2 to it. Open these requirements in Requirements Editor.

`open('slreqRadarArchitectureExampleRequirements.slreqx')`

Requirements Editor lists the maximum range and the range resolution requirements. In the left panel it also shows the `Verified` and `Implemented` status for each requirement. At this moment, both requirements are not implemented and not verified. In order to change the `Implemented` status of a requirement, it must be linked to a component of the architecture that implements the corresponding function. We link both requirements to `Waveform Generator` and `Signal Processor`. Notice that Requirements Manager at the bottom also shows the status of R1 and R2.
After linking the requirements to the components, Requirements Manager shows that the status of R1 and R2 has changed to `Implemented`. When a requirement is selected in Requirements Manager, the components to which it is linked are highlighted with a purple frame. The linked components are also shown in the Links section of the Details tab on the right. Another convenient way to visualize the links between the requirements and the components of the architecture is the Traceability Matrix, which can be generated by clicking on Traceability Matrix in the Requirements tab of Requirements Editor. It clearly shows which components are responsible for the implementation of each requirement.

### Component Implementation

To simulate a radar system, we now need to provide a concrete behavior for the components of the architecture model. System Composer allows for specifying a Simulink behavior for some components while leaving the behavior of other components undefined. This provides a lot of flexibility to the design and simulation since we can build a functioning and testable model with some of the components modeled in detail while other components are defined only at the abstract level. In this example we will only specify the concrete behavior for the components of the radar sensor needed to implement generation, transmission, reception, and processing of the radar signal. We will also provide a concrete implementation for `Targets and Environment`.

To specify the dimensions of signals within the model, the example assumes that the targets' positions are specified by a three-row matrix `tgtpos`, the targets' velocities are specified by a three-row matrix `tgtvel`, and the targets' RCS are specified by a vector `tgtrcs`.

#### System Parameters

To provide the Simulink behavior to the components of the radar sensor we first need to identify a set of radar design parameters that could satisfy the stated requirements.
A set of parameters for a radar system that would satisfy R1 and R2 can be quickly found by performing a radar range equation analysis in the Radar Designer app. The app computes a variety of radar performance metrics and visualizes the detection performance of the radar system as a function of range. We use the `Metrics and Requirements` table to set the objective values of the maximum range and the range resolution requirements to the desired values specified in R1 and R2. Then we adjust the system parameters until the stoplight chart indicates that the system's performance satisfies the objective requirement. The resulting set of the radar design parameters is:

• radar frequency - 10 GHz;
• peak power - 6000 W;
• pulse duration - 0.4 $\mu s$;
• pulse bandwidth - 2.5 MHz;
• pulse repetition frequency - 20 kHz;
• number of transmitted pulses - 10;
• antenna gain - 26 dB;
• noise figure - 0 dB.

```radarDesigner('RadarDesigner_RectangularWaveform.mat');
```

#### Waveform Generator

The analysis performed in the Radar Designer app assumes the time-bandwidth product to be equal to 1. This means that the transmitted waveform is an unmodulated rectangular pulse. We can use the Pulse Waveform Analyzer app to confirm that the derived waveform parameters will result in the desired performance and satisfy R1 and R2. Start the Pulse Waveform Analyzer app with the waveform parameters defined in this example.

```pulseWaveformAnalyzer('PulseWaveformAnalyzer_RectangularWaveform.mat');
```

The app shows that the range resolution and the unambiguous range agree well with our requirements.

To implement this behavior in the radar model, the `Waveform Generator` component needs to contain only a single Simulink block generating a rectangular waveform. The output of the `Rectangular Waveform` block is connected to the external `Waveform` port linked to the `Transmit Array` component. Since in this example we are not modeling the command signals, the `Cmd` input is linked to a terminator.
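Two of the numbers above can also be cross-checked by hand, independently of the apps: for a time-bandwidth product of 1, the range resolution is c/(2B) and the unambiguous range is c/(2·PRF). A quick sketch in plain Python (not part of the MathWorks example):

```python
import math

c = 3e8      # speed of light (m/s)
bw = 2.5e6   # pulse bandwidth (Hz)
prf = 20e3   # pulse repetition frequency (Hz)

range_res = c / (2 * bw)              # 60 m, finer than the 70 m in R2
unamb_range = c / (2 * prf)           # 7500 m, beyond the 6000 m in R1
tx_gain_db = 20 + 10 * math.log10(4)  # 20 dB amplifier + 4-element ULA ≈ 26 dB

assert range_res <= 70 and unamb_range >= 6000
```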
We set the `Output signal format` property of the block to `Pulses`, which means that every pulse repetition interval (PRI) of `1/prf` seconds, the block will produce a column vector of `fs/prf` complex waveform samples.

#### Transmit Array

The Transmit Array component comprises the following Simulink blocks:

• `Transmitter` - transmits the waveform generated by `Waveform Generator` with the specified peak power and transmit gain.
• `Range Angle Calculator` - computes the directions towards the targets assuming the radar is placed on a static platform located at the origin. The target directions are used as the `Ang` input to `Narrowband Tx Array`.
• `Narrowband Tx Array` - models an antenna array for transmitting narrowband signals. It outputs copies of the transmitted waveform radiated in the directions of the targets.

The radar range equation analysis identified that the transmit gain should be 26 dB. We set the `Gain` property of the `Transmitter` block to 20 dB and use an antenna array to get an additional gain of 6 dB. A phased array antenna with the desired properties can be designed using the Sensor Array Analyzer app. In this example we use a 4-element uniform linear array that has an array gain of approximately 6 dB. Open the array model in the Sensor Array Analyzer app.

```sensorArrayAnalyzer('SensorArrayAnalyzer_ULA.mat');
```

Simulink System Composer requires explicit specification of the dimensions, sample time, and complexity of the input signals. We set the dimensions of the `Waveform` input to `[fs/prf 1]`, the sample time to `1/prf`, and the complexity to `'complex'`. The dimensions of the `TargetsPos` input are set to `size(tgtpos)`, leaving the default setting for the corresponding sample time and complexity.

#### Receive Array

• `Narrowband Rx Array` - models the receive antenna array. It is configured using the same properties as the corresponding block in the `Transmit Array` component.
At each array element the block combines the signals received from every target, adding appropriate phase shifts given the target directions computed by `Range Angle Calculator`. The output of the `Narrowband Rx Array` block is a `[fs/prf num_array_elements]` matrix.
• `Receiver Preamp` - adds a gain of 20 dB to the received signal.

The `Rx` input is a matrix of received waveform samples with columns corresponding to `size(tgtpos,2)` targets. The dimensions of `Rx` must be set to `[fs/prf size(tgtpos,2)]`, the sample time to `1/prf`, and the complexity to `'complex'`.

#### Signal Processor

`Signal Processor` implements a simple signal processing chain that consists of:

• `Phase Shift Beamformer` - combines the signals received at each array element. In this example the beamforming direction is set to the broadside.
• `Matched Filter` - performs matched filtering to improve the SNR. The coefficients of the matched filter are set to match the transmitted waveform.
• `Time Varying Gain` - compensates for the free space propagation loss.
• `Noncoherent Integrator` - integrates the magnitudes of the 10 received pulses to further improve the SNR.

The dimensions of the `Signal` input must be configured to `[fs/prf num_array_elements]`, the sample time to `1/prf`, and the complexity must be set to `'complex'`.

#### Targets and Environment

The `Targets` component is implemented using a single `Platform` block. The `Propagation` component consists of:

• `Free Space Channel` - models the two-way propagation path of the radar waveform. The origin position and velocity inputs of the `Free Space Channel` block are set to zero to indicate that the radar is located at the origin and that it is not moving. The destination position and velocity inputs are connected to the target positions and velocities through the `TargetsPos` and `TargetVel` ports.
• `Radar Target` - models the RCS and target fluctuation effects.
Since in this example we are considering slow fluctuating Swerling 1 Case targets, the `Update` input is set to false. We also set the simulation stop time to `10/prf`, indicating that a single simulation run constitutes a single coherent processing interval (CPI). The dimensions of the `Tx` input must be set to `[fs/prf size(tgtpos,2)]`, the sample time to `1/prf`, and the complexity to `'complex'`.

### Simulation Output

Specifying Simulink behavior for the above blocks is enough to obtain a model of a radar system that can produce radar detections. Prior to proceeding with testing the model and verifying the specific performance requirements, we want to run the simulation and check whether it generates the results as expected. Consider three targets with the following parameters:

```
% Target positions
tgtpos = [[2024.66;0;0],[3518.63;0;0],[3845.04;0;0]];
% Target velocities
tgtvel = [[0;0;0],[0;0;0],[0;0;0]];
% Target RCS
tgtrcs = [1.0 1.0 1.0];
```

Adding the Simulation Data Inspector to log the output of the `Signal Processor` component and running a simulation results in the range profile shown below. As expected, we get three distinct peaks corresponding to the three targets in the simulation.

```
% Set the model parameters
helperslexRadarArchitectureParameters;

% Run the simulation
simOut = sim('slexRadarArchitectureExample');
data = simOut.logsout{1}.Values.Data;

% Plot results
figure;
plot(range_gates, data(numel(range_gates)+1:end));
xlabel('Range (m)');
ylabel('Power (W)');
title('Signal Processor Output');
grid on;
```

### Summary

This example is the first part of a two-part series on how to design and verify a radar system in Simulink starting from a list of performance requirements. It shows how to build a radar system architecture using Simulink System Composer, which can be used as a virtual test bed for designing and testing radar systems.
Part 1 also demonstrates how to link the performance requirements to the components of the architecture and how to implement the behavior of the components using Simulink to obtain a functioning and testable model. In Part 2 of this example we show how to set up test suites to test the created radar design and how to verify that the stated performance requirements are satisfied.
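As an aside, the core of the `Signal Processor` chain described earlier -- matched filtering of each pulse followed by noncoherent integration of 10 pulses -- can be prototyped outside Simulink in a few lines. The numpy sketch below uses made-up sample counts, echo location, and noise level purely for illustration; it is not derived from the Simulink model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pulses, pulse_len = 64, 10, 8
pulse = np.ones(pulse_len)                  # unmodulated rectangular pulse

profiles = []
for _ in range(n_pulses):
    # complex white noise plus a target echo starting at sample 30
    rx = 0.1 * (rng.standard_normal(n_samples)
                + 1j * rng.standard_normal(n_samples))
    rx[30:30 + pulse_len] += 1.0
    # matched filter: correlate with the conjugated, time-reversed pulse
    mf = np.convolve(rx, pulse[::-1].conj(), mode='same')
    profiles.append(np.abs(mf))

integrated = np.sum(profiles, axis=0)       # noncoherent integration
peak = int(np.argmax(integrated))           # peak lands near the echo
```

Integrating the magnitudes of the 10 filtered pulses makes the echo peak stand well clear of the noise floor, which is exactly the role of the `Noncoherent Integrator` block.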
https://math.stackexchange.com/questions/2794490/units-in-local-rings
# Units in local rings

Let $R$ be a finite commutative local ring. Let $U(R)$ be the group of units and $M$ the maximal ideal, and denote by $k$ the degree of nilpotence of $M$. I'm trying to find an example of a finite commutative local ring $R$ where the order of $U(R)$ is less than $k$. I do not know how to find it; can someone help me out?

• What makes you think such an example is possible? – quasi May 24 '18 at 15:57

Let $M^n=\{0\}$ but $M^{n-1}\neq\{0\}$. Then in the chain $\{0\}\subset M^{n-1}\subset\ldots\subset M\subset R$, we can see that $2|M^{i+1}|\leq |M^{i}|$, because $M^{i+1}$ is a proper subgroup of $M^i$ and so has at least two cosets in $M^{i}$. Working up the chain this way, you can work out that $|M|\geq 2^{n-2}|M^{n-1}|$, and $M^{n-1}$ has at least two elements, so $|M|\geq 2^{n-1}$. There is also an injective map $M\to U(R)$ given by $m\mapsto 1+m$. This means that $|U(R)|\geq |M|\geq 2^{n-1}$. But if you check, $2^{n-1}\geq n$ for every integer $n\geq 1$.
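The bound $|U(R)| \geq 2^{n-1} \geq n$ can also be sanity-checked numerically on a concrete family of finite local rings. The Python brute-force check below (an illustration, not part of the answer) uses $R = \mathbb{Z}/2^n\mathbb{Z}$, whose maximal ideal $M = (2)$ has nilpotency degree exactly $n$:

```python
from math import gcd

def unit_count(m):
    # |U(Z/mZ)| by brute force (this is just Euler's totient of m)
    return sum(1 for a in range(1, m) if gcd(a, m) == 1)

# In R = Z/2^n, the maximal ideal is M = (2), and M^k = (2^k) vanishes
# exactly when k >= n, so the nilpotency degree of M is n.
for n in range(1, 12):
    u = unit_count(2 ** n)
    assert u == 2 ** (n - 1)   # the units are exactly the odd residues
    assert u >= n              # |U(R)| >= k, matching the answer's bound
```

Here the bound is tight in the sense that $|U(\mathbb{Z}/2^n)| = 2^{n-1}$, so these rings achieve the lower bound in the answer exactly.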
https://www.inference.vc/deepsets-modeling-permutation-invariance/
February 7, 2019

# DeepSets: Modeling Permutation Invariance

One of my favourite recent innovations in neural network architectures is Deep Sets. This relatively simple architecture can implement arbitrary set functions: functions over collections of items where the order of the items does not matter. This is a guest post by Fabian Fuchs, Ed Wagstaff and Martin Engelcke, authors of a recent paper on the representational power of such architectures and on why the Deep Sets architecture can represent arbitrary set functions in theory. It's a great paper. Imagine what these guys could achieve if their lab was in Cambridge rather than Oxford!

Here are the links to the original Deep Sets paper, and the more recent paper by the authors of this post:

Over to Fabian, Ed and Martin for the rest of the post. Enjoy.

## Sets and Permutation Invariance in ML

Most successful deep learning approaches make use of the structure in their inputs: CNNs work well for images, RNNs and temporal convolutions for sequences, etc. The success of convolutional networks boils down to exploiting a key invariance property: translation invariance. This allows CNNs to

• drastically reduce the number of parameters needed to model high-dimensional data,
• decouple the number of parameters from the number of input dimensions, and
• ultimately, become more data efficient and generalize better.

But images are far from the only data we want to build neural networks for. Often our inputs are sets: sequences of items, where the ordering of items carries no information for the task at hand. In such a situation, the invariance property we can exploit is permutation invariance. To give a short, intuitive explanation of permutation invariance, this is what a permutation-invariant function with three inputs would look like: $f(a, b, c) = f(a, c, b) = f(b, a, c) = \dots$. Some practical examples where we want to treat data or different pieces of higher order information as sets (i.e.
where we want permutation invariance) are:

• working with sets of objects in a scene (think AIR or SQAIR)
• multi-agent reinforcement learning
• perhaps surprisingly, point clouds

We will talk more about applications later in this post.

Note from Ferenc: I would like to jump in here - because it's my blog so I get to do that - to say that I think the killer application for this is actually meta-learning and few-shot learning. By meta-learning, don't think of anything fancy; I consider amortized variational inference, like a VAE, a form of meta-learning. Consider a conditionally i.i.d. model where you have a global parameter $\theta$, and a bunch of observations $x_i$ drawn conditionally i.i.d. from a distribution $p_{X\vert \theta}$. Given a set of observations $x_1, \ldots, x_N$ we'd like to approximate the posterior $p(\theta\vert x_1, \ldots, x_N)$ by some parametric $q(\theta\vert x_1, \ldots, x_N; \psi)$, and we want this to work for any number of observations $N$. Clearly, the real posterior $p$ is permutation invariant with respect to the $x_i$, so it would make sense to make the recognition model, $q$, a permutation-invariant architecture. To me, this is the killer application of Deep Sets, especially in an online learning setting, where one wants to update our posterior estimate over some parameters with each new data point we observe.

## The Deep Sets Architecture (Sum-Decomposition)

Having established that there is a need for permutation-invariant neural networks, let's see how to enforce permutation invariance in practice. One approach is to make use of some operation $P$ which is already known to be permutation-invariant. We map each of our inputs separately to some latent representation and apply our $P$ to the set of latents to obtain a latent representation of the set as a whole. $P$ destroys the ordering information, leaving the overall model permutation invariant.
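As a minimal concrete illustration of this map-pool-map pattern, here is a numpy sketch with made-up random weights standing in for learned networks, using summation as the permutation-invariant $P$; the final check confirms that reordering the set leaves the output unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.standard_normal((3, 8))   # toy "phi": input dim 3 -> latent dim 8
W_rho = rng.standard_normal(8)        # toy "rho": latent -> scalar output

def f(X):
    """Permutation-invariant set function: rho(sum_i phi(x_i))."""
    latents = np.tanh(X @ W_phi)      # phi applied to each set element
    pooled = latents.sum(axis=0)      # P = summation destroys ordering
    return float(pooled @ W_rho)      # rho maps pooled latent to output

X = rng.standard_normal((5, 3))       # a set of 5 elements
perm = rng.permutation(5)
assert np.isclose(f(X), f(X[perm]))   # element order does not matter
```

Note that the number of parameters is independent of the set size: the same `f` accepts sets of any length without modification.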
In particular, Deep Sets does this by setting $P$ to be summation in the latent space. Other operations are used as well, e.g. elementwise max. We call the case where the sum is used sum-decomposition via the latent space. The high-level description of the full architecture is now reasonably straightforward - transform your inputs into some latent space, destroy the ordering information in the latent space by applying the sum, and then transform from the latent space to the final output. This is illustrated in the following figure: ![](/content/images/2019/02/Architecture.png) If we want to actually implement this architecture, we'll need to choose our latent space (in the guts of the model this will mean something like choosing the size of the output layer of a neural network). As it turns out, the choice of latent space will place a limit on how expressive the model is. In general, neural networks are universal function approximators (in the limit), and we'd like to preserve this property. Zaheer et al. provide a theoretical analysis of the ability of this architecture to represent arbitrary functions - that is, can the architecture, in theory, achieve exact equality with any target function, allowing us to use e.g. neural networks to approximate the necessary mappings? In our paper, we build on and extend this analysis, and discuss what implications it has for the choice of latent space. ### Can we get away with a small latent space? Zaheer et al. show that, if we're only interested in sets drawn from a countable domain (e.g. $\mathbb{Z}$ or $\mathbb{Q}$), a 1-dimensional latent space is enough to represent any function. Their proof works by defining an injective mapping from sets to real numbers. Once you have an injective mapping, you can recover all the information about the original set, and can, therefore, represent any function. This sounds like good news -- we can do anything we like with a 1-D latent space! 
Unfortunately, there's a catch -- the mapping that we rely on is not continuous. The implication of this is that to recover the original set, even approximately, we need to know the exact real number that we mapped to -- knowing to within some tolerance doesn't help us. This is impossible on real hardware. Above we considered a countable domain, but it's important to consider instead the uncountable domain $\mathbb{R}$, the real numbers. This is because continuity is a much stronger property on $\mathbb{R}$ than on $\mathbb{Q}$, and we need this stronger notion of continuity. The figure below illustrates this, showing a function which is continuous on $\mathbb{Q}$ but not continuous on $\mathbb{R}$ (and certainly not continuous in an intuitive sense). The figure is explained in detail in our paper. Using $\mathbb{R}$ is particularly important if we want to work with neural networks. Neural networks are universal approximators for continuous functions on compact subsets of $\mathbb{R}^M$. Continuity on $\mathbb{Q}$ won't do. ![](/content/images/2019/02/continuous.png) Zaheer et al. go on to provide a proof using continuous functions on $\mathbb{R}$, but it places a limit on the set size for a fixed finite-dimensional latent space. In particular, it shows that with a latent space of $M+1$ dimensions, we can represent any function which takes as input sets of size $M$. If you want to feed the model larger sets, there's no guarantee that it can represent your target function. As for the countable case, the proof of this statement uses an injective mapping. But the functions we're interested in modeling aren't going to be injective -- we're distilling a large set down into a smaller representation. So maybe we don't need injectivity -- maybe there's some clever lower-dimensional mapping to be found, and we can still get away with a smaller latent space? No. As it turns out, you often do need injectivity into the latent space. This is true even for simple functions, e.g. 
max, which is clearly far from injective. This means that if we want to use continuous mappings, the dimension of the latent space must be at least the maximum set size. We were also able to show that this dimension suffices for universal function representation. That is, we've improved on the result from Zaheer (latent dimension $N \geq M+1$ is sufficient) to obtain both a weaker sufficient condition, and a necessary condition (latent dimension $N \geq M$ is sufficient and necessary). Finally, we've shown that it's possible to be flexible about the input set size. While Zaheer's proof applies to sets of size exactly $M$, we showed that $N=M$ also works if the set size is allowed to vary $\leq M$. ## Applications & Connections Why do we care about all of this? Sum-decomposition is in fact used in many different contexts - some more obvious than others - and the above findings directly apply in some of these. ### Attention Mechanisms Self-attention via {keys, queries, and values} as in the Attention Is All You Need paper by Vaswani et al. 2017 is closely linked to Deep Sets. Self-attention is itself permutation-invariant unless you use positional encoding as often done in language applications. In a way, self-attention "generalises" the summation operation as it performs a weighted summation of different attention vectors. You can show that when setting all keys and queries to 1.0, you effectively end up with the Deep Sets architecture. Therefore, self-attention inherits all the sufficiency statements ('with $N=M$ you can represent everything'), but not the necessity part: it is not clear that $N=M$ is needed in the self-attention architecture, just because it was proved that it is needed in the Deep Sets architecture. ### Working with Point Clouds Point clouds are unordered, variable length lists (aka sets) of $xyz$ coordinates. 
We can also view them as (sparse) 3D occupancy tensors, but there is no 'natural' 1D ordering because we have three equal spatial dimensions. We could e.g. build a kd-tree but again this imposes a somewhat 'unnatural' ordering. As a specific example, PointNet by Qi et al. 2017 is an expressive set-based model with some more bells and whistles. It handles interactions between points by (1) computing a permutation-invariant global descriptor, (2) concatenating it with point-wise features, (3) repeating the first two steps several times. They also use transformer modules for translation and rotation invariance --- So. Much. Invariance! ### Stochastic Processes & Exchangeability A stochastic process corresponds to a set of random variables. Here we want to model the joint distributions of the values those random variables take. These distributions need to satisfy the condition of exchangeability, i.e. they need to be invariant to the order of the random variables. Neural Processes and Conditional Neural Processes (both by Garnelo et al. 2018) achieve this by computing a global latent variable via summation. One well-known instance of this is Generative Query Networks by Eslami et al. 2018 which aggregate information from different views via summation to obtain a latent scene representation. ## Summary 👋 Hi, this is Ferenc again. Thanks to Fabian, Ed and Martin for the great post. Update: As commenters pointed out, these papers are, of course, not the only ones dealing with permutation invariance and set functions. Here are a couple more things you might want to look at (and there are quite likely many more that I don't mention here - feel free to add more in the comments section below) As I said before, I think that the coolest application of this type of architecture is in meta-learning situations. 
When someone mentions meta-learning, many people think of complicated "learning to learn to learn via gradient descent via gradient descent via gradient descent" kind of things. But in reality, simpler variants of meta-learning are a lot closer to being practically useful. Here is an example of a recommender system developed by (Vartak et al, 2017) for Twitter, using this idea. Here, a user's preferences are summarized by the set of tweets they recently engaged with on the platform. This set is processed by a DeepSets architecture (the sequence in which they engaged with tweets is assumed to carry little information in this application). The output of this set function is then fed into another neural network that scores new tweets the user might find interesting.

Such architectures can prove useful in online learning or streaming data settings, where new datapoints arrive over time, in a sequence. For every new datapoint, one can apply the $\phi$ mapping, and then simply maintain a moving average of these $\phi$ values. For binary classification, one can have a moving average of $\phi(x)$ for all negative examples, and another moving average for all positive examples. These moving averages then provide useful, permutation-invariant summary statistics of all the data received so far.

In summary, I'm a big fan of this architecture. I think that the work of Wagstaff et al (2019) provides further valuable intuition on its ability to represent arbitrary set functions.
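The streaming idea above is easy to sketch: a running sum of $\phi$ values maintained online always equals the batch pooled representation of all the points seen so far, in any order (divide by the count to get the moving average). A toy numpy illustration, with a made-up linear-plus-tanh $\phi$:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8))       # hypothetical phi: 4-dim input -> 8-dim latent

def phi(x):
    return np.tanh(x @ W)

# Maintain a running sum of phi over a stream of points.
stream = rng.standard_normal((100, 4))
running = np.zeros(8)
for x in stream:
    running += phi(x)                 # O(1) update per new datapoint

# The online summary matches the batch sum-decomposition over the same set.
batch = phi(stream).sum(axis=0)
assert np.allclose(running, batch)
```

Because summation is associative and commutative, the summary never needs to be recomputed from scratch when new data arrives.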
https://www.rocketryforum.com/threads/best-way-to-strengthen-an-excel-plus-or-horizon-54.80129/
# Best way to strengthen an Excel Plus or Horizon 54

#### firemanup

##### Well-Known Member

Hi guys, Some new questions for ya. I'm too chicken to put my BSD THOR up on a K motor; i'm just afraid of something like what happened to the Jaguar. So to me basically I don't have a K capable bird, now that's a problem.. heh

So I'm considering an Excel Plus or a Horizon 54 for a new K motor/mach busting bird. I doubled the completed weights the respective websites have listed and i'm breaking mach in sims with most of the upper K motors. Has anyone seen either of these birds fly on K motors..?

What I'm wondering at this point is how best to strengthen the bird to handle the upper K motors. I've not fiberglassed before but i'm willing to learn. What i'm considering now is building stock with one or two wraps of 6oz glass, topped with one wrap of 2oz glass. I don't want to glass the nosecone so i'm wondering what kind of lip that will give me at the airframe where the cone meets.

2nd option is to double wall the whole thing with couplers.

3rd option is to double wall the whole thing with couplers and glass the inside of them with one or two layers of 6 oz glass.

With any of these i'm thinking i'm going to need some noseweight with the bigger K motors, so i'm considering filling the nosecone with 2 part expanding foam. What is your opinion as far as what will be needed, and/or which of the above modifications for strength would be the best bet?

My preference at this point would be to double wall with glassed couplers. I don't mind finishing paper tubes at all and am a little worried about my finish on my first fiberglass attempt..

Thanks for any help/suggestions.

#### DPatell

##### Well-Known Member

I suggest fiberglassing the tube with a layer of 6oz. followed by a layer of 2oz. Fiberglass the inside of your couplers with 2 layers of 6oz.
glass by laying the fiberglass inside the couplers, then blowing up a balloon to compress the glass tight to the coupler. Then I suggest fin tab to fin tab internal glassing with 6oz. glass, large external epoxy fillets filled with milled glass, along with a layer of 6oz fiberglass fin tip to fin tip, followed by a layer of 2oz for finishing, or just paint another layer of resin over it to fill the weave. This will result in a VERY robust rocket that should be able to handle the K motors. Mach is not nice to rockets.

The lip will probably be 1/16" or so, not too bad. You can use bondo to build that up on the cone. You will be surprised how much strength the 6oz fiberglass adds.

Also, I suggest making sure that your coupler fits are very tight, in order to prevent movement side to side if it cones. That will definitely result in a shred.

#### rocwizard

##### Well-Known Member

Personally, I would recommend biting the bullet and glassing the exterior. It is really not that hard. I actually think it's fun to do.

As for getting a good finish, the weave can be filled in many ways, be it auto body filler <bondo> or thickened epoxy. What I recommend, though, is to get a can of Kilz primer. It is VERY high solids and fills in voids very quickly. You might also want to try out some Super Fill or UV Smooth Prime, which can be had from Aircraft Spruce or Shadow Composites. I would recommend two wraps of 6oz., but that's just me. HTH

#### firemanup

##### Well-Known Member

Dpatell, Are you saying 6 oz and 2 oz on the outside then 2 layers of 6 oz on the inside of the stock couplers, or..... outside layers then double wall full length of the body tubes with couplers that have two layers of 6 oz inside of them?

Roc, I'm not so concerned with filling the weave of the fiberglass, more so where the wrap overlaps. i'm trying to figure out a way to get the edges to match up without coming up short or overlapping, creating a hump that has to be sanded down...

What is finishing epoxy guys...?
is it basically any long cure time epoxy or is it a specific type of epoxy used for fiberglassing? Without being a west systems guy yet, and not willing to invest in it YET, i'd rather try this first. How much epoxy will i need..? Currently I use bob smith stuff from the hobby shop, comes in two squeeze bottles that equal out to 9 oz combined. To put one wrap of glass on the outside of a 4"x36" tube am i going to use about 1/2 of that, all of that..??? I'm starting to think the fiberglassing supplies alone will add up to another 30 to 40 bucks..

Thanks for the quick replies and info...

#### Justin Horne

##### Well-Known Member

From what I've gathered, finishing epoxy is just a sandable epoxy. PML sells some, 20 minute. Almost all that I have seen is 20 minute, so that may be the only kind. It's also used for sealing wood fins.

Justin

#### DPatell

##### Well-Known Member

I'm sorry, I read it afterwards and had to double read it. Hope this will make a little more sense...

Body tubes: 1 layer of 6oz., then 1 layer of 2oz.

Couplers (altimeter bay, or zipperless coupler if that's how you plan to deploy): 2 layers of 6oz. glass internally.

You could double wall if you'd like; that would make a stockier rocket. Build for the biggest motor you plan to put in there! Overkill is okay in this instance.

#### Ryan S.

##### Well-Known Member

I would do essentially what Dan said. Put big internal fillets (thickened, really thickened epoxy) so when you do the tip to tip it will be easier for the glass to make the transition from fin to MMT. Definitely overbuild, it is fun. I like heavy little rockets that can take big motors; it looks cool when the flame is bigger than the rocket itself.

#### daveyfire

##### Piled Higher and Deeper

Finishing epoxy is a longer-cure epoxy that is also quite thin and is designed for lamination. It does not contain any of the fillers added to the hobby shop 5 and 30 minute resins to make them 1:1 mix and a lot thicker. It's very, very thin and runny.
It is NOT, however, sandable. No epoxy is sandable, and if it is, it has been adulterated with some filler which will weaken the joint. I've found that covering the layer of glass with another layer of resin works to fill in the weave, but it's incredibly heavy and is quite brittle (we land on lakebed, and I bring my rockets down quick -- on rockets I've filled this way, it's a cosmetic repair every flight). The way to fill in the weave is to use a high build primer, as Eric mentioned, in 2 or 3 coats, or to squeegee in epoxy filled with West System 407 Fairing Filler or similar. This is not only lighter than straight epoxy, it also will hold up better and be easier to sand.

Dealing with the overlap isn't that bad -- just overlap the cloth about .5" to 1" and fill in with SuperFil or thickened epoxy. Sand it down (get a power sander if you don't have one yet -- you'll be needing it!) and it'll practically be invisible.

Epoxy-wise, I'd recommend just diving in and getting a high-quality marine or aerospace grade product. It has practically an infinite shelf life, is much stronger than hobby store epoxies, and is an incredibly versatile product -- from thin for lamination to incredibly thick for fillets, from unmodified to modified with carbon fiber, fiberglass, or kevlar pulp for fin fillets, etc. etc. etc. It's about $40 to get into the West System group A size, which will last you a while (until you become like Carl and start to build 5 big rockets at once!). This includes the handy pumps for measuring the 5:1 mix ratio.

Aeropoxy is even cheaper and is designed for the aerospace industry -- it's what amateur aviators use to hold together their airplanes. It also has a higher temperature resistance than West System, and can be oven-cured in 90 minutes. Aeropoxy can also be post-cure treated for even better temperature resistance. It's about $30 to get a gallon of the stuff -- not too bad!

Enjoy the ride while you're glassing.
It'll make your rockets a lot stronger for the high-speed flights and less-than-nominal landings.

#### firemanup

##### Well-Known Member

Ok, browsing the internet another option just came up... Looking at Giant Leap's website I found the Kevlar sock and easyglass socks... emailed them to find out if one wrap of each would be similar in strength to two 6oz and one 2 oz layer of glass...

Now during the build, when would i do what..?? LOL, sorry but i'm unsure here.. I'm looking at building the motor mount fin can outside of the rocket then slotting the end of the body tube and sliding it up and in... I'm wondering when to glass different pieces.. ie build the fin can, tip to tip glass the fins. Cut fin slots then glass or sock the body tubes.. Then insert the fin can into the rocket and do the internal and external fillets.. my only question is, how well will these fillets adhere to already glassed fins and body tube..?? or do i have the build order messed up..?

#### firemanup

##### Well-Known Member

I appreciate the replies and this is actually one of the first threads i've decided to print off.. I've decided to do this project for sure, should start in the next 2 to 4 wks.. and i will be taking, as close as i can possibly get, carl type build pics. The man just sets a standard ya know..

Davey, Aeropoxy comes in a one gallon can? I take it there's no mixing of this stuff..? or is there..? 30 bucks i'll fork out for this project..

With regular glass i'm starting to assume you must overlap for strength issues..? I was trying to figure out how to make it match up evenly and think i can do that.. With 3 layers of glass, would you want to stagger the overlaps so they're not all on top of each other..?

#### daveyfire

##### Piled Higher and Deeper

I don't like Kevlar Sock for any rockets under 6" diameter. Wetting it out is like pouring epoxy on a sponge -- it just keeps drinking it up more and more.
I recommend doing a standard wrap -- it's a little tougher, but the end result is lighter and stronger. I build exactly the way you have described, except I slot after I glass. If you extend the slots all the way out the back of the tube, IMHO, it's a pain in the butt to get the tube to stay in the right shape as you pull the glass over the surface.

Epoxy will stick GREAT to the glassed surface -- if you don't fill the weave until after you've filleted, it leaves a bunch of nooks and crannies for the epoxy to grab on to. The only caveat is to sand a little bit to clean up the "amine blush". If you live in an area with any humidity, the curative will react with it and produce this oily layer on top of the cured laminate. Not a strength problem -- just a bit of an annoyance. I've noticed that Aeropoxy doesn't blush nearly as much as West System.

You can get Aeropoxy from Aircraft Spruce (http://www.aircraftspruce.com) or from ShadowAero (http://www.shadowaero.com) in 1 quart, 1 gallon, 5 gallon, and larger sizes. I've gone through one quart kit and am about 1/8 of the way into a gallon kit since I started out with the system in December of 2002. Aeropoxy is a 3:1 mix ratio by volume or a 100:27 mix ratio by weight -- you don't get pumps, so you can either weigh out the resin and hardener on a scale (what I do) or do a volume measure (fill up a small cup with rice three times and dump it into a bigger cup each time, mark that level, and that's your resin amount -- fill up the big cup to the line with resin, fill up the small cup all the way with hardener, dump em together and mix). A little more work, but a quart of Aeropoxy costs $15 and a quart of West costs $40... you do the math!

While you're there, pick up a kit of SuperFil. It's essentially Bondo for rockets. I'm always pushing this stuff on the forums, but I don't work for them, I'm just an incredibly satisfied customer. It mixes (by eye is OK) in a 2:1 ratio of part blue to part white and is very, very thick.
It is also very, very light, weighing 3.3 lbs per gallon. The best part about this stuff is that it is epoxy based. Due to their chemistry, you can bond epoxy products to polyester surfaces, but not polyester products to epoxy surfaces. Bondo is polyester based (you can tell because the curative is measured in drops, not in a proportion to the product), and as such doesn't stick very well to the surface. SuperFil is magic stuff -- it smooths out wonderfully and can be used to fill your overlap or fill the weave on small rockets. It's creamy smooth. I have had a quart kit for over four years and have barely made a dent in it. Best $20 I ever spent.

The up-front cost on these materials is a little high, but they last a long, long time. I highly recommend starting out with the right stuff!

Overlap is more of a technique question than a strength question. When I started out, I always tried to get the glass to come right up to itself again on the overlap. I never could -- I always came up short. The problem with the coveted "exact wrap" is that 1. the glass shrinks slightly when resin is applied, and 2. the weave is inevitably pulled out of the glass, unless you tape it off like Carl does (but then you get to cut off the tape!). It's not a problem, and glass layers are about 0.006" thick, so it's pretty easy to get a decent finish over the overlap. I've heard of a method where you bring the cloth around and pull out the weave so that the two ends of the cloth mesh together and it becomes perfectly smooth, but I've never tried it -- by the time I reach the overlap, I'm done with the tube and I toss it in the oven. I just let the power sander take care of the rest!

When you put on layers, you do indeed stagger the overlaps to prevent the tube from becoming egg-shaped. That's no fun!

Hope this helps more than it confuses.

#### firemanup

##### Well-Known Member

Davey, can the Aeropoxy be used as regular old epoxy for building also, or is it just for laminating..?
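As an aside, the Aeropoxy mix ratios quoted earlier in the thread (100:27 by weight, 3:1 by volume) are easy to get wrong at the workbench, so here's a minimal sketch of the arithmetic. The function names are purely illustrative, not from any vendor tool:

```python
# Sketch: compute Aeropoxy hardener amounts from the ratios quoted
# in the thread (100:27 resin:hardener by weight, 3:1 by volume).

def hardener_by_weight(resin_grams):
    """100:27 resin:hardener ratio by weight."""
    return resin_grams * 27 / 100

def hardener_by_volume(resin_ml):
    """3:1 resin:hardener ratio by volume."""
    return resin_ml / 3

print(hardener_by_weight(100))  # 27.0 g of hardener for 100 g of resin
print(hardener_by_volume(90))   # 30.0 ml of hardener for 90 ml of resin
```

This matches the rice-cup trick described above: three small cups of resin to one small cup of hardener.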
#### firemanup

##### Well-Known Member

Also, am I looking for E-glass or S-glass, or does it really matter? I'm finding from the web that the E-glass is stronger but a little stiffer to work with. Which is preferred for rocketry..?

#### Ryan S.

##### Well-Known Member

#### daveyfire

##### Piled Higher and Deeper

TRF Lifetime Supporter

Aeropoxy can most DEFINITELY be used for building! You'll need some sort of filler to thicken it up and keep it from running all over the place (West 406 is what I use -- cheap and available locally -- but any sort of fumed silica or microballoons will work), but it's incredibly strong.

On my new 3" minimum-diameter all-fiberglass rocket, I've just begun the reinforcement process on the fins. Using Aeropoxy with chopped carbon fiber added, the fillets are now on. I was out sanding them smooth for the next layer of reinforcement (5.7 oz carbon) and walked away briefly to get some fresh sandpaper. The booster (4 ft long, weighs ~3 lbs) fell over on its side, hitting the concrete, and bounced four or five times. No damage to the surface-mount fins.

I've also done a surface-mount demonstration for the uncertain with my Kick Me rocket. It's 38mm minimum diameter with G10 fins on it. I had the fillets completed with Aeropoxy and fumed silica when I performed the "test": I picked up the rocket and whacked it on the ground as hard as I could. No damage either. On its second flight, it ripped out the shock cord and came in quick from 5000+ feet. At last, one of the fins cracked at the root. Good, strong stuff, that!

That's the beauty of a true epoxy system: you start with a basic, very thin epoxy that has excellent strength characteristics, then build it up into whatever you need it for -- fillers, fibers, pigments, cloth, whatever. It's much more versatile than thick hobby shop epoxy, which starts thick and stays that way. It's easier to thicken thin epoxy than to thin thick epoxy. Wow, that's confusing!
Sorry, I was also off on the Aeropoxy price... it's $35.90/gallon for resin and $12.95 for the hardener to cure it, so overall it's about $50ish. A gallon of resin is a lot, though... think of how many hobby shop bottles that is!

E-glass vs. S-glass... it doesn't really matter for our applications. I typically use S-glass just 'cause it's cheaper, and I've had no problems with it so far, even on some pretty wild flights.

Hope this helps some!

#### strudleman

##### Well-Known Member

This has been the most interesting thread! Thanks to everyone who's posted (questions OR answers!). I'm eagerly awaiting my first chance to try glassing!

#### scm86

##### Well-Known Member

Jason, sorry to add another option, but you can get the fiberglass, carbon, or carbon/Kevlar sleeves from http://www.aerosleeves.com. The advantage to these is that the loading of the fibers is in the direction that you want it, and there is no overlap at all -- just wet it out after putting it on the tube and stretch it out. That simple. I'm told that a single layer of the carbon stuff would withstand Mach on its own, without any sort of paper tube attached to it.

As for epoxy, Dave Muesing sells really nice epoxy called Mr. Fiberglass epoxy. I use the 3:1 ratio stuff; that's the slow one, I think. Perfect for laminations since it's thin, and it's awesome for general construction. The phenolic microballoons he sells are good for making large fillets that are strong and light. www.mrfiberglass.com

Oh, one last thing: do the slots after you glass and fill/sand the tube. That way there's no chance of sanding too hard and crushing the tube at the weak points where the slots are...

Scott McNeely
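For anyone who wants to "do the math" on the gallon-kit pricing quoted above, here's a quick sketch. The prices are the (dated) ones from the thread, and the per-quart figure assumes the 3:1-by-volume mix ratio mentioned earlier:

```python
# Rough cost check using the prices quoted in the thread
# ($35.90/gal resin, $12.95 hardener) -- purely illustrative.
resin_per_gal = 35.90
hardener_kit = 12.95
kit_total = resin_per_gal + hardener_kit
print(round(kit_total, 2))  # 48.85 -- roughly the "$50ish" quoted

# Mixed 3:1 by volume, a gallon of resin plus a third of a gallon
# of hardener yields about 1.33 gallons (5.33 quarts) of epoxy.
mixed_quarts = (1 + 1 / 3) * 4
print(round(kit_total / mixed_quarts, 2))  # ~9.16 per mixed quart
```

Under $10 per mixed quart versus $40 for a quart of West System, which is the comparison the thread keeps coming back to.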
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-7th-edition/chapter-9-counting-and-probability-section-9-3-binomial-probability-9-3-exercises-page-675/8

## College Algebra 7th Edition, published by Brooks Cole

# Chapter 9, Counting and Probability - Section 9.3 - Binomial Probability - 9.3 Exercises - Page 675: 8

#### Answer

$0.36015$

#### Work Step by Step

Probability of success: $0.7$. Probability of failure: $1 - 0.7 = 0.3$.

The probability of exactly one failure in five attempts (which implies four successes) is $\binom{5}{4} \times (0.7)^{4} \times (0.3) = 0.36015$.
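The binomial arithmetic in the step above can be verified directly, for example with Python's standard-library `math.comb`:

```python
from math import comb

# P(exactly 4 successes in 5 independent trials), success probability 0.7:
# C(5, 4) * 0.7^4 * 0.3
p = 0.7
prob = comb(5, 4) * p**4 * (1 - p)
print(round(prob, 5))  # 0.36015
```

This is the binomial probability formula $P(k) = \binom{n}{k} p^{k} (1-p)^{n-k}$ with $n = 5$, $k = 4$, and $p = 0.7$.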