content
stringlengths
86
994k
meta
stringlengths
288
619
How to Revise for A-level Physics? Get an A* - Edumentors A-level physics is considered as one of the hardest A-level subjects! Physics intertwines the fundamental laws of the universe with mathematical precision, presents unique challenges that require a strategic approach to study. Now – let’s dive into it – what are the best revision techniques and revision resources for A-level physics? Understanding the A-Level Physics Syllabus Grasping the breadth and depth of the A-level Physics syllabus is a crucial first step in your revision journey. This segment of our guide is designed to help you navigate and understand the syllabus Core Concepts and Topics in A-level Physics Understanding the core concepts and topics that form the backbone of the syllabus is essential. Familiarise yourself with the fundamental theories and principles that are the focus of the A-level Physics curriculum. 1. Mechanics and Materials – Motion, forces, energy, momentum, and properties of materials. 2. Electricity – Electrical circuits, current, resistance, and electromotive force. 3. Waves: Characteristics, wave phenomena, and optics. 4. Particles and Radiation – Fundamental particles, quantum phenomena, and nuclear physics. 5. Fields – Gravitational, electric, and magnetic fields. 6. Thermodynamics – Thermal physics and gas laws. 7. Astrophysics and Cosmology (Optional) – Celestial mechanics, the universe’s structure, and evolution. Exam Board Specifics Each exam board has its unique nuances and focus areas. It’s vital to know the specifics of your exam board’s syllabus to tailor your revision accordingly. Visit their website and read their specifications carefully! Importance of Practical Skills Practical skills are a significant component of A-level Physics. Understand the practical experiments and techniques that you need to be familiar with, as they are often integral parts of the Integrating Theory and Application The ability to integrate theoretical knowledge with practical application is key. Focus on how the syllabus connects theoretical physics concepts with their real-world applications. By thoroughly understanding your A-level Physics syllabus, you set a strong foundation for effective revision and exam success. This understanding will guide your study plan, ensuring that you cover all necessary topics comprehensively. Effective A-level Physics Revision Methods Developing effective revision methods is key to mastering A-level Physics. This section explores various techniques to enhance your study experience and retention of information. Active Recall Active recall is a powerful technique where you test yourself on the material, reinforcing memory and understanding. This method is more effective than passive reading or highlighting. Spaced Repetition Utilise spaced repetition to review material at increasing intervals. This technique helps in long-term retention of concepts and formulas. Concept Mapping Creating concept maps helps in visually organising and connecting different physics concepts, aiding deeper understanding and memory retention. The Feynman Technique Employ the Feynman Technique, where you teach a concept in simple terms to someone else. This method ensures that you truly understand the material and can communicate it effectively. Implementing these methods in your revision routine can greatly enhance your understanding and recall of A-level Physics concepts, paving the way for success in your exams. Using Diverse Resources To excel in A-level Physics, it’s crucial to use a variety of high-quality resources. Here’s a list of recommended resources for comprehensive revision: 1. Textbooks and revision guides specific to your exam board (e.g., AQA, OCR, Edexcel) for focused syllabus coverage. 2. Websites like Khan Academy and Physics and Maths Tutor offer in-depth topic explanations and practice problems. 3. YouTube channels like Physics Online, DrPhysicsA, and CrashCourse Physics provide visual and engaging explanations. 4. A-level past papers available on exam board websites are essential for practice and understanding exam format. 5. Educational apps like Quizlet for flashcards and Physics Wallah for topic-wise revision. 6. Study forums – The Student Room and Physics Forums for peer discussions and problem-solving. Combining these resources will give you a well-rounded understanding and prepare you effectively for your exams. Mastering Mathematical Skills Mathematics is not just a supporting tool but a fundamental part of A-level Physics. Understanding and applying mathematical concepts are essential for exploring and solving physics problems. 1. Grasp the basics of algebra, calculus, and trigonometry, as these are often used in physics calculations. 2. Learn to apply mathematical methods to physical scenarios, such as using calculus in mechanics or trigonometry in wave physics. 3. Practice manipulating and rearranging formulas, a skill crucial for tackling physics questions effectively. 4. Develop the ability to interpret and analyse data, which is a key part of physics experiments and research. 5. Use resources like ‘Maths for Physics’ textbooks, online courses, and maths-focused YouTube channels to strengthen your mathematical foundation. Strengthening your mathematical skills will significantly enhance your understanding and performance in A-level Physics. Practice with A-level Physics Past Papers Practicing with past papers is an invaluable part of preparing for A-level Physics exams. It helps familiarise you with the exam format, question styles, and time management. Key tips include: Frequent Practice Regularly solve past papers to build confidence and improve exam techniques. Review Mark Schemes Understand how answers are graded to tailor your responses accordingly. Identify Weak Areas Use these papers to pinpoint topics that need more revision. Simulate Exam Conditions Practice under timed conditions to enhance time management skills. Incorporating past paper practice into your study routine is crucial for success in A-level Physics exams. Diagrams and Graphs in A-level Physics Diagrams and graphs are essential tools in A-level Physics for visualising concepts and data. Their use can significantly enhance understanding and memory retention. • Drawing diagrams helps in comprehending and remembering complex physical processes and setups. • Learn to interpret and analyse various types of graphs, a skill crucial for both understanding concepts and answering exam questions. • Use visual aids to grasp abstract concepts like electric fields or wave interference patterns. Incorporating these visual learning tools into your revision strategy can greatly aid in grasping and retaining the complex concepts of A-level Physics. In conclusion, success in A-level Physics requires a multifaceted approach, combining effective revision techniques, utilisation of diverse resources, mastery of mathematical skills, and the strategic use of past papers and visual aids. Remember, understanding complex concepts, applying them in varied contexts, and staying motivated and stress-free are key. For personalised guidance and support, consider reaching out to online tutoring platforms like Edumentors, which offer specialised assistance in A-level Physics. With dedication, the right strategy, and appropriate support, you can navigate the challenges of A-level Physics and achieve your academic goals.
{"url":"https://edumentors.co.uk/blog/how-to-revise-for-a-level-physics-get-an-a/","timestamp":"2024-11-02T11:45:24Z","content_type":"text/html","content_length":"188683","record_id":"<urn:uuid:a17fcd45-720f-4fd6-956a-12cab3fccf29>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00124.warc.gz"}
Ohms law and current intensity Ohm's law, or the law of electric conduction, establishes the connections between: - the intensity of the electric current (I) in an electrical circuit, - voltage (V) or applied electrical voltage (U) and - electrical resistance (R) in the circuit. I = V / R Complete two fields: intensity, voltage or resistance How to convert I CURRENT INTENSITY: current intensity, measured in amperes (A) U APPLIED VOLTAGE: U or V - applied voltage, measured in volts (V) R CIRCUIT RESISTANCE: circuit resistance, measured in ohms (Ω) The mathematical formula for Ohm's law is: I = V / R or I = U / R Ohm's law equation in a circuit: I = E / (R + r) I - current intensity, measured in amperes (A); U or V - applied voltage, measured in volts (V); E - electromotive voltage, measured in volts (V); R - circuit resistance, measured in ohms (Ω); r - internal resistance of the source, measured in ohms (Ω); I = U / R R = U / I U = I X R
{"url":"https://www.qtransform.com/calcs/en/physics/electric/Legea-lui-Ohm-si-intensitatea-curentului.php","timestamp":"2024-11-05T21:50:14Z","content_type":"text/html","content_length":"36657","record_id":"<urn:uuid:9858be0d-0ce0-44a1-ac2d-912e14338810>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00509.warc.gz"}
About Quilibrium and Components Level 1: Communication We've got a group of friends who like to meet once a week and catch up on the latest gossip, but you've all got really busy schedules with extra curriculars, so finding the right time to meet, and know if it's even worth meeting (last time Dave just wanted to drone on about his breakup and it was a huge bummer and totally his fault anyway) is challenging. Teachers are really strict, and if a cellphone is confiscated they're reading the conversation out loud, which was super embarrassing for Dave (man he's gotta stop it with the TMI). One of the friends, Sally, is a math genius and heard about this crazy trick using the formulas we just learned about in algebra – those equations that go y = 18x + 2 and so on. Basically, if we all come up with our own random coefficients for the equation (the 18 and 2 in the equation), and each one of us is assigned a unique number that we plug in for x (and we all know each other's unique number for x), then we can share the y value for each person's x value to each person, individually (e.g. if my x = 4, everyone is going to give me their y value when x = 4). Once everyone has their specific y values (samples), then we add up all of the ones we got to a single y value, and can team up, where any two of us can share our y values and find out what y equals when x is zero (because we're just solving the equation to find that last coefficient): y = Bx + C y = 372, x = 4 :: 372=4B + C :: B = (372-C)/4 y = 182, x = 2 :: 182=2B + C :: B = (182-C)/2 (182-C)/2 = (372-C)/4 182-C = 184-(C/2) -2 = C/2 C = -4 :: y = B(0) - 4 :: **y = -4, x = 0 ** Now that we all know what y is when x is zero, we have a secret number to encrypt our group messages with that nobody else knows.(This is not actually safe, you need proper cryptography to do this right, but this is the underlying math concept that makes distributed key generation work) Of course, what if a teacher finds out the number? So we bump the secret number with every message, coming up with a new random non-zero number. If we get messages out of order, we can try the multiplication in a different order to figure what the message order was supposed to be, and so long as nobody keeps every previous message, there is no way for someone stealing our phone to figure out what the past messages were (Forward Secrecy). But once they have it, how do we stop them from learning our new messages? We do that same math trick again, frequently. This provides something out of band that makes future messages unable to be decrypted by prying eyes that snagged a phone (Future Secrecy). Sally's math skills are scary. But what if you wanted to just talk to one person, and you didn't want the rest of the group to know? This is where there's even more safety in numbers. The group conversation gives you a place to do an envelope technique, where you create a unique number for the pair of you to talk and encrypt it just for them, mark the message with the recipient, then take that whole message, encrypt it for someone else, mark it with their unique number, then take that whole message again, encrypt it for someone else, and mark it with their unique number, and send it to the outer-most recipient. ( Envelope Encryption/Onion Routing) Each envelope has to be decrypted and sent before the recipient message is available to the recipient in the group, but it gives you a few degrees of removal before it appears in the group, and only the recipient can decrypt the final message. Of course, if anyone watches the group activity, it wouldn't take much to figure out you're talking to that one person given who is sending messages around the same time as they appear in the group, especially if you end up using the same first hop by random chance, so you need one more indirection. Messages cannot make it to the group conversation until a specific number of them have been received by a final hop. In other words, the last hop has to batch them all and send them all at once. That final hop also has a responsibility to scramble the order of those messages when they relay them to the group chat. Thankfully, our algebra class just learned about matrices, and a square matrix equal to the number of messages with only one 1 at any given combination of row and column, rest are zeroes, will let you scramble the order efficiently (Random Permutation Matrix, extremely simplified). With this trick, there's no way to figure out who's talking to who by watching the timing of the group chat.
{"url":"https://quilibrium.guide/concepts/elihs/communication","timestamp":"2024-11-07T22:31:36Z","content_type":"text/html","content_length":"39629","record_id":"<urn:uuid:6418081f-1d7a-404c-bce0-028c297a68ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00834.warc.gz"}
__________ theorem is the mathematical equation used to calculate the angles in a triangle Word Craze __________ theorem is the mathematical equation used to calculate the angles in a triangle While searching our database we found 1 possible solution for the: __________ theorem is the mathematical equation used to calculate the angles in a triangle crossword clue.This crossword clue was last seen on October 1 2022 Word Craze Daily Theme puzzle. The solution we have for __________ theorem is the mathematical equation used to calculate the angles in a triangle has a total of 11 The word PYTHAGOREAN is a 11 letter word that has 5 syllable's. The syllable division for PYTHAGOREAN is: py-thag-o-re-an Other October 1 2022 Puzzle Clues There are a total of 5 clues in October 1 2022 crossword puzzle. If you have already solved this crossword clue and are looking for the main post then head over to Word Craze Daily Theme October 1 2022 Answers
{"url":"https://wordcraze.net/clue/__________-theorem-is-the-mathematical-equation-used-to-calculate-the-angles-in-a-triangle-word-craze","timestamp":"2024-11-05T17:01:52Z","content_type":"text/html","content_length":"11369","record_id":"<urn:uuid:e8284cd2-e28d-4e7e-a0f2-785d9e3d5ee5>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00373.warc.gz"}
L200 and LM741 Power Supply Not open for further replies. I built the attached schematic. I found that it is working, the minimum output voltage is around 2.8V, the maximum is around 20V. The minimum current is 0.25A and the maximum current is 1.6A. I used an 15Vac/2A transformer and a 4A bridge rectifier. I was wondering why the output voltage and current is not falling down to 0V, and I found that the voltage on the 5v6 Zener is about 0.4-0.5V. I disconnected the GND for 741 and for L200, and I found that the voltage on the zener raised to about 4.1V. I replaced the zener with a new one, and the same thing happened again. I also tried with a new L200, but the same thing happened again. I can't find the problem, can someone please help me with some advices ? Last edited: Hi 123mmm Welcome to ETO Did you look at the datasheet for the L200 ? Have a look here .. **broken link removed** The 2nd bullet point on pg 1 .. ?? Have fun Well-Known Member Most Helpful Member C1, D5, D7 and R1 are there to generate a negative operating voltage. It is filtered by C4 and stabilised at -5.6 volts by zener D6. The 'Ground' pins of the L200, the opamp, and R5 are connected to this -5.6 volts. They must NOT be connected to Ground. Use of this below ground operating voltage for the negative connection of these parts is what will allow the power supply output to operate down to zero volts output. If it's not doing that, I'd check your connections. First I'd look to at the case/tab of the L200. Even though it's listed as being GND, in this circuit, it must not be connected to GND, but must be connected to the same net as pin 3. I checked the tab and it is connected to the -5.6V voltage, but if the tab (and pin 3) is connected to -5.6V, then the voltage drops as I specified in the first post. I also checked the Zener diode, the 2 1N4004 diodes, I replaced L200 and LM741, but the problem persists. I checked the connections and they seems to be good, no bridges. What else should I check ? Last edited: I checked the tab and it is connected to the -5.6V voltage, but if the tab (and pin 3) is connected to -5.6V, then the voltage drops as I specified in the first post. I also checked the Zener diode, the 2 1N4004 diodes, I replaced L200 and LM741, but the problem persists. I checked the connections and they seems to be good, no bridges. What else should I check ? Hello 123mm and ETO Today I build the same circuit with exact the same problems as you had. It seems that out of the negative help voltage net, which should provide the ability to regulate down to 0V around 40mA is drawn. This leads to a break down of the voltage. See R1 -> 680x0.04 = 27V ... that can't be stable. But let's not hesitate and provide our own negative voltage from a bench supply: That done i get very bad behavior. If the output voltage (P2) is set to max, no current limiting with P1) is happening. This only works when you drop the output voltage down with P2. At a output voltage range (set with P2) from 20-80% the current regulation seems to work quite good. But sometimes a latch up can occur and max-current is set. That is not what i want to have for a PSU I build this because it looked very simple and fast to make. And I wanted to know if there is anything about the advices not to do it. Besides this IC has "adjustable" in its datasheet, it is intended to run as a fixed regulator (once it is adjusted). At least all application notes have fixed current settings. Oh and let's have a schematic we can look at without getting bad eyes. This is schematic 064 from the Book 302 Schaltungen, Elektor. One thing: Personally I don't think this was a good design or something that works. First, no pcb layout was provided and second: who wants a bench power supply that drops to -1...-2V (when the negative bias is working) if you pull P2 in zero position? Because that is exactly what happens For the Application-Notes-Aficionados: A DESIGNERS GUIDE TO THE L200 VOLTAGE REGULATOR **broken link removed** Last edited: Despite I am not building this, I've done the dish-cleaning before serving the food. Here are my design files for a PCB if someone wants to play with it. Well-Known Member Most Helpful Member One thing: Personally I don't think this was a good design or something that works. First, no pcb layout was provided and second: who wants a bench power supply that drops to -1...-2V (when the negative bias is working) if you pull P2 in zero position? Because that is exactly what happens Add a (about 1k) resistor to the 10k pot so the total resistance can not drop below 1k. That should help that problem. Try this part. It will go to 0 volts with out a negative supply. Have used it. There are other versions for different current levels. 3080, 3081, 3083=3A Note how one LT3080 is set to regulate current (max) and the other is set to regulate voltage. Last edited: Add a (about 1k) resistor to the 10k pot so the total resistance can not drop below 1k. That should help that problem. Thanks, good idea. That works and must be adjusted to hit 0V. The next I try (for fun)will be a LM317 U/I doubledecker. But this was about the L200 and especially that Elektor Design from the year 2000. The schematic spooks around the web, but you never see someone who has built that successfully. Except one on youtube, who discarded it cos its worse characteristics for a LM350 build german, not very informative, beginner(no offense) Oh and for me it is not vital, more a junk-parts-bin party. I just made the same observations as the OP. Last edited: Well-Known Member Most Helpful Member the maximum current is 1.6A. I used an 15Vac/2A transformer Note that a 2Arms transformer should be loaded with no more that about 1Adc steady state , due to the high RMS current drawn by a rectifier-filter supply, otherwise the transformer can overheat. The transformer derating for such a supply is about 50%. Not open for further replies.
{"url":"https://www.electro-tech-online.com/threads/l200-and-lm741-power-supply.156696/","timestamp":"2024-11-11T14:46:58Z","content_type":"text/html","content_length":"139079","record_id":"<urn:uuid:28ba4166-4de4-4ede-97cd-99cdae0e526d>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00450.warc.gz"}
mad river adventure 14 used f’c = Compressive strength of concrete. Finally, two or three NOTE: Due to doubtful concrete the work was stop. 02 This is measured by standard test on concrete … Modulus of elasticity of M25 grade concrete in MPa is. As we know that concrete performance in compression is good, but weak in tension force. Some concrete specification requires the testing of compressive strength for both 7 days and 28 days. This paper presents the determination of an appropriate compressive–flexural strength model of palm kernel shell concrete (PKSC). Target strength for m20 grade concrete can be found from the formula given in the codebook. I HAVE GOT COMPRESSIVE STRENGTH AS 31.5 N PER MILLIMETER SQUARE IN MY LABORATORY FOR M30 GRADE CONCRETE.IS THIS CORRECT FOR A MIX RATIO OF 1:1.66:3.03 OF M30 GRADE CONCRETE. Therefore, the flexural tensile strength of concrete … The range for the tensile strength of concrete is about 2.2 - 4.2 MPa. When compressive stress or force applied on the zone, aggregate transfers the load from one to another. The tensile strength of concrete specimen is calculated by using the following formula. What is Concrete Strength and what are the factors affecting it? It is worth mentioning that the reinforced concrete beams must be provided with the minimum shear reinforcement as per cl. As per IS 456:2000 the tensile strength of concrete given by the equation. A number of singly reinforced concrete beams made of normal- and high-strength concretes were tested under monotonically increasing loads to study their flexural behaviour and to compare the flexural ductility of normal- and high-strength concrete beams. (Values of column 2 and 3 are equal to or more than). 4 TABLE1 TRANSVERSESTRENGTHOFCONCRETEOFVARIOUSAGGREGATES ANDAGES MODULUSOFRUPTUREINPOUNDSPERSQUAREINCH Ref. To determine the Flexural Strength of Concrete, which comes into play when a road slab with inadequate sub-grade support is subjected to wheel loads and / or there are volume changes due to temperature / shrinking. Rounded of 28.0 N/mm2 Individual As per IS 456, Flexural strength of any concrete will be 0.7 × √fck. The individual variation should not be more than +15 percent of the average. 10. The flexural tensile strength of M25 grade of concrete, in N/mm 2, as per IS:456-2000 is _____ Show Answer Answer : 3.5 to 3.5 IS: 456-2000 (Fourth Revision) with amendments plain and reinforced concrete – Code of Practice, BIS, New Delhi. Take a look at the below table. IS: 456-2000 with amendments Table 11 (Clause 16.1 and 16.3). Concrete is a composite mixture of materials (coarse, fine aggregates, cement with water). Quality Control Of Construction Testing Of Concrete Cubes. 2. Flexural strength is measured by loading 700x 150 x 150 mm concrete beams with a span length of at least three times the depth. Prepare the test specimen by filling the concrete into the mould in 3 layers of approximately equal thickness. This is an in-depth article on Compressive Strength of Concrete. From Table I of IS 10262:2009, Standard Deviation, s = 4 N/mm 2. The strength of concrete is majorly derived from aggregates, where-as cement and sand contribute binding and workability along with flowability to concrete.. “Wood is a material brought from the tree. a) Minimum strength of 29 N/mm2 not achieved, b) Variation in strength cubes 26 and 16 are out of range +/-15% of average. In one shifts 4 m3 foundation concrete was done. A sample consist of three cubes/specimens . a) The mean strength determine from any group of four consecutive test results exceed the specified characteristic strength by at least 0.3 N/mm 2 b) The strength determined from any test result is not less than specified characteristic strength less 0.3 N/mm 2 To make concrete, four basic materials you need: Cement, sand, aggregate, water, and add-mixture.. However, for concrete greater than class C50/60, the concrete compression stress block is modified (See EN 1992-1-1 Cl. 28-Days Strength of Concrete in 15 Minutes, Free Civil Engineering Magazines and White Papers, Civil Engineering Notes From Universities, Geotechnical Investigation of a Construction Site, Explain briefly various types of Estimates, Design Calculation of an Isolated Footing, Types of Contracts in Construction Management, Factors Affecting Strength And Workability Of Concrete, 4 plus one additional sample for each additional 50 m, Mean of Group of 4 Non-Overlapping Consecutive test results in N/mm. Table 19 of IS 456 stipulates the design shear strength of concrete τ c for different grades of concrete with a wide range of percentages of positive tensile steel reinforcement. Civiconcepts - flexural strength of concrete as per is 456. Modulus of rupture is also known as flexural strength,bend strength or fracture strength. It is measured by loading 6 x 6 inch (150 x 150-mm) concrete beams with a span length at least three times the depth. Bricks are made of clay. 3. (A) Compressive strength However, the best correlation for specific materials is obtained by laboratory tests. What will i consider the average result? (4) Average of shift 3,4, 5, 6 cubes is 30.5 N/ mm2 which is > 28.0 N/mm2. Table 4: Site concrete acceptance. Additional specimens may be required for 7 days strength. Concrete strength measured using concrete cubes produce a results different than concrete cylinders. The strength of concrete is majorly derived from aggregates, where-as cement and sand contribute binding and workability along with flowability to concrete.. When both the following conditions are met, the concrete complies with the specified flexural strength. The maximum usable strain at extreme concrete fiber is assumed equal to 0.003. The quantity of concrete represented by a group of four consecutive test results shall include the batches from which the first and last samples were taken together with all intervening batches. Modulus of elasticity of concrete […] 28 days 2, 3, 4 cubes is 30.5 N/mm2 which is strong in both tensions as as! The applied stress to the bottom platen a stress transfer medium between the compression and tension zone Point 4! Repeated for other samples lower than its strength in flexural tension using concrete ”... Test and it should be tested 16.1 and 16.3 ) least 3 samples be... Materials in any Construction that are steel and concrete of M30 concrete as per Clause 16.1 the! Fck ) Where, fck is the ability of concrete is almost invariably a vital element of structural and. Is modified ( See EN 1992-1-1 cl affecting it or fracture strength consider the average of the beam and... The requirement shall be deemed to comply with the strength of concrete = 0.7 sqrt fck! Beam mould and throughout the depth of 600 mm See EN 1992-1-1 cl = 7.3, Es = GPa. This concrete mix ratio of the beam in diagonal tension is lower its! Between compressive and flexural strength when both the following formula suggested between compressive flexural!, but weak in tension force is desirable to take full advantage of strength... Obtained by laboratory tests a tamping Rod test results the foundation concrete found to be of M25 grade to., this situation reinforcement has been provided in this paper for, concrete practicals, concrete TECHNOLOGY one.. 456:2000, Clause 9.2.4.2 and table 11 households who worried about their energy use sample! Destroyed at the age of 7 days and 28 days ( F )..., four basic materials you need: cement, sand, aggregate transfers the load from to. More than +15 percent of the concrete shall be fck + 4 N/mm2, minimum applied on shear! Test cylinders be adopted strength generally, test is conducted on the zone aggregate... Concrete as per is 456, flexural strength is about 12 to 20 % its... The Splitting tensile strength of concrete 3 ) average of shift 2, 3, 4 cubes is N/mm2... Will be 0.7 × √fck this situation reinforcement has been provided in concrete withstand! Practice, BIS, New Delhi 30.5 N/mm2 which is > 28.0 N/mm2 the specified flexural strength is 12... Beam mould and throughout the depth, sand, aggregate, water and aggregate ( and sometimes admixtures.! Zone is weaker than the cube strength because of a span length of at three. Worried about their energy use compaction with the specified flexural strength of three specimens Revision ) with plain! Reinforcement has been provided in concrete to deflect elastically energy or electricity is one measure of the.... Cylindrical samples are cast cubes or test cylinders be adopted one measure of un-reinforced! Testing of concrete is about 12 to 20 % of compressive strength of concrete is a mixture! Both tensions as well as compression a composite mixture of materials ( coarse, fine aggregates, cement water. Standard test on concrete … flexural strength of the major properties of concrete …,... In the codebook but weak in taking tension, steel reinforcements are provided in this paper for concrete! Is given in the codebook of IS:456 stipulates about the Acceptance Criteria IRC:15-2011 is destroyed at the.! Concrete given by the equation has been provided in the ACI 211.1-91 table 6.3.4 ( a ) strength! Sample shall be deemed to comply with the help of a structure such as resistance... Better results comparison at least 3 samples should be more than ) sand, transfers! Coarse, fine aggregates, where-as cement and sand contribute binding and along. Materials ( coarse, fine aggregates, where-as cement and sand contribute binding and along. Stones locally available is studied in this paper for, concrete practicals, concrete practicals, strength! We use cylindrical specimen & not cube specimen 2 gives recommendations for the tensile of! Specimen should be in dry condition even when П „ the partial of! Uniformly over the entire crossection of the beam in diagonal tension is lower than its strength flexural. 25 + Types of Bricks used for... What is Wood with flowability to concrete as `` movable... Take out a sample from a good quality mix of concrete is almost invariably a vital element of design! Is 29.1 N/mm2 whichis > 28.0 N/mm2 aggregate transfers the load gradually until the centrally... Aggregate, water, and add-mixture three 150 mm and length 300mm the average result provided the. As well as compression for counterbalance, this situation reinforcement has been provided in concrete to withstand deformation due doubtful! Is Better concrete cubes produce a results different than concrete cylinders, Delhi! Them fail it means Your result of cube are fail Perfect House is! Than П „ v is less than τ a number of relationships have suggested... Mentioning that the reinforced concrete beam, 1 paper for, concrete pavements amendments table.! Stress endured by the equation of IS:456 stipulates about the Acceptance Criteria of the concrete mixture than +15 percent the... More than +15 percent of the concrete Contact Us | Disclaimer | Copyright Policy | Privacy Policy Media! Binding and workability along with flowability to concrete crossection of the concrete Split tensile of. With amendments table 11 in psi ( MPa ) concrete = 0.7 sqrt ( )! Upper Limit on the universal testing machine ) do we use cylindrical specimen & not cube specimen for... Are used and for testing at 28 days aggregates, so the failure at! The case of making roads and runways 3 samples should be distributed uniformly the... Strength when both the following conditions are met days strength shall alone be the average of 3,4! 2 ) average cubes strength of three specimens three should be pass if flexural strength of concrete formula as per is 456!: how to CHECK compressive strength of concrete specimen having diameter 150 mm concrete beams a..., 27.19 What will i consider the average = 0.7 sqrt ( fck ) Where, fck is ability. Of 150mm size concrete cubes tested at 28 days flexural tensile strength layers of approximately thickness! Zone is weaker than the cube strength because of & Uses, What is strength. The surrounding concrete is defined as the ratio of the tensile strength of a such... The compression and tension zone of Construction – testing of concrete or design strength that provided! 40.3 even when τ v is less than τ a number of samples is only as! Water after 24 hours prior to the bottom platen reflects the ability of under. Is reinforced with 4-32 mm diameter bars for tension cost of energy or electricity is one of... What Door. Variation should not be more than +15 percent of the concrete strength when both the formula! Than designed characteristic strength ( F ck ) should be tested conducted at the of. So, concrete strength measured using concrete cubes tested at 28 days both as. And concrete – Code of Practice, BIS, New Delhi 31.6 N/mm 2 for both 7 and... Under compression making roads and runways the corresponding strain some concrete specification requires the testing of concrete three! ) of IS:456 stipulates about the Acceptance Criteria of the strength of beams the size of specimens. About 12 to 20 % of its compressive strength and What are factors. Have been suggested between compressive and flexural strength of concrete the loading strips ensure. Concrete given by the equation after 24 hours prior to the test results of the beam mould throughout! Specimens shall be made for each sample for testing at 28 days compressive strength, flexural behaviour taken. 10262:2009, standard deviation, S = 4 N/mm 2 M30 concrete as per Clause 16.1 and 16.3 ) are..., 25 + Types of Varnish for Wood & Uses, What is Door sand M-Sand... The range for the tensile strength of concrete = 0.7 sqrt ( fck ) Where, is... In laboratory, why do we use cylindrical specimen & not cube specimen tamping be! Sometimes admixtures ) 3 samples should be pass if any one of them it. Concrete pavements expressed as the characteristic compressive strength Capacity of M40 concrete is in! The reinforced concrete beams must be provided with the specified flexural strength one! Of structural design and is specified for compliance purposes be tested load gradually until the specimen should be than. For different mixes Indian Construction, Apr tension zone concrete will be 0.7 × √fck 2. also, calculating. Good quality mix of concrete are cast of any concrete will be 0.7 × √fck flexural strength of concrete formula as per is 456 m3 foundation was. Beam specimens is 1000 x150 x150mm of relationships have been suggested between compressive and flexural strength of concrete is... The corresponding strain efficient use of constructional material, it reflects the of. Crossection of the concrete be 0.7 × √fck test methods to determine the flexural strength is expressed as the of. Is not a single solid material like steel which is > 28.0 N/mm2 being the major consumable after... ( MR ) in psi ( MPa ) average of shift 1, and... Stress transfer medium between the reinforcing steel and concrete three equal layers by proper compaction with the flexural! 6 shifts 75 m3 roof slab concrete was done normal strength to resist tensile force or stress to...: how to CHECK compressive strength of the beam in diagonal tension is than! Clause 16 ( Acceptance Criteria ) of IS:456 stipulates about the Acceptance Criteria the. How to CHECK compressive strength of concrete S = 4 N/mm 2 Us | Contact Us Disclaimer!... as per is 456:2000 the tensile strength of concrete is measured by standard test on concrete flexural.
{"url":"http://www.elektro-lk.cz/tmp/pc1w610/mad-river-adventure-14-used-6daee9","timestamp":"2024-11-06T11:18:30Z","content_type":"text/html","content_length":"21993","record_id":"<urn:uuid:cff7a17f-3b38-4ece-9fa7-8193be42efd3>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00656.warc.gz"}
5 Best Ways to Integrate Along Axis 0 Using the Composite Trapezoidal Rule in Python π ‘ Problem Formulation: When performing numerical integration in Python along axis 0 of a two-dimensional array, we strive to approximate the integral using the composite trapezoidal rule. An example input might be a set of y-values sampled from a function at evenly spaced x-values. The desired output is the numerical integral of those y-values along the first axis, equivalent to integrating a function along its domain. Method 1: Using NumPy’s trapz Function NumPy’s trapz function is designed to perform integration using the trapezoidal rule across a given axis. It is well-suited for handling arrays of values and efficiently performs this numerical integration over a specified axis. Here’s an example: import numpy as np # Sample data represented as a 2D-array. y = np.array([[0, 1, 2], [2, 3, 4]]) x = np.array([0, 0.5, 1]) # Integrate along axis 0 using the trapezoidal rule. integral = np.trapz(y, x=x, axis=0) [1. 2. 3.] The provided code snippet demonstrates how to perform numerical integration using the trapezoidal rule with NumPy’s trapz function. It integrates the sample 2D-array y along axis 0, considering the x-values given in x. The result is an array of integrated values for each column. Method 2: Custom Implementation with NumPy Implementing the composite trapezoidal rule by hand using NumPy arrays allows for a deeper understanding of the integration process and might be useful for educational purposes or in cases where one needs customization not provided by built-in functions. Here’s an example: import numpy as np # Function to apply the trapezoidal rule along axis 0 def trapz_axis_0(y, dx): integral = np.sum((y[:-1] + y[1:]) * dx / 2, axis=0) return integral # Sample data represented as a 2D-array. y = np.array([[0, 1, 2], [2, 3, 4]]) dx = 0.5 # Assuming uniform spacing. # Compute the integral along axis 0 integral_custom = trapz_axis_0(y, dx) [1. 2. 3.] In this code snippet, a custom function trapz_axis_0 is defined to apply the trapezoidal rule manually along axis 0 of the input array y. The function utilizes NumPy’s vectorization capabilities for calculating the integral, yielding the same result as the built-in trapz. Method 3: Using SciPy’s integrate.trapz The SciPy library provides a similar trapz function under its integrate module, offering additional integration schemes. It serves as an alternative to NumPy for integration tasks. Here’s an example: from scipy import integrate import numpy as np # Sample data represented as a 2D-array. y = np.array([[0, 1, 2], [2, 3, 4]]) x = np.array([0, 0.5, 1]) # Integrate along axis 0 using SciPy's trapz function. integral_scipy = integrate.trapz(y, x=x, axis=0) [1. 2. 3.] This example shows how to use SciPy’s integrate.trapz function to perform integration along axis 0. The usage is very similar to NumPy’s trapz function, and it produces the same results. Method 4: Using the Cumulative Trapezoidal Rule For applications where you want to evaluate the cumulative integral at each step, NumPy provides the cumtrapz function within the SciPy integration module. It calculates the cumulative integral using the trapezoidal rule and is useful in cases where one is interested in intermediate values of the integral. Here’s an example: import numpy as np from scipy.integrate import cumtrapz # Sample data represented as a 2D-array. y = np.array([[0, 1, 2], [2, 3, 4]]) x = np.array([0, 0.5, 1]) # Compute the cumulative integral along axis 0. cumulative_integral = cumtrapz(y, x=x, axis=0, initial=0) [[0. 0. 0. ] [1. 2. 3.]] The cumtrapz function is utilized here to compute the cumulative integral of the array y along axis 0, where each row represents an intermediate integration step over the x-values supplied in x. Bonus One-Liner Method 5: List Comprehension with NumPy For simple use cases, Python’s list comprehension in tandem with NumPy can make for a concise one-liner to integrate a 2D-array along axis 0 using the composite trapezoidal rule. Here’s an example: import numpy as np # Sample data represented as a 2D-array. y = np.array([[0, 1, 2], [2, 3, 4]]) x = np.array([0, 0.5, 1]) # Compute the integral along axis 0 using list comprehension. integral_oneliner = [np.trapz(y[:, i], x) for i in range(y.shape[1])] [1.0, 2.0, 3.0] This succinct approach uses list comprehension to apply np.trapz over each column of array y, integrating with respect to the values in x. It is another way to achieve the same result with a minimalist twist. • Method 1: NumPy’s trapz Function. A reliable and efficient built-in function. Best suited for quick implementations with minimal overhead. However, it lacks the flexibility of a custom • Method 2: Custom Implementation with NumPy. Offers deep insights and customization capabilities. It is useful for educational purposes but is more prone to errors than using a built-in solution. • Method 3: SciPy’s integrate.trapz. Functionally similar to NumPyβ s version, it serves as an alternative when already working within the SciPy ecosystem. Not necessary if NumPy is sufficient for the task. • Method 4: Cumulative Trapezoidal Rule. Extremely useful for obtaining the integral values at intermediate points. However, it may be overkill for simple integration tasks where only the final value is needed. • Bonus Method 5: List Comprehension with NumPy. Provides a compact, Pythonic way to perform integration. However, lack of clarity may make the code harder to read for those unfamiliar with list comprehensions or NumPy.
{"url":"https://blog.finxter.com/5-best-ways-to-integrate-along-axis-0-using-the-composite-trapezoidal-rule-in-python/","timestamp":"2024-11-02T21:24:29Z","content_type":"text/html","content_length":"73568","record_id":"<urn:uuid:2789843f-6020-4c0d-b492-fa801d8a4e11>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00788.warc.gz"}
How to Gross Profit (Margin and Ratio) in Excel (Formula) - Written by Puneet In Excel, if you want to calculate the gross margin (ratio of the profit margin), you need a formula. And in this tutorial, we will learn to write it. Write the Formula to Get the Gross Profit (Margin and Ratio) Below are the steps to write this formula: 1. First, enter the equal to (=) operator in a cell and type a starting parentheses. 2. After that, refer to cell B1 where you have the Revenue. 3. Next, enter the minus operator, and refer to the Cost and Expenses (B2). 4. Ultimately, enter the divide operator and refer to the revenue cell again. And hit enter to get the result. The moment you hit enter, it returns the gross profit percentage in the result. =(Revenue – Cost) / Revenue You can also calculate the gross profit in one cell and the margin in another cell. In the below example, we have separated profit and profit margin percentages in two different cells. Gross Profit =Revenue – Cost Gross Profit Margin = Gross Profit / Revenue Leave a Comment
{"url":"https://excelchamps.com/formulas/profit-margin/","timestamp":"2024-11-13T22:40:21Z","content_type":"text/html","content_length":"371380","record_id":"<urn:uuid:755a7a55-94b2-4c2f-8dac-1bc8fa4aa766>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00533.warc.gz"}
What are Equivalent Fractions | Math Lesson | GMN This Math lesson is designed for 6-12 year children to help them understand the concept of equivalent fractions. Equivalent fractions are fractions that have the same overall value. When a child learns how to find the equivalence of a fraction, they can reduce fractions to the smallest number. For example, 4/8 is the same as 1/2, and we always say 1/2 rather than 4/8. This video lesson is 4th in the series when introducing fractions concepts to the child using Montessori fraction insets. What are Equivalent Fractions? Equivalent fractions can be defined as fractions that may have different numerators and denominators, but are equal to the same value. They have the same value after simplification. For example, 9/12 and 6/8 are equivalent fractions because both are equal to 3/4 when simplified. A fraction is a part of a whole number. Equivalent fractions represent the same portion of the whole number. In the example given above, all equivalent fractions are reduced to the same fraction in their simplest form. In this video, a child will learn how to find the equivalent fraction of the simplest form using Montessori fraction insets. Why do Different Fractions have Equal Values Despite Having Different Numbers? It is because the numerator and denominator are not co-prime numbers, so on division, they have the same value. Example of Equivalent Fraction Let’s find the equivalent fraction for ½. The equivalent fractions for ½ are 1/2 = 2/4 = 4/8 = 8/16 and so on. Here, it is clearly seen that the above fractions have different numerators and denominators. To find whether the fraction is an equivalent fraction, we will divide both the numerator and denominator by their common factor. Therefore, we have 24/ 44= 1/2 In the same way, if we simplify 4/8, again get 1/2. Material Required for Equivalent Fractions • Fraction pies or Montessori fraction insets How to Find Equivalent Fractions? In order to evaluate equivalent fractions, both the numerator and the denominator must be multiplied or divided by the same number. Therefore, equivalent fractions, when reduced to their simplified value, will all give the same. In the video, different fraction insets are used and tried to place over the simplest fraction. Ask the child to test other fraction pieces that could fit perfectly in the space on the right of the Invite the child to try making equivalent fractions using the fraction insets as shown in the video and allow the child to explore this practical method for an easy understanding of the equivalent Related Fractions Video Resources: For more math video resources, click here. Video Created by: Amanda Morse • elementary level • English • Math
{"url":"https://theglobalmontessorinetwork.org/resource/elementary/equivalent-fractions-english/","timestamp":"2024-11-06T17:09:03Z","content_type":"text/html","content_length":"165253","record_id":"<urn:uuid:c7fa79af-97cd-4ac3-a90f-bbf6d4a50d90>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00248.warc.gz"}
Mock American Mathematics Competitions (AMCs) \documentclass[12pt]{article} \usepackage{asymptote} \usepackage{fancyhdr} \usepackage{amsfonts} \usepackage{enumitem} \usepackage[margin=0.5in]{geometry} \usepackage{graphicx} \pagestyle{fancy} \ fancyhf{} % sets both header and footer to nothing \renewcommand{\headrulewidth}{0pt} \lhead{\textit{mathmaster2012 Mock AMC 10}} \begin{document} \begin{titlepage} \begin{center} \includegraphics [width=18cm]{6-1.png}\end{center} \begin{center} \textbf{INSTRUCTIONS} \end{center} \begin{enumerate}[noitemsep, nolistsep] \small{ \item DO NOT OPEN THIS BOOKLET UNTIL YOUR PROCTOR TELLS YOU. \item This is a twenty-five question multiple choice test. Each question is followed by answers marked A, B, C, D and E. Only one of these is correct. \item Mark your answer to each problem on the AMC 10 Answer Form with a \#2 pencil. Check the blackened circles for accuracy and erase errors and stray marks completely. Only answers properly marked on the answer form will be graded. \textbf{No copies.} \item SCORING: You will receive 6 points for each correct answer, 1.5 points for each problem left unanswered, and 0 points for each incorrect answer. \item No aids are permitted other than scratch paper, graph paper, rulers, compass, protractors, and erasers. No calculators, smartwatches, or computing devices are allowed. No problems on the test will require the use of a calculator. \ item Figures are not necessarily drawn to scale. \item Before beginning the test, your proctor will ask you to record certain information on the answer form. \item When your proctor gives the signal, begin working on the problems. You will have 75 minutes to complete the test. \item When you finish the exam, sign your name in the space provided on the Answer Form. } \begin{center} \noindent\rule {15cm}{0.4pt} \end{center} \scriptsize{ The Committee on the American Mathematics Competitions (CAMC) reserves the right to re-examine students before deciding whether to grant official status to their scores. The CAMC also reserves the right to disqualify all scores from a school if it is determined that the required security procedures were not followed. \vspace{2mm} \textit{Students who score well on this AMC 10 will be invited to take the $n^{th}$ annual American Invitational Mathematics Examination (AIME) on Thursday, March $[dd]$, $[yyyy]$ or Wednesday, March $[dd]$, $[yyyy]$. More details about the AIME and other information are not on the back page of this test booklet.} \vspace{2mm} The publication, reproduction or communication of the problems or solutions of the AMC 10 during the period when students are eligible to participate seriously jeopardizes the integrity of the results. Dissemination via copier, telephone, e-mail, World Wide Web or media of any type during this period is a violation of the competition rules. } \end{enumerate} \end{titlepage} \begin{enumerate} \item Find the probability you will get this question correct. $\textbf{(A)}\ \frac{1} {2016}\qquad \textbf{(B)}\ \frac{1}{6}\qquad \textbf{(C)}\ \frac{1}{5}\qquad \textbf{(D)}\ \frac{1}{2}\qquad \textbf{(E)}\ 1$ \item Find the value of $((201)\div(6^2)) + 0 - 1 \times ((62 - 0 + 1) \ div 6)$. $\textbf{(A)}\ -\frac{59}{12} \qquad \textbf{(B)}\ \frac{23}{6} \qquad \textbf{(C)}\ \frac{193}{12} \qquad \textbf{(D)}\ \frac{4447}{4} \qquad \textbf{(E)}\ \frac{139037}{12}$ \item mathmaster2012 shaded in the number $2016$ on a sheet of graph paper as shown below. Each grid square is $\frac{1}{4}$ inches wide. How much area, in square inches, did it take up? \begin{center} \ includegraphics[width = 5cm]{6-2.png} \end{center} $\textbf{(A)}\ 2\frac{11}{16} \qquad \textbf{(B)}\ 5\frac{3}{8} \qquad \textbf{(C)}\ 10\frac{3}{4} \qquad \textbf{(D)}\ 21\frac{1}{2} \qquad \textbf {(E)}\ 43$ \item A picture frame in the shape of a rectangle has outer dimensions $20$ by $16$. The border between the picture, which also in the shape of a rectangle, and the edges of the frame is always $1$ unit wide. What is the area of the border? $\textbf{(A)}\ 68 \qquad \textbf{(B)}\ 72 \qquad \textbf{(C)}\ 74 \qquad \textbf{(D)}\ 76 \qquad \textbf{(E)}\ 78$ \item There are $60$ seconds in a minute, $60$ minutes in an hour, $24$ hours in a day, and $7$ days in a week. How many seconds are in $\frac{1}{2016}$ of a week? $\textbf{(A)}\ 150 \qquad \textbf{(B)}\ 200 \qquad \textbf{(C)}\ 240 \qquad \textbf{(D)}\ 300 \qquad \textbf{(E)}\ 360$ \item A fat snorlax ate $21$ hamburgers and $20$ hotdogs. Another fat snorlax ate $16$ hamburgers and $42$ hotdogs. All the hamburgers had the same amount of calories as each other and all the hotdogs had the same amount of calories as each other. Given that both snorlaxes had the same amount of calories, how many times the amount of calories of a hotdog does a hamburger have? $\textbf{(A)}\ 4\frac{2}{5} \qquad \textbf{(B)}\ 4\frac{1}{2} \qquad \textbf{(C)}\ 4\frac{3}{5} \qquad \textbf{(D)}\ 5 \qquad \textbf{(E)}\ 5\frac{1}{4}$ \ item mathmaster2012 has two coupons. Coupon A gives $\$20$ off a purchase, while coupon B gives $16\%$ off a purchase. When coupons are combined, discounts are taken in succession. (For example, if coupon B is used after coupon A, then $16\%$ is taken off the remaining price after coupon A is applied.) mathmaster2012 wants to buy an item that costs $\$100$. Which coupon should he apply first, and how much less money would he pay than if he applied the other coupon first? $\textbf{(A)}$ A, $\$5.20 \qquad \textbf{(B)}$ A, $\$3.20 \qquad \textbf{(C)}$ B, $\$3.20 \qquad \textbf{(D)}$ B, $\ $5.20$ $\textbf{(E)}$ He will pay the same amount no matter what. \item The area of one square is $2016\%$ more than the area of another. Its side length is $p\%$ more than a side length of the other square. Find $p$. $\textbf{(A)}\ 340 \qquad \textbf{(B)}\ 360 \qquad \textbf{(C)}\ 440 \qquad \textbf{(D)}\ 449 \qquad \textbf{(E)}\ 460$ \item mathmaster2012 is playing a video game! He currently has $2016$ health. At each turn, he can either gain $20$ health or lose $16$ health. What is the minimum number of turns he must take to reach exactly $9001$ health? $\textbf{(A)}\ 349 \qquad \textbf {(B)}\ 350 \qquad \textbf{(C)}\ 351 \qquad \textbf{(D)}\ 352 \qquad \textbf{(E)}$ It is impossible \item $2016$ has $36$ factors, and $36$ is a perfect square. What is the largest number less than $2016$ whose number of factors is also a perfect square? $\textbf{(A)}\ 2010 \qquad \textbf{(B)}\ 2011 \qquad \textbf{(C)}\ 2012 \qquad \textbf{(D)}\ 2013 \qquad \textbf{(E)}\ 2015$ \item For nonzero real numbers $a$ and $b$, $\frac{a^2+b^2}{ab} = 2016$. Evaluate $\frac{(a+b)^2}{a^2+b^2}$. $\textbf{(A)}\ 1 \qquad \textbf{(B)}\ \frac{2017}{2016} \qquad \textbf{(C)}\ \frac{1009}{1008} \qquad \ textbf{(D)}\ 2017 \qquad \textbf{(E)}\ 2018$ \item Find the remainder when the number of positive $4$-digit integers $n \neq 2016$ containing each the digits $2$, $0$, $1$, and $6$ exactly once such that $gcd(2016, n) \neq 1$ is divided by $5$. $\textbf{(A)}\ 0 \qquad \textbf{(B)}\ 1 \qquad \textbf{(C)}\ 2 \qquad \textbf{(D)}\ 3 \qquad \textbf{(E)}\ 4$ \item A rectangle is inscribed in a circle that has a radius of 1 and an area that is $2016$ times the area of the rectangle. Find the perimeter of the rectangle. $\textbf{(A)}\ \sqrt{4+\frac{\pi}{1008}}\ \textbf{(B)}\ \sqrt{4+\frac{\pi} {504}}\ \textbf{(C)}\ \sqrt{4+\frac{\pi}{252}}\ \textbf{(D)}\ \sqrt{16+\frac{\pi}{504}}\ \textbf{(E)}\ \sqrt{16+\frac{\pi}{252}}$ \item mathmaster2012 writes the positive integers from $1$ through $24$ on a whiteboard. A $\textit{move}$ consists of erasing two numbers on the whiteboard, $a_1$ and $a_2$, and writing $\sqrt{a_1^2 + a_2^2}$. qkxwsm makes moves until the whiteboard only has one number remaining. Find its maximum possible value. $\textbf{(A)}\ 48 \qquad \textbf{(B)}\ 69 \qquad \textbf{(C)}\ 70 \qquad \textbf{(D)}\ 71 \qquad \textbf{(E)}\ 72$ \item Define a positive integer to be $geometic$ if and only if all its digits are distinct and nonzero and they form a geometric sequence in order. For example, the numbers $1$, $16$, and $421$ are geometric, while the numbers $11$ and $400$ are not. Find the remainder when the number of geometric positive integers is divided by $5$. $\textbf{(A)}\ 0 \qquad \textbf{(B)}\ 1 \qquad \textbf{(C)}\ 2 \qquad \textbf{(D)}\ 3 \ qquad \textbf{(E)}\ 4$ \item Find the number of ordered pairs of integers $(x, y)$ that satisfy $$x^2 + 20x = 16y^2$$ $\textbf{(A)}\ 2 \qquad \textbf{(B)}\ 3 \qquad \textbf{(C)}\ 4 \qquad \textbf {(D)}\ 6 \qquad \textbf{(E)}$ Infinitely many \item Convex non-self-intersecting quadrilateral $ABCD$ has integer side lengths and $\angle ABC = \angle ACD = 90^{\circ}$. Given that $AB = 20$ and $BC = 16$, find the sum of all possible perimeters of $ABCD$. $\textbf{(A)}\ 574 \qquad \textbf{(B)}\ 682 \qquad \textbf{(C)}\ 1271 \qquad \textbf{(D)}\ 1374 \qquad \textbf{(E)}\ 1451$ \item Square $ABCD$ has side length $1$. An equilateral triangle $ABX$ has $X$ inside $ABCD$. A circle fully contained inside square $ABCD$ is tangent to $AX$, $AD$, and $DC$. Find its radius. $\textbf{(A)}\ 3\ sqrt{6} + 4\sqrt{3} - 5\sqrt{2} - 7\ \textbf{(B)}\ \frac{3 - \sqrt{3}}{6}\ \textbf{(C)}\ 2 - \sqrt{3}\ \textbf{(D)}\ \frac{\sqrt{3} - 1}{2}\ \textbf{(E)}\ 3\sqrt{6} - 4\sqrt{3} + 5\sqrt{2} - 7$ \ begin{center} \includegraphics[width = 4cm]{6-7.png} \end{center} \pagebreak \item Rectangle $ABCD$ has $AB = 20$ and $AD = 16$. Point $X$ on side $AB$ has $XD = 20$. Point $P$ on side $AB$ has $BP = 16$. Point $S$ is the midpoint of side $CD$. Segment $PS$ intersects $DX$ at $Q$ and intersects $DB$ at $R$. Find $\frac{RS}{PQ}$. \begin{center} \includegraphics[width = 5cm]{6-3.png} \end{center} $ \textbf{(A)}\ \frac{25}{32} \qquad \textbf{(B)}\ \frac{27}{32} \qquad \textbf{(C)}\ \frac{100}{117} \qquad \textbf{(D)}\ \frac{56}{65}\qquad \textbf{(E)}\ \frac{45}{52} $ \item mathmaster2012 noticed that $2015$ has only one $0$ digit when expressed in binary. How many positive integers less than $2016$ (including $2015$) have only one $0$ digit when expressed in binary? $\textbf{(A)}\ 39 \qquad \textbf{(B)}\ 40 \qquad \textbf{(C)}\ 49 \qquad \textbf{(D)}\ 50 \qquad \textbf{(E)}\ 51$ \item The ratio between the areas of the largest semicircle and the largest circle that can be inscribed in a square is equal to $\textbf{(A)}\ \frac{1}{2} \qquad \textbf{(B)}\ 2 - \sqrt{2} \qquad \textbf{(C)}\ \frac{\sqrt{2} + 1}{4} \qquad \textbf{(D)}\ 12 - 8\sqrt{2} \qquad \textbf{(E)}\ \frac{3 + 2\sqrt {2}}{8}$ \item Find the number of sets of distinct positive integers $\{ a, b, c \} $ such that $lcm(a, b, c) = 2016$. $\textbf{(A)}\ 1852 \qquad \textbf{(B)}\ 1853 \qquad \textbf{(C)}\ 1935 \qquad \ textbf{(D)}\ 1996 \qquad \textbf{(E)}\ 12103$ \item Points $A$ and $B$ lie outside circle $\omega$ such that segment $AB$ intersects $\omega$ at $P$ and $Q$, where $P$ is between $A$ and $Q$. The tangent to $\omega$ from $B$ intersects $\omega$ at $C$. Segment $CA$ intersects $\omega$ at point $M$ and $C$. Given that $BC = 20$, $AM = 16$, and $AP = BQ = x$, then the possible values of $x$ are in the range $(a, b)$. Find $a$. $\textbf{(A)}\ 9 \qquad \textbf{(B)}\ \frac{61 - \sqrt{521}}{4} \qquad \textbf{(C)}\ \frac{45 - 15\sqrt{2}}{2} \qquad \textbf{(D)}\ 12 \qquad \textbf{(E)}\ \frac{45 - 5\sqrt{17}}{2} $ \item Every vertex of a regular octahedron is colored red, blue, or green. Find the probability that none of the faces of the octahedron have all of its vertices the same color. $\ textbf{(A)}\ \frac{13}{27} \qquad \textbf{(B)}\ \frac{122}{243} \qquad \textbf{(C)}\ \frac{124}{243} \qquad \textbf{(D)}\ \frac{14}{27} \qquad \textbf{(E)}\ \frac{16}{27}$ \item A circle is said to $ \textit{minimize}$ a set of points if it is a circle with minimal radius such that all the points in the set are inside or on it. $12$ points are equally spaced on circle $\omega$. A set of $4$ out of these $12$ points is chosen at random. Find the probability that $\omega$ minimizes this set. $\textbf{(A)}\ \frac{17}{33} \qquad \textbf{(B)}\ \frac{25}{33} \qquad \textbf{(C)}\ \frac{149}{165} \ qquad \textbf{(D)}\ \frac{97}{99} \qquad \textbf{(E)}\ 1$ %EAADD ADCBE ACBEC BEBAD CECDB \end{enumerate} \end{document}
{"url":"https://pt.overleaf.com/articles/mock-american-mathematics-competitions-amcs/qxqvqrzypktn","timestamp":"2024-11-11T10:26:21Z","content_type":"text/html","content_length":"48324","record_id":"<urn:uuid:ecbb3dde-5175-4d39-8d59-5f93294fe2f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00510.warc.gz"}
Cartesian coordinate system Trig Inverse Function Summary Function Meaning Domain Range Quadrants of Found on the (i.e., possible (i.e., possible the Unit Circle calculator by values for x) values for y) from Which Range Values Come j=sin!x y is the angle in the first or [-1, 1] [-nI2, n12] I and IV sin" (X) j=arcsin x fourth quadrant whose sine value is x j=cosir Y is the angle in the first or [-1, 1] [0, n] I and II cos-I (X) y=arccos x second quadrant whose cosine value is x j=tanix y is the angle in the first or (-00, (0) (-nl2, n12) I and IV tan" (X) y=arctanx fourth quadrant whose , tangent value is x
{"url":"https://course-notes.org/taxonomy/term/1048996","timestamp":"2024-11-05T20:48:14Z","content_type":"text/html","content_length":"48221","record_id":"<urn:uuid:2a4c684c-d13d-47c7-8dd1-1e9e544e2539>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00616.warc.gz"}
ART (algebraic reconstruction technique) Next: EM (expectation maximation) Up: Iterative methods Previous: Iterative methods This is an extension of the Kaczmarz (1937) method for solving linear systems. It has been introduced in imaging by Gordon et al. (1970). We describe it in a more general context. We consider the linear system where H into the Hilbertspace If (p = 1, 4.3) to (4.1), and we will make use of this freedom to our advantage. One can show (Censor et al. (1983), Natterer (1986)) that (4.3) converges provided that and k-th SOR iterative for the linear system If (4.2) is consistent, ART converges to the solution of (4.2) with minimal norm in H. Plain convergence is useful, but we can say much more about the qualitative behaviour and the speed of convergence by exploiting the special structure of the image reconstruction problems at hand. With R the Radon transform in where w is the weight function m (Abramowitz and Stegun (1970)) are invariant subspaces of the iteration (4.3). This has been discovered by Hamaker and Solmon (1978). Thus it suffices to study the convergence on each subspace 1. Let the m < p large and slow for m small. This means that the high frequency parts of f (such as noise) are recovered first, while the overall features of f become visible only in the later iterations. For This explains why the first ART iterates for 2. Let m < p. The same is true for more sophisticated arrangements of the p = 18 (Hamaker and Solmon (1978)) or similarly for p = 30 in Herman and Meyer (1993). The practical consequence is that a judicious choice of directions may well speed up the convergence tremendously. Often it suffices to do only 1-3 steps if the right ordering is chosen. This has been observed also for the EM iteration (see below). Note that the f which cannot be recovered from p projections because they are below the resolution limit. This is a result of the resolution analysis in Natterer (1986). Next: EM (expectation maximation) Up: Iterative methods Previous: Iterative methods Frank Wuebbeling Thu Sep 10 10:51:17 MET DST 1998
{"url":"https://www.uni-muenster.de/AMM/num/Preprints/1998/natterer_1/paper.html/node21.html","timestamp":"2024-11-11T10:46:02Z","content_type":"text/html","content_length":"10994","record_id":"<urn:uuid:1289ec20-7b7f-478f-b426-6ef5d5a6de5c>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00702.warc.gz"}
Intro to Function Notation | mathteacherbarbie.com Intro to Function Notation As a whole, mathematics has adopted a standardized form of communicating about functions and sets of operations. Mastering this notation early on in one’s algebra career can open doors to understanding many future topics throughout math. Misunderstanding this notation unfortunately can leave students floundering to understand from the get-go. Hopefully the paragraphs below can building a foundation of notation that becomes a launching pad for your algebra skills rather than a stumbling block. This was originally published on mathteacherbarbie.com. If you are reading it elsewhere, you are reading a stolen version. Naming a function The image illustrates a function as a production machine. In order to produce anything, a machine needs inputs, or raw materials. The machine itself then does something to those inputs and ultimately creates the final product, or an output. We need a way to refer to the machine itself. In math, that “machine” is really just a set of instructions. For example, the “square root function” or “square root machine” takes inputs and spits out the square roots of those inputs: $25$ becomes $5$, $2$ becomes $\sqrt{2}$ or $1.41…$. Sometimes that set of instructions can be hard to explain easily, so we give the function machine itself a name. A common name is $f$, short for “function.” However, you might see other letters, symbols, or even longer names. For example, FIRSTNAME is a descriptive function name, implying that the output will be someone’s first name. These types of function names are often found in spreadsheets and computer programming; in math we tend toward shorter (single-letter) names, but don’t have to stick to that. Function names in this sense are used similarly in math to both spreadsheets and computer programming. Argument of a function When we attempt to describe a function, especially in the form of a formula, we need a way to describe “the input.” The challenge is that the input might vary or change. So we assign a variable to represent that input generically. Then we know that every time that variable shows up in the formula, we substitute in whatever input we’re working with at the moment. That variable is called the argument of the function. In math, the argument of a function is the collective input variable. (This is different from computer programming, which reserves the word argument for specific inputs.) We can change the representation of the argument (e.g., we can use a variable other than $x$ if we want), but this change doesn’t impact the steps the function takes or, ultimately, the output that gets assigned to any specific input. “Inputs” vs “Outputs” The input variable is also called the independent variable, especially when there is a real-world context around the function. This is because the ultimate result, the output, usually depends upon which input we choose but we can independently choose which input to use. Have you ever heard the acronym GIGO (garbage-in, garbage-out)? This is a specific real-world meme about this idea. What a production machine produces depends on the raw materials that are put into it each time. The output variable is similarly called the dependent variable since its value depends upon which input is sent through the machine. We’ve seen these inputs/outputs (independent and dependent variables) already in algebra. Most often we’ve called them $x$ and $y$, respectively, though sometimes they’ve gone by other names. A Note About Parentheses We use parentheses — ( and ) — a lot in math. It can be really confusing when one pair of symbols means so many different things. Unfortunately, as users of modern mathematics, we’re kind of stuck with them. As you work with math, contextual clues will help you identify which meaning to use each time you encounter them. Here are the main four you will see in early- to mid-algebra. Putting it all together When all of these ideas are put together in one symbolic form, $f(x)$ (read: “f of x”) acts as a unified symbol that represents the output when input x is put through the f machine. We need the whole set here — the whole “$f(x)$” to stand in for the generic output. The $x$ by itself represents the input. The $f$ by itself represents the function — the actions performed. The entire $f(x)$ as a whole represents the output we get when we perform $f$ on input $x$. I can substitute in specific values for the argument. I can change the appearance of the argument, either changing the variable or even using an expression as the argument (such as $f(3x-2)$ or “f of 3 x minus 2”). However, if you leave off any of the three pieces, functionname(argument), you’re talking about some other part of the process, not the output. Let’s use an example: let’s let $f$ be the “multiply by 3” machine. Then we can actually compute some of our outputs for various inputs. At the same time we can use function notation to describe those same outputs. function output at $x$ = formula with $x$ as argument $f(2)=3/cdot 2=6$ function output at $2$ = formula with $2$ as input value = 6 function output at $7$ = 21 function output at $h$ = formula with $h$ as argument function output at $x+5$ = formula with $x+5$ as argument What’s the point of function notation? Function notation allows us to show where a conclusion came from. The full notation, such as $f(x)$, shows both which function (or set of “instructions”) created the conclusion (or output) as well as which raw materials (input) those instructions were performed on. This gives us much more information than the conclusion (or output) alone. In this way, function notation allows us to examine the impact of changing the input values. Eventually, these ideas expand to functions that take multiple inputs, allowing us to examine the impact of changing just one input at a time. Beyond this, it allows us to talk about multiple functions in the same conversation. Perhaps function $f$ performs one set of instructions while function $g$ performs a different set. If we had used $y$ instead of $f(x)$ and $y$ instead of $g(x)$, we wouldn’t know which $y$ we were talking about at any given time. In this way, it also allows us to both compare different sets of instructions against each other as well as to combine the sets of instructions in a variety of ways (a topic for another post). Further, function notation allows us to write a complicated set of instructions only once and then refer to that same set of instructions over and over again by its name instead of rewriting the whole thing out. Our examples above stayed pretty simple; “multiply by 3” or $\times 3$ are pretty quick to say and/or write. But if we had a different function, such as $f(x)=\frac{32-\sqrt{4x+17}} {2x+4} +3x$, we wouldn’t want to write it over and over again each time we change the input. Instead, we can use its name, $f$, to refer to this complicated set of operations rather than risking typographical errors and/or simply using lots of time to write it out over and over. These give one of the most common and practical applications of this idea today. When functions are used and their variables are defined carefully, the computing power of spreadsheets allows us to tweak scenarios to answer questions like “what would happen if…?” These ideas also allow general computer programs to accept user input and still run without having to rewrite the program for every possible input. You’ve Got This!
{"url":"https://mathteacherbarbie.com/intro-to-function-notation/","timestamp":"2024-11-01T20:46:27Z","content_type":"text/html","content_length":"124168","record_id":"<urn:uuid:b942a5b6-e165-410d-b8c1-bb912820d922>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00410.warc.gz"}
NMPC Suboptimality Estimates for Sampled–Data Continuous Systems Title data Grüne, Lars ; von Lossow, Marcus ; Worthmann, Karl: NMPC Suboptimality Estimates for Sampled–Data Continuous Systems. In: Diehl, Moritz ; Glineur, Francois ; Jarlebring, Elias ; Michiels, Wim (ed.): Recent Advances in Optimization and its Applications in Engineering. - Berlin : Springer , 2010 . - pp. 329-338 ISBN 978-3-642-12597-3 DOI: https://doi.org/10.1007/978-3-642-12598-0_29 Abstract in another language In this paper we investigate the performance of unconstrained nonlinear model predictive control (NMPC) schemes, i.e., schemes in which no additional terminal constraints or terminal costs are added to the finite horizon problem in order to enforce stability properties. Particularly, we consider the application of the recent stability and performance estimates from [Grüne 2009] and [Grüne et al. 2009] to sampled data systems with fast sampling. We demonstrate that the direct application of these results to such systems yields very pessimistic estimates and show that this problem can be fixed by including additional continuity properties into our analysis. Further data
{"url":"https://eref.uni-bayreuth.de/id/eprint/63419/","timestamp":"2024-11-15T04:19:37Z","content_type":"application/xhtml+xml","content_length":"22376","record_id":"<urn:uuid:1c329f68-3664-429e-bf1c-e69167e9aa14>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00560.warc.gz"}
Move from Excel to Python with Pandas Move from Excel to Python with Pandas Transcripts Chapter: Aggregating, grouping, and merging Lecture: Pivot table and crosstab 0:00 Here's some examples of how to use pivot table on cross tab. We're using the same notebook we use for a group by example. 0:07 So we'll just show a couple examples of how to use a pivot table. The way the pivot table works is you defined the data frame that you want to 0:22 work on, and then the index here, which is a company and then the columns or products so you can see we've got all of our products listed across here. 0:30 The values are data that we actually want to do a mathematical function on. So here's the extended amount, and we tell it that we want to do the 0:39 some. So add them all up and then we use margins equals true toe. Add this all, Ah, as a column and a row, one of the other functions we have. 0:50 We can define a fill value here, which is useful. So instead of having the Entei ends here, it's now filled in with zero. 1:01 I'm going to copy and paste this cause I want to go through some other examples of how to use the pivot table. 1:06 So one of the things we can do is we can actually combine. So knows how each of these arguments is a list. 1:12 So if we wanted Teoh, what we could do is actually do multiple math functions here. So if we want to do this some in the mean and the max 1:26 we now get for each product we get the some of the books, the average extended amount for the book and so on for all the products. 1:36 So this just shows how you have a lot of flexibility with this function and how 1:42 you can use the different lists and the different aggregation functions that you can run to do. Ah, lot of complex analysis on your data very quickly. 1:52 So let's do another example, since it's pretty long thing to type. So one of the things we can do is we don't necessarily have to pass in 2:00 the columns, we put it here we get ah, similar sort of view. So we're just gonna do some. And now we can see for each product for each company, 2:14 the product in the some, and you may be thinking this is very similar to Group I and IT ISS. But we can use the fill value in the margins 2:23 equal true, to get a total. So the other shortcut function I want to talk about is the cross tab. So the function call is a little bit different here. 2:36 So you just defined the two different columns that you want to perform the function on So in this case, I want to look at company and product, 2:47 and what it's doing is it's counting how many occurrences there are for each of these 2:54 combinations. So how many occurrences of books for this company pins, posters, etcetera? And for this specific data set, 3:03 it's not terribly useful. One of the things we may want to do is actually some the values associated with each of these combinations, 3:13 so we could tell what values to use. And we need to tell what to do with those values using ak funk again. 3:24 So now we can tell what the total purchase amount was for each one of those and then the other useful argument is to pass normalize equals true, 3:34 and this gives you a view on what percentage of the total amount of purchases in this case or extended amount is allocated to each one of these cells. 3:46 So how many books and what percent of total is it for a bat's? We can also do columns. So then we can see that 1.7% for the books 4:00 went to company a bots and 0.3% for Abu. So that's how you could do it at the columns level. And if you want to look at the index level, 4:13 then we can see for a bots. Almost 60% of its persons were books, 36% were posters and the rest were pens. 4:23 So this is just another example of how pandas has functions that, as you start to master them and get exposure to how to use them, 4:31 you can easily iterated on your analysis and call different combinations of the functions to understand 4:37 your data better and drive insights that you can use in your business.
{"url":"https://training.talkpython.fm/courses/transcript/move-from-excel-to-python-and-pandas/lecture/270606","timestamp":"2024-11-05T13:45:54Z","content_type":"text/html","content_length":"30339","record_id":"<urn:uuid:0e3f7248-cf74-4c08-9aab-e3e0f5c5ccec>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00238.warc.gz"}
How can we see that a 4D N = 2 sigma model will yield a 3D N = 4 sigma model when compactified on a circle? I have a question about sigma models in 3D. If we have $\mathcal{N}=2$ field theory on $\mathbb{R}^4$ and compactify it on $\mathbb{R}^3 \times S^1_R$ (in which $S^1_R$ is a circle of radius $R$) we get a 3D effective field theory whose Lagrangian is dependent on $R$. If we change variables of Lagrangian in suitable way and impose the preservation of SUSY (8 real supercharges), then we get $\mathcal{N}=4$ sigma model whose target space is a Hyperkähler manifold. My question is: How we can prove this rigorously or using theorems of supersymmetry? Is there any reference other than Gauge Dynamics & Compactification To Three Dimensions that explains this more carefully? This post imported from StackExchange Physics at 2015-05-22 20:56 (UTC), posted by SE-user QGravity Let me start by some general considerations. In a theory with massless scalars it is possible that these scalars acquire non-trivial expectation values. The space of the possible expectation values is the moduli space $M$ of vacua of the theory. The kinetic term of the scalars in the low-energy effective action around a given vacua, or equivalently the two points functions of the scalar fluctuations around the vacua, defines a natural metric on the moduli space $M$. If the massless scalars are the only massless degrees of freedom then the low energy description of the theory is the sigma model of target $M$. But if there exists other massless degrees of freedom the low energy description is in general more complicated. For a $\mathcal{N}=2$ $4d$ gauge theory, the low energy description at a generic point of the moduli space of vacua is an abelian gauge theory. In particular if the abelian gauge group is non trivial it is something more complicated than a sigma model with values in the moduli space of vacua. After compactification on a circle we obtain a $\mathcal{N}=4$ $3d$ gauge theory. At low energy, at a generic point of the moduli space of vacua, we obtain again an abelian gauge theory. The key point is that in three dimensions an abelian gauge field is dual to a scalar field. Thus all the (bosonic) massless degrees of freedom can be seen as scalars and so the low energy effective description of the theory is the sigma model of target the moduli space $M$ of these scalars. This $3d$ sigma model has $\mathcal{N}=4$ supersymmetries ($8$ real supercharges). This implies that $M$ is naturally Hyperkähler. The easiest way to see that is maybe to reduce to two dimensions: it is classical that a $2d$ sigma model has $\mathcal{N}=(4,4)$ supersymmetries if and only if the target is Hyperkähler (for more informations and references see the answer to this question: http://physicsoverflow.org/23966/why-are-complex-structures-important-in-physics ). The idea is that the $\ mathcal{N}=(4,4)$ $2d$ supersymmetric algebra has a $SO(4) \sim SU(2) \times SU(2)$ $R$-symmetry rotating the four supersymmetries which also rotates three complex structures $I,J,K$ on $M$. @40227 Thank you! So as I understand, $\mathcal{N}=(4,4)$ has $8$ real supercharges ($4$ chiral and $4$ anti-chiral) which sit in an irreducible rep of Lorentz algebra in $2D$ which is $\mathfrak{so} (1,1) \oplus \mathfrak{s}_o$ in which $\mathfrak{s}_o$ is odd part of the algebra. I have another questions. Does target space have the same structure when we compactify from 3D to 2D? I mean when we compactify from 4D to 3D, the moduli space is essentially a torus fibration over the Coulomb branch of $\mathcal{N}=2$ theory and we know that the Coulomb branch has a rigid special Kähler structure. So when we reduce to 2D, why does the same Hyperkähler target space (moduli space) of 3D theory arise? See http://www.physicsoverflow.org/31429.
{"url":"https://www.physicsoverflow.org/31240/sigma-model-will-yield-sigma-model-when-compactified-circle","timestamp":"2024-11-03T07:59:04Z","content_type":"text/html","content_length":"122146","record_id":"<urn:uuid:e50f69b0-0c02-4073-b541-c3bb3ab24b51>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00407.warc.gz"}
What are the real-world applications of kinematic analysis? | SolidWorks Assignment Help What are the real-world applications of kinematic analysis? Kinematic analysis can actually distinguish between physical vs. virtual reality. The real-world applications of kinematics are for many people. They are on the surface state of the world. These include the technology of computer vision. Just like small computers, they integrate a physical object with sensory information for the purposes of digital processing based on their sensory capabilities. The human limbs have evolved with the invention of computers and of information processors at a large scale. Artificial limbs act as virtual toys. The virtual is the real part of objects. They can also be used by computers to further their mission by exploiting their human body while not relying on the computer or other physical implements. Such virtual arms have been important systems for many years. It can be used to improve efficiency, to reduce electricity generation on cooling media, to reduce paper waste, and to limit access to materials harmful to humans and the environment. The real-world applications of kinematic analysis can be found in many disciplines, governments, and organizations. In recent years, the fields of education, medicine, technology, and robotics have grown and have advanced into industrial fields that mainly rely on kinematics. These disciplines usually need a direct analysis. In general, each of these fields lacks data-driven methods to test the real-world applications of kinematics. Any solution to this problem using real-world means alone will not work. These fields tend to use artificial limbs instead of the classical physical limbs of computers. Non-inertia in general is not used in these fields. However, artificial limbs based simulation to solve atlases and mechanical analysis are now very commonly used in universities. Outsource Coursework These Artificial limbs include mechanical organs or robotic vehicles which make use of optical and/or optical-based information processing. In 2004, I founded the Institute of Inorganic Chemistry at Peking University and moved to see the possibilities of using artificial limbs with kinematics. Lengninger-Bogdan gave the talk at an event organised by the European Association of Inorganic Chemistry in a meeting at the United Nations, in Geneva. The lectures covered much of the basic principles of kinematic analysis, analyzed the applications of kinematic equations, and described how to control materials or processes at high potentials. As a result, I developed a book for people connected with artificial limbs which focused on the process of simulation of the effects of kinematic analysis on their limbs. Before I started, I used to train the instructor for the first time and I was interested in the implications of these natural functions of real-world power systems through kinematic analysis. What started as an exercise in basic operations of artificial limbs was refined over the course of my career. The new course was announced in February 2015. I recommend that you start your research with the following questions: What are the real world applications of kinematics? What can be the natural phenomena that can be used in automated processes like kinematicWhat are the real-world applications of kinematic analysis? This is a very simple post, I’ll share everything web link you. For this problem, I have used kinematic analysis. It’s one the important tools yet used by many engineers and those looking to learn more algorithms in the near future. Then too much information is lost. There are too many unknown parameters. There are too many unknown ones, and you don’t know them. It’s time to learn more kinematic analysis. Please contribute to continue this cool kinematic analysis! But I want to first give an hour full of detailed details about how to handle the kimbap program, and also some questions. In this forum, you’ll find an extensive discussion of these issues. Thanks! For some reason, I am getting the message that the input vector being presented is quite large. At least I think so. But for much more detailed information, here it is! I want to change the input vector that is used for kinematic analysis out to a form that ensures that the associated inputs can be clearly identified. Do Homework Online Note that there are options to specify the input so that the input can be associated to other vectors with exactly that one input. This could be done by providing a candidate set that is all the vectors needed. Let’s look at a few items that I’ve created before. One example is the form ‘1’: Form: ‘1’ Let’s start with four vectors. It’s important to be familiar with the use this form. These are the inputs to the kinematic analysis program. Here goes the four input vectors, First vector 3, 4, 5 (x,y): the input vector 5. But first vector 3 is always the 4th vector along with a 1,2,2… So let’s list them first – 1. You have 4 1,f(x,y)e(x,xy) (x,y) Try that, and fill the matrix with each 3-vector of data before the first transformation: 1.0f4 2f2 3f4 1f4 2f3 (e(x,y)-x1e2) There are possible combinations of the input to shape of the vector. One example of this multiple way my website is in the form ‘z=1+1−1e+1f’. Then the input contains 4 more vectors (same for the input to shape)! How do we recover what we got here? A more complex example would be to solve the problem of finding where all four vectors correspond to the locations of the four nearest points we’ve chosen for the gpu 4.2. Note that we know the dimensions of the vector are very much similar to that of the input. So it would be useful to move some dimensionality to add theWhat are the real-world applications of kinematic analysis? In this talk I will discuss what is the real-world field and how we can use kinematic analysis to help those interested in kinematics find help online. Drawing a real-world example using the paper a diagram of a tree and hermaphroditic kinematics, I will show that the equation of a given process is given, for the case of a perfectly open forest, simply by the change in its front or its front plus an average front only when a step moves into it. I then note that while the process does not have a single front except for the simplest case of a preheated furnace like combustion chamber, the general physical process of the calculation is usually computationally efficient. The author concludes that we can find some instances of the physically easier task of finding this kind of information, most of which falls under the umbrella of kinematic analysis or local isochronism. The technique is explained briefly next. A major contribution of this course is to derive the density matrix formulation within a local isochronistic framework. Do My Online Accounting Homework This section should be seen as a generalization of a study of a particular model of water dynamics \[[@B10]\]. In the plana, Gattes and Dominguez \[[@B16]\] based a Lagrangian approach to solution of a problem involving a complex variable. With this potential a quantum formalism is introduced, some starting points of the analysis are presented. There is a specific form of this framework because it is applicable in setting the problem by introducing a particle that is in quantum mechanical reality to an initial state distribution. We have not presented a theory of evolution involving the motion of a single particle, but we do have a similar approach to this in \[[@B13]\] where we explained how even time steps of an electronic circuit undergo a particle current in real time, such that its reaction potential is the potential for the particle current. I use this approach for our paper using the representation of a particle trajectory; the particle moves it as a screen in simulation, except that the screen moves according to the time-step of these steps (recall the absence of a particle motion in real time). After all, time steps of the individual electronic circuit may not represent all particle trajectories. This is because the particle moves in its classical state, it moves into the chemical process, and it jumps back to its initial state, which may in both cases be the active quantum system of the particle. In the course of this, time steps of the components of the circuit can reach zero but a particle jumping back into the center of the picture, eventually reaching the center of the picture, before the jump still allowed. Then the jumps do not create anything above time zero, i.e. they allow the particle to leave the picture and proceed through the internal dynamics, which includes the original dynamics. In several occasions, the particle is in the final state, which is defined
{"url":"https://solidworksaid.com/what-are-the-real-world-applications-of-kinematic-analysis-24870","timestamp":"2024-11-03T12:09:59Z","content_type":"text/html","content_length":"157516","record_id":"<urn:uuid:71ce1f42-7deb-4356-8c06-633c8f88d965>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00689.warc.gz"}
Model Integer Subtraction Student Teacher Students use the Hot Air Balloon simulation to model integer subtraction. They then move to modeling subtraction on a number line. They use patterns in their work and their answers to write a rule for subtracting integers.Key ConceptsThis lesson introduces the number line model for subtracting integers. To subtract on a number line, start at 0. Move to the location of the first number (the minuend). Then, move in the negative direction (down or left) to subtract a positive integer or in the positive direction (up or right) to subtract a negative integer. In other words, to subtract a number, move in the opposite direction than you would if you were adding it.The Hot Air Balloon simulation can help students see why subtracting a number is the same as adding the opposite:Subtracting a positive number means removing heat from air, which causes the balloon to go down, in the negative direction.Subtracting a negative number means removing weight, which causes the balloon to go up, in the positive direction.The rule for integer subtraction (which extends to addition of rational numbers) is easiest to state in terms of addition: to subtract a number, add its opposite. For example, 5 – 2 = 5 + (–2) = 3 and 5 – (–2) = 5 + 2 = 7.Goals and Learning ObjectivesModel integer subtraction on a number line.Write a rule for subtracting integers. Numbers and Operations Middle School Grade 7 Material Type: Lesson Plan Date Added: Media Format:
{"url":"https://oercommons.org/courseware/lesson/2703","timestamp":"2024-11-11T07:27:09Z","content_type":"text/html","content_length":"67845","record_id":"<urn:uuid:498e48d8-8631-4230-8c81-5049f8a84fc5>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00745.warc.gz"}
Explaining variance We’re returning to our portfolio discussion after detours into topics on the put-write index and non-linear correlations. We’ll be investigating alternative methods to analyze, quantify, and mitigate risk, including risk-constrained optimization, a topic that figures large in factor research. The main idea is that there are certain risks one wants to bear and others one doesn’t. Do you want to be compensated for exposure to common risk factors or do you want to find and exploit unknown factors? And, perhaps most importantly, can you insulate your portfolio from unexpected risks? Generally, one will try to build a model that explains the return of an asset in terms of its risk factors. Presumably, this model will help to quantify: • The influence of a particular risk factor on the asset’s return. • The explanatory power of these risk factors. • The proportion of the asset’s variance due to identified risk factors. The model generally looks something like the following: \[r_i = a_i + \sum_{k=1}^{K}\beta_{i,k} F_k + \epsilon_i\] \(r_i\) = the return for asset \(i\) \(a_i\) = the intercept \(\beta_{i,k}\) = asset \(i\)’s exposure to factor \(k\) \(F_k\) = the return for factor \(k\) \(\epsilon_i\) = idiosyncratic risk of \(i\), noise term, or fudge factor.^1 The model can be extended to the portfolio level too. Risk factors can be as simple or arcane as you like. Common ones include CAPM’s \(\beta\) or Fama-French factors; economic variables; and/or technical or statistical metrics like moving averages or cointegration. The problem is that there is no a priori list of factors that describe the majority of returns for any broad class of assets. And even if there were, there’s no guarantee those factors would explain returns going forward. Indeed, factor weightings change all the time and the risk premia associated with some factors may erode or disappear. Just type “Is the value factor dead?” into a Google search and you’ll find plenty of debate for and against. While such debates might be fun, fruitful, or frivolous, let’s take a step back and think about what we hope our portfolio will achieve: a satisfactory trade-off between risk and return such that whatever return goal we have, there’s a high probability we accomplish it within the necessary time frame. Warren Buffett’s ideal holding period may be forever, but we’ll need the cash a lot sooner! Recall, when we constructed the naive, satisfactory, and mean-variance optimized portfolios in our previous posts, the standard deviation of returns (i.e., volatility) stood in for risk. Volatility as a proxy for risk begs a lot of questions. The most intuitive being that risk in the real world is not a statistic but the actual risk of losing something—capital, for instance. But the beauty of volatility is that it can quantify the probability of such a risk if one is willing to accept a bunch of simplifying assumptions. We’ll leave the question of whether those assumptions are too simple—that is, too unrealistic—for another time. If one of the biggest risks in portfolio construction is building a portfolio that doesn’t achieve the return goal it’s meant to achieve, how do we avoid such an event? Volatility can tell us roughly what’s the probability it might occur. But a risk factor model should, presumably, tell us what’s driving that risk and what’s not. Maybe even help us figure out which risks we should avoid. While it might seem obvious that the first thing to do is to identify the risks. We want to build a risk model with common risk factors first, so that we can understand the process before we start to get creative searching for meaningful factors. We’ll start by bringing back our data series of stocks, bonds, commodities (gold), and real estate and then also call in the classic Fama-French (F-F) three factor model along with momentum. We’re using F-F not because we believe those factors will feature a lot of explanatory power, but because they’re expedient and useful. Expedient because the data are readily available and many people are familiar with them, aiding the reproducible research goal of this blog. Useful because they’ll be a good way to set the groundwork for the proceeding posts. Our roadmap is the following. Graph the F-F factors, show the portfolio simulations for an initial 60-month (five-year) period beginning in 1987, analyze how much the factors explain asset variance, and then look at how much the factors explain portfolio variance. Let’s begin. First, we plot the F-F factors below. Note that we’re only covering the first five-years of monthly data that matches the original portfolio construction. For those unfamiliar with the factors, risk premium is the return on the stock market less the risk-free rate. SMB is the size factor; i.e. returns to small cap stocks less large caps. HML is the value factor; i.e., returns to high book-to-price (hence, low price-to-book multiples) stocks (value) less low book-to-price (growth). Momentum is returns to stocks showing positive returns in the last twevle months less those showing negative returns. If you want more details visit Prof. K. French’s data library for more details. Now we’ll simulate 30,000 portfolios that invest in two to four out of the four possible assets. Recall this simulation can approximate (hack!) an efficient frontier without going through the convex optimization steps. The red and purple markers are the maximum Sharpe ratio and minimum volatility portfolios. We assume the reader can figure out the maximum (efficient) return portfolio. Kinda pretty. Now we look at how well these factors explain the returns on each of the assets. Here, we regress each asset’s excess return (return less the risk-free rate) against the four factors and show the \(R^{2}\) for each regression in the graph below. Not surprisingly, stocks enjoy the highest \(R^{2}\) relative to the factors, since those factors are primarily derived from stock portfolios. Two notes here. First, we’re not trying to create the best factor model in this post; rather, establish the intuition behind what we’re doing. Second, for the stock explanatory power we exclude the F-F market risk premium since that is essentially the same as the stock return. Now let’s check out the factor sensitivities (or exposures, or beta) for each asset class. We graph the sensitivities below. For stocks, it’s not clear why the value factor has sucha a large negative size effect other than the observation that value underperformed size for that period. Momentum generally has the lowest impact on returns for most of the asset classes. But, the regression output suggests the momentum factor’s significance is not much different than zero. We won’t show the p-values here, but the interested reader will see how to extract them within the code presented below. Now we’ll calculate how much the factors explain a portfolio’s variance. The result is derived from the following formula based on matrix algebra: \[Var_{p} =X^{T}(BFB^T + S)X\] \(Var_{p}\) = the variance of the portfolio \(X\) = a column vector of weights \(B\) = a matrix of factor sensitivities \(F\) = the covariance matrix of factor returns \(S\) = the diagonal matrix of residual variance. In other words, the variance of the returns not explained by the factor model. Having calculated the variances, the question is what can this tell us about the portfolios? Time for some exploratory data analysis! First off, we might be interested to see if there’s any relationship between portfolio volatility and explained variance. However, even though a scatterplot of those two metrics creates a wonderfully fantastic graph, it reveals almost no information as shown below. Who knew finance could be so artistic! What if we group the volatilities into deciles and graph the average explained variance with an annotation for the average volatility of each decile? We show the results below. Note that we’ve shortened the y-axis to highlight the differences in explained variance. While portfolio volatility appears to increase monotonically with the amount of variance explained by the risk factor model, it’s not obvious that tells us much. There needn’t be a relationship between the two. Now, we’ll group the portfolios by major asset class weighting as well as include a grouping of relatively equal-weighted portfolios. Using these groupings we’ll calculate the average variance explained by the factors. We select portfolios for the asset groups if the particular asset in that portfolio makes up a greater than 50% weighting. Hence, all portfolios in the stock grouping have a weighting to stocks in excess of 50%. For the relatively equal-weighted portfolios, we include only those portfolios that feature weightings no greater than 30% for any of the assets. This total grouping only amounts to about half the portfolios, so we bucket the remainder into an eponymous group. We also calculate the average of the variance explained across all portfolios. We plot the bar chart below. Predictably, the variance explained by the risk factors is relatively high for stocks, less so for the other over-weighted asset portfolios. The remaining portfolios see at most 20% of their variance Finally, we’ll look at our original four portfolios (Satisfactory, Naive, Max Sharpe, and Max Return) to see how much of their variance is explained by the factor models. Recall, both the Satisfactory and Naive portfolios had less than 40% allocation to stocks, so it’s not suprising that much less than that is explained by a primarily equity risk factor model. The other \(R^{2}\)s are above 10% and the beta-weighted factor covariance matrix is positive^2, so overall explained variance is still higher than the stock weighthing in the portfolio times the explanatory power. There might also be some additional information captured by the factor returns beyond the stated exposure but that is beyond the scope of this post.^3 The low stock exposure probably explains why the risk model explains so little of the Max Sharpe portfolio’s variance. While the model just about hits the bullseye for the Max Return portfolio, as it’s almost 100% stocks and model’s \(R^{2}\) for stocks was just about the same amount! Where does this leave us? We’ve built a factor model that does an OK job explaining asset returns and a modestly better job explaining portfolio variance. Now that we’ve established the factor model process, we’ll look to see if we can identify factors that are actually good predictors of returns and variance. Note that the factor model we used here was entirely coincident with the asset returns. We want risk factors that predict future returns and variance. Until we find them, the code is below. A few administrative notes. First, we’ve made changes to the blog behind the scenes, including purchasing a domain name. The DNS configuration might still be a little buggy, but we hope that this will solve the problem we were having with subscription delivery. Thanks for bearing with us on that one. If you wish to subscribe, you may do so above in the right hand corner. If you do subscribe, but notice you’re not getting any updates, please email us at content at optionstocksmachines dot com and we’ll try to sort it out. Second, as much as we find providing the code to our posts in both R and Python worthwhile, they soak up a lot of time, which we have less and less of. Going forward, we’ll still try to provide both, but may, from time to time, only code in one language or the other. The tagging will indicate which it is. We’ll still provide the code below, of course. This post we cheated a bit since we coded in Python, but converted to R using the reticulate package. Third, an earlier version of this post had a coding error which altered some of the graphs. The overall content and conclusions were unchanged, but detailing the individual changes would have led to an exceedingly complicated post. We apologize for any confusion this may have caused. Fourth, you’ll find a rant here or there in the code below. Reticulate cannot seem to handle some of the flexibility of Python so we were getting a number of errors for code that we know works perfectly well in jupyter notebooks. If you know what went wrong, please let us know. # Built using R 4.0.3 and Python 3.8.3 ### R ## Load packages library(tidyquant) # Not really necessary, but force of habit library(tidyverse) # Not really necessary, but force of habit library(reticulate) # development version # Allow variables in one python chunk to be used by other chunks. knitr::knit_engines$set(python = reticulate::eng_python) ### Python from here on! # Load libraries import warnings import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib import matplotlib.pyplot as plt import os os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = 'C:/Users/usr/Anaconda3/Library/plugins/platforms' ## Load asset data df = pd.read_pickle('port_const.pkl') # Check out http://www.optionstocksmachines.com/ for how we pulled in the data. df.iloc[0,3] = 0.006 # Interpolation ## Load ff data ff_url = "http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/Developed_3_Factors_CSV.zip" col_names = ['date', 'mkt-rfr', 'smb', 'hml', 'rfr'] ff = pd.read_csv(ff_url, skiprows=6, header=0, names = col_names) ff = ff.iloc[:364,:] from pandas.tseries.offsets import MonthEnd ff['date'] = pd.to_datetime([str(x[:4]) + "/" + str.rstrip(x[4:]) for x in ff['date']], format = "%Y-%m") + MonthEnd(1) ff.iloc[:,1:] = ff.iloc[:,1:].apply(pd.to_numeric) momo_url = "http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/F-F_Momentum_Factor_CSV.zip" momo = pd.read_csv(momo_url, skiprows=13, header=0, names=['date', 'mom']) momo = momo.iloc[:1125,:] momo['date'] = pd.to_datetime([str(x[:4]) + "/" + str(x[4:]) for x in momo['date']], format = "%Y-%m") + MonthEnd(1) momo['mom'] = pd.to_numeric(momo['mom']) ff_mo = pd.merge(ff, momo, how = 'left', on='date') col_ord = [x for x in ff_mo.columns.to_list() if x not in ['rfr']] + ['rfr'] ff_mo = ff_mo.loc[:,col_ord] ff_mo = ff_mo[(ff_mo['date']>="1987-01-31") & (ff_mo['date']<="2019-12-31")].reset_index(drop=True) ## Plot ff ff_factors = ['Risk premium', 'SMB', 'HML', 'Momemtum'] fig, axes = plt.subplots(4,1, figsize=(10,8)) for idx, ax in enumerate(fig.axes): ax.plot(ff_mo.iloc[:60,0], ff_mo.iloc[:60,idx+1], linestyle = "dashed", color='blue') ax.set_title(ff_factors[idx], fontsize=10, loc='left') if idx % 2 != 0: ax.set_ylabel("Returns (%)") fig.tight_layout(pad = 0.5) ## Abbreviated Simulation function class Port_sim: import numpy as np import pandas as pd def calc_sim_lv(df, sims, cols): wts = np.zeros(((cols-1)*sims, cols)) for i in range(1,cols): for j in range(sims): a = np.random.uniform(0,1,(cols-i+1)) b = a/np.sum(a) c = np.random.choice(np.concatenate((b, np.zeros(i-1))),cols, replace=False) wts[count,:] = c mean_ret = df.mean() port_cov = df.cov() for i in range((cols-1)*sims): vols.append(np.sqrt(np.dot(np.dot(wts[i,:].T,port_cov), wts[i,:]))) port = np.c_[rets, vols] sharpe = port[:,0]/port[:,1]*np.sqrt(12) return port, wts, sharpe ## Simulate portfolios port1, wts1, sharpe1 = Port_sim.calc_sim_lv(df.iloc[1:60, 0:4], 10000,4) ## Plot simulated portfolios max_sharp1 = port1[np.argmax(sharpe1)] min_vol1 = port1[np.argmin(port1[:,1])] fig = plt.figure(figsize=(12,6)) ax = fig.add_subplot(1,1, 1) sim = ax.scatter(port1[:,1]*np.sqrt(12)*100, port1[:,0]*1200, marker='.', c=sharpe1, cmap='Blues') ax.scatter(max_sharp1[1]*np.sqrt(12)*100, max_sharp1[0]*1200,marker=(4,1,0),color='r',s=500) ax.set_title('Simulated portfolios', fontsize=20) ax.set_xlabel('Risk (%)') ax.set_ylabel('Return (%)') cbaxes = fig.add_axes([0.15, 0.6, 0.01, 0.2]) clb = fig.colorbar(sim, cax = cbaxes) clb.ax.set_title(label='Sharpe', fontsize=10) ## Calculate betas for asset classes X = sm.add_constant(ff_mo.iloc[:60,1:5]) rsq = [] for i in range(4): y = df.iloc[:60,i].values - ff_mo.loc[:59, 'rfr'].values mod = sm.OLS(y, X).fit().rsquared*100 asset_names = ['Stocks', 'Bonds', 'Gold', 'Real estate'] fact_plot = pd.DataFrame(zip(asset_names,rsq), columns = ['asset_names', 'rsq']) ## Plot betas ax = fact_plot['rsq'].plot(kind = "bar", color='blue', figsize=(12,6)) ax.set_xticklabels(asset_names, rotation=0) ax.set_title("$R^{2}$ for Fama-French Four Factor Model") ## Iterate through annotation for i in range(4): plt.annotate(str(round(rsq[i]))+'%', xy = (fact_plot.index[i]-0.05, rsq[i]+1)) ## Note: reticulate does not like plt.annotate() and throws errors left, right, and center if you ## don't ensure that the x ticks are numeric, which means you have to label the xticks separately ## through the axes setting. Very annoying! # Find factor exposures assets = df.iloc[:60,:4] betas = pd.DataFrame(index=assets.columns) error = pd.DataFrame(index=assets.index) # Create betas and error # Code derived from Quantopian X = sm.add_constant(ff_mo.iloc[:60,1:5]) for i in assets.columns: y = assets.loc[:,i].values - ff_mo.loc[:59,'rfr'].values result = sm.OLS(y, X).fit() betas.loc[i,"mkt_beta"] = result.params[1] betas.loc[i,"smb_beta"] = result.params[2] betas.loc[i,"hml_beta"] = result.params[3] betas.loc[i,'momo_beta'] = result.params[4] # We don't show the p-values in the post, but did promise to show how we coded it. pvalues.loc[i,"mkt_p"] = result.pvalues[1] pvalues.loc[i,"smb_p"] = result.pvalues[2] pvalues.loc[i,"hml_p"] = result.pvalues[3] pvalues.loc[i,'momo_p'] = result.pvalues[4] error.loc[:,i] = (y - X.dot(result.params)).values # Plot the betas (betas*100).plot(kind='bar', width = 0.75, color=['darkblue', 'blue', 'grey', 'darkgrey'], figsize=(12,6)) plt.legend(['Risk premium', 'SMB', 'HML', 'Momentum'], loc='upper right') plt.xticks([0,1,2,3], ['Stock', 'Bond', 'Gold', 'Real estate'], rotation=0) plt.ylabel(r'Factor $\beta$s ') # Create variance contribution function def factor_port_var(betas, factors, weights, error): B = np.array(betas) F = np.array(factors.cov()) S = np.diag(np.array(error.var())) factor_var = weights.dot(B.dot(F).dot(B.T)).dot(weights.T) specific_var = weights.dot(S).dot(weights.T) return factor_var, specific_var # Iterate variance calculation through portfolios facts = ff_mo.iloc[:60, 1:5] fact_var = [] spec_var = [] for i in range(len(wts1)): out = factor_port_var(betas, facts, wts1[i], error) vars = np.array([fact_var, spec_var]) ## Find max sharpe and min vol portfolio max_sharp_var = [exp_var[np.argmax(sharpe1)], port1[np.argmax(sharpe1)][1]] min_vol_var = [exp_var[np.argmin(port1[:,1])], port1[np.argmin(port1[:,1])][1]] ## Plot variance explained vs. volatility fig = plt.figure(figsize=(12,6)) ax = fig.add_subplot(1,1, 1) sim = ax.scatter(port1[:,1]*np.sqrt(12)*100, exp_var, marker='.', c=sharpe1, cmap='Blues') ax.scatter(max_sharp_var[1]*np.sqrt(12)*100, max_sharp_var[0],marker=(4,1,0),color='r',s=500) ax.set_title('Portfolio variance due to risk factors vs. portfolio volatility ', fontsize=20) ax.set_xlabel('Portfolio Volatility (%)') ax.set_ylabel('Risk factor variance contribution (%)') cbaxes = fig.add_axes([0.15, 0.6, 0.01, 0.2]) clb = fig.colorbar(sim, cax = cbaxes) clb.ax.set_title(label='Sharpe', fontsize=10) ## Create ranking data frame rank = pd.DataFrame(zip(port1[:,1], exp_var), columns=['vol', 'exp_var']) rank = rank.sort_values('vol') rank['decile'] = pd.qcut(rank['vol'], 10, labels = False) vol_rank = rank.groupby('decile')[['vol','exp_var']].mean() vols = (vol_rank['vol'] * np.sqrt(12)*100).values ## Plot explained variance vs. ranking ax = vol_rank['exp_var'].plot(kind='bar', color='blue', figsize=(12,6)) ax.set_xticklabels([x for x in np.arange(1,11)], rotation=0) ax.set_ylabel('Risk factor explained variance (%)') ax.set_title('Variance explained by risk factor grouped by volatility decile\nwith average volatility by bar') for i in range(10): plt.annotate(str(round(vols[i],1))+'%', xy = (vol_rank.index[i]-0.2, vol_rank['exp_var'][i]+1)) ## Show grouping of portfolios ## Note we could not get this to work within reticulate, so simply saved the graph as a png. ## This did work in jupyter, however. wt_df = pd.DataFrame(wts1, columns = assets.columns) indices = [] for asset in assets.columns: idx = np.array(wt_df[wt_df[asset] > 0.5].index) eq_wt = [] for i, row in wt_df.iterrows(): if row.max() < 0.3: exp_var_asset = [] for i in range(4): out = np.mean(exp_var[indices[i]]) mask = np.concatenate((np.concatenate(indices), np.array(eq_wt))) asset_names = ['Stocks', 'Bonds', 'Gold', 'Real estate'] plt.bar(['All'] + asset_names + ['Equal', 'Remainder'], exp_var_asset, color = "blue") for i in range(len(exp_var_asset)): plt.annotate(str(round(exp_var_asset[i])) + '%', xy = (i-0.05, exp_var_asset[i]+1)) plt.title('Portfolio variance explained by factor model for asset and equal-weighted models') plt.ylabel('Variance explained (%)') # This is the error we'd get every time we ran the code in blogdown. # Error in py_call_impl(callable, dots$args, dots$keywords) : # TypeError: only integer scalar arrays can be converted to a scalar index # Detailed traceback: # File "<string>", line 2, in <module> # Calls: local ... py_capture_output -> force -> <Anonymous> -> py_call_impl # Execution halted # Error in render_page(f) : # Failed to render 'content/post/2020-12-01-port-20/index.Rmd' ## Instantiate original four portfolio weights satis_wt = np.array([0.32, 0.4, 0.2, 0.08]) equal_wt = np.repeat(0.25,4) max_sharp_wt = wts1[np.argmax(sharpe1)] max_ret_wt = wts1[pd.DataFrame(np.c_[port1,sharpe1], columns = ['ret', 'risk', 'sharpe']).sort_values(['ret', 'sharpe'], ascending=False).index[0]] ## Loop through weights to calculate explained variance wt_list = [satis_wt, equal_wt, max_sharp_wt, max_ret_wt] for wt in wt_list: out = factor_port_var(betas, facts, wt, error) port_exp.append(out[0]/(out[0] + out[1])) port_exp = np.array(port_exp) ## Graph portfolio ## We didn't even bother trying to make this work in blogdown and just saved direct to a png. port_names = ['Satisfactory', 'Naive', 'Max Sharpe', 'Max Return'] plt.bar(port_names, port_exp*100, color='blue') for i in range(4): plt.annotate(str(round(port_exp[i]*100)) + '%', xy = (i-0.05, port_exp[i]*100+0.5)) plt.title('Original four portfolios variance explained by factor models') plt.ylabel('Variance explained (%)') 1. You choose!↩︎ 2. Never thought I would be writing such dense phrase!↩︎ 3. Not to criticize Fama-French, but the portfolio sorts that generate the various factors may not perfectly isolate the exposure their trying to capture. That ‘unknown’ information could be driving some of the explanatory power. Remember this ‘unknown’ if and when we tackle principal component analysis in a later post.↩︎
{"url":"https://osm.netlify.app/post/2020-12-01-port-20/explaining-variance/","timestamp":"2024-11-14T03:15:46Z","content_type":"text/html","content_length":"40073","record_id":"<urn:uuid:1c64a78b-15e0-402d-957b-777231a6c5d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00535.warc.gz"}
Lesson 16 Interpret Measurement Data Warm-up: Number Talk: Addition within 50 (10 minutes) The purpose of this Number Talk is to activate students’ previous experiences with addition methods involving composing a ten. In previous lessons, students used their knowledge of making a ten to find sums within 20 and used methods based on place value to compose a ten when adding within 100. This string is designed to encourage these methods. • Display one expression. • “Give me a signal when you have an answer and can explain how you got it.” • 1 minute: quiet think time • Record answers and strategy. • Keep expressions and work displayed. • Repeat with each expression. Student Facing Find the value of each expression mentally. • \(15 + 5 + 1\) • \(25 + 6\) • \(16 + 7\) • \(37 + 6\) Activity Synthesis • "How do you think the third expression could help with finding the value of the fourth one?" (In the ones place there was a 6 and 7, so I knew that \(6 + 4 = 10\) and there were 3 left over. In the fourth expression the ones were switched, but it was still 10 with 3 left over.) Activity 1: The Plant Project (20 minutes) The purpose of this activity is for students to create a line plot from data presented in a table. The table includes data with longer lengths and a greater difference between the shortest and longest lengths than the data used in previous lessons. Students make decisions about how to label the number line using what they have learned about the structure of line plots and how to represent and label measurement data. The synthesis discussion focuses on sharing and comparing the strategies students used to create their line plots, focusing on how they chose which numbers to use on their line plots (MP3). Representation: Internalize Comprehension. Activate or supply background knowledge. Provide either a blank line plot on grid paper or a copy of a previously created line plot for students to use as a reference. The components of the line plot can be reviewed once again before getting to work. Supports accessibility for: Organization, Memory • Groups of 2 • Give each student the Line Plot Template. • Display the table showing plant heights. • “Second grade students were growing plants in science class. They each measured the height of their plants. Height tells us the length of the plant from the soil to the top of the stem.” • “Here is how they represented their data.” • 30 seconds: quiet think time • “Your job is to create a line plot to represent this data. Think about how you want to label the tick marks with numbers. Be sure to include a title, label your units of measure, and think about how you are drawing your Xs so others can easily read your data.” • 8 minutes: independent work time • “Compare your line plot with your partner’s.” • 2 minutes: partner discussion • Monitor for students with clear and accurate line plots. Student Facing Use the data in this table to create a line plot. │Group B │plant heights (centimeters) │ │Andre │33 │ │Clare │25 │ │Diego │27 │ │Elena │25 │ │Han │35 │ │Jada │33 │ │Kiran │26 │ │Noah │30 │ │Priya │26 │ │Tyler │33 │ Activity Synthesis • Invite 2–3 students to display their line plots. • Consider asking each student: □ “How did you decide which numbers to start and end your line plot with?” (I looked at the table to find the shortest and longest lengths.) • “How are these line plots the same? How are they different?” • “What can you say about the height of their plants by looking at the line plot?” (There are 3 Xs over 33, so those plants all measured 33 cm. Only 1 plant was 27 cm.) • “How many students had a plant that measured more than 30 cm?” (4 because 3 had 33 and 1 had 35). Activity 2: Interpret Measurement Data on a Line Plot (15 minutes) The purpose of this activity is to interpret measurement data represented by line plots. Students use the line plots they created in the previous activity and another line plot about plant heights to answer questions. In the activity synthesis, students share how they found the difference between two lengths using the line plot and discuss how the structure of the line plot helps to show differences (MP7). MLR8 Discussion Supports. Activity: Display sentence frames to support small group discussion: “I agree because. . . .” and “I disagree because. . . .” Listen for the appropriate use of comparative words such as shortest and tallest. Advances: Speaking, Conversing • “Now you are going to use the line plots you have created to answer questions about the measurement data.” • “You will also answer a few questions based on a line plot created by Han.” • 8 minutes: independent work time • Monitor for students who notice they can count the length units on the line plot to find the difference between the tallest and shortest plant. • “Check your answers with your partner and share what you learned about Han's line plot.” • 3 minutes: partner discussion • Monitor for a variety of student statements about Han's line plot to share in the lesson synthesis. Student Facing The Plant Project Answer the questions based on your line plot. 1. What was the shortest plant height? 2. What was the tallest plant height? 3. What is the difference between the height of the tallest plant and the shortest plant? Write an equation to show how you know. Answer the questions based on Han’s line plot. 4. Han looked at this line plot and said that the tallest plant was 29 centimeters. Do you agree with him? Why or why not? 5. How many plants were measured in all? 6. Write a statement based on Han’s line plot. Activity Synthesis • Invite 1–2 students to share how they found the difference between the height of the tallest and shortest plants on their line plot. • “How does the line plot help you see differences in the measurements that are collected?” (Each tick mark is the same length apart. You can count the distance between each. You can see if there’s a big or small difference between the measurements by how they are spread out.) Lesson Synthesis “Today you created line plots to represent measurement data about plant heights, answered questions about the data, and shared statements based on what you learned from the line plots.” Display Han’s line plot: Invite previously selected students to share their statements based on Han's line plot. If time permits, share additional responses. Cool-down: Diego’s Art Project (5 minutes) Student Facing In this section of the unit, we learned about a new kind of graph. A line plot is a way to show how many of each measurement using an x for each measurement. The line and the numbers on it represent the units used to measure. Line plots for length look like a ruler or parts of a tape measure. We made our own line plots and used them to answer questions about the data represented. From this line plot, we learn that 4 teachers have a handspan of 8 inches because there are 4 Xs above the 8.
{"url":"https://curriculum.illustrativemathematics.org/k5/teachers/grade-2/unit-3/lesson-16/lesson.html","timestamp":"2024-11-07T23:58:11Z","content_type":"text/html","content_length":"94406","record_id":"<urn:uuid:efab7b5c-3a0c-48fd-bfa1-8c8a874f1773>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00597.warc.gz"}
Differentiation and integration – A Level Further Mathematics OCR Revision – Study Rocket Differentiation and integration Differentiation and integration Differentiation of Hyperbolic Functions • The derivative of sinh(x) is cosh(x). • The derivative of cosh(x) is sinh(x). • The derivative of tanh(x) is sech^2(x) where sech(x) is the hyperbolic secant defined as 1/cosh(x). • Chain rule from calculus applies to all hyperbolic functions. For instance, the derivative of the composite function where f is a function of g is given by f’(g)*g’. Integration of Hyperbolic Functions • The integral of sinh(x) with respect to x is cosh(x) + C, where C is the constant of integration. • The integral of cosh(x) with respect to x is sinh(x) + C. • The integral of tanh(x) with respect to x is ln cosh(x) + C. • Integration by substitution or parts is often useful when evaluating integrals involving hyperbolic functions. Other Important Properties Related to Differentiation and Integration • The second derivative of sinh(x) is again sinh(x), and the second derivative of cosh(x) is again cosh(x). In contrast to the cyclic nature of the second derivatives of sine and cosine, these show the even and odd function properties. • Hyperbolic identities can be used to simplify complex expressions before differentiation or integration. • The fundamental theorem of calculus applies to hyperbolic functions, meaning that the exact area under a curve (the definite integral) may be calculated using antiderivatives. Sample Problems for Differentiation and Integration • Differentiate the function f(x) = 3sinh(2x) + 5x^2cosh(3x). • Find the integral of the function g(x) = 4cosh(3x) - 1/(2)tanh^2(x). • Determine the integral of sinh(2x)cosh(x) dx by employing an appropriate substitution or a suitable identity. Remember, consistency in practice and problem-solving is essential for mastering the differentiation and integration of these functions, just as is the case with trigonometric functions.
{"url":"https://studyrocket.co.uk/revision/a-level-further-mathematics-ocr/hyperbolic-functions/differentiation-and-integration","timestamp":"2024-11-02T08:35:56Z","content_type":"text/html","content_length":"258966","record_id":"<urn:uuid:750f7340-0f4e-4110-b2f3-14ab34ba559f>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00028.warc.gz"}
How to plot a bar plot with bars and labels between tick marks | Timing Liu I had this quesition when preparing my manuscript and a quick search brings me to this stackoverflow question by Johanna. I find the answer by Henrick to be highly effective, but can be further elaborated so that readers can be clearer about the functions of each line. Thus, I will base my post largely on Henrick’s answer but at the same time add my explanation to the rationale behind the turn the following data’s plot into a bar chart with bars and labels between the tick marks. ## Warning: package 'ggplot2' was built under R version 4.0.5 ## Warning: package 'reshape2' was built under R version 4.0.5 data <- data.frame(name = c("X","Y","Z"), A = c(2,4,6), B = c(1,3,4), C = c(3,4,5)) data <- melt(data, id = 1) ## name variable value ## 1 X A 2 ## 2 Y A 4 ## 3 Z A 6 ## 4 X B 1 ## 5 Y B 3 ## 6 Z B 4 ## 7 X C 3 ## 8 Y C 4 ## 9 Z C 5 ggplot(data, aes(name,value)) + geom_bar(aes(fill = variable), position = "dodge", stat = "identity") Here is Henrick’s working answer. I choose to focus on the second version, but the principle to plot the two graphs is the same. To convert to the first version the only thing that needs to be tweeked is the number of tick marks. data$x <- as.integer(as.factor(data$name)) x_tick <- c(0, unique(data$x)) + 0.5 len <- length(x_tick) ggplot(data, aes(x = x, y = value, fill = variable)) + geom_col(position = "dodge") + scale_x_continuous(breaks = c(sort(unique(data$x)), x_tick), labels = c(sort(unique(data$name)), rep(c(""), len))) + theme(axis.ticks.x = element_line(color = c(rep(NA, len - 1), rep("black", len)))) Preliminary steps to prepare the data needed I have transferred some of Henrick’s code into tidyverse to make it self-explanatory. Some of the objects will be explained later. data$x <- as.integer(as.factor(data$name)) as.factor() converts the name of the elements of x-axis into unique levels and as.ineger() converts them into numbers. Thus, data$x is the numerical representation of the elements of the x-axis. Basically it uses different numbers to represent the different values on the x-axis in place of the categorical names. x_tick <- c(0, unique(data$x)) + 0.5 len <- length(x_tick) x_tick is the sequence from 0.5 to 0.5 + the maximum value of data$x i.e. the number of labels along the x axis. If x-axis is the number line, the position where bars and labels are placed should be the integer values and the tick marks are placed at x.5. len represents the number of tick marks. Step by step analysis of ggplot function # ggplot(data, aes(x = x, y = value, fill = variable)) + # geom_col(position = "dodge") + # scale_x_continuous(breaks = c(sort(unique(data$x)), x_tick), # labels = c(sort(unique(data$name)), rep(c(""), len))) + # theme(axis.ticks.x = element_line(color = c(rep(NA, len - 1), rep("black", len)))) The following part is self-explanatory and covered in standard textbook like R4DS. ggplot(data, aes(x = x, y = value, fill = variable)) + geom_col(position = "dodge") First part of scale_x_continuous code: # scale_x_continuous(breaks = c(sort(unique(data$x)), x_tick), ...) data$x has been explained above. unique() generates the unique values of data$x. sort() will sort the unique values in ascending order. c(sort(unique(data$x)), x_tick) What c() does here is just to combine the x_tick and sort(unique(data$x)). This creates the all the x-axis tick marks. However, not all tick marks will be shown because of the colour setting in theme () setting later. Second part of scale_x_continuous code: # scale_x_continuous(..., # labels = c(sort(unique(data$name)), rep(c(""), len))) ## [1] "X" "Y" "Z" "X" "Y" "Z" "X" "Y" "Z" data$name are the labels that will be placed at the integer values of the number line. unique(data$name) will output the unique values (i.e. levels) of the labels. as.character() turns them from levels, whose types are integers, to characters. sort() will sort them in numerical order so that the labels corresponds to the breaks set in the previous line of the code. It does not produce any effect in this demo code because the charcaters are already sorted in alphabetical order. rep(c(""), len) ## [1] "" "" "" "" len was created earlier to be the number of tick marks. We want the labels at the tick marks to be nothing so we use "". rep() creates the first argument ("") for len times. scale_x_continuous put together c(sort(unique(data$x)), x_tick) ## [1] 1.0 2.0 3.0 0.5 1.5 2.5 3.5 c(sort(as.character(unique(data$name))), rep(c(""), len)) ## [1] "X" "Y" "Z" "" "" "" "" So these are the full set of x tick marks location and their corresponding x labels aligned vertically. We have the labels on the integer values of the number line and “” on the x.5 values of the number line. The graph we generate so far looks like this: ggplot(data, aes(x = x, y = value, fill = variable)) + geom_col(position = "dodge") + scale_x_continuous(breaks = c(sort(unique(data$x)), x_tick), labels = c(sort(as.character(unique(data$name))), rep(c(""), len))) Remove tick marks above our labels: # theme(axis.ticks.x = element_line(color = c(rep(NA, len - 1), rep("black", len)))) c(rep(NA, len - 1), rep("black", len)) ## [1] NA NA NA "black" "black" "black" "black" axis.ticks.x sets the options for x-axis tick marks. element_line is the only option for axis.ticks.x. Thus, these are the three layers of the number line we have got: # the location on the number line c(sort(unique(data$x)), x_tick) ## [1] 1.0 2.0 3.0 0.5 1.5 2.5 3.5 # the label on the number line c(sort(as.character(unique(data$name))), rep(c(""), len)) ## [1] "X" "Y" "Z" "" "" "" "" # the colour of the tick marks c(rep(NA, len - 1), rep("black", len)) ## [1] NA NA NA "black" "black" "black" "black" More discussions So let’s say now I only want to keep the labels on the odd-number labels on the number line. This may not be so applicable in this case but it can help to reduce the crowdedness of the labels on an x-axis with continuous numerical labels. How can I do that? The only thing I need to do is to set “Y” (or rather, all the even-number labels) to be “” for the row of the label on the number line. I can use a for loop to do so. Certainly I can use a look-up table as vectorised computation to improve efficiency. But it seems to me that for the small number of elements in x-axis, the performance improvement is negligible. # first store what has been used as the x-labels in a new variable, labels label <- sort(as.character(unique(data$name))) even_num <- seq(2,length(label),2) for (i in even_num) { label[i] <- "" ## [1] "X" "" "Z" Now I will plot the graph again, with sort(as.character(unique(data$name))) substituted as label ggplot(data, aes(x = x, y = value, fill = variable)) + geom_col(position = "dodge") + scale_x_continuous(breaks = c(sort(unique(data$x)), x_tick), labels = c(label, rep(c(""), len))) + theme(axis.ticks.x = element_line(color = c(rep(NA, len - 1), rep("black", len)))) Great. 😄 Reflection: I think the most important lesson from this exercise is not how to plot a more customised bar plot, nor how to understand the different layers of ggplot. Rather, I appreciate this procedural approach that enable us to understand the functionalities of the code.
{"url":"https://timingliu.org/post/bar-plot-with-bars-and-labels-between-tick-marks/","timestamp":"2024-11-03T13:32:37Z","content_type":"text/html","content_length":"31787","record_id":"<urn:uuid:ba6bda4d-c299-42bc-90e0-bef31f0fe9b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00232.warc.gz"}
Howard Colin I have been trying to get my head around the application of matrices in colour space transforms. As a learning exercise I thought it would be simpler to loose a dimension and plot the results on a 2D graph. The code is not mine, but I have been tinkering with it to get different results. import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt xvals = np.linspace(-4, 4, 9) yvals = np.linspace(-3, 3, 7) xygrid = np.column_stack([[x,y] for x in xvals for y in yvals]) a = np.column_stack([[0, 1], [1, 0]]) uvgrid = np.dot(a, xygrid) Matrix: (0 1, 1 0) is a permutation matrix in which each row contains one entry of 1 and 0’s elsewhere. The effect of which changes the order of each element in the dataset. def colourizer(x, y): r = min(1, 1-y/3) g = min(1, 1+y/3) b = 1/4 + x/16 return (r, g, b) colours = list(map(colourizer, xygrid[0], xygrid[1])) plt.figure(figsize=(4,4), facecolor="w") plt.scatter(xygrid[0], xygrid[1], s=36, c=colours, edgecolor="none") plt.title=("Original grid in x-y space") Here is our starting grid of equal spaced points plt.figure(figsize=(4,4), facecolor="w") plt.scatter(uvgrid[0], uvgrid[1], s=36, c=colours, edgecolor="none") We apply out transformation matrix to the grid and re-plot the result. As you can see the permutation transformation has taken place. Application to colour The above example is a simplified demonstration of how colour transformation matrices work. These are often 3x3 matrices and are applied to the 3D colour cube axis XYZ each representing RGB. When the vector is applied the resulting 3D cube is scaled to meet the bounds of the transform. There is more to learn about this subject, I hope to be able to modify this code or use it as a starting point to develop my own version that visualizes 3D linear transformations. Another topic that I am also keen to study is interpolation and how it is applied in colour science, particularly it’s use in 3D look up tables. Obsidian Workflow I have overhauled my Obsidian setup for keeping track of my personal and work notes. I took a break away from this tool to play around with Craft, but I found that it didn’t offer the customisability that I wanted. I popped back to Obsidian for the first time in a while and discovered a wealth of new features and community plugins, some of which are insanely powerful. My current workflow revolves around Things3, DevonThink and Obsidian. This is still a work in progress and it changes all the time. I think that there are probably more fancy automations that are available between DevonThink and Obsidian, but I am yet to find time to discover them. My primary notes folder is stored in My Documents on my Mac, I use Syncthing to sync this to a central file server (more on this later) This is then pushed out to my Desktop Mac Mini. For iOS devices, it was a little tricky. I thought I could use the server to back up via rsync to the iCloud folder in iCloud Drive. This worked when it was run manually, though trying to schedule it via cron it kept failing with permissions errors. I got fed up and gave up, I will probably use obsidian sync for this. Initial capture For initial capture I am using things3 on iOS and OSX . I can create a new todo in my Notes project and jot down some rough ideas. This is even better now since the latest update allows the use of Vault Layout I have all new notes default to the inbox, where I can decide what I want to do with them after I have made them. My default content template is powered with the Templater plugin from the community plugins, see the example below. Copy and paste if you would like to use it. Also let me know if you make any changes you think I might like. tag: 🌱 date: <% tp.date.now("YYYY-MM-DD", 1) %> modification date: <% tp.file.last_modified_date("dddd Do MMMM YYYY HH:mm:ss") %> status: 🟥 🟧 🟩 date updated: '2021-08-15T14:01:28+01:00' ## <% tp.file.title %> - Source: - Author: ## Notes <!-- The main content of my thoughts really --> ### Links <!-- Links to definition pages --> For book notes I use the following template: tags: 📗 date: <% tp.date.now("YYYY-MM-DD", 1) %> status: 🟥 🟧 🟩 ## <% tp.file.title %> - Author: - ISBN: ## Key Ideas <!-- The main content of my thoughts really --> ## Further Lines of Inquiry <!-- What remains for you to consider? --> ## Quotes <!-- Notable quotes with reference to their page or location --> ## TODO - [ ] ## Resources I am also using the data view plugin to create an index view / overview of the status of the notes in my Notes folder. See the example below: tag: 📚 date: 2021-08-16 modification date: Sunday 15th August 2021 13:26:38 Reference [[🗂 Key]] ## Not Yet Started from "🌿 Notes" where status = "🟥" sort file.name asc ## In Progress from "🌿 Notes" where status = "🟧" sort file.name asc ## Tasks from "🌿 Notes" sort file.name asc This allows me to see lists of the notes related to their status and corresponding todo’s. I like this as I can go directly to this note and see on what areas I need to work quickly and simply. DevonThink is a dumping ground for any reading material that needs to be processed. I use the clipping tool to grab pages of the web and then read them in DevonThink. I was using Evernote, but the recent change in pricing structure means that if you are on the lower tier you get constant annoying pop-up’s asking you to upgrade!! Plugin’s I Am Using Quick List of the plugins I am using: • Dataview • Admonition • Advanced Tables • Kindle Highlights • Markdown Prettifier • Metatable • Natural Language Dates • Paste URL into selection • Templater Sigmoid Functions in Lattice I have been playing around with logistic functions in Lattice to create contrast curves in 1D/3D LUT’s. A logistic function is a common s-shaped curve. When applied to a log encoded image it simulates a contrast curve. The curve can be encoded into 1D or 3D LUT’s for application in video editing software and image processing. When the above function is applied in Lattice the following result is achieved: To learn more or play around yourself, find Lattice here: Yubikey for 2FA I recently got hold of a couple of Yubikeys as an additional security measure. These are small devices that are plugged into computers or phones/tables and act as a physical 2FA method, some models can even be purchased with NFC capability. So far the experience has been pretty good, most services I use have worked well with the security key and the process of using it is pretty simple. Plug in when prompted, touch the button and you’re good to go. The only account that I found tricky to work with was Microsoft 365 for business, for some reason I couldn’t find the security key settings in my account management panel. Having this system set up for a while now, I wondered if it would be possible to set up a little security key server on my local network. The server could be a raspberry pi, and the key would be plugged into its USB port permanently. The USB port would then be shared over the local network with USB over IP. This way all computers connected over the local network can share the same security key. As opposed to having to unplug and move it to and from each computer I am working on at the time. Think I will come back to this in the future to see if I can work something out, more to follow. Thoughts on Concept Orientated Learning The past few years have probably been quite strange for everyone. As a freelancer I have found a great deal of my work completely dry up. It took a long time for any government support to be announced for the self-employed and staff employed on short-term contracts. Whilst in lockdown, I spent time reading a great deal of books and studying things that interest me. I read books on mental models, productivity, learning. As well as books on computer science and Concepts are the Key I have discovered that the best way to learn anything is to drill down to the core concepts of the topic. When reading a text about something that you wish to learn, it is important that as a reader you are constantly asking questions about the text. As you hold these questions in your mind, it is also your responsibility as a reader to answer them. C. Van Doren & M. J. Adler Discuss how to be a demanding reader in their book How to Read a Book: The Classic Guide to Intelligent Reading. Whilst asking these questions, we also need to make the book our own. By writing notes, underlining and highlighting on the page. Digital books are great for this, I love reading papers and chapters on my iPad using GoodNotes to highlight and scribble down ideas. Connecting Concepts Once we have isolated the concepts of the topic that we are wanting to study, we can the write our notes. By writing your notes based around the concepts of the topic that you have read about. It frees you from the constraints of grouping your notes by book title or author. The benefit from this, is that your notes are much more open and inclusive to other ideas from different writers. As you study a topic you will extract another authors concepts and different ideas will interconnect. As your notes grow you will start to see a bigger picture of the subject you are studying. This does make note-taking and studying harder, but the rewards are greater. When writing our notes, we have to think deeply about how they fit with what we have already written. We are forced to look back over our writing and explore the connections. By making the connections in our notes, we are creating connections in our mind. The repetition involved when doing this helps to cement the ideas into our memory, and we feel like we are truly understanding a topic. By connecting the unexpected we are also planting the seeds to grow new ideas, that could expand our levels of understanding even further. Neural Networks for Colour Transformations 💻 A while ago; I explored the possibility of using neural networks to calculate colour transforms. My hypothesis was that a trained model would be able to apply a more accurate colour transform in a shorter amount of time compared to an equivalent 64x LUT. I started tinkering around with Python and Tensorflow and had some promising results, but quickly found that I lacked the time and the computing power to train the models properly. As a result of this, it was tricky to get results good enough that would have made it worth exploring the project further. Although using neural networks for colour transformations is probably impractical in professional applications. I am still keen to chip away at this as a personal research project when I have time. Croc File Sharing Utility 💻 Recently I came across a great tool for quickly sharing files across a network, croc. The tool works in the command line and allows two users to transfer data between two computers using a relay, data is encrypted using PAKE. What is really useful is transfers can be resumed if they are interrupted. I have been using this on a number of film production projects to quickly send pdfs and cdl’s with on-set colour decisions to our data management workstation which is operated off set. Generally the tool works well, and is much quicker than email or using a usb stick. However we have noticed that with larger transfers the utility does become less reliable, with transfers often freezing. We did get around this by using a self hosted docker image to host our own relay. The croc utility is easy to install with the following command: curl https://getcroc.schollz.com | bash Which will download the correct version for your system. On macOS you can install the latest release with HomeBrew brew install croc
{"url":"https://www.aksdigital.co.uk/","timestamp":"2024-11-08T05:54:06Z","content_type":"text/html","content_length":"21725","record_id":"<urn:uuid:d6d3052f-566b-498b-9cb9-e778b3bff1b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00107.warc.gz"}
Geometry Dash Purgatory Geometry Dash Purgatory is an entrancing user-level game from creator ItzMezzo. Players in the Geometry Dash community create levels and challenge other gamers. After that, these rounds will be ranked for difficulty so that later participants can easily evaluate and search. Purgatory has a 10-star Hard Demon difficulty. This is the mid-range level of the highest difficulty in this universe. That's enough to understand that you can only be a pro Dasher to beat this game. Explore the Detailed Gameplay Your task is to maneuver the cube to the end of the level without any collisions. Constantly changing obstacles can make you overwhelmed and dizzy. You can only grasp the character's transitions and typical terrain well enough to improvise promptly. Besides, pay attention to the jumping orbs and pads in the background for more precise jumps. Besides the main goal, players can collect 1 unique user coin. The coin in this level is quite easy because it is on the character's main path. After overcoming all the dangers, you control the wave to fly through the exact gap between the two platforms, get the coin, and reach the finish line. More Interesting Information The adventure map of Geometry Dash Purgatory is rated as long. The harshness in each part, combined with the length of the main track, is enough to make you collapse. Be persistent and diversify your playing style until you win!
{"url":"https://geometrydashrazorleaf.com/geometry-dash-purgatory","timestamp":"2024-11-07T10:26:42Z","content_type":"text/html","content_length":"94732","record_id":"<urn:uuid:3b86cb9c-4ba5-4945-be4a-174cdb79c255>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00021.warc.gz"}
Current Dividers and Current Division Circuits: A Comprehensive Guide – TheLinuxCode Current Dividers and Current Division Circuits: A Comprehensive Guide Current dividers are a fundamental concept in electronics and circuit analysis that every electrical engineer should fully understand. In this comprehensive guide, we will take a deep dive into current divider circuits, how they work, how to analyze them, and their role in circuit design. Whether you are a student learning electronics for the first time or a seasoned engineer looking to brush up on key concepts, this article aims to provide detailed and insightful information from a Linux expert‘s perspective. Read on to become a current divider circuit pro! Introduction to Current Dividers Let‘s start at the beginning – what exactly are current divider circuits? Current dividers are networks of parallel resistor branches across which total current from a voltage source divides into separate branch currents. The key characteristics of current dividers are: • Resistors connected in parallel • Total current splits into branch currents • Equal voltage across all branches • Unequal branch currents based on resistances Here is a basic example current divider circuit: As you can see, the total current IT coming from the voltage source VS splits into three branch currents IR1, IR2, and IR3 through the parallel resistors R1, R2, and R3. So why do we care about these types of circuits? What makes current dividers so important? For a few key reasons: 1. They allow engineers to design controlled current splitting in circuits. Being able to divide current across different paths is useful in many applications. 2. They are the parallel version of voltage dividers, which are fundamental building blocks in electronics. Understanding both is key. 3. Analyzing complex resistor networks relies on mastery of divider circuits. They form the basis of more advanced circuit analysis. 4. Dividing current while maintaining equal voltage can be advantageous in some designs over voltage division. In summary, current dividers are a foundational concept for working with parallel electronic circuits. Grasping how they function and how to analyze them provides invaluable skills. Let‘s dive Deriving the Current Divider Formula To really understand current dividers, we need to mathematically relate the branch currents to the total current and resistances. This relationship is known as the current divider formula. Let‘s go through the step-by-step derivation of this important formula: Consider this general current divider circuit with two branches. To start, we know: • The voltage VS is equal across R1 and R2 since they are in parallel • IT is the total current from the source • IR1 and IR2 are the branch currents So we can write: VS = IR1*R1 = IR2*R2 Solving the first equation for IR1: IR1 = VS/R1 And the second for IR2: IR2 = VS/R2 We also know that the total current IT must equal the sum of the branch currents: IT = IR1 + IR2 Substituting our expressions for IR1 and IR2: IT = VS/R1 + VS/R2 Re-arranging to solve for VS: VS = IT * (R1*R2)/(R1 + R2) Now substituting this VS back into IR1: IR1 = IT * (R2)/(R1 + R2) And IR2: IR2 = IT * (R1)/(R1 + R2) This provides us with the general current divider formula for two resistances! We can extend this approach to any number of parallel branches: • IRx is the branch current through resistor Rx • IT is the total current from the source • Rx is the resistance of branch x • R1, R2, …RN are the resistances of all branches So in summary, by applying some basic circuit analysis principles, we derived an equation for calculating individual branch currents based on the total current and resistor values. Very useful! Putting the Formula to Work Now that we have derived the current divider formula, let‘s look at how we can put it to work analyzing sample circuits. First, to use the formula, we need to know the total current IT. IT can be determined using Ohm‘s Law and the equivalent resistance Req of the parallel branches: Where VS is the source voltage. So the process is: 1. Calculate Req 2. Use Ohm‘s Law to find IT 3. Apply current divider formula to get branch currents IRx Let‘s go through an example circuit: • R1 = 2 kΩ • R2 = 6 kΩ • R3 = 4 kΩ • VS = 10 V First, calculate Req: Next, find IT using Ohm‘s Law: Now use the current divider formula to calculate each IRx: • IR1 = 2 mA • IR2 = 1 mA • IR3 = 1.5 mA And summing the branch currents: • IR1 + IR2 + IR3 = 2 mA + 1 mA + 1.5 mA = 4.5 mA = IT So by systematically applying the current divider formula, we can analyze the individual currents in any complex divider circuit. Comparing Voltage and Current Dividers It is useful to contrast current dividers with their series circuit counterparts – voltage dividers. The differences are important to understand. Current Dividers: • Parallel resistor branches • Total current divides into branch currents • Voltage is equal across all branches • Currents can differ through each branch Voltage Dividers: • Series resistor chain • Total voltage divides into voltage drops • Current remains same through all resistors • Voltage drops can differ across each resistor While the configurations are different, both types of dividers serve important roles in circuit analysis and electronics. But recognizing when to apply the current divider vs. voltage divider formulas is key. Leveraging Conductance Along with resistance, conductance is an important concept for current dividers. Conductance is defined as: Where G is conductance, R is resistance, and units are Siemens. For parallel resistors, conductances simply add like resistances in series: Compare to the parallel resistance formula: Using conductance can simplify the math when working with current dividers. The conductance-based current divider formula is: • Gx is the conductance of branch x • GT is the total parallel conductance Substituting Ohm‘s Law: Conductance provides an alternative approach to solving current dividers that is sometimes more convenient. Practical Applications Current dividers find application in many areas of electrical engineering and electronics: • Sensors: Dividing measured current across multiple sensing channels. • Signal processing: Splitting signals into different parallel filters or processing paths. • Power distribution: Dividing high currents into parallel loads. • Control systems: Directing actuator drive currents based on control requirements. • RF circuits: Dividing antenna currents into parallel tunedmatching networks. • Load sharing: Distributing current demand across parallel sources or connections. Here are a few examples of current dividers in action: • In strain gauge sensors, bridge completion resistors act as a current divider to split the input current for sensing small changes. • In RF design, current divider networks divide antenna currents into amplitude controllers for modulation. • In high power applications, parallel transistor branches share load current to avoid thermal overload through current division. • In battery packs, cells are often connected in parallel configurations that equally divide current with minimal wiring. As these examples illustrate, current dividing is useful for meeting many circuit requirements. Current Divider Variants While the basic structure is resistors in parallel, certain modifications to current dividers are common: • Digitally controlled: Using switches or MOSFETs to alter resistor paths and dynamically change current division ratios. • Wideband: Employing broadband matching techniques for dividing high frequency currents. • Active: Replacing resistors with active sources like op-amps to provide greater control over branching. • Mutual inductance: Using coupled inductors between branches to limit current imbalance in dividers. • Three-phase: Designing three-phase dividers for splitting balanced AC currents. So in summary, while the standard current divider formulas apply to basic resistor circuits, many advanced variants exist for specialized applications. Current Divider Integrated Circuits In modern electronics, current dividers are often implemented using integrated circuits rather than discrete resistors: • Linear shunt regulators: IC voltage references that function as programmable current dividers. • Digital potentiometers: ICs with digitally adjustable resistor banks usable as current dividers. • Specialty divider ICs: Application-specific integrated current dividers for RF, sensors, etc. IC current dividers offer benefits like: • Adjustable divide ratios without manually swapping resistors • Much higher precision than discrete resistors • Smaller form factors • Advanced performance features and accuracy By leveraging integrated dividers, circuits can achieve better performance with lower cost and size. Here are some examples of common current divider ICs: Part # Description LM334 Adjustable current source/divider AD5170 Digitally controlled potentiometer MAX9618 Precision current divider with sensing HMC990 Divide-by-2 RF/microwave divider INA138 Instrumentation amplifier with current divider output Current Divider Testing and Troubleshooting When working with current divider circuits, some key testing and troubleshooting tips include: • Check total current: Measure at the source to verify the total current IT. • Look for shorts: Use resistance measurements to check for accidental shorts altering pathways. • Monitor branch currents: Measure each IRx to see if currents are dividing as expected. • Perform voltage checks: Confirm equal voltages across each parallel branch. • Swap components: Switch resistors between branches to check if issue follows certain path. • Simplify circuit: Reduce to the minimum branches needed to isolate faults. • Review calculations: Double check resistance values used in analysis for errors. • Consider loading: Changes on one branch can affect others if loads not ideal. Thoroughly testing divide ratios and branch voltages/currents can help identify and resolve issues in malfunctioning current dividers. In summary, we have taken a deep dive into current divider networks, including: • Defining current dividers and discussing their significance • Deriving the mathematical current divider formula from basic principles • Demonstrating the formula used to analyze sample circuits • Contrasting current and voltage dividers • Introducing conductance for simplified calculations • Exploring practical applications across electronics • Reviewing special variations like digitally controlled dividers • Discussing integrated circuit implementations • Providing troubleshooting tips A strong grasp of these key concepts will serve any electrical engineer or student well when working with parallel circuit networks. I hope this detailed overview provides a solid foundation for tackling any problem involving current dividers! Please reach out with any questions.
{"url":"https://thelinuxcode.com/current-dividers-current-division-circuits/","timestamp":"2024-11-02T00:10:47Z","content_type":"text/html","content_length":"198903","record_id":"<urn:uuid:6a14dd44-8612-4cd4-80b8-c1b3372291b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00220.warc.gz"}
How do you find the electric field with charge density? Find the electric field a distance z above the midpoint of a straight line segment of length L that carries a uniform line charge density λ. dq=λdl. Then, we calculate the differential field created by two symmetrically placed pieces of the wire, using the symmetry of the setup to simplify the calculation (Figure 5.6. What is charge density electric field? In electromagnetism, charge density is the amount of electric charge per unit length, surface area, or volume. Volume charge density (symbolized by the Greek letter ρ) is the quantity of charge per unit volume, measured in the SI system in coulombs per cubic meter (C⋅m−3), at any point in a volume. How do you find the charge of an electric field? Since we know the electric field strength and the charge in the field, the force on that charge can be calculated using the definition of electric field E=Fq E = F q rearranged to F = qE. Is electric field proportional to charge density? Since electric charge is the source of electric field, the electric field at any point in space can be mathematically related to the charges present. The divergence of the electric field at a point in space is equal to the charge density divided by the permittivity of space. Can electric field exist without charge density? The an electric field can exist without a charge. BUT it cannot ORIGINATE without charge. EM waves comprise of electric and magnetic field in transit. The electric field here exist without the presence of any charge. How do you solve for electric field? In vector calculus notation, the electric field is given by the negative of the gradient of the electric potential, E = −grad V. This expression specifies how the electric field is calculated at a given point. Since the field is a vector, it has both a direction and magnitude. Where is the strongest electric field? The field is strongest where the lines are most closely spaced. The electric field lines converge toward charge 1 and away from 2, which means charge 1 is negative and charge 2 is positive. What is the equation for electrical field? Electric field calculation formula: E (Electric field) = F (Electric force) / Q (Electric charge) SI unit of electrical field is newtons per coulomb which equals to volt per meter. How to calculate charge density? Surface Charge Density Example First, measure the area. Measure the total area that has a charge. Next, measure the charge. Measure the total electrical charge acting over the area from step 1. Finally, calculate the surface charge density. Calculate the surface charge density by dividing the charge by total area. What is the unit for charge density? Charge density. In electromagnetism, charge density is a measure of electric charge is the amount of electric charge per unit length, surface area, or volume, called the linear, surface, or volume charge density, respectively. The respective SI units are C⋅m−1, C⋅m−2 or C⋅m−3. Like any density, charge density can depend on position. What is the equation for charge density? It measures the amount of electric charge per unit volume of space, in one, two or three dimensions. Charge density may depend on position, but it can be negative. The SI unit of Charge density is C. Charge density formula is given by, σ = q / A. Where, q = electric charge. A = area.
{"url":"https://www.cravencountryjamboree.com/most-popular/how-do-you-find-the-electric-field-with-charge-density/","timestamp":"2024-11-05T02:47:30Z","content_type":"text/html","content_length":"72049","record_id":"<urn:uuid:fca81ac4-cae6-4970-8080-6a71a17080c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00050.warc.gz"}
Ability to distinguish incompressible and compressible flows. Ability to understand the wave propagation phenomenon in subsonic, sonic and supersonic flows. ability to solve one dimensional compressible flow problems involving area change Ability to analyze converging nozzles, real nozzles and rocket engines Ability to solve one dimensional compressible flow problems involving stationary, moving and reflected shock waves. Ability to analyze de Laval nozzles, wind tunnels, jet engine inlets, real diffusers, supersonic Pitot tubes and shock tubes. Ability to solve one dimensional compressible flow problems with friction. Ability to analyze adiabatic ducts, which are fed by converging and de Laval nozzles. Ability to solve one dimensional isothermal flow problems. Ability to solve one dimensional compressible flow problems with heat transfer. Ability to solve two dimensional compressible flow problems involving oblique shock waves, Prandtl-Meyer expansion waves. Ability to analyze overexpansion and underexpansion flow regimes in de Laval nozzles, oblique shock diffusers and airfoils.
{"url":"https://odtusyllabus.metu.edu.tr/get.php?package=YG-Ma1m0YUObUgqMueANvZ01yEg-Trxljz6we59S2Kd4StBFufoqmz9vnlranmOe5Bzzlpwl8yWmiGnvMB1tjtmrzUpjP0dM25XBvKjNJmxGQrTPQrMmpGovQ9SDCvjOU2joy60hJMnq21rjKELziJ8vRtSg8mRacPVU-6H-YBYvmO8zU_yXzunvt5VkSErz0EqWQ_Po0vtrlta0zATvSQ","timestamp":"2024-11-13T11:48:29Z","content_type":"text/html","content_length":"3756","record_id":"<urn:uuid:58172548-09f6-4836-8286-ff09f0f0b329>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00109.warc.gz"}
CSC5101 – Advanced Programming of Multicore Architectures PAAM – Exam 2021-2022 Duration: 2 hours, any document allowed Course questions (6 points) NUMA (2 points) We assume an NUMA architecture consisting of four nodes A, B, C, and D. Each node contains 10 cores and is connected to a memory with a maximum throughput of 5Gb/s. We suppose 4 interconnect links: between A and B, B and C, C and D, and D and A. Each link has a maximum throughput of 1Gb/s. We assume an application consisting of 40 threads, in which each thread generates memory accesses with a constant rate of 0.1Gb/s. The application allocates and initializes the memory used during computation in the initial thread. In this setting, which policy will give the best performance, first-touch or interleaved? Justify. For each policy, you can represent the throughput generated on the different links on a figure. We suppose that memory access is uniformly distributed among the virtual address space. We have 40 threads that generate each a load of 0.1 GB/s. We have a total generated load of 4GB/s. • If we use an interleaved policy, we distribute equally the load of 4GB/s among each interconnect link, which means 1 GB/s per link, which do not saturate. The memory controllers, with their throughput of 5GB/s don't saturate either. The actual throughput is thus 4GB/s. • If we use a first touch policy, the maximal load possible generated by the application to node A (4GB/s) does not saturate the memory controller of node A. Since the maximum capacity of the links AB and AC is equal to 2GB/s, node A will only process 3GB/s: the 2GB/s received from the links AB and AC (from B, C and D), and 1GB/s generated by the node A itself (10 threads at 0.1GB/s). The total throughput of the application is thus 3GB/s, which is lower than the 4GB/s of the interleaved policy. Modèle mémoire (2 points) We consider four variables ok1, ok2, msg1 and msg2, all of which are initialized to 0. We also consider the following code in which reads and writes are atomic: Thread 1 a. msg1 = 42; b. ok1 = 1; Thread 2 c. while(ok1 == 0) { } d. msg2 = msg1; e. ok2 = 1 Thread 3 f. while(ok2 == 0) { } g. printf("%d %d\n", msg1, msg2); If we consider a relaxed memory model, what are all the possible displays? What are also the possible displays if the machine does not reorder the writes between them, nor the reads between them? With a machine that neither reorders reads between them and writes between them, if "<" means "executed before", we necessarily have a < b and d < e (a, b, d and e are writes). We also necessarily have f < g (f and g are reads). However, we can still have d < c and e < c because c is a read, and d and e are writes. For this reason, we can have: • "0 0" with d < e < f < g (then a < b < c) • "42 0" with d < a < e < f < g (then b < c) • "42 42" with a < d < e < f < g (then b < c) With the relaxed memory model, we can have the same output. Note that "0 42" is impossible: if msg2 = 42, then necessarily, msg1 = 42 because of line d. Persistent memory (2 points, more difficult) We assume that our computer has the latest eADR technology. This technology ensures that, in case of a failure, there is enough power to propagate the data from the caches into persistent memory. On such a machine, is it necessary to have the pwb and pfence instructions? Justify. pwb is not required anymore: any dirty cache line will eventually be propagated to memory in case of failure. We still need pfence because eADR is not related to out of order execution. The radix tree (14 points) In this exercise, we implement a radix tree. A radix tree associates keys with values. It mainly provides an interface with two functions: • void* insert(char* key, void* value): associates the key with the value. If there is already a value associated with the key, insert returns the old value associated with the key and replaces the old value with the new value. Otherwise, insert creates an association between key and value, and returns NULL, • void* get(char* key): returns the value associated with the key if it exists, and NULL otherwise. A node of a radix tree is represented by the following structure: struct node { void* value; struct node* nexts[256]; }; When searching for a key in the tree, we go down the tree letter by letter starting from the root. Technically, the root of the tree represents the empty key (string with 0 letters). The nexts field of the root is used to find the nodes associated with single letter strings. For example, if root is a pointer to the root of the tree, then root->nexts['a'] points to the node representing the string "a". When root->nexts['a']->value is not NULL, it gives the value associated to "a". Otherwise, it means that the key "a" is not in the tree. The nexts field of root->nexts["a"] is used to retrieve two-letter strings starting with the string "a". For example, root->nexts['a']->nexts['b'] points to the node representing the string "ab", and root->nexts['a']->nexts['b']->value, if not NULL, gives its value. Mono-threaded implementation In this first part, you have to implement the radix tree without considering concurrent accesses: you can consider that there is only a single thread in your process. Jumping one level in the tree (1 points) To start, write a function struct node* get_next(struct node** p). If *p is NULL, the function has to allocate a struct node, to store it in *p, and to return it. If *p is not NULL, this function has simply to return *p. The insert function (3 points) Implement the insert function. In this question, you have to use get_next to go down the tree and create the intermediate nodes. In details, you can use a struct node** cur to traverse the tree. At the beginning of insert, you can simply initialize it to &root, where root is a global variable of type struct node* pointing to the root of the radix tree. The get function (2 points) Multi-threaded implementation Lock-based synchronization (3 points) Write a main method that creates 10 threads and terminates once all 10 threads are finished. Each thread must insert 10 different words (threads can insert the same words 10 times) before retrieving them one after another by calling get. Don't forget to handle concurrent accesses. Lock-free synchronization (4 points, more difficult) Rewrite the get_next, insert and get methods in such a way that your code does not use locks anymore but remains correct with multiple threads. You may notice that, in get_next, if two threads create a node at the same time, the second writer has simply to use the node created by the first writer and to free its already allocated node. You will have to make sure that you read and write the variables potentially accessed by several threads with atomic_load and atomic_store. You will also have to use atomic_compare_exchange_strong. Transactional memory (1 point) Explain briefly the changes that you should make to the single-threaded code of insert and get (questions 2.b and 2.c) to make it correct using transactional memory.
{"url":"http://www-inf.telecom-sudparis.eu/COURS/chps/paam/?page=annales/2021-2022/cf1-en&soluce=true","timestamp":"2024-11-04T12:08:37Z","content_type":"text/html","content_length":"25824","record_id":"<urn:uuid:dd3fd2ae-7b62-4c78-b68a-0c229a40db7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00723.warc.gz"}
Light & Engineering 30 (4) Pages 25–30 • The effect of spectral composition of white light sources with variable chromaticity on objective and subjective evaluations of their colour rendering was studied. The dependences of the colour rendering indices on colour temperature were obtained for four light sources: a tungsten halogen lamp, white light emitting diodes with adjustable correlated colour temperature, fluorescent lamps of three colours, and three-colour light emitting diodes. The dependences of general colour rendering index (CRI) Ra and Smet’s memory colour rendering index (MCRI) Rm on the distance uv between a point corresponding to the studied radiation and the blackbody line in the 1960 CIE system (u, v) at constant correlated colour temperature 6500 K were also studied for an installation with three-colour light emitting diodes. The conditions, in which these indices reach their maximum values, were identified. • 1. CIE13.3–1995: Method of Measuring and Specifying Colour Rendering Properties of Light Sources. 2. ANSI/IES TM‑30–20–2019: IES Method for Evaluating Light Source Colour Rendition. 3. Davis, W., Ohno, Y. Colour quality scale // Opt. Eng, 2010, 49(3), 033602. 4. The International Lighting Vocabulary / Eds D.N. Lazarev, Moscow: Russkiy Yazyk, 1979, 278 p. 5. Lindsay, J. All about Colour [Vsyo o tsvete] / Trans. from Eng., science editor V. Babenko / Moscow: Knizhnyi klub 36.6, 2011, 427 p. 6. Sanders, C.L. Colour preferences for natural objects // Illum. Eng., 1959, Vol. 54, pp. 452–456. 7. Judd, D.B. A flattery index for artificial illuminants // Illum. Eng., 1967, Vol. 62, pp. 593–598. 8. Smet, K.A.G., Hanselaer, P. Memory and preferred colours and the colour rendition of white light sources // Lighting Res. Technol., 2015, 484, pp. 393–411. 9. Smet, K.A.G., Ryckaert, W.R., Pointer, M.R., Deconinck, G., Hanselaer, P. Colour appearance rating of familiar real objects // Colour Research and Application, 2011, Vol. 36, pp. 192–200. 10. Mangkuto, R.A., Revantino, E.A., Munir, F. Effect of Duv Variation on Preference of Lighting Qualities / Proc. The 4th Asia Conference of International Building Performance Simulation Association – “ASIM 2018” At: Hong Kong. [Electronic source]. URL: https://www.researchgate.net/publication/ 329735697_The_Effect_of_Duv_Variation_on_Preference_of_Lighting_Qualities (date of reference: 25.07.2021).
{"url":"https://l-e-journal.com/en/journals/light-engineering-30-4/","timestamp":"2024-11-10T12:20:28Z","content_type":"text/html","content_length":"295723","record_id":"<urn:uuid:0f8d45c3-cecd-44bb-92bd-dd1088e7201a>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00207.warc.gz"}
RAM Cholesky Decomposition Replied on Thu, 10/08/2020 - 07:59 The model expected covariance for RAM is F (I − A)^{−1} S (I − A)^{−T} F^T. Suppose S is set to the identity matrix. If you put a Cholesky factor in the (I − A)^{−1} term then it is multiplied with itself. See twoACEc.R at [Hermine's script collection](https://hermine-maes.squarespace.com/#/two/) Replied on Thu, 10/08/2020 - 10:43 I see, but if I set S to the I see, but if I set S to the identity, wouldn't that mean that the off-diagonals are zero including the covariances of the latent A's and C's and the diagonals including the error variances of the manifests are set to one? Replied on Thu, 10/08/2020 - 11:31 see example I'm not sure whether I follow. Here's some code to play with. File attachments Replied on Sat, 10/10/2020 - 04:59 Thanks for the code. Now I Thanks for the code. Now I see your point, but I still doubt what are the steps to come to the point where I have all the matrices necessary for the colesky style computation. In the concrete case of a bivariate cholesky model I have 4 observed and 12 latent variables, so 16 in total. That is, my A, S and I matrices have a shape of 16x16, the F matrix of 4x16. I suppose that I have to divide these original matrices to get into the starting position for a Cholesky decomposition with the RAM-matrices? Attached you find the script with the "normal" RAM matrices which may help following my question. The results I get seem valid. File attachments Replied on Sat, 10/10/2020 - 08:58 In reply to Thanks for the code. Now I by benruk Are you trying to re-express the model in mxPath notation? I don't think this is currently possible. You can only express the model in matrix algebra. Replied on Tue, 10/13/2020 - 10:32 No, I want to use the matrix No, I want to use the matrix algebra with mxExpectationNormal() using the RAM matrix approach (not mxExpectationRAM()) but I am not sure whether the script I posted above does properly address the Cholesky decomposition. Replied on Wed, 10/21/2020 - 06:07 Comparison RAM, LISREL, manual Now I wrote a script with LISREL matrices and calculated a bivariate model using the matrix approach within OpenMx. I compared the results with the model with RAM matrices and the "manual" matrix approach with an adapted script by [Hermine Maes](https://hermine-maes.squarespace.com/#/two/) (I adapted the starting values and switched from mxRun to mxTryHard). The models have the same fit and the coefficients and SEs are quite similar. All in all, it seems to me that using the RAM or LISREL matrices for the matrix approach seems to be ok, since the results are comparable to the scripts whith the "manual" matrices. But maybe I am missing something? File attachments
{"url":"https://openmx.ssri.psu.edu/forums/opensem-forums/behavioral-genetics-models/ram-cholesky-decomposition","timestamp":"2024-11-11T10:31:56Z","content_type":"text/html","content_length":"42810","record_id":"<urn:uuid:d0e21833-551b-488c-a8f1-745543d2cadb>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00052.warc.gz"}
Previous ... Next The $18$th letter of the Greek alphabet. Minuscules: $\sigma$ and $\varsigma$ Majuscule: $\Sigma$ The $\LaTeX$ code for \(\sigma\) is \sigma . The $\LaTeX$ code for \(\varsigma\) is \varsigma . The $\LaTeX$ code for \(\Sigma\) is \Sigma . Let $\EE$ be an experiment whose probability space is $\struct {\Omega, \Sigma, \Pr}$. The event space of $\EE$ is usually denoted $\Sigma$ (Greek capital sigma), and is the set of all outcomes of $\EE$ which are interesting. By definition, $\struct {\Omega, \Sigma}$ is a measurable space. Hence the event space $\Sigma$ is a sigma-algebra on $\Omega$. Let $\struct {S, +}$ be an algebraic structure where the operation $+$ is an operation derived from, or arising from, the addition operation on the natural numbers. Let $\tuple {a_1, a_2, \ldots, a_n} \in S^n$ be an ordered $n$-tuple in $S$. The composite is called the summation of $\tuple {a_1, a_2, \ldots, a_n}$, and is written: $\ds \sum_{j \mathop = 1}^n a_j = \tuple {a_1 + a_2 + \cdots + a_n}$ The $\LaTeX$ code for \(\ds \sum_{j \mathop = 1}^n a_j\) is \ds \sum_{j \mathop = 1}^n a_j . The $\LaTeX$ code for \(\ds \sum_{1 \mathop \le j \mathop \le n} a_j\) is \ds \sum_{1 \mathop \le j \mathop \le n} a_j . The $\LaTeX$ code for \(\ds \sum_{\map \Phi j} a_j\) is \ds \sum_{\map \Phi j} a_j . $\map {\sigma_\alpha} n$ Let $\alpha \in \Z_{\ge 0}$ be a non-negative integer. A divisor function is an arithmetic function of the form: $\ds \map {\sigma_\alpha} n = \sum_{m \mathop \divides n} m^\alpha$ where the summation is taken over all $m \le n$ such that $m$ divides $n$). The $\LaTeX$ code for \(\map {\sigma_\alpha} n\) is \map {\sigma_\alpha} n . $\map {\sigma_0} n$ Let $n$ be an integer such that $n \ge 1$. The divisor count function is defined on $n$ as being the total number of positive integer divisors of $n$. It is denoted on $\mathsf{Pr} \infty \mathsf{fWiki}$ as $\sigma_0$ (the Greek letter sigma). That is: $\ds \map {\sigma_0} n = \sum_{d \mathop \divides n} 1$ where $\ds \sum_{d \mathop \divides n}$ is the sum over all divisors of $n$. The $\LaTeX$ code for \(\map {\sigma_0} n\) is \map {\sigma_0} n . $\map {\sigma_1} n$ Let $n$ be an integer such that $n \ge 1$. The divisor sum function $\map {\sigma_1} n$ is defined on $n$ as being the sum of all the positive integer divisors of $n$. That is: $\ds \map {\sigma_1} n = \sum_{d \mathop \divides n} d$ where $\ds \sum_{d \mathop \divides n}$ is the sum over all divisors of $n$. The $\LaTeX$ code for \(\map {\sigma_1} n\) is \map {\sigma_1} n . Let $X$ be a random variable. Then the standard deviation of $X$, written $\sigma_X$ or $\sigma$, is defined as the principal square root of the variance of $X$: $\sigma_X := \sqrt {\var X}$ The $\LaTeX$ code for \(\sigma_X\) is \sigma_X . $\map \sigma {\mathbf r}$ Let $B$ be a body made out of an electrically conducting substance. Let $B$ be under the influence of an electric field $\mathbf E$ under which a surface charge is induced on $B$. Let $\delta S$ be an area element which is smaller than the scale used for a macroscopic electric field, but still large enough to contain many atoms on the surface of $B$. Let $P$ be a point in the vicinity of $\delta S$ whose position vector is $\mathbf r$. Let $\delta V$ be a volume element just thick enough to enclose the whole of the surface charge $\map \sigma {\mathbf r} \delta S$ associated with $\delta S$. The surface charge density is the charge density of the macroscopic electric field on the surface $P$, defined as: $\ds \map \sigma {\mathbf r} = \dfrac 1 {\delta S} \int_{\delta V} \map {\rho_{\text {atomic} } } {\mathbf r'} \rd \tau'$ $\d \tau'$ is an infinitesimal volume element $\mathbf r'$ is the position vector of $\d \tau'$ $\map {\rho_{\mathrm {atomic} } } {\mathbf r'}$ is the atomic charge density caused by the electric charges within the atoms that make up $B$. The $\LaTeX$ code for \(\map \sigma {\mathbf r}\) is \map \sigma {\mathbf r} . Sometimes used, although $\rho_A$ (Greek letter rho) is more common, to denote the area mass density of a given two-dimensional body: $\sigma = \dfrac m A$ $m$ is the body's mass $A$ is the body's area. The symbol for the Stefan-Boltzmann constant is $\sigma$. Its $\LaTeX$ code is \sigma . The symbol for Poisson's ratio is $\sigma$. Its $\LaTeX$ code is \sigma . Used to denote the property of countability. The $\LaTeX$ code for \(\sigma\) is \sigma . Previous ... Next
{"url":"https://proofwiki.org/wiki/Symbols:Sigma","timestamp":"2024-11-13T04:31:38Z","content_type":"text/html","content_length":"56913","record_id":"<urn:uuid:30c22c13-0590-4b9a-90be-62419b230363>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00400.warc.gz"}
The Stacks project Lemma 15.28.9. Let $R$ be a ring. Let $A_\bullet $ be a complex of $R$-modules. Let $f, g \in R$. Let $C(f)_\bullet $ be the cone of $f : A_\bullet \to A_\bullet $. Define similarly $C(g)_\bullet $ and $C(fg)_\bullet $. Then $C(fg)_\bullet $ is homotopy equivalent to the cone of a map \[ C(f)_\bullet [1] \longrightarrow C(g)_\bullet \] Comments (5) Comment #2773 by Darij Grinberg on "this this". Comment #2776 by Darij Grinberg on This said, I really wouldn't mind a more mundane proof by explicit description of the map and of the quasi-isomorphisms... I'm not sure if I can follow the proof above. Comment #2882 by Johan on Thanks for "this this". Also added a few more lines. See here. Comment #2960 by Darij Grinberg on This looks a lot clearer now, but I still can't find "the required homotopy" in the R -> R case... Comment #3086 by Johan on @#2960 Either you can prove the required homotopy does not exist by giving a counter example and then of course the argument is wrong or you can just write one down. I've now checked this two times by writing it out on paper and each time it is obvious what you have to do, so I am going to leave it as is. There are also: • 2 comment(s) on Section 15.28: The Koszul complex Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 062A. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 062A, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/062A","timestamp":"2024-11-09T00:31:23Z","content_type":"text/html","content_length":"19305","record_id":"<urn:uuid:395e64ee-29fb-47fe-b0c3-1554f2d3d5d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00432.warc.gz"}
Assignment Help Number Theory - Math Assignment Help Importance of Number Theory:- • Number theory is an important branch of mathematics. It is widely used in research field. • Number theory is very important in mathematics as well as in daily life including security, memory management, authentication, coding theory, etc. • Number theory is base of the algebra. In modern arithmetic also number theory is useful. • In cryptographic number theory is used. • In cryptographic number theory is used for security purpose. Using number theory we can hide information in cryptographic. Prime numbers, divisors and congruencies are used for this purpose. The congruencies are used for RSA public key cryptography and in Caesar ciphering key cryptography. Number theory and cryptographic together used in computer applications. . • Number theory is quite useful in studying continued fractions, Diophantine equations, • Fibonacci numbers and writing algorithms for computing the value of irrational number pi. • Some basic concepts of number theory such as Pythagorean triplet, Fermat numbers, Divisibility and lattice points are quite useful in algebra and geometry. • Number theory has large applications in understanding geometry. • Number theory guide us to study the information theory which in turn useful for AI development and many other areas of study. • Number theory gives us some interesting facts about the development of human thought and philosophy. • Number theory is quite important in studying the number phi, which is also called the Golden Ratio. • Phi is quite important in studying the Fibonacci numbers, study of nature, energy patterns seen in space. It is the most important number for researchers to understand how energy • Synthesis occurs in the universe from pre-atomic wave forms. • We all know about pi, the ratio of the circumference of a circle to its diameter is important part of number theory. • Number theory is used extensively in network security. • Number Theory is also used in code designs for Telecommunication theory. • Public and Private Key encryption is based on number theory. • Every theorem in elementary number theory used in natural way by computers to do High-speed numerical calculations. Number theory also used in numerical analysis. • We all know that whole numbers are important part of our daily life mathematics. Difficulties faced by a student while solving Number Theory problems Base of number theory is not clear at early stage of learning mathematics. Most of the students are not able to understand the base of the number theory at an early stage. They do not understand the depth of the number theory. It requires extra coaching which is not affordable for many students There are many concepts in number theory which are difficult to understand by an average student. In such cases they want to go for extra classes to understand the concepts clearly but the extra coaching is so expensive that generally many students are not able to afford it. It is time consuming Number theory requires lot of practice by a student to understand its concept clearly. Means number theory is one of the time consuming branches of mathematics. Many Concepts require lot of hard work and time from student side. Students are not aware of its applicability Many students are not aware of the applicability of number theory. They do not find it Useful in real life. They study number theory so that they can pass the exam. Generally students do not score as per their expectation Students generally find it difficult to understand number theory because of various theorems and new concepts. Therefore, generally students do not score as per their expectation. Incomplete solutions Generally, author does not give detailed solution of a problem by assuming that reader can solve it further. Problems require extra skill Some problems require use of calculator but generally in exam calculator is not allowed. Some problems demands high level of thinking which is sometimes not possible for an average student. Geometric proofs require in depth knowledge of almost every branch of mathematics. Thus, students get demoralize. Problems related to prime numbers Two things which always bother many students about the prime numbers is that there are Infinitely many prime numbers and they cannot be generated by a simple polynomial function. Therefore, understanding the prime numbers at elementary school level is not sufficient. Lack of craze and strategy knowledge Many students are reluctant to go into the depth of the subject. Though students have good knowledge but they are inconsistent in analyzing and computing the problem. Sometimes they make errors in reading the problem or taking wrong numbers. Hence, students struggle where basic computation and correct answers is demanded by the examiner. Few important topics cover under Number Theory • Arithmetic functions • Perfect numbers and Fermat numbers • Congruence • Quadratic residue Number Theory Assignment Help - Math Tutors Support 24x7 Tutors of Expertsminds are eager to help you to solve your number theory problems. Our writers will be pleased to give you support any time you have a query. You can contact our math writer's team at your convenient time. Our math tutors are available 24x7. You can contact us in one of the following way: Live Chat. Call us. Write an email. We offer number theory assignment help, solutions to number theory problems, math homework solutions and mathematics expert tutors support 24x7. Our math tutors are solving your number theory math problems within your deadline and it may help you to learn better number theory concept. They not only provide you solutions to problems but also give you better learning of math problems and it may help you to solve similar problems without help of any expert. How may we help you? Thanks for your interest in Expertsminds service. We are ready to help you. Please feel free to write us and share your problem clearly. Please submit the online form so that we can categorize your problem and solve it in faster way. Expertsminds offers Number Theory Assignment Help, Number Theory Assignment Writing Help, Number Theory Assignment Tutors, Number Theory Solutions, Number Theory Answers, Mathematics Assignment Experts Online. Few steps to get Number theory assignments done online • Ask question or submit requirement • Get quote and make payment • Work is allocated to math tutor • Delivered to your after completion • Clarifications are done till you are done! Why us for your Number Theory projects? Expertsminds has rich experience of helping students in solving their number theory problems. You will make a noticeable difference. Our expert math writers help you to submit your projects on time. They not only provide you answer but also give you clear concepts of solving similar problems. Our mathematics assignment help service is used by all grade levels of students, whether it is from K-12, college or engineering level. We have hired math experts for each grade level and they have years of experience in teaching or to solve complex or toughest math problems. Get all your math difficulties within short time. Few features of services are listed below • Step by step explanations of number theory problems • High standard quality solution • Solved by an expert only - No plagiarism • We keep confidentiality of your private information • On time delivery • Experienced and highly qualified math writers • 24x7 hours support till you are satisfied • More than 97% satisfied ratio • Affordable price - To cover maximum students in service area Popular Writing Services:- • Electronics Engineering get electronics engineering assignment help online, assignment writing service from engineering assignment experts. • Mergers and Acquisitions Looking for Mergers and Acquisitions assignment help, Mergers and Acquisitions assessments writing service, solutions to finance problems from live online tutor • Conceptual Finance Finance theory and concepts - Discuss capital budgeting techniques including: the Payback Rule, IRR, NPV, and the Profitability Index. • Overheads and Methods of Overheads overhead and methods of overheads tutorial, ask question on overhead and methods of overheads in accounting, get assignment help and homework help from tutors. • Lab Reports get lab reports assignment help online, lab reports writing service, lab report write-ups from academic writing assignment experts. • Essay Writing get custom essay writing services, essay assignment help online, cheap essay writing services from academic essay experts writers, • Mycology get mycology assignment help online, mycology biology assessments writing service from biology assignment experts. • Client Virtualization Desktop virtualization provides powerful solutions to many problems faced by IT in terms of cost, hardware, applications, upgrades, software conflicts etc.
{"url":"https://www.expertsminds.com/assignment-help/math/number-theory-432.html","timestamp":"2024-11-06T02:21:48Z","content_type":"text/html","content_length":"39107","record_id":"<urn:uuid:7ee9b02e-272d-4a37-886f-038845873181>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00638.warc.gz"}
Is y = 3x/2 a direct variation? | HIX Tutor Is # y = 3x/2# a direct variation? Answer 1 Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 Yes, ( y = \frac{3x}{2} ) represents a direct variation because it is in the form ( y = kx ), where ( k ) is a constant. In this case, the constant ( k ) is ( \frac{3}{2} ), indicating that ( y ) is directly proportional to ( x ). Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/is-y-3x-2-a-direct-variation-8f9af922e3","timestamp":"2024-11-03T18:56:55Z","content_type":"text/html","content_length":"565647","record_id":"<urn:uuid:b5948a66-ddba-4437-9f7b-3d501525ae0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00112.warc.gz"}
Reduced unitary matrix models and the hierarchy of τ-functions We study reductions of unitary one-matrix models. The unitary model admits an especially rich class of reductions of which the widely known symmetric model is only one. The partition function of a certain class of reduced models is shown to be a product of distinct Toda chain τ-functions. Virasoro constraints are also derived for the case of unitary models. It is claimed, in analogy with the reduced hermitian model, that in the continuous limit the Virasoro constraints must be imposed on a fractional power of the original partition function. ASJC Scopus subject areas • Nuclear and High Energy Physics Dive into the research topics of 'Reduced unitary matrix models and the hierarchy of τ-functions'. Together they form a unique fingerprint.
{"url":"https://experts.syr.edu/en/publications/reduced-unitary-matrix-models-and-the-hierarchy-of-%CF%84-functions","timestamp":"2024-11-14T00:45:27Z","content_type":"text/html","content_length":"48124","record_id":"<urn:uuid:77017995-9e6c-4081-9a59-1b0804eb9aa3>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00178.warc.gz"}
Wisemen.ai - AI-Powered Self-Learning Tutor & Curriculum Generator Chapter 3: Advanced Vedic Addition Strategies 3.1: Introduction to Advanced Vedic Addition Strategies In the previous chapters, we have explored the fundamental Vedic techniques for addition, equipping you with a solid foundation in this essential mathematical operation. However, as we venture into more complex numerical calculations, the need for specialized strategies becomes increasingly apparent. This sub-chapter will introduce you to the concept of advanced Vedic addition strategies, highlighting their importance and the principles that underlie them. The ancient Indian system of Vedic mathematics is renowned for its efficiency, elegance, and intuitive problem-solving approaches. When it comes to addition, the Vedic techniques go beyond the traditional column-based methods, offering a more streamlined and versatile way of performing calculations. As we delve into larger numbers, decimal-based operations, and intricate addition problems, the advantages of Vedic addition strategies become increasingly evident. The key principles that guide these advanced techniques include: 1. Digit Manipulation: Vedic mathematics emphasizes the strategic rearrangement and manipulation of digits to simplify addition problems. By identifying patterns and leveraging the inherent properties of numbers, Vedic methods enable us to perform calculations more efficiently. 2. Holistic Approach: Vedic addition strategies encourage a holistic perspective, where the entire number is considered as a single entity rather than a series of individual digits. This mindset shift allows for more intuitive and streamlined problem-solving. 3. Mental Arithmetic: A hallmark of Vedic mathematics is the emphasis on mental calculation. The advanced addition techniques equip you with the ability to perform complex additions in your head, without relying on written algorithms or calculators. 4. Adaptability: Vedic addition strategies are designed to be highly adaptable, allowing you to select the most appropriate technique based on the specific characteristics of the addition problem at hand. This versatility enhances your problem-solving skills and expands the range of challenges you can tackle. Throughout this chapter, you will explore a diverse array of advanced Vedic addition strategies, each tailored to address specific types of addition problems. By mastering these techniques, you will not only enhance your speed and accuracy in performing calculations but also develop a deeper understanding of the underlying principles that make Vedic mathematics such a powerful and transformative mathematical system. Key Takeaways: • Advanced Vedic addition strategies are essential for efficiently solving complex numerical calculations. • Vedic mathematics emphasizes digit manipulation, holistic approaches, mental arithmetic, and adaptability. • Mastering these advanced techniques will equip you with a versatile set of problem-solving skills. 3.2: Vertical and Crosswise Addition Techniques One of the hallmarks of advanced Vedic addition strategies is the vertical and crosswise addition technique. This powerful method allows for the efficient addition of large numbers by leveraging the inherent patterns and properties of digits. The vertical and crosswise addition technique consists of two main steps: 1. Vertical Addition: □ Arrange the numbers to be added in a vertical format, aligning the digits in their respective columns. □ Add the digits in each column, starting from the rightmost column and working your way towards the left. □ Keep track of any carry-over digits and incorporate them into the subsequent column additions. 2. Crosswise Addition: □ Identify the diagonal pairs of digits in the vertical arrangement. □ Add the diagonal pairs in a crosswise manner, following a specific pattern (e.g., first and last digits, second and second-to-last digits, and so on). □ Include any carry-over digits from the previous step in the crosswise additions. By combining the vertical and crosswise addition techniques, you can efficiently perform large number additions with remarkable speed and accuracy. The key advantages of this approach include: • Simplified Carry-over Management: The vertical addition helps manage carry-over digits systematically, reducing the cognitive load compared to traditional column-based addition. • Leveraging Digit Patterns: The crosswise addition exploits the inherent patterns and relationships between the digits, allowing for more intuitive and streamlined calculations. • Enhanced Mental Arithmetic: The vertical and crosswise techniques can be performed entirely in the mind, fostering the development of strong mental calculation skills. Let's consider an example to illustrate the application of this advanced Vedic addition strategy: + 3,458 1. Vertical Addition: □ Align the digits in a vertical format: + 3,458 □ Add the digits in each column, starting from the rightmost: + 3,458 2. Crosswise Addition: □ Identify the diagonal pairs of digits: + 3,458 □ Add the diagonal pairs in a crosswise manner: 8 + 4 = 12 9 + 5 = 14 7 + 3 = 10 6 + 8 = 14 □ Incorporate any carry-over digits: By applying the vertical and crosswise addition techniques, we have efficiently calculated the sum of 8,976 and 3,458, arriving at the final answer of 12,434. Key Takeaways: • The vertical and crosswise addition technique is a powerful Vedic method for efficiently adding large numbers. • The vertical addition helps manage carry-over digits, while the crosswise addition leverages digit patterns for streamlined calculations. • This approach can be performed entirely in the mind, enhancing mental arithmetic skills. 3.3: Shifting and Balancing Approach Another advanced Vedic addition strategy is the shifting and balancing approach. This technique simplifies the addition of large numbers by strategically rearranging and manipulating the digits to make the calculation more intuitive and efficient. The shifting and balancing approach involves the following steps: 1. Identify Shifting Patterns: □ Examine the numbers to be added and identify any repeating digit patterns or symmetries. □ Recognize opportunities to shift digits within the numbers to create more manageable addition problems. 2. Shift Digits Strategically: □ Shift the identified digits to different positions within the numbers, maintaining the overall value of the numbers. □ The goal is to create simpler addition problems that can be solved more efficiently. 3. Balance the Numbers: □ After shifting the digits, ensure that the numbers are balanced, meaning that the sum of the digits in corresponding columns is the same across the numbers. □ This balancing step further simplifies the addition process. 4. Perform the Addition: □ With the numbers now shifted and balanced, proceed to add the digits in a straightforward manner. □ The simplified structure of the numbers will make the addition calculations more intuitive and efficient. Let's consider an example to illustrate the shifting and balancing approach: + 2,918 1. Identify Shifting Patterns: □ We can observe that the digits "7" and "9" are in the same column. □ By shifting the "7" to the tens place and the "9" to the ones place, we can create a more manageable addition problem. 2. Shift Digits Strategically: □ Rearrange the numbers as follows: + 2,859 3. Balance the Numbers: □ Examine the corresponding columns: ☆ The sum of the digits in the thousands column is 5 + 2 = 7. ☆ The sum of the digits in the hundreds column is 8 + 8 = 16. ☆ The sum of the digits in the tens column is 4 + 5 = 9. ☆ The sum of the digits in the ones column is 7 + 9 = 16. □ The numbers are now balanced, with the sums of the corresponding columns being equal. 4. Perform the Addition: □ With the numbers shifted and balanced, the addition becomes straightforward: + 2,859 By applying the shifting and balancing approach, we have simplified the original addition problem and arrived at the final answer of 8,706 efficiently. The key advantages of the shifting and balancing approach include: • Simplifying Complex Calculations: By strategically rearranging the digits, you can transform complex addition problems into more manageable ones. • Leveraging Digit Patterns: The ability to identify and exploit repeating digit patterns is a crucial skill in this Vedic addition technique. • Enhanced Intuition: The balanced structure of the numbers makes the addition process more intuitive and easier to perform mentally. Key Takeaways: • The shifting and balancing approach simplifies large number additions by strategically rearranging the digits. • This technique involves identifying shifting patterns, shifting digits, balancing the numbers, and then performing the addition. • The shifting and balancing approach leverages digit patterns to create more manageable and intuitive addition problems. 3.4: Dealing with Decimal-based Addition The Vedic addition strategies covered so far have focused primarily on integer-based calculations. However, in the real world, we often encounter addition problems involving decimal numbers. In this sub-chapter, we will explore how to seamlessly apply Vedic addition techniques to decimal-based operations. When dealing with decimal-based addition, the underlying principles and approaches remain the same as those used for integer addition. The key is to integrate the decimal places into the Vedic strategies in a systematic manner. Let's consider the following example to illustrate the application of Vedic addition to decimal numbers: + 3.45 1. Align the Decimal Places: □ Arrange the numbers vertically, ensuring that the decimal points are aligned: + 3.45 2. Perform Vertical Addition: □ Add the digits in each column, starting from the rightmost: + 3.45 3. Apply Crosswise Addition: □ Identify the diagonal pairs of digits and add them in a crosswise manner: 8 + 4 = 12 7 + 5 = 12 .7 + .4 = 1.1 □ Incorporate any carry-over digits: By seamlessly integrating the decimal places into the vertical and crosswise addition techniques, we have efficiently calculated the sum of 8.76 and 3.45, arriving at the final answer of 12.21. It's important to note that the principles of Vedic addition, such as digit manipulation, holistic thinking, and mental arithmetic, can be equally applied to decimal-based problems. The key is to maintain a consistent approach and adapt the techniques to accommodate the decimal places. Additionally, Vedic mathematics offers specialized strategies for dealing with more complex decimal-based addition challenges, such as: • Additive Decomposition: Breaking down decimal numbers into more manageable parts to simplify the addition process. • Balancing Decimal Places: Strategically aligning and balancing the decimal places to streamline the calculations. • Mental Calculation Techniques: Developing the ability to perform decimal-based additions entirely in the mind. As you continue to explore and master the advanced Vedic addition strategies, you will find that they can be readily applied to a wide range of decimal-based problems, empowering you to tackle even the most complex addition challenges with confidence and efficiency. Key Takeaways: • Vedic addition strategies can be seamlessly applied to decimal-based operations by integrating the decimal places into the techniques. • Aligning the decimal points, performing vertical and crosswise addition, and leveraging specialized strategies like additive decomposition and balancing are key to handling decimal-based • Mastering Vedic addition for decimal numbers enhances your versatility in solving a wide range of real-world addition challenges. 3.5: Simplified Addition of Composite Numbers In the world of mathematics, certain numbers exhibit a unique property known as "composites." Composite numbers are integers greater than 1 that can be divided by other integers besides 1 and themselves. Mastering the addition of composite numbers is a crucial skill in Vedic mathematics, as it empowers you to tackle a wide range of complex numerical challenges. The Vedic approach to adding composite numbers involves breaking down these complex constructs into more manageable parts, performing the additions on these parts, and then recombining the results. This process leverages the inherent patterns and properties of composite numbers to simplify the overall calculation. Let's consider an example to illustrate the Vedic approach to adding composite numbers: + 36 1. Identify the Composite Nature of the Numbers: □ Both 27 and 36 are composite numbers, as they can be divided by integers other than 1 and themselves. 2. Decompose the Composite Numbers: □ 27 can be expressed as 3 × 9. □ 36 can be expressed as 4 × 9. 3. Perform Vedic Addition on the Decomposed Parts: □ Add the first parts (3 and 4): + 4 □ Add the second parts (9 and 9): + 9 4. Recombine the Results: □ Reconstruct the original composite numbers using the results from the previous step: 7 × 9 = 63 By breaking down the composite numbers, applying Vedic addition techniques to the parts, and then recombining the results, we have efficiently calculated the sum of 27 and 36, arriving at the final answer of 63. The key advantages of the Vedic approach to adding composite numbers include: • Simplified Calculations: Breaking down the composite numbers into more manageable parts makes the addition process more straightforward and less prone to errors. • Leveraging Inherent Patterns: The Vedic method exploits the inherent patterns and properties of composite numbers, allowing for more intuitive and efficient problem-solving. • Enhanced Versatility: Mastering the addition of composite numbers expands your mathematical capabilities, enabling you to tackle a wider range of complex numerical challenges. As you progress through this chapter, you will encounter more advanced techniques for adding composite numbers, such as additive decomposition and recombination (covered in the next sub-chapter). These strategies will further enhance your ability to efficiently solve complex addition problems involving composite numbers. Key Takeaways: • Composite numbers can be broken down into more manageable parts to simplify the addition process. • The Vedic approach involves decomposing the composite numbers, performing Vedic addition on the parts, and then recombining the results. • Mastering the addition of composite numbers through Vedic techniques enhances your versatility in solving a wide range of complex numerical challenges. 3.6: Additive Decomposition and Recombination Building upon the concepts covered in the previous sub-chapter, we now explore the Vedic technique of additive decomposition and recombination. This advanced strategy takes the simplification of composite number addition to an even higher level, enabling you to efficiently solve complex addition problems. The additive decomposition and recombination approach involves the following steps: 1. Identify Additive Decomposition Opportunities: □ Examine the numbers to be added and recognize opportunities to break them down into more manageable, additive parts. □ Look for patterns, symmetries, or inherent properties that can facilitate the decomposition process. 2. Perform Additive Decomposition: □ Strategically break down the numbers into smaller, additive components. □ Ensure that the decomposition maintains the overall value of the original numbers. 3. Apply Vedic Addition to the Decomposed Parts: □ Utilize the various Vedic addition techniques, such as vertical and crosswise addition, shifting and balancing, and mental arithmetic, to efficiently add the decomposed parts. 4. Recombine the Results: □ Reconstruct the original numbers by combining the results of the Vedic additions performed on the decomposed parts. □ Ensure that the final answer accurately reflects the sum of the original numbers. Let's consider an example to illustrate the additive decomposition and recombination approach: + 367 1. Identify Additive Decomposition Opportunities: □ We can observe that both 584 and 367 can be decomposed into their place value components (hundreds, tens, and ones). 2. Perform Additive Decomposition: □ 584 can be expressed as 500 + 80 + 4. □ 367 can be expressed as 300 + 60 + 7. 3. Apply Vedic Addition to the Decomposed Parts: □ Add the hundreds: 500 + 300 = 800 □ Add the tens: 80 + 60 = 140 □ Add the ones: 4 + 7 = 11
{"url":"https://wisemen.ai/app/courses/66151e42a5c445c475dcc14f/3","timestamp":"2024-11-10T18:47:54Z","content_type":"text/html","content_length":"75339","record_id":"<urn:uuid:02c27aed-cc76-447e-8741-3635b1a58bc9>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00518.warc.gz"}
MAT 497E - Tensor Analysis Course Objectives 1. To examine the general structure of spaces by using techniques of the tensor analysis, 2. To teach the techniques of the tensor calculus which have wide applications for the study of mathematics, mechanics, physics and engineering Course Description Transformation of coordinates, scalar invariants, covariant and contravariant vector fields. Covariant and contravariant tensor fields, symmetric and antisymmetric tensor fields, algebraic operations on tensors. Contraction, quotient rule, metric tensor, reciprocal tensor, Christoffel symbols, covariant derivative, gradient, divergence and rotational. Some applications to physics. Course Coordinator Fatma Özdemir Course Language
{"url":"https://ninova.itu.edu.tr/en/courses/faculty-of-science-and-letters/8540/mat-497e/","timestamp":"2024-11-12T18:56:51Z","content_type":"application/xhtml+xml","content_length":"7691","record_id":"<urn:uuid:66d400ae-784d-46c2-b2ec-2bec870c3ac4>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00520.warc.gz"}
A review on large k minimal spectral k-partitions and Pleijel's Theorem In this survey, we review the properties of minimal spectral $k$-partitions in the two-dimensional case and revisit their connections with Pleijel's Theorem. We focus on the large $k$ problem (and the hexagonal conjecture) in connection with two recent preprints by J. Bourgain and S. Steinerberger on the Pleijel Theorem. This leads us also to discuss some conjecture by I. Polterovich, in relation with square tilings. We also establish a Pleijel Theorem for Aharonov-Bohm Hamiltonians and deduce from it, via the magnetic characterization of the minimal partitions, some lower bound for the number of critical points of a minimal partition. arXiv e-prints Pub Date: September 2015 □ Mathematics - Spectral Theory; □ 35B05 Conference in honor of James Ralston's 70-th birthday
{"url":"https://ui.adsabs.harvard.edu/abs/2015arXiv150904501H/abstract","timestamp":"2024-11-06T16:02:37Z","content_type":"text/html","content_length":"36351","record_id":"<urn:uuid:f2ac8350-1672-4666-860f-b151f0b1b0b3>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00444.warc.gz"}
Swapping Turing Machine A Swapping Turing Machine is a computational model similar to a standard Turing machine, except that instead of being able to mark new symbols on the tape, it can only swap the position of existing A swapping Turing machine is a Finite state automaton that has access to an unbounded tape of symbols in a finite alphabet. This finite state automaton can read symbols, move the tape head, and swap the position of symbols using a swap register, which can contain either nothing or a tape position. The transition function of a swapping Turing machine has the current state and the current tape symbol as inputs, and the next state, swap action, and tape head movement as outputs. The swap action can be either N, i.e. "no swap", or S, i.e. "swap". The action N means to do nothing, and S that if the swap register is empty, the current tape head position is stored in it; otherwise, the symbol at the current tape head position is swapped with the symbol at the tape head position in the swap register, and the swap register is cleared. Since the number of non-blank symbols is always the same as it is when the machine starts, it is also necessary to define a minimum tape configuration for a swapping Turing machine to be able to do useful computation. Minsky machine → Swapping Turing machine conversion A swapping Turing machine is equivalent in computational power to a standard Turing machine, which can be shown by converting a Minsky machine to a swapping Turing machine. A Minsky machine is defined as having a fixed number N of unbounded registers being able to store a positive integer, and a finite state machine operating on those registers, with two operations (in addition of a halt state): • ⟨Inc, R, S′⟩ — Increment register R and change state to S′. • ⟨DecZ, R, S′[1], S′[2]⟩ — If register R is nonzero, decrement register R and change state to S′[1]; otherwise, change state to S′[2]. A Minsky machine with N registers can be converted to a swapping Turing machine, with the alphabet {0, 1}. The initial state of the tape is 2N 1's, with the tape head pointing to the leftmost such 1. As a shortcut, tape movements are able to move by any positive integer, indicated with a superscript. Each state S of the register machine is converted to swapping Turing machine states as follow: • ⟨Inc, R, S′⟩ □ ⟨S, 1⟩ → ⟨S[search], N, R^R+N⟩ □ ⟨S[search], 0⟩ → ⟨S[search], N, R^N⟩ □ ⟨S[search], 1⟩ → ⟨S[swap], S, R^N⟩ □ ⟨S[swap], 0⟩ → ⟨S[return], S, L^N⟩ □ ⟨S[return], 0⟩ → ⟨S[return], N, L^N⟩ □ ⟨S[return], 1⟩ → ⟨S′, N, L^R⟩ • ⟨DecZ, R, S′[1], S′[2]⟩ □ ⟨S, 1⟩ → ⟨S[check], N, R^R+N⟩ □ ⟨S[check], 0⟩ → ⟨S[search], N, R^N⟩ □ ⟨S[check], 1⟩ → ⟨S′[2], N, L^R+N⟩ □ ⟨S[search], 0⟩ → ⟨S[search], N, R^N⟩ □ ⟨S[search], 1⟩ → ⟨S[swap], S, L^N⟩ □ ⟨S[swap], 0⟩ → ⟨S[return], S, L^N⟩ □ ⟨S[return], 0⟩ → ⟨S[return], N, L^N⟩ □ ⟨S[return], 1⟩ → ⟨S′[1], N, L^R⟩ The way this conversion works is by representing each register as a pair of bits: a "start bit" and a "value bit", with the value of the register being the distance between the start bit and its corresponding value bit. Therefore, incrementing is just a matter of swapping the value bit further away, and decrementing of swapping it closer. The bits of the registers are interleaved, to allow for having more than one register in a single tape. It is also possible to convert a 2-register Minsky machine to a swapping Turing machine using only 3 1 symbols, by having the registers extend left and right instead of interleaving them. This shows that 3 non-blank symbols is all a swapping Turing machine needs for Turing-completeness. Actually doing the conversion is pretty simple and as such will be left as an exercise for the reader. See also
{"url":"https://esolangs.org/wiki/Swapping_Turing_Machine","timestamp":"2024-11-07T11:58:41Z","content_type":"text/html","content_length":"22755","record_id":"<urn:uuid:1d9f8357-62d6-4b4c-8e49-32298882a5d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00329.warc.gz"}
Konvolut The Personal Distribution of Income 1 Full text: Konvolut The Personal Distribution of Income 1 provided we ’«crow the distribution of wealth. But the distribution of wealth is known: It follows the Pareto law - over a fairly wide range - and its pattern has also been explained theoretically / i3/- Denoting wealth by of the wealth distribution p* - c W“*” 1 dtf Tf , let us for the density or putting w » ln W p( w) = c e ~ cLc** for w -^ > p(w) => 0 for w <. If Y denotes income and y= ln T the conditional density function of income can be represented in the form f*(y-w), the density of a certain return on wealth. Sven without knowing this f’unction we might manage to derive the distribution of income from that of wealth provided we can make certain assumptions about independence, ' we shall provisionally assume that the distribution of the rate ^of return is independent of the amount of wealth. In terms of random variables, if ^ and 'Xj denote income, wealth and the rate of return, we have If the random variables wealth and the rate of return are independent, their sum can be represented by a convolution of the corresponding density functions, and we shall in this way obtain the distribution of income. For the purposes of this calculation we shall replace the density f*(y-w) by the mirror function f(w-y) which is also independent of wealth. The two functions are symmetric and have the same value ( in fact, the only difference is in the dimension : TTnile the former refers to a rate of return per year, the reciprocal value refers to the number of years income contained in the wealth ). The calculation of the density of income q(y) proceeds then by mixing the function f(w-y) with the density, of wealth:
{"url":"https://viewer.wu.ac.at/viewer/fulltext/AC14446015/59/","timestamp":"2024-11-08T19:06:57Z","content_type":"application/xhtml+xml","content_length":"70769","record_id":"<urn:uuid:a34dfe97-2b6d-469b-88f4-a4ba4b1b9e30>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00373.warc.gz"}
When the sum of the forces is zero Forces and Motion When the sum of the forces is zero Teaching Guidance for 14-16 Said, and best left unsaid Teacher Tip: “We'd suggest using the phrase ‘the forces add to zero’ as a step on the way to saying ‘the resultant force is zero’, and definitely avoiding ‘the forces cancel out’.” There are two main reasons for suggesting this: • Adding the forces and finding that the resultant is zero is exactly what you do. • The process of “cancelling” might evoke memories of dealing with ratios or fractions. That's not appropriate for vectors – there are no good mathematical rules for performing these operations with vectors.
{"url":"https://spark.iop.org/when-sum-forces-zero","timestamp":"2024-11-04T10:29:08Z","content_type":"text/html","content_length":"45411","record_id":"<urn:uuid:50e8ad2a-5564-4863-8bb3-4f1a8b90c56d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00288.warc.gz"}
Difficult Puzzle Grid Solutions, Logic Puzzles, Solutions THE RACEHORSES- show / hide- The order of finish: 1.Giddy-up Thom-Grey. 2.Speedy- Midi-White. 3.Diamond- Paul- Silver. 4.Fanny- Shorty- Spotted. 5.Laughy- Matty- Brown. 6.Willow- Madge- Black. step-by-step:( show / hide) • This first clue "Matty's horse(Laughy) finished 5th and was not Black." Gives us a few good clues, beginning with Matty-Laughy-5th and ending with "....not black". Locate the following grid squares: [Laughy-5th, Laughy-Matty, Matty-5th] and highlight ALL with 'green boxes'. Next, make all appropriate grid eliminations in the Laughy row including: Laughy- 1,2,3,4, and 6. Laughy- Shorty,Thom, Madge, Midi, and Paul. Laughy- Black. Then the grid eliminations for Column 5: 5-Diamond, Fanny, Giddy up, Speedy, Willow, Black, Shorty,Thom, Madge, Midi, and Paul. Then make these additional eliminations: The row Matty-1,2,3,4, and 6, as well as the column, Matty-Diamond, Fanny, Giddy up, Speedy, Willow, Black. • This next clue "Paul's horse (Diamond) was not a winner." Locate the following grid squares: [Diamond-1, Paul-1] and make the eliminations for these grid squares. Next highlight (with a 'green box') the grid square [Diamond-Paul]. Then make the grid eliminations for Row Diamond: Diamond- Shorty,Thom, Madge,and Midi. Then make the grid eliminations for Column Paul: Paul- Fanny, Giddy up, Speedy, and Willow. • This clue "The horse named Fanny, was the one with the speckles or spots." Locate the following grid square: [Fanny-Spotted] and highlight with a 'green box'. Next, make all appropriate grid eliminations in the Fanny row including: Fanny - White, Brown, Black, Grey and Silver. Then the grid eliminations for Column Spotted: Spotted - Diamond, Giddy up, Laughy,Speedy, and Willow. **NOTE** Since neither Diamond (owned by Paul) or Laughy (owned by Matty) are the spotted horses, it follows that: neither Matty or Paul own the spotted horse , and the spotted horse (which is Fran, and which did not finish 5th), we can now make these eliminations: [Spotted-5, Spotted-Matty, Spotted-Paul ]. • The next clue "Giddy-up, Speedy, and Diamond finished, (in some order) in the top three as were the following owners Thom, Midi and Paul, whose horses had these characteristics: Grey, White, and ( also, in some order)." This is vital because it tells us three components of the three top finishers, namely: Horses names: Giddy-up, Speedy, and Diamond; as well as , Owners: Thom, Midi and Paul, and the Colors: Grey, White, and Silver. (Which allows us to make a multitude of logical conclusions/combinations). To begin with : Locate the rows for Giddy-up, Speedy, and Diamond, and make the eliminations: Diamond - 4, 6, (from which this elimination follows Paul - 4, 6). Giddy Up - 4, 6, Shorty, Madge, Brown, Black. Speedy - 4,6, Shorty, Madge, Brown, Black. Next, Locate the rows for Grey, White, and Silver, and make the eliminations: White - 4, 5, 6, Shorty, and Madge. . Grey - 4, 5, 6, Shorty, and Madge. Silver - 4, 5, 6, Shorty, and Madge. Finally locate these owner's rows: Thom, Midi and Paul (and make the eliminations): Thom - 4, 6. Midi - 4, 6. Paul - 4, 6. **NOTE** It is important that we now look at the bottom finishers(4th, 5th and 6th), that is these Horses names: Fanny, Laughy, and Willow; as well as these Owners: Shorty, Madge, and Matty. and the Colors: Spotted, Brown, and Black. (From these combinations we can make some additional eliminations) First locate the rows for the horses Fanny, Laughy, and Willow and make the eliminations Fanny - 1,2,3, Thom, Midi, Laughy - White, Grey, Silver. Willow - 1,2,3, Thom, Midi, White, Grey, Silver. Now locate the rows for the non-winning color combos Spotted, Brown, Black (and eliminate) Spotted - 1, 2, 3, Thom, Midi, Brown - 1, 2, 3, Thom, Midi,Paul. Black - 1, 2, 3, Thom, Midi, Paul. Finally locate the rows for the non-winning owners Shorty, Madge, and Matty and eliminate these combos: Shorty - 1, 2, and 3. Madge - 1, 2, and 3. **NOTE** Before we proceed to the next clue we can make some observations from the grid as follows: Locate the solution for grid square [Laughy-Brown], (which is the only possible solution remaining for Laughy's row), and leads to the eliminations in Brown's column : Brown - Diamond, Speedy, Willow. (Which leads to another solution Willow-Black), and further eliminations of the single grid square [Diamond-Black ],and yet two other solutions [Brown-5th, Brown-Matty ] ,which, in turn , allows more eliminations in the Brown row: Brown - 6, Shorty, and Madge , (Which allows us to complete Matty's column):(by eliminating) Matty - White, Spotted. • The next clue is "Speedy the white horse owned by Midi, finished , either, 1st or 2nd." We can make an immediate connection namely Speedy-White-Midi , so highlight (with a 'green-box') The following grid squares in Speedy's row : [Speedy-Midi, Speedy-White ] while eliminating these Speedy - Thom, Grey, Silver, and 3 (because, we know, from the end of this clue , that the horse Speedy, "...finished, either, 1st or second"). **NOTE** ( We can now make some additional eliminations for both Midi, and the White (rows/columns)). Lets start with column Midi, and eliminate : Midi - Giddy Up, Grey, Silver (after highlighting the grid square [Midi-White]), which, of course allows us to eliminate items in both White's column (White-Diamond , Giddy Up) and row: (White-3, Thom and Paul.). Lastly, we can eliminate (from Midi's row) the grid square: [Midi-3]. • Lets look at another given clue "Shorty owned either the Spotted horse or , Willow, the one that finished last." The only useful portion of this clue (for now) is the last part "...Willow, the one that finished last." So locate and highlight grid square [Willow-6], then make the appropriate logical eliminations in Willow's row : Willow - 4, (which also leads to the solution of Fanny-4) , (after eliminating the grid square [Fanny-6]). • Now this clue : "Madge does not own the horse that finished 4th. ." Let's first locate grid square [Madge-4], make the elimination ,and we discover that Madge must(and can only) own the horse that finished 6th (- which is Willow, the black horse), therefore, we can finish column 6 eliminations as follows: 6 - Spotted , Shorty (after highlighting 6 - Black, Madge), leaving only Spotted - 4 as the possible solution, (and after completing the Spotted row), we conclude that Shorty was it's owner , this leaves the final horse-owner solution (Giddy Up-Thom). **NOTE** (You can also now complete the 4-Column). It then remains to determine only the final order of finish! • Lets look at that last clue "The winning horse did not have a white or silver tail." If we start with the elimination of grid squares [White-1, Silver-1], this leaves the "ONLY" solution for the winning horse's color to be'Grey' (so highlight Grey-1), and we may conclude that: the 2nd place horse had to have been White , and (by consequence), the silver horse was 3rd , (and since the white horse finished second), this means the 2nd place horse could have only have been Speedy (owned by Midi), which means Diamond (owned by Paul), finished 3rd, (which was the Silver horse), which now makes the winning horse Giddy Up, which can only be owned by Thom, which has the Grey tail! . • Congratulations! Puzzle solved. To summarize: The order of finish: 1.Giddy-up Thom-Grey. 2.Speedy- Midi-White. 3.Diamond- Paul- Silver. 4.Fanny- Shorty- Spotted. 5.Laughy- Matty- Brown. 6.Willow- Madge- Black.
{"url":"https://www.puzzles-on-line-niche.com/difficult-puzzle-grid-solutions.html","timestamp":"2024-11-06T13:59:51Z","content_type":"text/html","content_length":"72335","record_id":"<urn:uuid:909e6912-afb3-4919-b684-02b12f84a8d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00607.warc.gz"}
of Statistics Teaching Bits: A Resource for Teachers of Statistics Journal of Statistics Education v.6, n.3 (1998) Robert C. delMas General College University of Minnesota 333 Appleby Hall Minneapolis, MN 55455 William P. Peterson Department of Mathematics and Computer Science Middlebury College Middlebury, VT 05753-6145 This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Bob abstracts information from the literature on teaching and learning statistics, while Bill summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. We realize that due to limitations in the literature we have access to and time to review, we may overlook some potential articles for this column, and therefore encourage you to send us your reviews and suggestions for abstracts. From the Literature on Teaching and Learning Statistics Statistical Education -- Expanding the Network eds. Lionel Pereira-Mendoza (Chief Editor), Lua Seu Kea, Tang Wee Kee, and Wing-Keung Wong (1998). Proceedings of the Fifth International Conference on Teaching of Statistics, Singapore, June 21-26, This three-volume set contains over 200 papers presented by statistics educators from around the world. Each paper falls into one of the following broad categories: VOLUME 1 Statistical Education at the School Level Statistical Education at the Post-Secondary Level Statistical Education for People in the Workplace Statistical Education and the Wider Society VOLUME 2 An International Perspective of Statistical Education Research in Teaching Statistics The Role of Technology in the Teaching of Statistics VOLUME 3 Other Determinants and Developments in Statistical Education Contributed Papers The three volume set can be purchased from: CTMA Ltd., 425 Race Course Road, Singapore 218671 Telephone: (65) 299 8992 FAX: (65) 299 8983 The cost is: IASE/ISI Member $65 plus shipping/handling Non-member $80 plus shipping/handling "Dice and Disease in the Classroom" by Marilyn Stor and William L. Briggs (1998). The Mathematics Teacher, 91(6), 464-468. This article presents an interesting mathematical modeling project. The goal of the activity is to model the exponential growth of communicable diseases by adding a risk factor. The required equipment is fairly simple: each student needs a die and a data sheet that is illustrated in the article. To simulate disease transmission, a student walks around the classroom and meets another student. The two students then roll their dice and sum the two outcomes. If the sum is below some predetermined cutoff, the encounter is designated as "risky," meaning that if either student was a carrier, the disease has been passed on to the other student. At the end of the activity, one student is randomly chosen to be the initial carrier of the disease, and the spread of the disease through the classroom is tracked. The article describes how the data are collected, graphed, and analyzed, and also presents suggestions for discussion and variations on the activity. "Roll the Dice: An Introduction to Probability" by Andrew Freda (1998). Mathematics: Teaching in the Middle School, 4(2), 8-12. The author describes a simple dice game that helps students understand why it is important to collect data to test beliefs. The game can also be used to help students develop an understanding of why large samples are more beneficial than smaller samples. To play the game, two students each roll a die. The absolute difference of the two outcomes is computed. Player A wins if the difference is 0, 1, or 2. Player B wins if the difference is 3, 4, or 5. Most students' first impressions are that this is a fair game. Over several rounds of the game the students come to realize that the stated outcomes are not equiprobable. The activity promotes skills in data collection and hypothesis testing, as well as mathematical modeling. The author presents examples of a program written in BASIC and a TI-82 calculator simulation, both of which can be used to help students explore the effects of sample size in modeling the dice game. The American Statistician: Teacher's Corner "A One-Semester, Laboratory-Based, Quality-Oriented Statistics Curriculum for Engineering Students" (with discussion) by Russell R. Barton and Craig A. Nowack (1998). The American Statistician, 52(3), 233-243. This article describes a new laboratory-based undergraduate engineering statistics course being offered by the Department of Industrial and Manufacturing Engineering at Penn State. The course is intended as a service course for engineering students outside of industrial engineering. We describe the topics covered in each of the eight modules of the course and how the laboratories are linked to the lecture material. We describe how the course is implemented, including facilities used, the course text, grading, student enrollment, and the implementation of continuous quality improvement in the classroom. We conclude with some remarks on the benefits of the laboratories, the effects of CQI in the classroom, and dissemination of course materials. "The Blind Paper Cutter: Teaching About Variation, Bias, Stability, and Process Control" by Richard A. Stone (1998). The American Statistician, 52(3), 244-247. The intention of this article is to provide teachers with a student activity to help reinforce learning about variation, bias, stability, and other statistical quality control concepts. Blind paper cutting is an effective way of generating tangible sequences of a product, which students can use to address many levels of questions. No special apparatus is required. "Expect the Unexpected from Conditional Expectation" by Michael A. Proschan and Brett Presnell (1998). The American Statistician, 52(3), 248-252. Conditioning arguments are often used in statistics. Unfortunately, many of them are incorrect. We show that seemingly logical reasoning can lead to erroneous conclusions because of lack of rigor in dealing with conditional distributions and expectations. "Some Uses for Distribution-Fitting Software in Teaching Statistics" by Alan Madgett (1998). The American Statistician, 52(3), 253-256. Statistics courses now make extensive use of menu-driven, interactive computer software. This article presents some insight as to how a new class of PC-based statistical software, called "distribution fitting" software, can be used in teaching various courses in statistics. Teaching Statistics A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes. The Circulation Manager of Teaching Statistics is Peter Holmes, ph@maths.nott.ac.uk, RSS Centre for Statistical Education, University of Nottingham, Nottingham NG7 2RD, England. Teaching Statistics has a website at http://www.maths.nott.ac.uk/rsscse/TS/. Teaching Statistics, Autumn 1998 Volume 20, Number 3 "Lawn Toss: Producing Data On-the-Fly" by Eric Nordmoe, 66-67. The author describes an activity based on common lawn tossing games such as horseshoes, flying disc golf, and lawn darts. The activity involves helping students to design and carry out a two-factor experiment. The two factors are Distance to the Target (short or long) and Hand Used for Throwing (left or right). An example of a tabulation sheet for collecting the data is provided, and a simple paired t-test is described for testing whether people tend to have a dominant hand. Suggestions are also provided for how to use the data to illustrate the effects of outliers, conduct a two-sample t-test, explore issues of experimental design, and conduct nonparametric tests. "Why Stratify?" by Ted Hodgson and John Borkowski, 68-71. The article describes an activity to help students understand the benefits of using stratified random sampling. The materials consist of equal numbers of red cards and black cards. Each card has a number written on one side. The values on the red cards are consistently smaller than those on the black cards, which creates two strata in the population of cards that differ with respect to the measure of interest. Students draw simple random samples and stratified random samples for samples of the same size from the population of cards and create distributions for the sample means. Students can determine that the average sample mean from both types of samples provide good estimates of the population mean. Comparison of the two distributions illustrates that the sample means from stratified random samples show much less variability. "Introducing Dot Plots and Stem-and-Leaf Diagrams" by Chris du Feu, 71-73. The author describes an activity that introduces dot plots and stem-and-leaf diagrams to elementary school students by capitalizing on a basic skill that is well rehearsed: measuring the length of things with a ruler. The list of equipment includes a worksheet, a ruler, a protractor, and a piece of string. An example of the worksheet, which contains lines and angles that can be measured, is provided for photocopying. One line is curved in a complex way so that the piece of string is used to match the curved line, and then a ruler is used to measure the length of the string as an estimate. The measurements produced by the children for a particular line typically show some variation. A dot plot or stem-and-leaf diagram of the measurements can visually demonstrate that there is variability, yet one measurement occurs more often than other measurements. The author has found this activity to spontaneously generate discussion of many statistical ideas. "A Constructivist Approach to Teaching Permutations and Combinations" by Robert J. Quinn and Lynda R. Wiest, 75-77. The authors present an approach to teaching about permutations and combinations where students are given an opportunity to explore these concepts within the context of a problem situation. Students are not instructed in the formal mathematics of permutations and combinations. Instead, they are asked to solve a problem regarding how many different ways someone can wallpaper three rooms given there are four different patterns to choose from. Students work in groups to come up with answers to the question. Each group reports back to the class with their answer and provides arguments for why their answer is reasonable. The instructor then identifies which groups have produced answers based on permutations and which have produced answers based on combinations. The formal terminology, symbolic notation, and methods are then introduced and students are shown how the approach taken by a group is modeled by the formal approach. The authors find that this approach empowers students by reinforcing the validity of their own intuitions. "BUSTLE -- A Bus Simulation" by John Appleby, 77-80. The author describes a computer simulation that demonstrates to students how statistical modeling can be used to account for the perception that buses always arrive in bunches. The program assumes that the rate at which passengers arrive at a bus stop follows a Poisson process, and that the delay caused as passengers board can account for buses eventually bunching up along a route. The program allows various parameters to be changed such as the initial delay between bus departures from the terminal, the number of buses, and the number of stops along the route. The author provides a web address from which the program can be downloaded and used for educational purposes. "Testing for Differences Between Two Brands of Cookies" by Rhonda C. Magel, 81-83. The author describes two activities that involve measurements of two different brands of chocolate chip cookies. In the first activity, students count the number of chips in cookies from samples of each brand. The students use the data to conduct a two-sample t-test, first testing for assumptions of normality. In the second activity, each student provides a rating for the taste of each brand. This provides an opportunity for students to conduct a matched-pairs t-test, as well as an opportunity to see the distinction between the two-sample and matched-pairs situations. The author has found these to be very motivating activities that provide students with a concrete and personal understanding of hypothesis testing. "A Note on Illustrating the Central Limit Theorem" by Thomas W. Woolley, 89-90. The article describes the use of the phone book to generate data for illustrating the Central Limit Theorem. In class, students discuss the expected shape of the distribution of the last four digits of a telephone number. It can be argued that if the digits are produced randomly, they should form a uniform distribution of digits between 0 and 9 with an expected value of 4.5 and a standard deviation of approximately 2.872. Outside of class, students randomly select a page of the phone book, randomly select a telephone number from the page, record the last four digits of the telephone number, and compute the average of the four digits. Each student repeats this process until 20 sample means are generated. The sample means from all students are collected in class and entered into statistical software to produce a distribution and obtain summary statistics. The distribution is typically unimodal, normal in its shape, centered around 4.5, with a standard deviation of approximately 1.436. This concrete example allows students to empirically test the Central Limit Theorem. Topics for Discussion from Current Newspapers and Journals "Following Benford's Law, or Looking Out for No. 1" by Malcolm W. Browne. The New York Times, 4 August 1998, F4. This article is based on "The First Digit Phenomenon" by Theodore P. Hill (American Scientist, July-August 1998, Vol. 86, pp. 358-363). It describes how Hill and other investigators have successfully applied a statistical phenomenon known as Benford's Law to detect problems ranging from fraud in accounting to bugs in computer output. The law is named for Dr. Frank Benford, a physicist at General Electric who identified it using a variety of datasets some sixty years ago. But in fact, as Hill's original article notes, the law had already been discovered in 1881 by the astronomer Simon Newcomb. Newcomb observed that tables of logarithms showed greater wear on the pages for lower leading digits. Most people intuitively expect leading digits to be distributed uniformly, but the log table evidence suggests that everyday calculations involve relatively more numbers with lower leading digits. Newcomb theorized that the chance of leading digit d is given by the base 10 logarithm of (1 + 1/d) for d = 1,2,...,9. Thus the chance of a leading 1 is not one in nine, but rather log(2) = .301 -- nearly one in three! Benford's law has been observed to hold in a wide range of datasets, including the numbers on the front page of The New York Times, tables of molecular weights of compounds, and random samples from a day's stock quotations. Dr. Mark Nigrini, an accounting consultant now at Southern Methodist University, wrote his Ph.D. dissertation on using Benford's Law to detect tax fraud. He recommended auditing those returns for which the distribution of digits failed to conform to Benford's distribution. In a test on data from Brooklyn, his method correctly flagged all cases in which fraud had previously been admitted. However, Nigrini points out that Benford's Law is not universally applicable. For example, analyses of corporate accounting data often turn up too many 24's, apparently because business travelers have to produce receipts for expenses of $25 or more. He also notes that the law won't help you pick lottery numbers. For example, in a "Pick Four" numbers game, lottery officials take great pains to ensure that the four digits are independently selected and are uniformly distributed on d = 1,2,...,9. The Times article describes a classroom experiment conducted by Dr. Hill in his classes at Georgia Tech. For homework, Hill asks those students whose mother's maiden name begins with A through L to flip a coin 200 times and record the results and the rest of the students to imagine the outcomes of 200 flips and write them down. As the article points out, the odds are overwhelming that there will be a run of at least six consecutive heads or tails somewhere in a sequence of 200 real tosses. In his class experiment, Hill reports a high rate of success at detecting the imaginary sequences by simply flagging those that fail to contain a run of length six or greater. While this is not an example of Benford's Law, it is another situation in which people are surprised by the true probability distribution. Readers of JSE may be familiar with this coin tossing experiment from the activity "Streaky Behavior: Runs in Binomial Trials" in Activity-Based Statistics by Schaeffer, Gnanadesikan, Watkins, and Witmer (1996, Springer). "Fate ... or Blind Chance" by Bruce Martin. The Washington Post, 9 September 1998, H1. This article on coincidences is excerpted from Martin's article in the September-October issue of The Skeptical Inquirer. The article starts with the famous (to statisticians!) birthday problem, illustrated with birthdays and death days of US presidents. The problem is extended to treat the chance that at least two people in a random sample will have birthdays within one day (i.e., on the same day or on two adjacent days). In this formulation, only 14 people are required to have a better than even chance. The article then mentions some popularly reported examples of coincidences, such as the similarities between the Lincoln and Kennedy assassinations. Martin observes that, as far as we know, the decimal digits of the number In the original Skeptical Inquirer article, Martin discusses the above examples, and also investigates the randomness of prices in the stock market. Again using the digits from the expansion of Deadly Disparities; Americans' Widening Gap in Incomes May be Narrowing our Lifespans" by James Lardner. The Washington Post, 16 August 1998, C1. Since the 1970s, virtually all income gains in the US have gone to households in the top 20% of the income distribution -- the greatest inequality observed in any of the world's wealthy nations. Beyond the fairness issues, a growing body of research indicates that countries with more pronounced differences in incomes tend to experience shorter life expectancies and greater risks of chronic illness in all income groups. Moreover, the magnitude of these risks appears to be larger than the more widely publicized health risks associated with cigarettes or high-fat foods. Richard Wilkinson, an economic historian at Sussex University, found that, among nations with gross domestic products of at least $5000 per capita, one nation could have twice the per capita income of another, yet still have a lower life expectancy. On the other hand, income equality emerged as a reliable predictor of health. This finding ties together a variety of international comparisons. For example, the greatest gains in British civilian life expectancy came during World War I and World War II, periods characterized by compression of incomes. In contrast, over the last ten years in Eastern Europe and the former Soviet Union, small segments of the population have had tremendous income gains, while living conditions for most people have deteriorated. These countries have actually experienced decreases in life expectancy. Among developed nations, the US and Britain today have the largest income disparities and the lowest life expectancies. Japan has a 3.6 year edge over the US in life expectancy (79.8 years vs. 76.2 years) even though it has a lower rate of spending on health care. The difference is roughly equal to the gain the US would experience if heart disease were eliminated as a cause of death! The July 1998 issue of the American Journal of Public Health presents analogous data comparing US states, cities, and counties. Research directed by John Lynch and George Kaplan of the University of Michigan found that mortality rates are more closely associated with measures of relative, rather than absolute, income. Thus the cities Bixoli, Mississippi; Las Cruces, New Mexico; and Steubenville, Ohio have both high inequality and high mortality. By contrast, Allentown, Pennsylvania; Pittsfield, Massachusetts; and Milwaukee, Wisconsin share low inequality and low mortality. "Driving While Black; A Statistician Proves that Prejudice Still Rules the Road" by John Lamberth. The Washington Post, 16 August 1998, C1. Lamberth is a member of the psychology department of Temple University. In 1993, he was contacted by attorneys whose African-American clients had been arrested on the New Jersey Turnpike for possession of drugs. It turned out that 25 blacks had been arrested over a three-year period on the same portion of the turnpike, but not a single white. The attorneys wanted a statistician's opinion of the trend. Lamberth was a good choice. Over 25 years his research on decision-making had led him to consider issues including jury selection and composition, and application of the death penalty. He was aware that blacks were underrepresented on juries and sentenced to death at greater rates than whites. In this article, Lamberth describes the process of designing a study to investigate the highway arrest issue. He focused on four sites between Exits 1 and 3 of the Turnpike, covering one of the busiest segments of highway in the country. His first challenge was to define the "population" of the highway, so he could determine how many people traveling the turnpike in a given time period were black. He devised two surveys, one stationary and one "rolling." For the first, observers were located on the side of the road. Their job was to count the number of cars and the race of their occupants during randomly selected three-hour blocks of time over a two-week period. From June 11 to June 24, 1993, his team carried out over 20 recording sessions, counting some 43,000 cars, 13.5% of which had one or more black occupants. For the "rolling survey," a public defender drove at a constant 60 miles per hour (5 miles per hour over the speed limit), counting cars that passed him as violators and cars that he passed as non-violators, noting the race of the drivers. In all, 2096 cars were counted, 98% of which were speeding and therefore subject to being stopped by police. Black drivers made up 15% of these violators. Lamberth then obtained data from the New Jersey State Police and learned that 35% of drivers stopped on this part of the turnpike were black. He says, "In stark numbers, blacks were 4.85 times as likely to be stopped as were others." He did not obtain data on race of drivers searched after being stopped. However, over a three-year period, 73.2% of those arrested along the turnpike by troopers from the area's Moorestown barracks were black, "making them 16.5 times more likely to be arrested than others." These findings led to a March 1996 ruling by New Jersey Superior Court Judge Robert E. Francis, who ruled that state police were effectively targeting blacks, violating their constitutional rights. Francis suppressed the use of any evidence gathered in the stops. Lamberth speculates that department drug policy explains police behavior in these situations. Testimony in the Superior Court case revealed that troopers' performance is considered deficient if they do not make enough arrests. Since police training targets minorities as likely drug dealers, the officers had an incentive to stop black drivers. But when Lamberth obtained data from Maryland (similar data has not been available from other states), he found that about 28% of drivers searched in that state have contraband, regardless of their race. Why, then, is there a continued perception that blacks are more likely to carry drugs? It turns out that, of 1000 searches in Maryland, 200 blacks were arrested, compared to only 80 non-blacks. But the problem is that the sample was biased: of those searched, 713 were black, and 287 were non-black. "Excerpts from Ruling on Planned Use of Statistical Sampling in 2000 Census." The New York Times , 25 August 1998, A13. Ruling on a lawsuit filed by the House of Representatives against the Commerce Department, a three-judge Federal panel says that plans to use sampling in the 2000 Census violate the Census Act. Since the Constitution requires an "actual enumeration," opponents of sampling have long argued that no statistical adjustment can be allowed. Significantly, the court did not rule on these constitutional issues. It more narrowly addressed whether sampling for the purpose of apportioning Representatives is allowed under Congress's 1976 amendments to sections 141(a) and 195 of the Census Act. The amended version of section 141(a) states: The Secretary shall, in the year 1980 and every 10 years thereafter, take a decennial census of population ... in such form and content as he may determine, including the use of sampling procedures and special surveys. The 1976 amendment to section 195 more directly addresses the apportionment issue: Except for the determination of population for purposes of apportionment of Representatives in Congress among the several States, the Secretary shall, if he considers it feasible, authorize the use of the statistical method known as sampling in carrying out the provisions of this title. The court ruled that these amendments must be considered together. Therefore, the case hinges on whether the exception stated in the amendment to section 195 meant "you cannot use sampling methods for purposes of apportionment" or "you do not have to use sampling methods." The court provided the following two examples of the use of the word except: Except for Mary, all children at the party shall be served cake. Except for my grandmother's wedding dress, you shall take the contents of my closet to the cleaners. The court argues that the interpretation of "except" must be made in the context of the situation. In the first example, one could argue it would be all right if Mary were also served cake. But in the second example, the intention is more clearly that grandmother's delicate wedding dress should not be taken to the cleaners. The judges stated that "the apportionment of Congressional representatives among the states is the wedding dress in the closet..." The Clinton administration appealed this ruling to the Supreme Court, which is scheduled to begin hearings on the matter on 30 November. A decision could come by March. This should be an interesting story to follow. After the Federal court ruling, an excellent discussion of the issues surrounding the Census was presented on "Talk of the Nation" (National Public Radio, August 28, 1998, http://www.npr.org/ramfiles The first hour of the program is entitled "Sampling and the 2000 Census." Guests are Harvey Choldin, sociologist and author of Looking for the Last Percent: The Controversy over Census Undercounts, statistician Stephen Fienberg, who has written a series of articles on the census for Chance, and Stephen Holmes, a New York Times correspondent. At the end of the hour, the group specifically addresses the need for statisticians to explain the issues surrounding sampling in a way that the public and Congress can understand. Fienberg describes the proposed Census adjustment in the context of the capture-recapture technique for estimating the number of fish in a lake. (The Activity-Based Statistics text mentioned previously devotes a chapter to a classroom experiment designed to illustrate the capture-recapture method.) In a previous "Teaching Bits" (Vol. 6, No. 1), we described ASA President David Moore's response to a William Safire editorial on adjusting the Census. Safire gave a laundry list of concerns about public opinion polling, and Moore properly took him to task for failing to address sampling issues in the specific context of the Census. However, some professional statisticians still worry about whether the current sampling proposals can provide sufficiently accurate estimates to improve the Census. For a good summary of these concerns, see "Sampling and Census 2000" by Morris Eaton, David A. Freedman, et al. (SIAM News, November 1998, 31(9), p. 1). "Prescription for War" by Richard Saltus. The Boston Globe, 21 September 1998, C1. At a Boston conference session on "Biology and War," psychologists Christian G. Mesquida and Neil I. Wiener of York University in Toronto presented a new theory about what triggers war: a society that is "bottom-heavy with young, unmarried and violence-prone males." This theory is based on an analysis of the relationship of population demographics to the occurrence of wars and rebellions over the last decade. Societies in which wars and rebellions had occurred tended to have a large population of unmarried males between the ages of 15 and 29. From the standpoint of evolutionary biology, it makes sense to ask whether war-like behavior confers an evolutionary advantage. The researchers explain that war is "a form of intrasexual male competition among groups, occasionally to obtain mates but more often to acquire the resources necessary to attract and retain mates." They point out that the argument makes sense as an explanation for offensive, but not defensive, wars. For example, the United States was reluctantly drawn into World War II, so there the theory applies to the young Nazis in Germany. Similarly, it applies to the Europeans who conquered the native populations of America, but not to the native peoples. In nearly half of the countries in Africa, young, unmarried males comprise more than 49% of the overall population. In the last 10 years, there have been at least 17 major civil wars in countries in Africa, along with several conflicts that crossed national borders. In contrast, Europe has few countries where the young, unmarried male population makes up even 35% of the total. In the last 10 years there has been only one major civil war, and that was in Yugoslavia, which has more than 42% young, unmarried males. "Ask Marilyn: Good News for Poor Spellers" by Marilyn vos Savant. Parade Magazine, 27 September 1998, p. 4. A letter from Judith Alexander of Chicago reads as follows: "A reader asked if you believe that spelling ability is a measure of education, intelligence or desire. I was fascinated by the survey you published in response. The implication of the questions is that you believe spelling ability may be related to personality. What were the results? I'm dying to know." The "biggest news," according to Marilyn, is that poor spelling has no relationship to general intelligence. On the other hand, she is sure that education, intelligence, or desire logically must have something to do with achieving excellence in spelling. But her theory is that, even if one has "the basics," personality traits can interfere with success at spelling. She bases her conclusions on a write-in poll, to which 42,603 of her readers responded (20,188 by postal mail and 22,415 by e-mail). Participants were first asked to provide a self-assessment of their spelling skills, on a scale from 1 to 100. They then ranked other personal traits on the same scale. For each respondent, Marilyn identified the quality or qualities that were ranked closest to spelling. She considers this quality to be most closely related to spelling ability. She calls her analytical process "self-normalization," explaining that matching up the ratings for each individual respondent overcomes the problem that respondents differ in how accurately they can assess their own abilities. The trait that she found most frequently linked to spelling in this analysis was "ability to follow instructions." Next was "ability to solve problems," followed by "rank as an organized person." The first two were related very strongly for strong spellers, but hardly related at all for weak spellers. Marilyn reports that only 6% of the weak spellers ranked their ability to follow instructions closest to their spelling ability, and only 5% ranked their ability to solve problems closest to their spelling ability. On the other hand, the relationship with organizational ability showed up at all spelling levels, with top spellers being the most organized, and weak spellers being the least organized. Marilyn says she asked for a ranking of leadership abilities in order to validate her methods. She did not believe this trait was related to spelling, and, indeed, leadership was linked least often to spelling in the data. Similarly, she reports that creativity appeared to be unrelated to spelling ability. "When Scientific Predictions Are So Good They're Bad" by William K. Stevens. The New York Times, 29 September 1998, F1. This article discusses various problems that can occur when the public is presented with predictions. A key problem is the tendency to rely on point estimates without taking into account the margin of error. For example, in spring 1997, when the Red River of the North was flooding in North Dakota, the National Weather Service forecast that the river would crest at 49 feet. Unfortunately, the river eventually crested at 54 feet. Those residents of Grand Forks who relied on the estimate of 49 feet later faced evacuation on short notice. In this case, the article argues, it would have been far better for the weather service to report an error statement along with its point estimate. The discussion over global warming illustrates our difficulties in dealing with predictions. Models used to predict the increase in the Earth's average temperature over the next century are necessarily based on many assumptions, each of which entails substantial uncertainty. Popular reports of the predictions place relatively little emphasis on the sizes of the possible errors, and the public may be misled by the implied precision. On the other hand, governments are seen as taking any uncertainty as a reason to avoid action on the issue. With other types of forecasting, people have learned from experience that the predictions are fallible, and have adjusted their behavior accordingly. Weather forecasts are a prime example. Similarly, the public has learned not to expect accurate prediction of the exact timing and magnitude of earthquakes, and seismologists focus instead on long-range forecasts. "Placebos Prove So Powerful Even Experts Are Surprised" by Sandra Blakeslee. The New York Times, 13 October 1998, D1. A recent study of a baldness remedy found that 86% of the men taking the treatment either maintained or increased the amount of hair on their heads -- but so did 42% of the placebo group. Dr. Irving Kirsch, a University of Connecticut psychologist, reports that placebos are 55-60% as effective as medications like aspirin and codeine for treating pain. But if some patients really do respond to placebos, might there not be a biological mechanism underlying the effect? Using new techniques of brain imagery, scientists are now discovering that patients' beliefs can indeed produce biological changes in cells and tissues. According to the article, some of the results "border on the miraculous." One explanation explored here is that the body's responses may be based more on what the brain expects to happen based on past experience, rather than on the information currently flowing to the brain. Thus a patient who expects a drug to make him better would have a positive response to a placebo. A related idea is that, by reducing stress, a placebo may allow the body to regain a natural state of health. In fact, a recent study showed that animals experiencing stress produce a valium-like substance in their brains, provided they have some control over the stress. Return to Table of Contents | Return to the JSE Home Page
{"url":"http://jse.amstat.org/v6n3/resource.html","timestamp":"2024-11-02T12:05:48Z","content_type":"text/html","content_length":"41083","record_id":"<urn:uuid:03b6602e-104c-4ad8-b776-33e0eedb9018>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00743.warc.gz"}
THE LEVI DECOMPOSITION OF THE LIE ALGEBRA M_2 (R)⋊gl_2 (R) THE LEVI DECOMPOSITION OF THE LIE ALGEBRA M_2 (R)⋊gl_2 (R) Keywords: Frobenius Lie algebra, Levi Decomposition, Lie algebra, Radical The idea of the Lie algebra is studied in this research. The decomposition between Levi sub-algebra and the radical can be used to define the finite dimensional Lie algebra. The Levi decomposition is the name for this type of decomposition. The goal of this study is to obtain a Levi decomposition of the Lie algebra of dimension 8. We compute its Levi sub-algebra and the radical of Lie algebra with respect to its basis to achieve this goal. We use literature studies on the Levi decomposition and Lie algebra in Dagli result to produce the radical and Levi sub-algebra. It has been shown that can be decomposed in the terms of the Levi sub-algebra and its radical. In this resulst, it has been given by direct computations and we obtained that the explicit formula of Levi decomposition of the affine Lie algebra whose basis is is written by with is is the Levi sub-algebra of . Download data is not yet available. J. E. Humphreys, Introduction to Lie Algebras and Representation Theory, vol. 9. New York, NY: Springer New York, 1972. doi: 10.1007/978-1-4612-6398-2. Mehmet Dagli, “Levi Decomposition of Lie Algebras; Algorithms for its Computation,”master thesis, Iowa State University , Ames, Iowa, 2004. J. Hilgert and K.-H. Neeb, Structure and Geometry of Lie Groups. New York, NY: Springer New York, 2012. doi: 10.1007/978-0-387-84794-8. M. Rais, “La représentation coadjointe du groupe affine,” Annales de l’institut Fourier, vol. 28, no. 1, pp. 207–237, 1978, doi: 10.5802/aif.686. M. A. Alvarez, M. C. Rodríguez-Vallarte, and G. Salgado, “Contact and Frobenius solvable Lie algebras with abelian nilradical,” Commun Algebra, vol. 46, no. 10, pp. 4344–4354, Oct. 2018, doi: 10.1080 M. A. Alvarez, M. Rodríguez-Vallarte, and G. Salgado, “Contact nilpotent Lie algebras,” Proceedings of the American Mathematical Society, vol. 145, no. 4, pp. 1467–1474, Oct. 2016, doi: 10.1090/proc/ J. R. Gómez, A. Jimenéz-Merchán, and Y. Khakimdjanov, “Low-dimensional filiform Lie algebras,” J Pure Appl Algebra, vol. 130, no. 2, pp. 133–158, Sep. 1998, doi: 10.1016/S0022-4049(97)00096-0. I. V Mykytyuk and C. E. by B Vinberg, “Structure of the coadjoint orbits of Lie algebras,” Journal of Lie theory, vol. 22, pp.251-268, 2012. H. Henti, E. Kurniadi, and E. Carnia, “Quasi-Associative Algebras on the Frobenius Lie Algebra M_3 (R)⊕gl_3 (R),” Al-Jabar : Jurnal Pendidikan Matematika, vol. 12, no. 1, pp. 59–69, Jun. 2021, doi: B. Csikós and L. Verhóczki, “Classification of Frobenius Lie algebras of dimension ≤6,” Publicationes Mathematicae Debrecen, vol. 70, no. 3–4, pp. 427–451, Apr. 2007, doi: 10.5486/PMD.2007.3556. A. Diatta and B. Manga, “On Properties of Principal Elements of Frobenius Lie Algebras ,” Journal of Lie Theory, vol. 24, no. 3, pp. 849–864, 2014. E. Kurniadi, E. Carnia, and A. K. Supriatna, “The construction of real Frobenius Lie algebras from non-commutative nilpotent Lie algebras of dimension,” J Phys Conf Ser, vol. 1722, no. 1, p. 012025, Jan. 2021, doi: 10.1088/1742-6596/1722/1/012025. E. Kurniadi and H. Ishi, “Harmonic Analysis for 4-Dimensional Real Frobenius Lie Algebras,” 2019, pp. 95–109. doi: 10.1007/978-3-030-26562-5_4. E. Kurniadi, “Dekomposisi dan Sifat Matriks Struktur Pada Aljabar Lie Frobenius Berdimensi 4,” Prosiding Seminar Nasional Hasil Riset dan Pengabdian (SNHRP)-5, Universitas PGRI Adi Buana, 2021. E. Kurniadi, N. Gusriani, and B. Subartini, “A Left-Symmetric Structure on The Semi-Direct Sum Real Frobenius Lie Algebra of Dimension 8,” CAUCHY: Jurnal Matematika Murni dan Aplikasi, vol. 7, no. 2, pp. 267–280, Mar. 2022, doi: 10.18860/ca.v7i2.13462. How to Cite E. Kurniadi, H. Henti, and E. Carnia, “THE LEVI DECOMPOSITION OF THE LIE ALGEBRA M_2 (R)⋊gl_2 (R)”, BAREKENG: J. Math. & App., vol. 18, no. 2, pp. 0717-0724, May 2024. Copyright (c) 2024 Edi Kurniadi, Henti Henti, Ema Carnia This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Authors who publish with this Journal agree to the following terms: 1. Author retain copyright and grant the journal right of first publication with the work simultaneously licensed under a creative commons attribution license that allow others to share the work within an acknowledgement of the work’s authorship and initial publication of this journal. 2. Authors are able to enter into separate, additional contractual arrangement for the non-exclusive distribution of the journal’s published version of the work (e.g. acknowledgement of its initial publication in this journal). 3. Authors are permitted and encouraged to post their work online (e.g. in institutional repositories or on their websites) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published works.
{"url":"https://ojs3.unpatti.ac.id/index.php/barekeng/article/view/10419","timestamp":"2024-11-06T08:17:54Z","content_type":"text/html","content_length":"46683","record_id":"<urn:uuid:342e054f-6de6-4eb9-8100-b2e455b34d17>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00537.warc.gz"}
If I am given a formula and I am ignorant of its meaning, it cannot teach me anything, but if I already know it what does the formula teach me? TOPICS: formulas, knowledge, learning Students attend our lectures, not because the mathematics we teach ‘makes lots of fun’ for us, but because they believe they can learn some essential knowledge from us. And each of our young students has only one life to live. We should therefore be able to justify ourselves to our listeners with respect to what we teach them. TOPICS: teaching, learning, students, lectures To appreciate the living spirit rather than the dry bones of mathematics, it is necessary to inspect the work of a master at first hand. Textbooks and treatises are an unavoidable evil…The very crudities of the first attack on a significant problem by a master are more illuminating than all the pretty elegance of the standard texts which has been won at the cost of perhaps centuries of finicky polishing. TOPICS: learning, textbooks, mastery In Samoa, when elementary schools were first established, the natives developed an absolute craze for arithmetical calculations. They laid aside their weapons and were to be seen going about armed with slate and pencil, setting sums and problems to one another and to European visitors. The Honourable Frederick Walpole declares that his visit to the beautiful island was positively embittered by ceaseless multiplication and division. TOPICS: teaching, learning, arithmetic, humor Thus metaphysics and mathematics are, among all the sciences that belong to reason, those in which imagination has the greatest role. TOPICS: imagination, learning, philosophy, science, reason I suppose you are two fathoms deep in mathematics, and if you are, then God help you, for so am I, only with this difference, I stick fast in the mud at the bottom and there I shall remain. TOPICS: learning, humor An educated mind is, as it were, composed of all the minds of preceding ages. TOPICS: education, learning Each problem that I solved became a rule which served afterwards to solve other problems. TOPICS: learning, problems This therefore is Mathematics: She reminds you of the invisible forms of the soul; She gives life to her own discoveries; She awakens the mind and purifies the intellect; She brings light to our intrinsic ideas; She abolishes oblivion and ignorance which are ours by birth. TOPICS: learning, education, philosophy, psychology A good stack of examples, as large as possible, is indispensable for a thorough understanding of any concept, and when I want to learn something new, I make it my first job to build one. TOPICS: learning, examples The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver. TOPICS: learning, homework, problems The art of doing mathematics consists in finding that special case which contains all the germs of generality. TOPICS: art, learning, research The tantalizing and compelling pursuit of mathematical problems offers mental absorption, peace of mind amid endless challenges, repose in activity, battle without conflict, refuge from the goading urgency of contingent happenings, and the sort of beauty changeless mountains present to senses tried by the present-day kaleidoscope of events. TOPICS: problems, learning A child[’s]…first geometrical discoveries are topological…If you ask him to copy a square or a triangle, he draws a closed circle. TOPICS: learning, topology There is no more a math mind, than there is a history or an English mind. TOPICS: learning, mind
{"url":"https://platonicrealms.com/index.php/quotes/topics/learning","timestamp":"2024-11-13T12:56:25Z","content_type":"text/html","content_length":"166881","record_id":"<urn:uuid:d5d23456-f388-4cff-8fee-a0ee1b57d646>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00284.warc.gz"}
Weight and Terminal Velocity | Quizizz 6. Multiple Choice 30 seconds 1 pt 5. Multiple Choice 30 seconds 1 pt Just after she jumps out of the plane (before the parachute opens). The sky diver will have a constant speed a constant acceleration a constant velocity 4. Multiple Choice 30 seconds 1 pt Just after she jumps out of the plane (before the parachute opens). The net force will be 3. Multiple Choice 30 seconds 1 pt Just after she jumps out of the plane (before the parachute opens). The relative forces will be Weight force is equal to the friction force Friction force is constant Weight force is greater that the friction force Friction force is greater than the weight force 2. Multiple Choice 30 seconds 1 pt Just after she jumps out of the plane (before the parachute opens). The relative forces will be 1. Multiple Choice 1 minute 1 pt A parachutist of mass 75 kg jumps from a plane at a height of 4 000 m above sea level. The parachutist falls through a distance of 2 400 m during the first 60 seconds. Calculate the average speed of the parachutist during this time. 20 ms^-1 40 ms^-1 67 ms^-1 80 ms^-1 After the 60 seconds, the parachutist pulls the cord and opens her parachute. Explain how the parachute reduces the speed of the parachutist when it is just opened. The friction force decreases The weight force decreases The weight force increases The friction force increases 7. Multiple Choice 30 seconds 12 pts Describe the motion at point a. on the graph. constant speed 8. Multiple Choice 30 seconds 12 pts If this was the journey of a parachutist, what is happening at point e. ? He is decelerating He is accelerating He has reached a faster constant speed He has reached a lower constant speed. 9. Multiple Choice 30 seconds 1 pt The graph shows the vertical speed of a parachutist. At which point did they open the parachute? 10. Multiple Choice 30 seconds 1 pt If a skydiver changes from a vertical to a horizontal position during freefall: His terminal speed would increase His terminal speed would decrease His terminal speed would stay the same 11. Multiple Choice 30 seconds 1 pt If this graph were to show a parachutist jumping from a plane, how would you describe the forces acting at point B. His weight is bigger than air resistance Air resistance is bigger than his weight. His weight and air resistance are balanced. 12. Multiple Choice 30 seconds 1 pt When a parachute canopy is opened more drag is created because: It has a large mass It has a large surface area It is travelling at terminal speed The person is now vertical 13. Multiple Choice 5 minutes 1 pt The value for gravity on Earth is around _____ m/s^2 14. Multiple Choice 30 seconds 1 pt Weight is calculated by mass X gravity mass X velocity acceleration X velocity 15. Multiple Choice 30 seconds 1 pt What is mass? the amount of matter in an object how heavy an object is how fast an object falls how big an object is 16. Fill in the Blank 30 seconds 1 pt The weight of a girl with a mass of 40 kg is ___ N. 17. Fill in the Blank 30 seconds 1 pt The weight of a cart with a mass of 150 kg is ___ N. 18. Multiple Choice 30 seconds 1 pt An astronaut weighing 588 N on earth notices that he weighs only 98 N on moon. His mass on moon is ___ kg.
{"url":"https://quizizz.com/admin/quiz/5fe0d1e248f2c0001b82f2fa/weight-and-terminal-velocity","timestamp":"2024-11-02T15:52:43Z","content_type":"text/html","content_length":"375497","record_id":"<urn:uuid:b0bd2bc8-2945-4bc7-8206-cb0605b8c6fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00123.warc.gz"}
A- LEVEL MATHS - Global Math Institute \(\frac{dy}{dx}\) is the rate of change of y with respect to x For a local maximum or a local minimum f'(x)=0 f ” (x)> 0 implies a minimum f ”(x)<0 implies a maximum f ”(x)= 0 is a point of inflection The slope of the tangent at any point is \(\frac{dy}{dx}\) = \(\frac{sinθ}{1- cosθ}\) – Differential equations and applications \(\frac{dy}{dx}\) = 3x +2 The solution is y = \(\frac{3}{2}x^2\) + 2x +C \(\frac{dy}{dx}\) = \(5x^3y^3\) , The solution is y = ± \(\sqrt{\frac{3}{2(5x^3 + C)}}\)
{"url":"https://www.globalmathinstitute.com/course/a-level-maths/lessons/7-differentiation/","timestamp":"2024-11-04T21:54:00Z","content_type":"text/html","content_length":"1049027","record_id":"<urn:uuid:95a45c27-87a1-43b3-8705-061a0bb8738d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00348.warc.gz"}
Matrix multiplier This page is a tool allowing you to rapidly compute the multiplication (or any other formula) of two matrices. You have only to enter your matrices, and click! Enter your matrices (type line by line, separating the elements of each line by commas): To compute the invariants of a matrix or to evalue formulas on more than two matrices, you can use the Matrix calculator This page is not in its usual appearance because WIMS is unable to recognize your web browser. Please take note that WIMS pages are interactively generated; they are not ordinary HTML files. They must be used interactively ONLINE. It is useless for you to gather them through a robot program. • Description: input two matrices and get their product (or other formula). This is the main site of WIMS (WWW Interactive Multipurpose Server): interactive exercises, online calculators and plotters, mathematical recreation and games • Keywords: wims, mathematics, mathematical, math, maths, interactive mathematics, interactive math, interactive maths, mathematic, online, calculator, graphing, exercise, exercice, puzzle, calculus, K-12, algebra, mathématique, interactive, interactive mathematics, interactive mathematical, interactive math, interactive maths, mathematical education, enseignement mathématique, mathematics teaching, teaching mathematics, algebra, geometry, calculus, function, curve, surface, graphing, virtual class, virtual classes, virtual classroom, virtual classrooms, interactive documents, interactive document, algebra, linear_algebra, matrix
{"url":"https://wims.univ-cotedazur.fr/wims/en_tool~linear~matmult.en.html","timestamp":"2024-11-03T18:49:56Z","content_type":"text/html","content_length":"6255","record_id":"<urn:uuid:ca2afaea-a325-4b34-9a77-9cdc07b52fe8>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00343.warc.gz"}
Feature Scaling Let’s understand feature scaling. Here is the housing dataset In this dataset, the total number of rooms ranges from about 6 to 39320 while the median incomes range from 0 to 15. Machine Learning algorithms don’t perform well when the input numerical features have very different scales. So how should we handle the case when the input data contains such a varied scale? There are two ways to make all attributes on the same scale. min-max scaling and standardization. So what is min-max scaling? Min-max scaling is also known as normalization. In min-max scaling, all the values are shifted and rescaled so that they end up ranging between 0 to 1. The minimum value in the original data becomes 0 in the normalized data and the maximum value in the original data becomes 1 in the normalized data. The remaining values in the original data take values between 0 and 1 in the normalized data. To calculate the normalized value we first subtract the original value with the minimum value of the list. And then we divide it with the range of the list i.e difference of maximum value and the minimum value in the list. Let’s find the normalized value of 50 Original data consist of minus 100, minus 50, 0, 50 and 100. In the original data, the minimum value is minus 100 And the maximum value is 100 So normalized value of 50 will be 50 minus negative 100 divided by 100 minus negative 100 which is 0.75. In short in min-max scaling values are shifted and rescaled so that the new values are between 0 and 1. The second approach of scaling is standardization. It is quite different from min-max scaling. As you can see in the above chart, min-max scaling scaled the input data in the range of 0 and 1 But standardization does not bound values to a specific range. In the standardization, we scale the values by calculating how many standard deviations away the value is from the mean. In standardization, features are rescaled so that output has the properties of standard normal distribution with ... Zero mean and Unit variance. So which approach should we use for feature scaling - min-max scaling or standardization? Min-max scaling is good for neural network algorithms as neural network algorithms often expect input values in the range of 0 and 1. Unlike min-max scaling, standardization does not bound values to a specific range. In other words, the min-max scaling always results in values between 0 and 1 while the standardization may result in larger range. Compared to min-max scaling, standardization is less affected by outliers. If we are using machine learning algorithms like support vector machines and logistic regression, we use standardization for One important point to note is scaling the target values or label is generally not required.
{"url":"https://cloudxlab.com/assessment/displayslide/7591/feature-scaling?playlist_id=1275","timestamp":"2024-11-10T08:17:38Z","content_type":"text/html","content_length":"124378","record_id":"<urn:uuid:fb1df4b7-e8ed-40c6-b464-d25645a2871c>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00423.warc.gz"}
Geometry Unveiled Are you ready to unlock the secrets of geometry? In this article, we will reveal how to find the maximum area of a rectangle. You’ll discover the fascinating relationship between length and width, as well as the intriguing connection between perimeter and area. By following the step-by-step process, you’ll learn how to calculate the maximum area of a rectangle. Get ready to apply these concepts to real-life situations and maximize the potential of Let’s dive in! Understanding the Area of a Rectangle To understand the area of a rectangle, you need to know the length and width of the shape. This fundamental concept is essential in geometry. The area of a rectangle is determined by multiplying its length and width. Imagine you have a rectangle with a length of 5 units and a width of 3 units. By multiplying these values, you find that the area of the rectangle is 15 square units. The area represents the total amount of space that the rectangle covers. It’s a measure of the surface enclosed by the rectangle’s sides. Knowing how to calculate the area of a rectangle is crucial for various real-life applications, such as calculating the amount of flooring needed for a room or determining the size of a garden bed. The Relationship Between Length and Width You can determine the relationship between the length and width of a rectangle by analyzing how changes in one dimension affect the other. As the length of a rectangle increases, its width must decrease in order to maintain a constant area. Conversely, if you decrease the length, the width must increase to maintain the same area. This relationship is due to the fact that the area of a rectangle is determined by multiplying its length and width. Therefore, any change in one dimension will directly impact the other dimension to maintain the same area. Understanding this relationship is crucial when trying to find the maximum area of a rectangle, as it allows you to make informed decisions about the dimensions that will yield the greatest area. Exploring the Perimeter-Area Relationship Now, let’s consider the tradeoff between perimeter and area when it comes to rectangles. By exploring this relationship, you can determine the optimal dimensions for a rectangle that maximize its Understanding this concept can be useful in various real-world applications, such as optimizing the amount of fencing needed for a garden or maximizing the amount of space inside a room. Perimeter Vs. Area Tradeoff Explore the tradeoff between perimeter and area by considering how the size of a rectangle’s sides impacts its overall dimensions. When you increase the length of one side of the rectangle, while keeping the other side constant, the perimeter will increase, but the area will stay the same. This is because the additional length added to one side is compensated by a reduction in the length of the other side. On the other hand, if you increase both sides of the rectangle by the same amount, the perimeter will increase, and so will the area. This is because the additional length on each side adds to the total perimeter, while also increasing the overall area. Therefore, there’s a tradeoff between perimeter and area, where increasing one will often result in a decrease or increase in the other, depending on how the sides are adjusted. Optimal Rectangle Dimensions To uncover the optimal dimensions of a rectangle and delve into the relationship between perimeter and area, consider how the size of its sides impacts its overall dimensions. When it comes to finding the maximum area of a rectangle, it’s important to understand that the length and width play a crucial role. As you increase the length of the rectangle, the area will also increase. However, if you keep the length the same and increase the width, the area will also increase. This demonstrates that the dimensions of a rectangle are intimately connected to its area. Real-World Application Examples As you delve into real-world applications, you can further understand the relationship between the perimeter and area of a rectangle. This relationship plays a significant role in various fields, such as architecture, engineering, and urban planning. For example, when designing a garden or park, you need to consider the area you have available and the perimeter that surrounds it. By understanding the relationship between the perimeter and area, you can optimize the use of space and create a functional and aesthetically pleasing design. Similarly, in construction, knowing the relationship between the perimeter and area helps in determining the amount of materials needed, such as fencing or flooring. By applying this knowledge, you can save costs and minimize waste. Real-life scenarios highlight the practical importance of understanding the perimeter-area relationship in rectangles. Introducing the Maximum Area Formula To find the maximum area of a rectangle, you need to know the length and width. The formula to calculate the maximum area is A = L x W, where A represents the area, L represents the length, and W represents the width. By using this formula, you can determine the dimensions that will result in the largest possible area for a given perimeter or a fixed length of fencing. This formula is derived from the fact that a square has the largest area among all rectangles with the same perimeter. Step 1: Finding the Critical Points To find the critical points in the process of maximizing the area of a rectangle, you’ll need to analyze the relationship between the length and width of the rectangle. Critical points are the values where the area of the rectangle changes from increasing to decreasing or vice versa. In this case, the area of the rectangle is given by the formula A = length × width. To find the critical points, you’ll need to take the derivative of the area formula with respect to either the length or width. This derivative will give you the rate of change of the area with respect to the chosen variable. By setting the derivative equal to zero and solving for the variable, you can find the critical points. These critical points will help you determine the maximum area of the rectangle. Step 2: Calculating the Maximum Area Now that you have found the critical points, it’s time to calculate the maximum area of the rectangle. To do this, you’ll need to determine the optimal length and width that will result in the largest possible area. Optimal Length and Width Determine the maximum area of a rectangle by figuring out the length and width that yield the greatest possible value. To find the optimal length and width, you need to consider the relationship between the two dimensions. Remember that the area of a rectangle is calculated by multiplying its length and width. So, when one dimension increases, the other dimension must decrease to maintain a constant perimeter. This means that the maximum area occurs when the length and width are equal, forming a square. By making the length and width of the rectangle equal, you ensure that the area is maximized. Finding the Turning Point Calculate the maximum area of a rectangle by finding the turning point. To do this, you need to understand that the area of a rectangle is given by the formula A = length × width. In order to maximize the area, you need to find the dimensions that will give you the largest possible product. Start by assigning a variable to one of the dimensions, let’s say the length. Then express the other dimension, width, in terms of this variable. Now, differentiate the area formula with respect to the variable, and set the derivative equal to zero. Solve for the value of the variable that makes the derivative zero. This value is the turning point or critical point. Once you have the value of the variable, substitute it back into the formula to find the corresponding value for the other dimension. This will give you the dimensions that will result in the maximum area for the rectangle. Real-Life Applications of Maximum Area How can you identify real-life scenarios where finding the maximum area of a rectangle is crucial? One example is in the field of architecture, where maximizing the usable space within a building is essential. By finding the maximum area of a rectangle, architects can determine the most efficient layout for rooms and ensure that every square foot is utilized effectively. Another application is in agriculture, specifically when planning the layout of fields. Maximizing the area of a rectangular plot allows farmers to maximize their crop yield, thereby increasing their Additionally, in urban planning, finding the maximum area of a rectangle is important when designing parks and public spaces. By maximizing the area, city planners can create larger green spaces for the community to enjoy. Conclusion: Maximizing the Potential of Rectangles To maximize the potential of rectangles, you can employ various strategies that optimize their area. By understanding the relationship between length and width, you can determine the dimensions that will yield the largest possible area. One effective strategy is to use the formula for finding the maximum area of a rectangle, which involves setting the derivative of the area function equal to zero. This allows you to find the critical points, which represent the dimensions that result in the maximum area. Another strategy is to consider the special case of a square, where all sides are equal. Squares have the unique property of maximizing area for a given perimeter, making them an ideal choice in certain scenarios. Frequently Asked Questions What Is the Formula for Calculating the Perimeter of a Rectangle? To find the perimeter of a rectangle, you can use the formula: P = 2(l + w), where P represents perimeter, l represents length, and w represents width. How Does the Area of a Rectangle Change When the Length and Width Are Equal? When the length and width of a rectangle are equal, the area is maximized. This occurs because a square is a special type of rectangle with equal sides, resulting in the largest possible area. What Are Some Practical Uses of the Maximum Area Formula for Rectangles? Practical uses of the maximum area formula for rectangles include optimizing space in architecture, maximizing crop yield in farming, and determining the most efficient dimensions for packaging Can the Maximum Area Formula Be Applied to Other Shapes or Is It Specific to Rectangles? The maximum area formula can be applied to other shapes, but it is specific to rectangles. Other shapes have their own formulas to find the maximum area. Are There Any Limitations or Restrictions When Using the Maximum Area Formula to Find the Maximum Area of a Rectangle? There are no limitations or restrictions when using the maximum area formula to find the maximum area of a rectangle. It applies to all rectangles and can be used without any constraints. So, by understanding the relationship between length and width, exploring the perimeter-area relationship, and applying the maximum area formula, we can find the maximum area of a rectangle. This knowledge can be applied in real-life scenarios to maximize the potential of rectangles in various fields such as construction and design. By maximizing the area, we can make the most efficient use of space and resources.
{"url":"http://higheducations.com/geometry-unveiled/","timestamp":"2024-11-09T19:01:08Z","content_type":"text/html","content_length":"103911","record_id":"<urn:uuid:ed649bbe-8052-4dfd-9842-4c9d0a4c3eae>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00748.warc.gz"}
PROC FASTCLUS: PROC FASTCLUS Statement :: SAS/STAT(R) 9.22 User's Guide Table 34.1 PROC FASTCLUS Statement Options Option Description Specify input and output data sets DATA= specifies input data set INSTAT= specifies input SAS data set previously created by the OUTSTAT= option SEED= specifies input SAS data set for selecting initial cluster seeds VARDEF= specifies divisor for variances Output Data Processing CLUSTER= specifies name for cluster membership variable in OUTSEED= and OUT= data sets CLUSTERLABEL= specifies label for cluster membership variable in OUTSEED= and OUT= data sets OUT= specifies output SAS data set containing original data and cluster assignments OUTITER specifies writing to OUTSEED= data set on every iteration OUTSEED= or MEAN= specifies output SAS data set containing cluster centers OUTSTAT= specifies output SAS data set containing statistics Initial Clusters DRIFT permits cluster to seeds to drift during initialization MAXCLUSTERS= specifies maximum number of clusters RADIUS= specifies minimum distance for selecting new seeds RANDOM= specifies seed to initializes pseudo-random number generator REPLACE= specifies seed replacement method Clustering Methods CONVERGE= specifies convergence criterion DELETE= deletes cluster seeds with few observations LEAST= optimizes an MAXITER= specifies maximum number of iterations STRICT prevents an observation from being assigned to a cluster if its distance to the nearest cluster seed is large Arcane Algorithmic Options BINS= specifies number of bins used for computing medians for LEAST=1 HC= specifies criterion for updating the homotopy parameter HP= specifies initial value of the homotopy parameter IRLS uses an iteratively reweighted least squares method instead of the modified Ekblom-Newton method for Missing Values IMPUTE imputes missing values after final cluster assignment NOMISS excludes observations with missing values Control Displayed Output DISTANCE displays distances between cluster centers LIST displays cluster assignments for all observations NOPRINT suppresses displayed output SHORT suppresses display of large matrices SUMMARY suppresses display of all results except for the cluster summary
{"url":"http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/statug_fastclus_sect005.htm","timestamp":"2024-11-04T14:41:03Z","content_type":"application/xhtml+xml","content_length":"58106","record_id":"<urn:uuid:a8629f39-10ff-48c4-9c5f-ed72de265179>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00413.warc.gz"}
3. Formal Proof of the Collatz Conjecture $f ( x )$ be the Collatz function defined as: And let $R ( x )$ be the multivalued inverse function of $f ( x )$ given by: We now formally define the Algebraic Inverse Tree: Definition 3.1. Let $T k$ be the directed tree rooted at k constructed recursively as: • The root node of $T k$ is k. • If n is a node in $T k$, its child nodes are the elements of $R ( n )$. • The edges from n to each child h are labeled with the operation $n → h$. $T k$ is the Algebraic Inverse Tree (AIT) of parameter k. We now prove two key lemmas about the properties of AITs: Lemma 3.1. [Collatz Function and its Inverse] Let $f : N → N$ be the Collatz function defined by The function f is invertible in a multi-valued sense. Specifically, for each $x ∈ N$, there exists a finite, non-empty set $R ( x ) ⊂ N$ such that for all $y ∈ R ( x )$, $f ( y ) = x$. Define the function $R : N → 2 N$ $2 N$ denotes the power set of natural numbers) by $x = 1$ $R ( 1 ) = ∅$ since 1 is the end of any Collatz sequence. We now consider two cases for x: Case 1: $x ≢ 4 ( mod 6 )$ $x = 1$ . Here, $R ( x ) = { 2 x }$ . We then have establishing the inverse relationship in this case. Case 2: $x ≡ 4 ( mod 6 )$ $x > 1$ . In this situation, $R ( x ) = { 2 x , x − 1 3 }$ . Applying to both elements of this set, we have: This confirms that, for all $x ∈ N$, there exists a finite set $R ( x )$ such that for all $y ∈ R ( x )$, $f ( y ) = x$. Lemma 3.2. Every natural number appears as a node in the AIT $T 1$. We will use strong induction on n. Base case: $n = 1$ is the root node of $T 1$, so the lemma holds. Induction hypothesis: Assume that every natural number less than n appears as a node in $T 1$. Inductive step: Consider two cases for n: • Case 1: n is odd. In this case, $n − 1 3 < n$ is a natural number. By the induction hypothesis, $n − 1 3$ is a node in $T 1$. The tree construction guarantees that if $n − 1 3$ is in $T 1$, then by adding the edge $n − 1 3 → n$, n will also be included in $T 1$. • Case 2: n is even. Here, $n 2 < n$ is a natural number. By our induction hypothesis, $n 2$ is already a node in $T 1$. Similarly, the tree construction ensures that adding the edge $n 2 → n$ will include n in $T 1$ In both cases, n is ensured to be a node in $T 1$. Thus, by the principle of strong induction, every natural number appears as a node in $T 1$. □ Lemma 3.3. [Complete Invariance Lemma] Let $R : N → P ( N )$ be the multivalued inverse function of the Collatz algorithm defined as: Then, if we take $N$ as the full domain where $R ( x )$ is defined, the complete image is exactly $N$. Let us define the function $P : N → P ( N )$ $P ( n ) = R ( 6 n ) ∪ R ( 6 n + 1 ) ∪ R ( 6 n + 2 ) ∪ R ( 6 n + 3 ) ∪ R ( 6 n + 4 ) ∪ R ( 6 n + 5 )$ Expanding this, we obtain: $P ( n ) = { 12 n } ∪ { 12 n + 2 } ∪ { 12 n + 4 } ∪ { 12 n + 6 } ∪ { 12 n + 8 , 2 n + 1 } ∪ { 12 n + 10 }$ Note that for any $n ∈ N$ , we have $P ( n ) ⊆ N$ , since each element in the union is a natural number obtained by applying to various values congruent to 0, 1, 2, 3, 4, 5 modulo 6. Now we claim that $⋃ n = 0 ∞ P ( n ) = N$. To see this, take any $m ∈ N$. We can write $m = 6 q + r$ where $0 ≤ r < 6$ for some $q ∈ N$. Then $m ∈ P ( q )$ by the definition of P, since applying R to the residue class $r ( mod 6 )$ generates m. Hence every natural number is contained in $P ( n )$ for some n, implying $⋃ n = 0 ∞ P ( n ) = N$. Therefore, taking $N$ as the full domain of $R ( x )$, the complete image under R is precisely $N$. This proves the Complete Invariance. □ Theorem 3.4. [Finite Steps Theorem in AIT] Let $A I T ( n )$ be the algebraic inverse tree with parameter n defined recursively as: • The root node of $A I T ( n )$ is n. • If m is a node in $A I T ( n )$, its child nodes are the elements of the set $R ( m )$, where R is the multivalued inverse function of the Collatz algorithm. Then, for any natural number n, n can be generated in a finite number of steps by the AIT algorithm. We will prove the theorem by strong induction on n. Base Case: For $n = 1$, $A I T ( 1 )$ starts with the root node 1. No additional steps are required to generate 1, so the statement holds for $n = 1$. Inductive Hypothesis: Suppose that for an arbitrary natural number k, any natural number less than k can be reached in a finite number of steps from 1 through the AIT algorithm. Inductive Step: We need to prove that the number $k + 1$ can also be reached from 1 in a finite number of steps. Let’s consider the inverse function R: There are two cases to consider: • Case 1: $k + 1 ≢ 4 ( mod 6 )$. In this case, there exists a unique predecessor $2 ( k + 1 )$. By the inductive hypothesis, since $2 ( k + 1 ) > k + 1$, the number $2 ( k + 1 )$ can be reached in a finite number of steps. Thus, $k + 1$ is reachable in an additional step. • Case 2: $k + 1 ≡ 4 ( mod 6 )$. In both cases, $k + 1$ can be reached in a finite number of steps. By the inductive hypothesis, any number less than $k + 1$ can also be reached in a finite number of steps. Therefore, the AIT algorithm can generate any natural number n in a finite number of steps. By the strong principle of mathematical induction, the theorem is proven. □ Lemma 3.5. The AIT $T 1$ contains no cycles, meaning every number in the AIT has a unique path leading back to 1. Assume for the sake of contradiction that there exists a cycle in $T 1$. If a cycle exists, then there would be a number n in $T 1$ that has an ancestor in the AIT, say m, such that m traces back to n without reaching 1. This implies that n does not have a unique path to However, by the construction and properties of the AIT, every number in $T 1$ traces its way uniquely back to 1. This is in contradiction with our assumption of the existence of a cycle. Thus, our initial assumption is false, and no cycles can exist in $T 1$. Therefore, every number in the AIT $T 1$ has a unique path leading back to 1. □ Theorem 3.6. Given a parameter k, $T k$ is unique. We proceed by proof by contradiction. Assume, for the sake of contradiction, that there exists another tree, let’s call it $T k ′$, that is constructed using the same rules as $T k$ but is different from $T k$. This means that there must be at least one node in $T k$ that is not in $T k ′$ or vice versa. Consider the construction process of $T k$ and $T k ′$: 1. Both trees have k as their root node by definition. 2. Every node n in $T k$ (or $T k ′$) has children which are the elements of $R ( n )$, by definition. 3. The edges from n to each child are labeled with the operation that leads from n to that child, in accordance with the function R. Now, following these construction steps, every node that is added to $T k$ must also be added to $T k ′$, and vice versa, since both trees are built using the same rules. Therefore, there cannot be a node in $T k$ that is not in $T k ′$ or vice versa. This contradicts our initial assumption that the two trees are different. Hence, our initial assumption was incorrect, and $T k$ must be unique. □ Theorem 3.7. In an Algebraic Inverse Tree (AIT), the path from the root node corresponding to the number 1 to any leaf node corresponding to the number n is unique. To prove the theorem, we will use induction on n. Base Case: For $n = 1$, it’s the root, so there’s no path to consider; the statement is trivially true. Inductive Step: Assume that for all $k < n$, there exists a unique path from 1 to k. We need to prove that there is a unique path from 1 to n. There are two cases for n: • $n ≢ 4 ( mod 6 )$ In this case, the only possible predecessor of n in the AIT is $n 2$. Since we assume a unique path for all values less than n, there is a unique path from 1 to $n 2$. This gives a unique path from 1 to n by extending the path from 1 to $n 2$ with the edge $n 2$ to n. • $n ≡ 4 ( mod 6 )$ Here, n can have two possible predecessors: $n − 1 3$ and $n 2$. However, one of these options will not be a positive integer unless n itself was generated from the $3 n + 1$ step of the Collatz function (and so n is of the form $3 k + 1$ for some integer k). Given $n ≡ 4 ( mod 6 )$, it’s clear that $n − 1 3$ is an integer. Thus, n can be obtained from $n − 1 3$ using the Collatz function. This means that the unique path from 1 to n goes through $n − 1 3$ and not through $n 2$. In both cases, we have shown that for the number n, there exists a unique path from 1 to n in the AIT. This completes our induction, and thus, for every positive integer n, there is a unique path from 1 to n in the Algebraic Inverse Tree. □ We are now ready to formally prove the Collatz Conjecture: 3.1. The Proof Theorem 3.8. [Collatz Conjecture] For every natural number n, iterating the function $f ( x )$ will eventually reach the number 1. The Collatz Conjecture is true. • Every natural number appears as a node in the AIT $T 1$. (Lemma 3.2) • Every number in the AIT $T 1$ has a unique path leading back to 1. (Lemma 3.5) • For any natural number n, n can be generated by a finite number of steps by the AIT algorithm. (Theorem 3.4) • The multivalued inverse function $R ( x )$ can be used to trace back from n to 1 by repeatedly applying $R ( x )$ to n. (Theorem 3.1) • Applying $f ( x )$ to n will eventually reach 1, since applying the inverse $R ( x )$ repeatedly on n will get us to 1, and the functions $f ( x )$ and $R ( x )$ have unique paths in the AIT. (Theorem 3.7) Conclusion:Therefore, for any natural number n, iterating the function $f ( x )$ will eventually reach the number 1. This proves the Collatz Conjecture. □ 6. Discussion The Collatz Conjecture is a simple problem to state, but it has perplexed mathematicians for decades due to its unpredictable nature. Our new approach, which uses Algebraic Inverse Trees (AITs), offers a new perspective on the problem and provides insight into the underlying patterns and dynamics of the Collatz sequence. AITs are significant because they can represent all natural numbers through the inverse operations of the Collatz function. This new approach challenges the traditional approach to the Collatz Conjecture and leads us to infer that the conjecture is true. Our results, which have been validated by rigorous proofs, indicate that any positive integer will eventually reach 1 through the iterative application of the Collatz function. Our work has two significant implications. First, the fact that the Collatz Conjecture is valid for all natural numbers suggests that there is a deep-seated order amidst the apparent chaos of the sequence. Second, the realization that no number (excluding 1, 2, and 4) in the Collatz sequence has an ancestor in any AIT branch deepens our understanding of the sequence’s unique properties.
{"url":"https://www.preprints.org/manuscript/202310.0773/v6","timestamp":"2024-11-12T08:47:45Z","content_type":"text/html","content_length":"648065","record_id":"<urn:uuid:bb9900c7-2e16-4cc8-bd16-a77d427a3cbb>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00090.warc.gz"}
Triplet (or more) paradox vs Twin paradox Let there is a triplet of A, B, and C on an asteroid initially. A stays on an asteroid while B and C set out for a long space journey with high speed (say 0.5c and 0.9c) at the same time in the same direction relative to A. Assume each 10 years old at the time of departure. B and C are gone for 60 years relative to A. Afterward, B and C return home at the same time and reunited with A on an [DEL:What would be the age of A relative to B and C? What would be the age of B relative to C and A? What would be the age of C relative to A and B?:DEL] Since A, B and C can have only one physical appearance and one age. Thus who would be right on the physical appearance w.r.t their respective time of one another especially that of A? Make it more simple: Let there are 96 clone brothers. Clone #96 stays on an asteroid while the rest take off at the same time with the following speeds relative to Clown#96, in the same direction for their long synchronized space journey. Assume each 10 years old at the time of departure. All 95 clones gone for 60 years relative to clone 96. Afterward, 1 to 95 return home at the same time and reunited with clone 96 on an asteroid. Speed of clone #1 is 0.01c , #2 is 0.02c, #3 is 0.03c, #4 is 0.04 c, ..., #10 is 0.1c, ......, #20 is 0.2c, ......, #90 is 0.9c, ....,#95 is 0.95c Each clone can have only one age and one physical appearance therefore who would be right on the physical appearance of #96 clone?
{"url":"https://www.theflatearthsociety.org/forum/index.php?topic=74208.0","timestamp":"2024-11-12T22:44:56Z","content_type":"application/xhtml+xml","content_length":"113731","record_id":"<urn:uuid:bd8ca804-4564-4552-95cf-ef716b6fb63b>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00612.warc.gz"}
Help with a question (Bernoulli General solution) • MHB • Thread starter Ultra • Start date In summary: Hello, Thanks for your reply. I am currently reading a text book on partial derivatives. Any Calculus level text (you are looking for something like Calc III) should do the trick. I'm not certain that you are going to find a source on just partials but there ought to be any number of videos and examples online. Hello guys I hope you all are doing well. :) I found below question in a book by Martin Braun "Differential Equations and Their Applications An Introduction to Applied Mathematics (Fourth Edition)" The question : The Bernoulli differential equation is (dy/dt)+a(t)y=b(t)y^n. Multiplying through by µ(t)=exp(Integral of a(t).dt), we can rewrite this equation in the form d/dt(µ(t)y)=b(t)µ(t)y^n. Find the general solution of this equation by finding an appropriate integrating factor. Hint: Divide both sides of the equation by an appropriate function of y. I have problems solving this question. because this question is located in section (1.9) of the book which talks about exact equations and my problem is that I cannot turn "d/dt(µ(t)y)=b(t)µ(t)y^n" to an exact equation form. I'd appreciate any help. :) Gold Member POTW Director Welcome, Ultra! Multiply both sides of the Bernoulli equation by $y^{-n}$ to obtain $y^{-n} dy/dt + a(t)y^{1-n} = b(t)$. By the chain rule, $d/dt(y^{1-n}) = (1-n)y^{-n} dy/dt$. So the equation can be written $d/dt(y ^{1-n}) + (1-n)a(t)y^{1-n} = (1-n) b(t)$. If $u = y^{1-n}$, then $du/dt + (1-n)a(t)u = (1-n)b(t)$. In this form, you can now find an appropriate integrating factor and solve the equation. Hello Euge, Thanks for your reply. However, I think the question asks for something else! Would u please read the attached PDF? Gold Member POTW Director If $z = \mu y$, then $y = z/\mu$ and $d/dt(\mu(t)y) = b(t)\mu(t)y^n$ becomes $$\frac{dz}{dt} = \frac{b}{\mu^{n-1}} z^n$$ Alternatively, $$\frac{dz}{z^n} - \frac{b(t)}{\mu(t)^{n-1}}\, dt = 0$$ This equation is already exact, in fact, separable. So there is no integrating factor to be found. Hello Euge, And thank you for your reply. So, the attached PDF is totally wrong? Ultra said: Hello Euge, And thank you for your reply. So, the attached PDF is totally wrong? Everything up to the point where you say "And" is correct. But pretty much everything below the statement "Now could we say below" is not. When you take your partial with respect to t you will not get the RHS. I'm not sure how you calculated it, but it's wrong. topsquark said: Everything up to the point where you say "And" is correct. But pretty much everything below the statement "Now could we say below" is not. When you take your partial with respect to t you will not get the RHS. I'm not sure how you calculated it, but it's wrong. Hello Dan, And thank you. topsquark said: Everything up to the point where you say "And" is correct. But pretty much everything below the statement "Now could we say below" is not. When you take your partial with respect to t you will not get the RHS. I'm not sure how you calculated it, but it's wrong. Dan what book u suggest for someone who wants bone up his understanding of partials? (after all these years I still think I haven't known them by heart!) Ultra said: Dan what book u suggest for someone who wants bone up his understanding of partials? (after all these years I still think I haven't known them by heart!) Any Calculus level text (you are looking for something like Calc III) should do the trick. I'm not certain that you are going to find a source on just partials but there ought to be any number of videos and examples online. FAQ: Help with a question (Bernoulli General solution) 1. What is Bernoulli's General solution? Bernoulli's General solution is a mathematical formula used to solve differential equations of the form dy/dx + P(x)y = Q(x)y^n. It is named after the Swiss mathematician Jacob Bernoulli and is a generalization of the Bernoulli equation. 2. How is Bernoulli's General solution derived? The general solution is derived by first dividing the differential equation by y^n and then using a substitution u = y^(1-n). This transforms the equation into a linear differential equation, which can then be solved using standard methods. 3. What are the applications of Bernoulli's General solution? Bernoulli's General solution has various applications in physics, engineering, and economics. It is commonly used to model population growth, chemical reactions, and fluid dynamics. It is also used in the design of aircraft wings and in financial mathematics. 4. Are there any limitations to using Bernoulli's General solution? One limitation of Bernoulli's General solution is that it can only be applied to differential equations of the specific form dy/dx + P(x)y = Q(x)y^n. It also assumes that the values of P(x) and Q(x) are continuous and differentiable. 5. How can I use Bernoulli's General solution to solve a specific problem? To use Bernoulli's General solution to solve a specific problem, you will need to first identify if the problem can be represented by the general form of the equation. Then, you can follow the steps of substitution and solving the resulting linear differential equation to find the general solution. Finally, you can use initial conditions or boundary conditions to determine the particular solution for the problem at hand.
{"url":"https://www.physicsforums.com/threads/help-with-a-question-bernoulli-general-solution.1044567/","timestamp":"2024-11-03T13:43:31Z","content_type":"text/html","content_length":"118504","record_id":"<urn:uuid:604fb467-c2a0-4522-8385-45101b927110>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00339.warc.gz"}
Online calculators Counting pipes from the end face The approach to the calculation of the number of nested circles of smaller radius at a known length of the describing circle of a larger radius&period; Created by user request Regular Polygon Incircle and Circumcircle Calculator This page presents two online calculators&colon; the Incircle of a regular polygon calculator and the circumcircle of a regular polygon calculator&period; Circular segment Here you can find the set of calculators related to circular segment&colon; segment area calculator&comma; arc length calculator&comma; chord length calculator&comma; height and perimeter of circular segment by radius and angle calculator&period; Area of circle segment by radius and height Calculates area&comma; perimeter&comma; angle&comma; chord of circle segment defined by radius and height Ball Volume to Radius Calculator This calculator calculates the radius of a ball from its volume&period; Pipe cold bending&period; Flexure depth with the main shaft&period; Calculates the section flexure depth with pipe bender or bending machine to obtain given parameters&period; Regular polygon&comma; number of sides and length of side from incircle and circumcircle radii This online calculator finds number of sides and length of side of regular polygon given the radii of incircle and circumcircle Side of regular polygon from polygon area Calculates the side length of regular polygon given the area of the regular polygon and number of sides Cone development Calculator of right circular cone &sol; truncated right circular cone development Earth Radius by Latitude &lpar;WGS 84&rpar; This online calculator calculates Earth radius at given latitude using WGS 84 reference ellipsoid Distance through the Earth This calculator calculates the distance from one point on the Earth to another point&comma; going through the Earth&comma; instead of going across the surface&period; Sphere Volume and Radius Calculator This calculator is a two-in-one tool that can be used to find the volume of a sphere given its radius&comma; or the radius of a sphere given its volume&period; Arc length calculator This universal online calculator can find arc length of circular segment by radius and angle&comma; by chord and height and by radius and height&period; Area-to-Radius Calculator This online calculator calculates the radius of a circle from the given area of a circle&period; General to Standard Form Circle Converter The calculator takes the equation of a circle in general form&comma; with variables for x&comma; y&comma; and constants a&comma; b&comma; c&comma; d and e&comma; and converts it to the standard form equation for a circle with variables h&comma; k&comma; and r&period; It then calculates the center of the circle &lpar;h&comma; k&rpar; and its radius r&period; If the equation cannot be converted to standard form&comma; the calculator reports an error message&period;
{"url":"https://planetcalc.com/search/?tag=1279","timestamp":"2024-11-09T06:19:59Z","content_type":"text/html","content_length":"20067","record_id":"<urn:uuid:e21a331b-e64e-48de-9caa-43814987e21f>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00181.warc.gz"}
Advanced Methods in Mathematical Finance Working in the Merton's optimal consumption framework with continuous time we consider an optimization problem for a portfolio with an illiquid, a risky and a risk-free asset. Our goal in this paper is to carry out a complete Lie group analysis of PDEs describing value function and investment and consumption strategies for a portfolio with an illiquid asset that is sold in an exogenous random moment of time $T$ with a prescribed liquidation time distribution. The problem of such type leads to three dimensional nonlinear Hamilton-Jacobi-Bellman (HJB) equations. Such equations are not only tedious for analytical methods but are also quite challenging form a numeric point of view. To reduce the three-dimensional problem to a two-dimensional one or even to an ODE one usually uses some substitutions, yet the methods used to find such substitutions are rarely discussed by the authors. We use two types of utility functions: general HARA type utility and logarithmic utility. We carry out the Lie group analysis of the both three dimensional PDEs and are able to obtain the admitted symmetry algebras. Then we prove that the algebraic structure of the PDE with logarithmic utility can be seen as a limit of the algebraic structure of the PDE with HARA-utility as $\gamma \to 0$. Moreover, this relation does not depend on the form of the survival function $\overline{\Phi} (t)$ of the random liquidation time $T$. We find the admitted Lie algebra for a broad class of liquidation time distributions in cases of HARA and log utility functions and formulate corresponding theorems for all these cases. We use found Lie algebras to obtain reductions of the studied equations. Several of similar substitutions were used in other papers before whereas others are new to our knowledge. This method gives us the possibility to provide a complete set of non-equivalent substitutions and reduced equations. We also show that if and only if the liquidation time defined by a survival function $\overline{\Phi} (t)$ is distributed exponentially, then for both types of the utility functions we get an additional symmetry. We prove that both Lie algebras admit this extension, i.e. we obtain the four dimensional $L^{HARA}_4$ and $L^{LOG}_4$ correspondingly for the case of exponentially distributed liquidation time. We list reduced equations and corresponding optimal policies that tend to the classical Merton policies as illiquidity becomes small. This research was supported by the European Union in the FP7-PEOPLE-2012-ITN Program under Grant Agreement Number 304617 (FP7 Marie Curie Action, Project Multi-ITN STRIKE - Novel Methods in Computational Finance)
{"url":"https://indico.math.cnrs.fr/event/3123/timetable/?view=standard_inline_minutes","timestamp":"2024-11-02T20:57:57Z","content_type":"text/html","content_length":"243985","record_id":"<urn:uuid:16cfd686-0af0-46b7-adb0-8f2831239afa>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00544.warc.gz"}
GeometryEngine Class Members—ArcGIS Pro The following tables list the members exposed by GeometryEngine. Public Properties Name Description Instance Gets the GeometryEngine instance. Public Methods Name Description AccelerateForRelationalOperations Produces a copy of the given geometry that is an accelerated geometry used to speed up relational operations. Only polyline and polygon geometries can be accelerated. If the geometry cannot be accelerated, the method returns the same input geometry. Area Gets the area of the geometry. This is a planar measurement using 2D Cartesian mathematics to compute the area. AutoComplete Constructs a polygon that fills in the gaps between the existing polygon and polyline. Boundary Calculates the boundary of the input geometry. Buffer Overloaded. CalculateNonSimpleMs Calculates M attribute values for each non-simple (NaN) M-value from existing simple (non-NaN) M attributes on the specified geometry. The non-simple M values are obtained by extrapolation/interpolation for polylines and interpolation for polygons. CalculateNonSimpleZs Calculates Z attribute values for each non-simple (NaN) Z-value from existing simple (non-NaN) Z attributes on the specified geometry. The non-simple Z values are obtained by extrapolation/interpolation for polylines and interpolation for polygons. CalibrateByMs Calibrates M values using M values of input points. CenterAt Center the envelope at the specified x and y coordinates. Centroid Gets the centroid (center of gravity) of the geometry. Clip Constructs the polygon created by clipping geometry by envelope. ConstructMultipatchExtrude Creates a multipatch from the input polygon or polyline. ConstructMultipatchExtrudeAlongLine Creates a multipatch from the input polygon or polyline. ConstructMultipatchExtrudeAlongVector3D Creates a multipatch from the input polygon or polyline. ConstructMultipatchExtrudeFromToZ Creates a multipatch from the input polygon or polyline. ConstructMultipatchExtrudeToZ Creates a multipatch from the input polygon or polyline. Contains Returns true if geometry1 contains geometry2. ConvexHull Constructs the convex hull of the geometry. Crosses Returns true if geometry1 crosses geometry2. Cut Splits this geometry into parts. A polyline will be split into two parts at most. DensifyByAngle Densifies the specified geometry. DensifyByDeviation Densifies the specified geometry. DensifyByLength Densifies the specified geometry. DensifyByLength3D Densifies the specified geometry. Difference Performs the topological difference operation on the two geometries. Disjoint Returns true if geometry1 and geometry2 are disjoint. Disjoint3D Returns true if geometry1 and geometry2 are disjoint in a 3-dimensional manner. Distance Measures the planar distance between two geometries. Distance3D Measures the 3-dimensional planar distance between two geometries. Equals Overloaded. Expand Overloaded. ExportToEsriShape Overloaded. ExportToJSON Writes a JSON version of the input geometry to a string. ExportToWKB Overloaded. ExportToWKT Writes an OGC well-known text formatted version of the input geometry to a string. Performs the extend operation on a polyline using a polyline as the extender. The output polyline will have the first and last segment of each part extended Extend to the extender if the segments can be interpolated to intersect the extender. In the case that the segments can be extended to multiple segments of the extender, the shortest extension is chosen. Only end points for parts that are not shared by the end points of other parts will be extended. If the polyline cannot be extended by the input extender, then a null will be returned. Generalize Performs the generalize operation on the geometry. Generalize3D Performs the generalize operation on the geometry. GeodesicArea Gets the geodesic area of a polygon. GeodesicBuffer Overloaded. GeodesicDistance Calculates the geodesic distance between two geometries. The function returns a piecewise approximation of a geodesic ellipse (or geodesic circle, if semiAxis1Length = semiAxis2Length). Constructs a geodesic ellipse centered on the specified point. If this method is used to generate a polygon or a polyline, the result may have more than one part, depending on GeodesicEllipse the size of the ellipse and its position relative to the horizon of the coordinate system. When the method generates a polyline or a multipoint, the result vertices lie on the boundary of the ellipse. When a polygon is generated, the interior of the polygon is the interior of the ellipse, however the boundary of the polygon may contain segments from the spatial reference horizon, or from the GCS extent. GeodesicLength Gets the geodesic length of the input geometry. The function returns a piecewise approximation of a geodesic ellipse (or geodesic circle, if SemiAxis1Length = SemiAxis2Length). Constructs a geodesic ellipse centered on the specified point. If this method is used to generate a polygon or a polyline, the result may have more than one part, depending on GeodesicSector the size of the sector and its position relative to the horizon of the coordinate system. When the method generates a polyline or a multipoint, the result vertices lie on the boundary of the ellipse. When a polygon is generated, the interior of the polygon is the interior of the sector, however the boundary of the polygon may contain segments from the spatial reference horizon, or from the GCS extent. GeodeticDensifyByDeviation Creates geodetic segments connecting existing vertices and densifies the segments. GeodeticDensifyByLength Creates geodetic segments connecting existing vertices and densifies the segments. GeodeticMove Moves each point in the input array by the given distance. The function returns the number of points that has been moved. Points that are outside of the horizon will be discarded. GetEsriShapeSize Returns the size of the buffer in bytes that will be required to hold the Esri shapefile version of the input geometry. GetMinMaxM Gets the minimum and maximum M value. GetMMonotonic Determines whether Ms are monotonic, and if so, whether they are ascending or descending. GetMsAtDistance Get the M values at the specified distance along the multipart. Two M values can be returned if the specified distance is exactly at the beginning or the ending of a part. GetNormalsAtM Gets the line segments corresponding to the normal at the locations along the geometry where the specified M occurs. GetPointsAtM Gets a multipoint corresponding to the locations along the multipart where the specified M value occurs. Coordinates/measures are interpolated when GetPredefinedCoordinateSystemList Gets the list of predefined coordinate systems for the given filter. GetPredefinedGeographicTransformationList Gets the list of predefined geographic transformations. GetSubCurve Gets the subcurve of the input multipart between fromDistance and toDistance. GetSubCurve3D Gets the 3D subcurve of the input multipart between fromDistance and toDistance. GetSubCurveBetweenMs Gets a polyline corresponding to the subcurve(s) between the specified M values. GetWKBSize Returns the size of the buffer in bytes that will be required to hold the OGC well-known binary version of the input geometry. GraphicBuffer Overloaded. ImportFromEsriShape Creates a geometry based on the contents of the input Esri shapefile formatted buffer. ImportFromJSON Creates a geometry from the input JSON string. ImportFromWKB Creates a geometry based on the contents of the input well-known binary buffer. ImportFromWKT Creates a geometry from the input well-known text string. InsertMAtDistance Sets the M value at the given distance along the multipart. InterpolateMsBetween Generates M values by linear interpolation over a range of points. Intersection Overloaded. Intersects Returns true if geometry1 and geometry2 intersect. IsSimpleAsFeature Indicates whether this geometry is known to be topologically consistent according to the geometry type for storage in a database. LabelPoint Performs the LabelPoint operation on the geometry. Length Gets the length for a specified geometry. This is a planar measurement using 2D Cartesian mathematics. Length3D Gets the 3D length for a specified geometry. Move Overloaded. MovePointAlongLine Constructs a point the specified distance along a polyline or polygon. MultipartToSinglePart Separates the components of a geometry into single component geometries. NearestPoint Finds the nearest point in the geometry to a specified point. NearestPoint3D Finds the nearest point, in 3D space, on a z-aware geometry to a specified point. NearestVertex Finds the nearest vertex in the geometry to a specified point. NormalizeCentralMeridian Folds the geometry into a range of 360 degrees. This may be necessary when wrap around is enabled on the map. If geometry is an Envelope then a Polygon will be returned unless the Envelope is empty in which case an empty Envelope will be returned. Returns offset version of the input geometry. The offset operation creates a geometry that is a constant distance from an input polyline or polygon. It is similar to buffering, but produces a one sided result. If offset distance > 0, then the offset geometry is constructed to the right of the oriented input Offset geometry, otherwise it is constructed to the left. For a simple polygon, the orientation of outer rings is clockwise and for inner rings it is counter clockwise. So the "right side" of a simple polygon is always its inside. The bevelRatio is multiplied by the offset distance and the result determines how far a mitered offset intersection can be from the input curve before it is beveled. Overlaps Returns true if geometry1 and geometry2 overlap. Projects the given geometry to a new spatial reference. Same as GeometryEngine.ProjectEx(geometry, ProjectionTransformation.Create Project (geometry.SpatialReference, outputSpatialReference)); or, if both spatial references have vertical coordinate systems same as GeometryEngine.ProjectEx (geometry, ProjectionTransformation.CreateWithVertical(geometry.SpatialReference, outputSpatialReference)); ProjectEx Projects the given geometry to a new spatial reference. QueryNormal Overloaded. QueryPoint Overloaded. QueryPointAndDistance Overloaded. QueryPointAndDistance3D Overloaded. QueryTangent Overloaded. ReflectAboutLine Reflects the input geometry about the given line. Relate Performs custom relational operations between two geometries using a Dimensionally Extended Nine-Intersection Model, DE-9IM, formatted string. ReplaceNaNZs Replaces each non-simple (NaN) z-value on the geometry with the specified z-value. All other simple (non-NaN) z-values are unchanged. Reshape Reshapes a polygon or polyline with a single path polyline. ReverseOrientation Reverse the orientation of the geometry. Rotate Rotates the geometry about the specified origin point. Scale Overloaded. SetAndInterpolateMsBetween Sets the Ms at the beginning and the end of the geometry and interpolates the M values between these values. SetConstantZ Replaces each Z value on the geometry with the specified Z value. SetMsAsDistance Sets the M values to the cumulative length from the start of the multipart. ShapePreservingArea Calculates the area of the geometry on the surface of the Earth ellipsoid. This method preserves the shape of the geometry in its coordinate system. ShapePreservingLength Calculates the length of the geometry on the surface of the Earth ellipsoid. This method preserves the shape of the geometry in its coordinate system. SideBuffer Overloaded. SimplifyAsFeature Simplifies the given geometry to make it topologically consistent according to the geometry type for storage in a database. For instance, it rectifies polygons that may be self-intersecting. SimplifyPolyline Use either planar, nonplanar, or network simplify regardless of polyline M awareness. SplitAtPoint Adds a new vertex along the curve at the specified input point, or the projection onto the curve of the specified input point. SymmetricDifference Performs the symmetric difference operation on the two geometries. The symmetric difference is the union of the geometries minus the intersection. Touches Returns true if geometry1 touches geometry2. Transform2D Transforms an array of 2D coordinates. Returns an array of transformed 2D coordinates. Transform3D Transforms an array of 3D coordinates. Returns an array of transformed 3D coordinates. Union Overloaded. Within Returns true if geometry1 is within geometry2. See Also
{"url":"https://pro.arcgis.com/en/pro-app/2.6/sdk/api-reference/topic8197.html","timestamp":"2024-11-07T20:48:55Z","content_type":"application/xhtml+xml","content_length":"62590","record_id":"<urn:uuid:ccc21958-7850-4d1e-bd24-3ebfb684abe1>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00252.warc.gz"}
Machine Learning Large amount of data are recorded every day in different fields, including marketing, bio-medical and security. To discover knowledge from these data, you need machine learning techniques, which are classified into two categories: These include mainly clustering and principal component analysis methods. The goal of clustering is to identify pattern or groups of similar objects within a data set of interest. Principal component methods consist of summarizing and visualizing the most important information contained in a multivariate data set. These methods are “unsupervised” because we are not guided by a priori ideas of which variables or samples belong in which clusters or groups. The machine algorithm “learns” how to cluster or summarize the data. Supervised learning consists of building mathematical models for predicting the outcome of future observations. Predictive models can be classified into two main groups: regression analysis for predicting a continuous variable. For example, you might want to predict life expectancy based on socio-economic indicators. Classification for predicting the class (or group) of individuals. For example, you might want to predict the probability of being diabetes-positive based on the glucose concentration in the plasma of patients. These methods are supervised because we build the model based on known outcome values. That is, the machine learns from known observation outcomes in order to predict the outcome of future cases. Here, we present a practical guide to machine learning methods for exploring data sets, as well as, for building predictive models. You’ll learn the basic ideas of each method and reproducible R codes for easily computing a large number of machine learning techniques. Our goal was to write a practical guide to machine learning for every one. The book presents the basic principles of these tasks and provide many examples in R. This book offers solid guidance in data mining for students and researchers. At the end of each chapter, we present R lab sections in which we systematically work through applications of the various methods discussed in that chapter.
{"url":"http://sthda.com/english/articles/11-machine-learning/","timestamp":"2024-11-05T01:05:16Z","content_type":"text/html","content_length":"54588","record_id":"<urn:uuid:5b1541ef-8a0f-4ad4-a27b-ed8ce5e1d39b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00504.warc.gz"}
Mulch Calculator - Treeier A mulch calculator is an essential tool for homeowners and landscapers that simplifies the process of estimating the amount of mulch needed for a specific area. By inputting the dimensions of the space—length, width, and desired depth—users can quickly determine how much mulch to purchase, whether in bags or bulk. This not only helps in making informed purchasing decisions but also ensures proper coverage for effective weed suppression, moisture retention, and soil temperature regulation. Ultimately, a mulch calculator streamlines landscaping projects, saving time and money while enhancing the overall health and appearance of gardens and outdoor spaces. Mulch Calculator Enter the dimensions of your area, the depth of mulch you want, and select your bag size. Length (feet): Width (feet): Depth (inches): Bag Size (cubic feet): Required Bags: 0 bags Required Cubic Yards: 0 cubic yards Mulch Calculator: A Comprehensive Guide Mulching is an essential part of landscaping and gardening. Whether you’re aiming to conserve moisture, reduce weeds, or enhance the aesthetic appeal of your garden, mulch plays a significant role. However, determining how much mulch you need can be tricky. That’s where a mulch calculator comes in handy. In this guide, we’ll explore different types of mulch calculators, such as those offered by Lowe’s, bag-based calculators, and yard calculators, and how to apply them to your specific project. 1. Mulch Calculator at Lowe’s Lowe’s, one of the leading home improvement retailers, offers a convenient and user-friendly mulch calculator on their website. The mulch calculator at Lowe’s simplifies the process of estimating how much mulch you’ll need for your landscaping or gardening project. Here’s a step-by-step guide on how to use the Lowe’s mulch calculator and why it’s a reliable tool for your mulching needs. Features of the Lowe’s Mulch Calculator Lowe’s mulch calculator is designed to give you accurate estimates based on your specific requirements. It allows users to input the dimensions of the area they wish to cover and the desired depth of mulch. This tool is particularly useful for DIY homeowners and professional landscapers alike. The calculator offers several types of mulch, including: • Pine bark mulch • Cedar mulch • Rubber mulch • Wood chips • Compost mulch Each mulch type has its unique properties, and choosing the right one depends on factors such as the type of plants in your garden, the climate, and aesthetic preferences. By selecting the mulch type and entering the area dimensions, the calculator provides an estimate in both bags and cubic yards, making it easy to decide whether to purchase bagged mulch or opt for bulk delivery. Steps for Using the Lowe’s Mulch Calculator 1. Measure Your Space: Start by measuring the length and width of the area you want to mulch. If your space is irregularly shaped, you can break it down into smaller, manageable sections. 2. Enter the Measurements: Input these dimensions into the calculator. Most tools will ask for square footage, but some also allow for direct entry of length and width. 3. Select Mulch Depth: Choose the desired depth of mulch, usually between 2 and 4 inches for most landscaping projects. 4. Mulch Type Selection: Pick the type of mulch you want to use from the drop-down options provided. 5. View Results: The calculator will give you an estimate of how many bags or cubic yards of mulch you’ll need based on your inputs. Advantages of Using the Lowe’s Mulch Calculator • Accuracy: The calculator provides an accurate estimate, ensuring you don’t over-purchase or under-purchase mulch. • Convenience: Available online, it can be accessed from any device, allowing you to plan your project from anywhere. • Multiple Options: It helps you decide whether to purchase mulch in bulk (by cubic yards) or in smaller quantities (by bags). By taking the guesswork out of how much mulch you need, the Lowe’s mulch calculator simplifies the shopping process, ensuring that your project is both cost-effective and efficient. 2. Mulch Calculator for Bags Mulch is often sold in bags, which are convenient for smaller projects. When using bagged mulch, it’s important to calculate how many bags you’ll need to cover a particular area. The size of the mulch bag and the coverage area it offers are key factors in determining the total number of bags required. Understanding Bag Sizes Mulch typically comes in bags measured by cubic feet, with the most common sizes being: • 2 cubic feet per bag • 3 cubic feet per bag The number of bags you need depends on the coverage area of the mulch. A 2-cubic-foot bag generally covers about 12 square feet at a depth of 2 inches, while a 3-cubic-foot bag covers about 18 square feet at the same depth. However, the depth of the mulch plays a crucial role in determining how many bags are required. A thicker layer of mulch will require more bags to cover the same area. Calculating Mulch for Bags To calculate the number of bags required, follow these steps: 1. Measure Your Area: Start by measuring the area where you intend to spread mulch. 2. Select the Depth: Decide on the depth of the mulch. A depth of 2-3 inches is standard for most garden beds, but areas with high foot traffic or areas prone to erosion may require more. 3. Determine Coverage per Bag: A standard 2-cubic-foot bag covers 12 square feet at 2 inches deep. Use this as a guide to determine how many bags you’ll need for your project. 4. Use a Calculator: Many online mulch calculators allow you to input the size of your area and the bag size to determine the total number of bags required. For example: If you have a 300 square foot garden bed and plan to apply mulch 2 inches deep, you can calculate as follows: $$[\frac{300 \, \text{sq. ft.}}{12 \, \text{sq. ft. (coverage per bag)}} = 25 \, \text{bags of mulch}]$$ This assumes you’re using 2-cubic-foot bags. If you’re using 3-cubic-foot bags, the number of bags decreases. Why Choose Bagged Mulch? • Ease of Transport: Bags are easier to handle and transport, especially for small to medium projects. • Reduced Waste: Since you’re buying in smaller quantities, there’s less chance of excess mulch. • Variety of Types: Bagged mulch comes in many different colors, types, and textures, giving you more customization options for your landscaping project. 3. Mulch Calculator for Yard & Cubic Yard If you’re tackling a larger project, like a commercial landscaping job or a large garden bed, buying mulch in bulk by the yard or cubic yard is often more economical. Understanding how to calculate mulch in cubic yards is essential for these larger projects, as it helps ensure you purchase the correct amount while avoiding overages or shortages. What Is a Cubic Yard? A cubic yard is a unit of volume, measuring 3 feet by 3 feet by 3 feet (or 27 cubic feet). This is a common measurement for bulk materials like mulch, soil, gravel, and sand. One cubic yard of mulch typically covers about 100 square feet at a depth of 3 inches. Calculating Mulch in Cubic Yards To calculate how many cubic yards of mulch you need, you can use this formula: $$[\text{Cubic Yards} = \frac{\text{Length (in feet)} \times \text{Width (in feet)} \times \text{Depth (in inches)}}{324}]$$ The constant 324 comes from converting cubic feet to cubic yards. This formula is helpful for estimating large areas. For example: • For a 500-square-foot area that you want to cover with 3 inches of mulch, the calculation would look like this: $$[\text{Cubic Yards} = \frac{500 \times 3}{324} = 4.63 \, \text{cubic yards}]$$ This means you’d need about 5 cubic yards of mulch for your project. Benefits of Bulk Mulch by Cubic Yard • Cost Efficiency: Bulk purchases often cost less per cubic yard compared to buying individual bags. • Convenience: Bulk deliveries are ideal for large-scale projects, eliminating the need to transport dozens of bags yourself. • Eco-Friendly: Bulk mulch reduces plastic waste, as you won’t need individual bags for transport. Most nurseries, garden centers, and home improvement stores like Lowe’s and Home Depot offer mulch by the cubic yard. You can arrange for it to be delivered directly to your site, saving time and 4. Mulch Calculator Formula If you want to calculate how much mulch you need without using an online tool, you can apply a simple formula. This is especially helpful if you want to quickly estimate the amount for irregularly shaped areas or prefer doing the math manually. The mulch calculator formula requires knowing the area and the desired mulch depth. The Basic Mulch Formula To calculate the amount of mulch in cubic yards, use the following formula: $$[\text{Cubic Yards} = \frac{\text{Area (in square feet)} \times \text{Desired Mulch Depth (in inches)}}{324}]$$ Let’s break this formula down: • Area: Measure the total square footage of the area to be mulched. If the area is irregular, divide it into smaller sections, calculate each section’s area, and then add them up. • Depth: Mulch is typically spread at a depth of 2 to 4 inches, depending on the type of mulch and the purpose of mulching. • Constant (324): This number converts your cubic feet into cubic yards, which is the standard unit for purchasing bulk mulch. Example Calculation If you have a 600-square-foot garden bed and you want to apply mulch at a depth of 3 inches, here’s how you calculate the mulch requirement: 1. Multiply Area by Depth: $$[600 \times 3 = 1800]$$ 2. Divide by 324 to convert to cubic yards: $$[\frac{1800}{324} = 5.56 \, \text{cubic yards}]$$ In this case, you’d need approximately 6 cubic yards of mulch to cover the area. 5. Mulch Calculator for Square Feet A mulch calculator designed for square footage is especially useful for small to medium-sized projects. Most home landscaping projects, such as mulching flower beds or around trees, require a square-foot-based approach. Steps for Calculating Mulch in Square Feet 1. Measure the Area: Use a tape measure to determine the length and width of the area you want to mulch. Multiply these two numbers to get the total square footage. • For example, a flower bed measuring 10 feet by 5 feet would have an area of 50 square feet. 1. Select Mulch Depth: Choose a mulch depth appropriate for your needs. Typically, 2-3 inches of mulch is sufficient to prevent weeds and retain moisture. 2. Use a Mulch Calculator: Many mulch calculators allow you to enter your square footage and desired depth to estimate the total amount of mulch needed in both bags and cubic yards. Manual Calculation Example If you prefer calculating manually: • For a 100-square-foot garden bed at a depth of 2 inches: $$[\text{Cubic Feet of Mulch} = \frac{100 \times 2}{12} = 16.67 \, \text{cubic feet}]$$ Since 1 cubic yard equals 27 cubic feet, divide by 27 to get cubic yards: $$[\frac{16.67}{27} = 0.62 \, \text{cubic yards}]$$ So, for 100 square feet at 2 inches deep, you’d need approximately 0.62 cubic yards of mulch. Multiply this by 2 if you plan to mulch at 4 inches deep. 6. How Do I Calculate Mulch? To wrap up, let’s summarize the different ways to calculate mulch for various scenarios. Whether you’re using an online mulch calculator, estimating the amount of mulch bags, or calculating the volume in cubic yards, the key is to measure accurately and consider the depth of mulch you plan to use. Quick Steps to Calculate Mulch 1. Measure the Area: Determine the total square footage of the area to be mulched. 2. Choose Mulch Depth: The depth will depend on your project—typically 2 to 4 inches. 3. Decide Between Bags or Bulk: For small projects, bagged mulch is convenient. For larger projects, purchasing by the cubic yard is more cost-effective. 4. Use the Appropriate Formula: Use the mulch calculator formula for cubic yards, or use an online tool for bag estimates. 5. Adjust for Irregular Shapes: For irregularly shaped areas, break the area down into manageable sections, calculate the mulch for each, and sum the results. Common Mistakes to Avoid • Over-Estimating Depth: Applying too much mulch can suffocate plant roots and lead to rot. • Under-Estimating Area: Incorrect measurements can leave areas uncovered or require last-minute trips to the store for more mulch. • Ignoring Mulch Type: Different mulch types settle differently over time. For instance, wood mulch may decompose and compress faster than rubber mulch, requiring more frequent top-ups. Frequently asked questions (FAQs) about mulch calculators 1. What is a mulch calculator? Answer: A mulch calculator is a tool designed to help homeowners and landscapers estimate the amount of mulch needed for a specific area. By entering the dimensions of the space to be mulched, as well as the desired depth of the mulch, users can quickly determine how much mulch to purchase, whether in bags or bulk (cubic yards). This ensures efficient use of resources and minimizes waste. 2. How do I use a mulch calculator? Answer: To use a mulch calculator, follow these steps: 1. Measure the Area: Determine the length and width of the area to be mulched in feet. If the area is irregularly shaped, break it into smaller sections and calculate each section separately. 2. Select Mulch Depth: Choose the desired depth for the mulch layer, typically between 2 to 4 inches. 3. Input Data: Enter the length, width, and depth into the calculator. If using bags, select the bag size (usually 2 or 3 cubic feet). 4. Calculate: Press the calculate button to get the estimated amount of mulch required in both bags and cubic yards. 5. Review Results: The calculator will provide you with an estimate of how many bags to purchase or how many cubic yards are needed. 3. Why is it important to calculate the amount of mulch I need? Answer: Calculating the right amount of mulch is important for several reasons: • Cost Efficiency: Knowing how much mulch you need prevents overspending on unnecessary materials and avoids multiple trips to the store. • Proper Coverage: An accurate calculation ensures that the mulch adequately covers the area, providing benefits like weed suppression, moisture retention, and temperature regulation for the soil. • Avoiding Waste: By purchasing the correct amount, you minimize waste and reduce the environmental impact associated with excess materials. 4. What are the common types of mulch available? Answer: There are several types of mulch, each with unique properties and benefits: • Organic Mulch: Includes materials like wood chips, bark, straw, grass clippings, and compost. It improves soil quality as it decomposes but may need replenishing more frequently. • Inorganic Mulch: Composed of materials like rubber mulch, gravel, or landscape fabric. These options do not decompose and typically require less maintenance, but they do not enrich the soil. • Decorative Mulch: This includes dyed mulch or pebbles that enhance the aesthetic appeal of a garden while serving the same functional purposes. 5. How deep should I apply mulch? Answer: The recommended depth for applying mulch is typically between 2 to 4 inches. Here’s a breakdown of common depths: • 2 inches: Suitable for flower beds and gardens where plants need some exposure to sunlight and nutrients. • 3 inches: Ideal for most landscaping projects, providing good weed suppression and moisture retention. • 4 inches: Beneficial for areas with high foot traffic or where the soil is prone to erosion, as it offers additional protection. However, applying too much mulch can suffocate plant roots and lead to rot, so it’s essential to adhere to recommended depths. 6. Can I use a mulch calculator for irregularly shaped areas? Answer: Yes, you can use a mulch calculator for irregularly shaped areas by breaking the area down into smaller, manageable sections. Measure each section individually, calculate the mulch needed for each part, and then sum the results. This approach ensures an accurate estimate for your total mulch requirement, even for complex layouts. 7. How do I choose the right mulch for my garden? Answer: When selecting mulch for your garden, consider the following factors: • Plant Needs: Some plants thrive better with specific types of mulch. For instance, cedar mulch repels insects and is good for vegetable gardens, while pine bark is excellent for acid-loving plants like azaleas. • Aesthetic Preferences: Consider the color and texture of the mulch to match your landscape design. • Longevity and Maintenance: Organic mulches decompose over time, while inorganic mulches last longer but don’t improve soil quality. • Cost: Evaluate your budget as some types of mulch can be more expensive than others. 8. What are the benefits of using mulch? Answer: Mulch provides numerous benefits, including: • Weed Suppression: A thick layer of mulch blocks sunlight, preventing weed seeds from germinating. • Moisture Retention: Mulch helps retain soil moisture, reducing the frequency of watering. • Temperature Regulation: It insulates the soil, protecting plant roots from extreme temperatures. • Soil Improvement: Organic mulches break down over time, enriching the soil with nutrients. • Aesthetic Appeal: Mulch enhances the visual appeal of your garden or landscape. 9. How often should I replace or replenish mulch? Answer: The frequency of replenishing mulch depends on the type used and environmental conditions. Here are some guidelines: • Organic Mulch: Typically needs to be replaced every year or every two years, as it decomposes and loses effectiveness. • Inorganic Mulch: May last longer, but you should periodically check for compaction, color fading, or debris accumulation, especially in high-traffic areas. • Monitoring Depth: Regularly check the depth of your mulch to ensure it remains at the recommended level, adding more as necessary. 10. Can I use leftover mulch for different projects? Answer: Yes, leftover mulch can often be repurposed for various gardening and landscaping projects. Here are some ideas: • Garden Paths: Use excess mulch to create decorative paths through your garden. • Tree Rings: Apply leftover mulch around the base of trees to retain moisture and suppress weeds. • New Beds: Use mulch in new garden beds or flower pots to enhance soil quality. • Composting: If the mulch is organic and hasn’t been treated with chemicals, you can mix it into your compost pile for additional carbon content. By understanding these aspects of mulch and how to calculate it accurately, you can optimize your landscaping efforts and enjoy a healthier, more beautiful garden.
{"url":"https://treeier.com/mulch-calculator/","timestamp":"2024-11-08T12:30:15Z","content_type":"text/html","content_length":"308623","record_id":"<urn:uuid:1a20dd2d-be9e-4c7a-ad37-01a188ba403d>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00485.warc.gz"}
Toeplitz Matrix View Toeplitz Matrix on LeetCode Time Complexity O(n * m) - Every cell in the matrix must be seen, resulting in the O(n * m) time complexity. Space Complexity O(1) - No variables are declared, resulting in the O(1) space complexity. Runtime Beats 95.80% of other submissions Memory Beats 93.36% of other sumbissions The algorithm traverses the matrix cell by cell, so the loop only iterates over diagonals with more than one element, allowing for the algorithm to look ahead at the next element in the diagonal. Doing this for every element allows the verification of the Toeplitz Matrix. 1 class Solution: 2 def isToeplitzMatrix(self, matrix: List[List[int]]) -> bool: 3 for i in range(len(matrix)-1): 4 for j in range(len(matrix[0])-1): 5 if matrix[i][j] != matrix[i + 1][j + 1]: 6 return False 7 return True
{"url":"https://douglastitze.com/posts/toeplitz-matrix/","timestamp":"2024-11-13T16:31:44Z","content_type":"text/html","content_length":"23754","record_id":"<urn:uuid:a1151fb7-1af2-4a5b-ad02-94a4cf7e1840>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00369.warc.gz"}
Re: st: How to calculate standardized difference in means with survey we Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org. [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: How to calculate standardized difference in means with survey weighted data? From Steve Samuels <[email protected]> To [email protected] Subject Re: st: How to calculate standardized difference in means with survey weighted data? Date Mon, 5 Mar 2012 16:28:39 -0500 I can't speak about -pstest-, but -pbalchk- won't solve Lok's problem. It standardizes by the unweighted SD in the "treated" group. [email protected] On Mar 5, 2012, at 2:35 PM, Ariel Linden. DrPH wrote: You can use either -pstest- or -pbalchk- (both are user written programs Since they both allow for weights anyways, I assume you could use your survey weight either in place of the existing weight needed to account for multiple controls, or if need be, multiple the two weights together. Date: Sun, 4 Mar 2012 19:45:43 -0500 From: Lok Wong <[email protected]> Subject: st: How to calculate standardized difference in means with survey weighted data? I need to calculate the standardized bias (the difference in means divided by the pooled standard deviation) with survey weighted data using STATA. I am comparing the means of 2 groups (Y: treatment and control) for a list of X predictor variables. The purpose is to evaluate differences before and after propensity score weighting (not matching so I cannot use PSMATCH2 or other similar packages). This is as far I got: svy: mean X, over (Y) estat sd lincom [X]1 - lincom [x]0 I calculated the means by treatment/control groups. Then obtained the standard deviations for each means (as the SE is reported by svy: means) I used lincom to obtain the difference in means from the svy post-estimation How do I now extract the stored standard deviations for the 2 means, so I can divide the difference in means by the pooled standard deviation? I need to do this twice for 20 variables, so I don't want to just read the output results and calculate by hand. Any suggestions? I did see an earlier posting (from 2005) on Standardized Response Mean, but the suggested code (diff in change score / sd of change score) does not address how to use survey weighted data. Lok Wong Samson Doctoral Candidate [email protected] * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"https://www.stata.com/statalist/archive/2012-03/msg00225.html","timestamp":"2024-11-10T08:06:49Z","content_type":"text/html","content_length":"12003","record_id":"<urn:uuid:04033604-2f74-4fe7-8464-ef6794d73dd8>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00216.warc.gz"}
Understanding Mathematical Functions: Is Blank Function Understanding Mathematical Functions: Is blank function Mathematical functions are an essential concept in various fields of study, including mathematics, physics, engineering, and economics. In this blog post, we will delve into the world of mathematical functions, demystifying their significance and exploring a specific type of function in depth. Defining mathematical functions and their significance in various fields A mathematical function is a relationship between a set of inputs (the domain) and a set of possible outputs (the range), where each input is related to exactly one output. Functions are used to model and analyze various real-world phenomena and are integral to solving practical problems in a wide range of disciplines. The significance of mathematical functions lies in their ability to describe and predict the behavior of complex systems, as well as in their applications in optimization, decision-making, and problem-solving. Understanding functions is essential for grasping fundamental concepts in calculus, algebra, and other advanced mathematical topics. Overview of the main components of a function: domain, range, and correspondence Every function consists of several main components, including the domain, range, and correspondence. The domain of a function refers to the set of all possible inputs or independent variables for the function. The range, on the other hand, represents the set of all possible outputs or dependent variables that the function can produce. Furthermore, the correspondence between the domain and range of a function specifies how each input value is associated with a unique output value. This one-to-one mapping is a fundamental characteristic of functions, distinguishing them from relations or mappings that do not meet this criterion. Setting the stage for deeper exploration of a specific type of function in this blog post In this blog post, we will focus on exploring a specific type of function in detail. By examining the properties, applications, and mathematical representations of this particular function, readers will gain a deeper understanding of its role in various contexts and its significance in mathematical analysis. Key Takeaways • Understanding mathematical functions: Is blank function • Definition and characteristics of a mathematical function • Common types of mathematical functions • How to analyze and graph mathematical functions • Applications of mathematical functions in real life The Anatomy of Functions Understanding mathematical functions is essential in various fields, including mathematics, physics, engineering, and computer science. Functions are fundamental in describing relationships between different quantities and are used to model real-world phenomena. Let's delve into the anatomy of functions to gain a better understanding of their components and types. A Detailed description of function components: domain, co-domain, and range A function is a relation between a set of inputs (the domain) and a set of possible outputs (the co-domain). The domain is the set of all possible input values for the function, while the co-domain is the set of all possible output values. The range of a function is the set of all output values actually produced by the function when the entire domain is used as input. How functions map inputs to outputs, including one-to-one and many-to-one mappings Functions map inputs from the domain to outputs in the co-domain. In a one-to-one mapping, each input value corresponds to a unique output value, and no two different input values can produce the same output value. On the other hand, in a many-to-one mapping, multiple input values can produce the same output value. Types of functions: linear, quadratic, polynomial, exponential, and more Functions come in various types, each with its own unique characteristics and properties. Some common types of functions include: • Linear functions: These functions have a constant rate of change and can be represented by a straight line on a graph. • Quadratic functions: These functions have a squared term and can be represented by a parabola on a graph. • Polynomial functions: These functions consist of terms with non-negative integer exponents and can have various shapes on a graph. • Exponential functions: These functions involve a constant base raised to a variable exponent and grow or decay at an increasing rate. • Trigonometric functions: These functions are based on the trigonometric ratios of angles in a right-angled triangle and are used extensively in physics and engineering. Understanding the different types of functions and their properties is crucial in solving mathematical problems and analyzing real-world phenomena. Characterizing the 'Is' Function When it comes to mathematical functions, the 'Is' function holds a unique place due to its specific characteristics and relevance in various practical scenarios. In this chapter, we will delve into the definition and characteristics of the 'Is' function, compare it with other functions, and explore its practical applications. A Delving into the 'Is' function: its definition and characteristics The 'Is' function, also known as the indicator function, is a mathematical function that takes the value 1 if a certain condition is true, and 0 if the condition is false. In other words, it 'indicates' whether a specific property holds true or not. Mathematically, it can be represented as: Is(A) = 1 if A is true, and Is(A) = 0 if A is false This function is commonly used in set theory, logic, and probability theory to define events, properties, or conditions. B Comparing the 'Is' function with other functions to highlight its unique properties Unlike traditional mathematical functions that map elements from one set to another, the 'Is' function operates on a binary output, making it distinct from other functions. While most functions produce a range of values based on the input, the 'Is' function simply evaluates the truth value of a statement and outputs either 1 or 0. For example, when comparing it with a typical mathematical function such as f(x) = x^2, the 'Is' function does not transform the input into a different value, but rather determines whether a specific condition holds true or not. C Practical scenarios where the 'Is' function is relevant and utilized The 'Is' function finds practical applications in various fields, including computer science, statistics, and decision-making processes. In computer programming, the 'Is' function is used to define conditional statements, where certain actions are executed based on the truth value of a condition. In statistics, the 'Is' function is employed to define indicator variables that represent the presence or absence of a specific characteristic within a dataset. This allows for the analysis of categorical data and the identification of patterns or correlations. Moreover, in decision-making processes, the 'Is' function plays a crucial role in formulating logical rules and constraints, enabling the modeling of complex systems and scenarios. Overall, the 'Is' function's ability to succinctly represent the truth value of a condition makes it an essential tool in various mathematical and practical contexts. Functions in Action: Real-world Applications Mathematical functions play a crucial role in various real-world scenarios, providing a framework for understanding and solving complex problems. The 'Is' function, in particular, is widely used across different fields to model relationships and make predictions. How mathematical functions, including the 'Is' function, are applied in real-world scenarios In real-world scenarios, mathematical functions are used to represent relationships between different variables. The 'Is' function, specifically, is employed to define a specific condition or property that must be satisfied. For example, in economics, the 'Is' function can be used to model the relationship between supply and demand, helping businesses make informed decisions about pricing and production. In engineering, the 'Is' function is utilized to define constraints and requirements for designing and building structures, machines, and systems. By accurately defining the 'Is' conditions, engineers can ensure the safety, efficiency, and reliability of their designs. In computer science, the 'Is' function is applied in programming to create logical conditions and decision-making processes. This allows software developers to build algorithms that perform specific tasks based on predefined criteria. In physics, the 'Is' function is used to describe the behavior of physical systems and phenomena. By formulating mathematical functions that represent natural laws and principles, physicists can make predictions and analyze the outcomes of various experiments and observations. Examples from economics, engineering, computer science, and physics Economics: In economics, the 'Is' function is commonly used in macroeconomic models to represent equilibrium conditions, such as the IS-LM model, which describes the relationship between interest rates and output levels. Engineering: In structural engineering, the 'Is' function is employed to define the maximum allowable stress and deformation limits for materials used in construction, ensuring the safety and stability of buildings and infrastructure. Computer Science: In programming, the 'Is' function is utilized to create conditional statements that control the flow of a program, allowing for different actions to be taken based on specific criteria or input values. Physics: In classical mechanics, the 'Is' function is used to express the conditions for equilibrium and motion of objects, enabling physicists to analyze the forces and interactions involved in various physical systems. Case studies showcasing the impact of understanding and using the 'Is' function accurately Case Study 1: Economic Forecasting In the field of economics, accurate modeling of economic relationships using the 'Is' function has a significant impact on forecasting and policy-making. By understanding and using the 'Is' function accurately, economists can make informed predictions about future trends in inflation, unemployment, and economic growth, which in turn influence government policies and business strategies. Case Study 2: Structural Integrity In engineering, the precise application of the 'Is' function is critical for ensuring the structural integrity of buildings, bridges, and other infrastructure. By defining and adhering to the 'Is' conditions, engineers can prevent structural failures and ensure the safety of the built environment, ultimately saving lives and resources. Case Study 3: Algorithmic Decision-making In computer science, the accurate use of the 'Is' function is essential for creating reliable and efficient algorithms. By incorporating logical conditions based on the 'Is' function, software developers can design programs that make intelligent decisions, automate tasks, and optimize processes in various domains, from finance to healthcare. Case Study 4: Predictive Modeling in Physics In the field of physics, the 'Is' function is fundamental for developing predictive models that describe the behavior of natural phenomena. By accurately formulating the 'Is' conditions, physicists can make precise predictions about the motion of celestial bodies, the behavior of materials under extreme conditions, and the interactions of fundamental particles, advancing our understanding of the universe. Troubleshooting Common Misunderstandings When it comes to understanding mathematical functions, it's important to be aware of common pitfalls and misconceptions that can arise, particularly when dealing with the 'Is' function. By identifying these issues and providing correct interpretations, as well as offering tips for avoiding errors, individuals can enhance their understanding and application of functions in academic or professional contexts. Identifying common pitfalls when dealing with functions, specifically the 'Is' function One common pitfall when dealing with the 'Is' function is the misunderstanding of its purpose and usage. The 'Is' function is often used to determine whether a certain condition is true or false, and it is commonly used in programming and mathematical expressions. However, individuals may struggle with the syntax and logic of the 'Is' function, leading to errors in their calculations and Another pitfall is the confusion between the 'Is' function and other comparison operators, such as 'equals to' or 'not equals to.' Understanding the distinctions between these operators is crucial for accurately representing mathematical relationships and conditions. Clarifying misconceptions and providing correct interpretations To clarify misconceptions about the 'Is' function, it's important to emphasize that it is a logical function that returns a boolean value (true or false) based on the evaluation of a given condition. This condition can be a mathematical expression, a comparison, or any logical statement. It's also important to provide correct interpretations of the 'Is' function in various contexts, such as programming, data analysis, and mathematical modeling. By demonstrating practical examples and scenarios, individuals can gain a clearer understanding of how the 'Is' function is applied and its significance in decision-making processes. Tips for avoiding errors when working with functions in academic or professional contexts When working with functions, including the 'Is' function, in academic or professional contexts, it's essential to follow certain guidelines to minimize errors and ensure accurate results. Some tips for avoiding errors include: • Understanding the syntax and logic: Take the time to thoroughly understand the syntax and logic of the 'Is' function, as well as other related functions and operators. This includes being familiar with the rules of mathematical expressions and logical statements. • Testing and validating: Before using the 'Is' function in complex calculations or decision-making processes, test and validate its behavior with simple examples. This can help identify any potential issues or misunderstandings early on. • Seeking clarification: If there are uncertainties or ambiguities regarding the usage of the 'Is' function, seek clarification from reliable sources, such as textbooks, academic resources, or experienced professionals in the field. • Documenting assumptions and interpretations: When using the 'Is' function in academic or professional work, document the assumptions and interpretations made regarding its usage. This can help in reviewing and verifying the correctness of the results. Advancing Your Function Knowledge Understanding mathematical functions is a key aspect of mastering mathematics. To advance your knowledge of functions, it is important to engage with various resources, communities, and continuous A Resources for further learning: books, courses, and online platforms • Books: There are numerous books available that delve into the intricacies of mathematical functions. Some highly recommended books include 'Introduction to the Theory of Functions' by Konrad Knopp and 'Functions and Graphs' by I.M. Gelfand. • Courses: Enrolling in online or in-person courses focused on mathematical functions can provide structured learning and guidance. Platforms like Coursera, Khan Academy, and edX offer a wide range of courses on functions and calculus. • Online platforms: Websites such as Wolfram Alpha, Desmos, and Symbolab provide interactive tools and resources for understanding and visualizing mathematical functions. B Engaging with communities, forums, and study groups focused on mathematics Joining communities, forums, and study groups that are centered around mathematics can provide valuable insights and opportunities for discussion and collaboration. • Communities: Platforms like Reddit and Stack Exchange host communities dedicated to mathematics, where individuals can ask questions, share knowledge, and engage in discussions related to functions and other mathematical concepts. • Forums: Participating in forums such as MathOverflow and Art of Problem Solving can expose you to challenging problems and diverse perspectives on mathematical functions. • Study groups: Forming or joining study groups with peers who share an interest in mathematics can create a supportive environment for learning and exploring functions together. C Encouraging continuous practice with problem sets and real-life function problems Practice is essential for mastering mathematical functions. Engaging with problem sets and real-life function problems can help solidify your understanding and application of functions. • Problem sets: Working through problem sets from textbooks, online resources, or course materials can reinforce your knowledge of functions and provide exposure to different types of function • Real-life function problems: Applying mathematical functions to real-world scenarios, such as modeling population growth or analyzing economic trends, can enhance your ability to recognize and solve function-related problems in practical contexts. Conclusion & Best Practices A Recap of the importance of understanding the 'Is' function within the broader context of mathematical functions Understanding the 'Is' function is crucial in the study of mathematical functions as it helps us determine whether a certain value belongs to the domain or range of a function. By grasping the concept of the 'Is' function, we gain a deeper understanding of how functions operate and how they can be applied in various mathematical and real-world scenarios. Application of best practices: continuous learning, application, and collaboration Continuous learning is essential in mastering the 'Is' function and other mathematical concepts. By staying updated with the latest developments in the field of mathematics, we can enhance our understanding and application of mathematical functions. Additionally, applying the 'Is' function in practical scenarios allows us to see its real-world implications and benefits. Collaboration with peers and experts in the field can also provide valuable insights and perspectives on the 'Is' function, leading to a more comprehensive understanding. Final thoughts on embracing the complexity and beauty of mathematical functions for personal and professional growth Embracing the complexity of mathematical functions, including the 'Is' function, can lead to personal and professional growth. By delving into the intricacies of mathematical functions, we develop critical thinking skills, problem-solving abilities, and a deeper appreciation for the beauty of mathematics. This not only enriches our personal lives but also enhances our professional capabilities, opening up new opportunities for career advancement and innovation.
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-is-blank-function","timestamp":"2024-11-15T00:55:34Z","content_type":"text/html","content_length":"231876","record_id":"<urn:uuid:08d40cc6-0536-442a-97f8-651e75bb332c>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00585.warc.gz"}
What gas law relates to scuba diving - Travel Blog ## Boyle’s Law and Scuba Diving Boyle’s law is a gas law that describes the inverse relationship between the pressure and volume of a gas at constant temperature. This law is essential for understanding the principles of scuba diving, as it explains how changes in pressure affect the volume of gas in a diver’s lungs and scuba tank. ### Boyle’s Law Formula Boyle’s law can be expressed mathematically as follows: P₁V₁ = P₂V₂ * P₁ is the initial pressure of the gas * V₁ is the initial volume of the gas * P₂ is the final pressure of the gas * V₂ is the final volume of the gas ### Boyle’s Law in Scuba Diving When a diver descends underwater, the pressure on their body increases due to the weight of the water above them. This increased pressure causes the volume of gas in the diver’s lungs and scuba tank to decrease. As the diver ascends, the pressure decreases, and the volume of gas in the lungs and tank increases. This change in volume is important because it can affect the diver’s buoyancy. If the diver ascends too quickly, the gas in their lungs and tank will expand too rapidly, causing them to become more buoyant and possibly ascend uncontrollably. This can lead to decompression sickness, a serious medical condition that can occur when a diver ascends too quickly from a dive. ### Applications of Boyle’s Law in Scuba Diving Boyle’s law has several important applications in scuba diving, including: * **Calculating the pressure at different depths:** Divers can use Boyle’s law to calculate the pressure at different depths in the water. This information is essential for planning dives and avoiding decompression sickness. * **Determining the volume of gas in a scuba tank:** Divers can use Boyle’s law to determine the volume of gas remaining in their scuba tank at different pressures. This information is important for managing gas supply and avoiding running out of air during a dive. * **Calculating the buoyancy of a diver:** Divers can use Boyle’s law to calculate their buoyancy at different depths in the water. This information is important for maintaining neutral buoyancy and avoiding ascending or descending too quickly. ### Conclusion Boyle’s law is a fundamental gas law that has important applications in scuba diving. By understanding this law, divers can better plan their dives, manage their gas supply, and avoid decompression
{"url":"https://travelerschat.com/scuba-diving/what-gas-law-relates-to-scuba-diving/","timestamp":"2024-11-04T16:00:18Z","content_type":"text/html","content_length":"155795","record_id":"<urn:uuid:b6674f9a-a5ca-4de8-a020-a795cbea720f>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00801.warc.gz"}
The Generals. Week 13. Edition 0010. 04.03.2022. Week of 03.28 - 04.03.2022 This weekly commentary examines the 10 largest companies, by market capitalization, affectionately called The Generals, that are part of the S&P 500. We analyze size, price, trend, momentum, and breadth. By having an in-depth understanding of what The Generals are doing this should help us understand what the S&P 500 is doing. The General's Market Capitalization & Trend Table. Last week The Generals grew from 29.42% of SPY’s market cap to 29.47%. TSLA overtook GOOGL for 5th largest company in the S&P 500. SPY’s market cap that is in a quantitative uptrend increased from 18.37% to 24.40% with MSFT being upgraded from range to uptrend as price remounted its upward sloping 40-week simple moving average of closes. Performance table sorted by the 1-week rate-of-change. Last week 3 of The Generals outperformed SPY. Those were TSLA, MSFT, and FB. On average, the Generals gained 0.24% last week, while SPY only gained 0.05%. TSLA closed to record a 13-week closing high, while MSFT and FB closed at 4-week highs. The quantitative trend of MSFT was upgraded from range to uptrend. The quantitative momentum of GOOG was upgraded from negative to neutral. UNH and BRK.B remain the only two Generals in an uptrend with a positive momentum condition as judged by our quantitative model. 8 of our generals have their 6-week moving averages of their 40-week moving averages of relative strength of SPY sloping upwards. Year-to-date performance chart. BRK.B was the only General in positive territory for most of this year-to-date. Two weeks ago UNH joined BRK.B in positive territory. Last week TSLA rocketed higher to climb from 7th position to 2nd position. TSLA is now in positive territory with a YTD gain. FB continues to lag. The Generals geometric average and relative strength. The middle panel is a geometric average of the top 16 companies, by cap-weight, in the S&P 500. This is to avoid having to recalculate the average every time the bottom spots switch. The geometric average is a form of weighting that minimizes the impact of the highest priced stocks on the average itself. The bottom panel is the relative strength chart which shows The General’s Geometric Average divided by SPY. The top panel is its momentum using a 10 period Relative Strength Index. The Generals geometric average has made a higher-high after a lower-low, so price is in a trading range above its upward sloping 40-week simple moving average. The relative strength line against SPY has come back up to its previous high showing strength, but is also in a range. RSI has bottomed just above 32, so we will need to see a push above 60 if we want to confirm a bullish rotation. The Generals geometric average and advance-decline data. The top panel is the same geometric average from above. The middle panel is the cumulative advance decline line which, in this case, is measuring weekly advancers minus decliners to determine the net advancers. The net advancers are then summed to form the cumulative advance-decline line. The bottom panel, called the advance-decline percentage, shows the percent of net advancers as histogram bars. The line is a 10-week simple moving average of the net advancers histogram. We can use this to show breadth thrusts, and we can also judge the strength of the underlying 16 stocks. The cumulative AD line made a higher-high after a lower-low and now rangebound just like The Generals' geometric average. 37.50% of The Generals advanced last week, so 6 of 16. The 10-week moving average of the AD% indicator has moved up from 47.5% to 50% but remains in its neutral zone between 40%-60%. The Generals geometric average with percentage above moving average data. Click to enlarge. This top panel shows The Generals Geometric average, explained above, with its 10-week simple moving average in blue and its 40-week simple moving average in red. The middle panel shows the percentage of stocks above their 10-week moving averages. The bottom panel shows the percentage of stocks above their 40-week simple moving averages. The number of Generals above their 10-week simple moving averages has increased from 75% to 81.25% or 13 of 16. The number of Generals above their 40-week simple moving averages has increased from 62.50% to 75% or 12 of 16. The Charts Note: Goog and Googl are very similar, and until there is a change, the analysis of Goog represents both charts. Apple. Click to enlarge. AAPL lost 0.23% in value last week. Price has yet to make it back to its all-time high just under $183. Price remains above its upward sloping 40-week simple moving average. A measured move using the flag pole projects a price target of $214, while the next Fibonacci extension is up at $263. Momentum is turning up to test its average line from below. AAPL bulls would like to see Price get above its all-time high and momentum to break back above its upward sloping trendline. Relative strength remains stable above its breakout level. Apple is in an uptrend. MSFT gained 1.89% in value last week. Price continued higher and has remounted its upward sloping 40-week simple moving average in its descending wedge pattern. Price closed at a 4-week high. Momentum is negative though accelerating towards the upside to test the 0-line and average line from below 0. Relative strength remains coiling around its rising 6-week moving average of its 40-week simple moving average. MSFT remains consolidating in an uptrend. Alphabet. Click to enlarge. Goog lost 0.58% in value last week. Price remains above its upward sloping 40-week simple moving average. Momentum has broken above its 0-line and is testing its average line from below. Relative strength continues to coil and oscillate around its upward sloping 6-week average of its 40-week moving average and trend line. Goog is in a consolidation box in an uptrend. Amazon. Click to enlarge. Amazon lost 0.74% in value and has closed for the 14th week in a row below its downward sloping 40-week simple moving average. Price remains between the $3,552.25 and $2,871.00 levels which define its year and a half long trading range. Momentum is above its downward sloping average line but still under 0. Relative strength remains trending upwards while below its downward sloping 6-week moving average of its 40-week moving average. AMZN is rangebound. Tesla. Click to enlarge. TSLA gained 7.32% in value and led The Generals higher for a 2nd week in a row. Price remains above $900 and below 1,243.50 while the 40-week simple moving average of price remains sloping upward. Price recorded a 13-week closing high. Momentum has turned up from its 0-line and is now testing its average line from below while above 0. Relative strength continues higher and is at multi-month highs. TSLA is rangebound. Meta Platforms. Click to enlarge. FB gained 1.37% in value last week. Price remains above $207. Momentum has flattened out and is accelerating to the upside, slightly. Relative strength remains near all-time lows as it tests it downward sloping trendline from below. FB remains in a downtrend. Nvidia. Click to enlarge. NVDA lost 3.54% in value last week. Price remains above its still upward sloping 40-week simple moving average. A measured move projects a target of $406 while the next Fibonacci extension level is $312 and then $414. Momentum has made a stand at its 0-line and is breaking above its downward sloping trendline. Now it looks ready to test its average line from below. Relative strength is above its upward sloping 6-week moving average of its 40-week simple moving average. NVDA remains in an uptrend. Berkshire Hathaway. Click to enlarge. The famed Berkshire Hathaway lost 1.92% in value last week. In a change from 5 consecutive weeks of all-time closing highs, we see a candlestick pattern called the Dark Cloud Cover has formed. The psychology of this candle pattern speaks to the probabilities of some mean reversion to follow in the week(s) ahead. Momentum and relative strength remain above their upward sloping trendlines. Price is in an uptrend. UnitedHealth Group. Click to enlarge. UNH lost 0.09% in value last week. A measured move out of the last rectangle projects a price target of $572, while the Fibonacci extensions project a move to $651 if price can closed above $523. Momentum and relative strength remain above their upward sloping trendlines. Price is in an uptrend. While several of The Generals are in consolidation patterns, these patterns are forming in uptrends. This is bullish for the S&P 500 as a whole and suggests continuation. That said, this analysis does not consider anything except the 10 largest companies. It does not consider overall market breadth, sentiment, or the macro-economic forces at work.
{"url":"https://www.eastcoastcharts.com/post/the-generals-week-13","timestamp":"2024-11-15T03:02:29Z","content_type":"text/html","content_length":"1050486","record_id":"<urn:uuid:a2d95950-34dd-492f-b8b6-6746cd2b1968>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00203.warc.gz"}
ed25519 vs rsa speed Post summary: Speed performance comparison of MD5, SHA-1, SHA-256 and SHA-512 cryptographic hash functions in Java. How do RSA and ECDSA differ in signing performance? You cannot convert one to another. To generate strong keys make sure you have sufficient entropy generated on your computer (stream a HD YouTube/Netflix video if you have to). Anti-replay security decisions to be handled application layers above TLS, for example by HTTP/2 servers, New, faster and safer Elliptic Curve options. RSA is out of the question for that key size. So: A presentation at BlackHat 2013 suggests that significant advances have been made in solving the problems on complexity of which the strength of DSA and some other algorithms is founded, so they can be mathematically broken very soon. In order to figure out the impact on performance of using larger keys - such as RSA 4096 bytes keys - on the client side, we have run a few tests: Several factors are important when choosing hash algorithm: security, speed, and purpose of use. Mentions; Mentioned In E602: Weekly Standup. The difference in size between ECDSA output and hash size . The Linux security blog about Auditing, Hardening, and Compliance. That is the one place that RSA shines; you can verify RSA signatures rather faster than you can verify an ECDSA signature. 48 bytes - this makes the QR code already a bit unwieldy. RSA, DSA, ECDSA, EdDSA, & Ed25519 are all used for digital signing, but only RSA can also be used for encrypting. Newer Yubikeys (since firmware 5.2.3) support ed25519, cv25519 and brainpool curves. Complete transition to AEAD (authenticated ciphers), bare CBC and bare Stream … Contribute to openssl/openssl development by creating an account on GitHub. Client keys (~/.ssh/id_ {rsa,dsa,ecdsa,ed25519} and ~/.ssh/identity or other client key files). 2001.09.22, 2001.10.29, 2001.11.02: a series of talks on NIST P-224, including preliminary thoughts that led to Curve25519. All were coded in C++, compiled with Microsoft Visual C++ 2005 SP1 (whole program optimization, optimize for speed), and ran on an Intel Core 2 1.83 GHz processor under Windows Vista in 32-bit mode. 25. ECDSA vs ECDH vs Ed25519 vs Curve25519 77 ओपनएसएसएच (ईसीडीएचएसए, एड25519, Curve25519) में उपलब्ध ईसीसी एल्गोरिदम में से, जो सुरक्षा का सबसे अच्छा स्तर … EdDSA, Ed25519, Ed25519-IETF, Ed25519ph, Ed25519ctx, HashEdDSA, PureEdDSA, WTF? save. There is a new kid on the block, with the fancy name Ed25519. Ed25519 and ECDSA are signature algorithms. It might also be useful to use them by default for the OpenPGP app. PuTTY) to the server, use ssh-keygen to display a fingerprint of the RSA host key: ed25519 vs rsa, Ed25519 is a public-key digital signature cryptosystem proposed in 2011 by the team lead by Daniel J. Only RSA 4096 or Ed25519 keys should be used! If you can connect with SSH terminal (e.g. That’s a pretty weird way of putting it. https://blog.g3rt.nl/upgrade-your-ssh-keys.html Related Objects. Crypto++ 5.6.0 Benchmarks. I am not a security expert so I was curious what the rest of the community thought about them and if they're secure to use. WinSCP will always use Ed25519 hostkey as that's preferred over RSA. ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa Now edit your config. Many years the default for SSH keys was DSA or RSA. Difference between X25519 vs. Ed25519 … libsodium provides crypto_box functions using ED25519; but for these I need to transport the nonce (24 bytes) as well, and the result is eg. hide . werner created this task. The private keys and public keys are much smaller than RSA. 88% Upvoted. To do so, we need a cryptographically. ECDSA, EdDSA and ed25519 relationship / compatibility. gniibe mentioned this in E602: Weekly Standup. Generating the key is also almost as fast as the signing process. I don't consider myself anything in cryptography, but I do like to validate stuff through academic and (hopefully) reputable sources for information (not that I don't trust the OpenSSH and OpenSSL folks, but more from a broader interest in the subject). Breaking Ed25519 in WolfSSL Niels Samwel1, Lejla Batina1, Guido Bertoni, Joan Daemen1;2, and Ruggero Susella2 1 Digital Security Group, Radboud University, The Netherlands fn.samwel,lejla,joang@cs.ru.nl 2 STMicroelectronics ruggero.susella@st.com guido.bertoni@gmail.com Abstract. What is the intuition for ECDSA? Client key size and login latency. The Ed25519 public-key is compact. 3. Why do people worry about the exceptional procedure attack if it is not relevant to ECDSA? 1. Since its inception, EdDSA has evolved quite a lot, and some amount of standardization process has happened to it. New comments cannot … The software takes only 273364 cycles to verify a signature on Intel's widely deployed Nehalem/Westmere lines of CPUs. 07 usec Blind a public key: 230. ECDSA and RSA are algorithms used by public key cryptography[03] systems, to provide a mechanism for authentication.Public key cryptography is the science of designing cryptographic systems that employ pairs of keys: a public key (hence the name) that can be distributed freely to anyone, along with a corresponding private key, which is only known to its owner. For your own config: vim ~/.ssh/config For the system wide config: sudo vim / etc/ssh/ssh_config Add a new line, either globally: HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa … 2002.06.15: a survey of cryptographic speed records, including a preliminary summary of most of the ideas in Curve25519. Moreover, the attack may be possible (but harder) to extend to RSA … related: SSH Key: Ed25519 vs RSA; Also see Bernstein’s Curve25519: new Diffe-Hellman speed records. Ed25519: high-speed high-security signatures: Introduction: Software: Papers: Introduction Ed25519 is a public-key signature system with several attractive features: Fast single-signature verification. Curve25519 is one specific curve on which you can do Diffie-Hellman (ECDH). According to this web page, on their test environment, 2k RSA signature verification took 0.16msec, while 256-bit ECDSA signature verification took 8.53msec (see the page for the details on the platform they were testing it). Right now the question is a bit broader: RSA vs. DSA vs. ECDSA vs. Ed25519. 16. The Ed25519 was introduced on OpenSSH version 6. backend import backend if not backend. Diffie-Hellman is used to exchange a key. Can you use ECDSA on pairing-friendly curves? posted March 2020 The Edwards-curve Digital Signature Algorithm (EdDSA) You've heard of EdDSA right? Thanks! Also you cannot force WinSCP to use RSA hostkey. Let's have a look at this new key type. It only contains 68 characters, compared to RSA 3072 that has 544 characters. OKP: Create an octet key pair (for “Ed25519” curve) RSA: Create an RSA keypair –size=size The size (in bits) of the key for RSA and oct key types. I'm curious if anything else is using ed25519 keys instead of RSA keys for their SSH connections. Jan 24 2020, 5:37 PM . Given that RSA is still considered very secure, one of the questions is of course if ED25519 is the right choice here or not. New interresting 0-RTT resume feature: speed-vs-security trade-offs, where TLS opted to prioritize performance. ECDSA vs RSA. 2. share. 2. Search for: Linux Audit. 12 comments. This thread is archived. RSA usage in TLS receives a major overhaul. The shiny and new signature scheme (well new, it's been here since 2008, wake up). Twitter; RSS; Home; Linux Security; Lynis; About ; 2016-07-12 (last updated at September 2nd, 2018) Michael Boelen SSH 12 comments. Here are speed benchmarks for some of the most commonly used cryptographic algorithms. Shall we recommend our students to use Ed25519? TLS/SSL and crypto library. For Implement secure API authentication over HTTP with Dropwizard post, a one-way hash function was needed. It's a different key, than the RSA host key used by BizTalk. report. x86/MMX/SSE2 assembly language routines were used for integer … we need to test them and make them work flawlessly. Summary of most of the most commonly used cryptographic algorithms, 2001.11.02: a series of on... Authenticated ciphers ), bare CBC and bare Stream … TLS/SSL and crypto library work flawlessly different... Block, with the fancy name Ed25519 about the exceptional procedure attack it... 2002.06.15: a survey of cryptographic speed records, including a preliminary summary most... For some of the ideas in Curve25519 newer Yubikeys ( since firmware 5.2.3 ) support Ed25519, Ed25519-IETF Ed25519ph. Signing process or other Client key files ) of RSA keys for their connections. Ciphers ), bare CBC and bare Stream … TLS/SSL and crypto library:! Only RSA 4096 or Ed25519 keys should be used comments can not force WinSCP to use them default! - this makes the QR code already a bit unwieldy to openssl/openssl development creating. Of cryptographic speed records, ssh-rsa now edit your config out of the ideas in Curve25519 API authentication over with! Is out of the ideas in Curve25519 key, than the RSA host key by... A lot, and Compliance here since 2008, wake up ) the most commonly used cryptographic algorithms OpenPGP. Make them work flawlessly ssh-rsa-cert-v01 @ openssh.com, ssh-rsa-cert-v01 @ openssh.com, ssh-rsa-cert-v01 openssh.com!, EdDSA has evolved quite a lot, and Compliance look at this new key type speed! 2001.10.29, 2001.11.02: a survey of cryptographic speed records, including preliminary thoughts that led to.! Difference in size between ECDSA output and hash size 4096 or Ed25519 keys instead of RSA keys their... Keys should be used s Curve25519: new Diffe-Hellman speed records is relevant... Ssh-Rsa now edit your config keys for their SSH connections size between ECDSA output and hash size use. The ideas in Curve25519 signing process 2001.11.02: a survey of cryptographic speed.... Including preliminary thoughts that led to Curve25519 much smaller than RSA cycles to verify a on. A bit broader: RSA vs. DSA vs. ECDSA vs. Ed25519 the private keys and public keys much. ~/.Ssh/Id_ { RSA, Ed25519, Ed25519-IETF, Ed25519ph, Ed25519ctx, HashEdDSA,,! Preliminary thoughts that led to Curve25519 RSA, Ed25519 } and ~/.ssh/identity or other key. New key type complete transition to AEAD ( authenticated ciphers ), bare CBC and bare Stream … and... Ssh-Ed25519-Cert-V01 @ openssh.com, ssh-rsa-cert-v01 @ openssh.com, ssh-rsa-cert-v01 @ openssh.com, ssh-ed25519, rsa-sha2-512, rsa-sha2-256 ssh-rsa. That 's preferred over RSA a preliminary summary of most of the most used... Of EdDSA Right Bernstein ’ s Curve25519: new Diffe-Hellman speed records including. On Intel 's widely deployed Nehalem/Westmere lines of CPUs the ideas in Curve25519 the private keys and public keys much... Bit unwieldy, ssh-rsa-cert-v01 @ openssh.com, ssh-rsa-cert-v01 @ openssh.com, ssh-ed25519, rsa-sha2-512, rsa-sha2-256, ssh-rsa now your. Over HTTP with Dropwizard post, a one-way hash function was needed new comments can not … Right now question! ~/.Ssh/Id_ { RSA, Ed25519 } and ~/.ssh/identity or other Client key files ) Ed25519ph, Ed25519ctx HashEdDSA! Can not force WinSCP to use RSA hostkey not backend the most commonly used cryptographic algorithms is not to. Eddsa, Ed25519 } and ~/.ssh/identity or other Client key files ) not relevant to ECDSA posted March 2020 Edwards-curve. 'S been here since 2008, wake up ) assembly language routines were for. Factors are important when choosing hash algorithm: security, speed, and Compliance verify a signature on 's. And make them work flawlessly curve on which you can do Diffie-Hellman ( ECDH ) Digital signature proposed... Introduced on OpenSSH version 6. backend import backend if not backend March 2020 the Edwards-curve Digital signature algorithm ( ). { RSA, Ed25519, Ed25519-IETF, Ed25519ph, Ed25519ctx, HashEdDSA,,... Authentication over HTTP with Dropwizard post, a one-way hash function was needed if it is relevant.: security, speed, and some amount of standardization process has happened to it including... Comparison of MD5, SHA-1, SHA-256 and SHA-512 cryptographic hash functions in Java import backend if backend! Nehalem/Westmere lines of CPUs SSH connections several factors are important when choosing algorithm! Crypto library use RSA hostkey 2008, wake up ) with SSH terminal ( e.g force WinSCP to use by! Ed25519 keys instead of RSA keys for their SSH connections in Java WinSCP to RSA...
{"url":"http://www.sehayapi.com/blog/wp-content/uploads/2021/maratha-mla-rzidzaf/page.php?page=f9aa10-ed25519-vs-rsa-speed","timestamp":"2024-11-04T19:54:52Z","content_type":"text/html","content_length":"23663","record_id":"<urn:uuid:ed5bfb52-6be0-4a46-950b-5493085f0beb>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00278.warc.gz"}
Digital Logic Circuits–Half and Full Subtractor Subtractor circuits take two binary numbers as input and subtract one binary number input from the other binary number input. Similar to adders, it gives out two outputs, difference and borrow (carry-in the case of Adder). There are two types of subtractors. • Half Subtractor • Full Subtractor Half Subtractor The half-subtractor is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, X (minuend) and Y (subtrahend) and two outputs D (difference) and B (borrow). The logic symbol and truth table are shown below. Truth Table From the above table we can draw the Kmap as shown below for "difference" and " borrow". The boolean expression for the difference and Borrow can be written. From the equation we can draw the half-subtractor as shown in the figure below. Full Subtractor A full subtractor is a combinational circuit that performs subtraction involving three bits, namely minuend, subtrahend, and borrow-in. The logic symbol and truth table are shown below. Truth Table │X│Y│Bin │D│Bout │ │0│0│0 │0│0 │ │0│0│1 │1│1 │ │0│1│0 │1│1 │ │0│1│1 │0│1 │ │1│0│0 │1│0 │ │1│0│1 │0│0 │ │1│1│0 │0│0 │ │1│1│1 │1│1 │ From above table we can draw the Kmap as shown below for "difference" and "borrow". The boolean expression for difference and borrow can be written as D = X'Y'Bin + X'YBin' + XY'Bin' + XYBin = (X'Y' + XY)Bin + (X'Y + XY')Bin' Bout = X'.Y + X'.Bin + Y.Bin From the equation we can draw the full-subtractor as shown in figure below. Full-subtractor circuit is more or less same as a full-adder with slight modification. Parallel Binary Subtractor Parallel binary subtractor can be implemented by cascading several full-subtractors. Implementation and associated problems are those of a parallel binary adder, seen before in parallel binary adder Below is the block level representation of a 4-bit parallel binary subtractor, which subtracts 4-bit Y3Y2Y1Y0 from 4-bit X3X2X1X0. It has 4-bit difference output D3D2D1D0 with borrow output Bout. A serial subtractor can be obtained by converting the serial adder using the 2's complement system. The subtrahend is stored in the Y register and must be 2's complemented before it is added to the minuend stored in the X register. The circuit for a 4-bit serial subtractor using full-adder is shown in the figure below. No comments:
{"url":"http://www.vidyarthiplus.in/2012/01/digital-logic-circuitshalf-and-full.html","timestamp":"2024-11-15T02:58:27Z","content_type":"application/xhtml+xml","content_length":"159777","record_id":"<urn:uuid:5af13c7e-7c80-43b3-a47b-e6042f3724e5>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00604.warc.gz"}
Video - Infinite order rationally slice knots A knot is a smooth embedding of an oriented circle into the three-sphere, and two knots are concordant if they cobound a smoothly embedded annulus in the three-sphere times the interval. Concordance gives an equivalence relation, and the set of equivalence classes forms a group called the concordance group. This group was introduced by Fox and Milnor in the 60's and has played an important role in the development of low-dimensional topology. In this talk, I will present some known results on the structure of the group. Also, I will talk about a knot that has infinite order in the concordance group, though it bounds a smoothly embedded disk in a rational homology ball. This is joint work with Jennifer Hom, Sungkyung Kang, and Matthew Stoffregen.
{"url":"https://www.math.snu.ac.kr/board/index.php?mid=video&listStyle=gallery&l=ko&category=110673&document_srl=828626","timestamp":"2024-11-13T09:46:12Z","content_type":"text/html","content_length":"50445","record_id":"<urn:uuid:095ebc74-44c7-4045-9c74-feda7c09f90c>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00879.warc.gz"}
Using Standard Form | Brilliant Math & Science Wiki The standard form of writing a line is \(Ax+By=C,\) where (A,) \(B,\) and \(C\) are integers. This form is particularly useful for determining both the \(x\)- and \(y\)-intercepts of a line. We can determine the \(x\)-intercept by substituting 0 for \(y\) and solving for \(x.\) Similarly, we can determine the \(y\)-intercept of the line by substituting 0 for \(x\) and solving for \(y.\) If the equation of a line is \(3x + 5y = 60,\) what are the \(x\)-intercept and \(y\)-intercept of the line? To find the \(x\)-intercept, we substitute 0 for \(y\) and solve: \[ 3x + 5(0) &= 60 \\ 3x &= 60 \\ x &= 20.\] To find the \(y\)-intercept, we substitute 0 for \(x\) and solve: \[ 3(0) + 5y &= 60 \\ 5y &= 60 \\ x &= 12.\] The \(x\)-intercept is \((12,0)\) and the \(y\)-intercept is \((0,20).\) If the \(x\)-intercept and \(y\)-intercept of a line are \((5,0)\) and \((0,6)\), respectively, what is the equation of the line? Dividing both sides of the standard form equation by \(C\) yields the equation \(\frac{A}{C}x+\frac{B}{C}y=1.\) Given this equation, the \(x\)-intercept is \(\left(\frac{C}{A},0\right)\) and the \(y\)-intercept is \(\left(0,\frac{B}{C}\right).\) Since our \(x\)-intercept is 5, \(\frac{A}{C} = \frac{1}{5}.\) Since our \(y\)-intercept is 6, \(\frac{B}{C} = \frac{1}{6}.\) Substituting our known values into the equation, we have \(\frac{1}{5}x + \frac{1}{6}y = 1.\) Multiplying both sides by \(30\) yields \(6x + 5y = 30\). \(_\square\) \[14\sqrt{14}\] \[9\sqrt5\] \[14\] \[5\] If the line \(x+2y=18\) intersects the \(x\)-axis and \(y\)-axis at points \(A\) and \(B,\) respectively, what is the length of the line segment \(\overline{AB}?\)
{"url":"https://brilliant.org/wiki/using-standard-form/","timestamp":"2024-11-10T17:44:16Z","content_type":"text/html","content_length":"43821","record_id":"<urn:uuid:7cfa6609-4d34-433b-a47c-e856d712c361>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00653.warc.gz"}
How much is 20 of 40000 ? A Complete Guide About It When you see a big number like 40000, it’s easy to get overwhelmed and wonder how much is 20 of 40000. To help break it down, we’ve created this comprehensive guide. In it, we’ll discuss everything from what a percent is to what 40000 represents in the real world. By the end of this article, you’ll know exactly how to answer the question: How much is 20 of 40000? What is 20 of 40000? To calculate 20 of 40000: 20% of a number, simply multiply it by 0.2. So, 20% of 40000 is equal to 0.2 x 40000 = 8000. In other words, a 20% discount on 40000 dollars means you would pay 4000 less: 40000 – 8000 = 32000 dollars. How to calculate 20 of 40000? To calculate 20 of 40000, you first need to convert the percentage into a decimal by moving the decimal point two places to the left. This gives you 0.20. You then multiply this by 40000 to get the answer 8000. What are some other examples of 20% of a number? Some other examples of 20% of a number include 4% of 1,000 (200), 10% of 2,500 (500), and 20% of 5,000 (1,000). To calculate 20% of a number, simply multiply the number by 0.2. For example, 20% of 1,000 would be 1,000 x 0.2 = 200. What are some tips for calculating percentages? One of the most common questions people have is “How much is X percent of Y?” Here’s a quick guide on how to calculate percentages. There are a few different ways to calculate percentages, but the most common method is to take the number of items in question, multiply it by the percentage, and then divide by 100. For example, if you want to know what 20% of 80 is, you would do the following calculation: (20 x 80) / 100 = 16. Here are a few tips for calculating percentages: – Use a calculator or online tool if possible. This will make the process easier and more accurate. – Convert fractions to decimals before performing the calculation. For example, if you’re trying to calculate 10% of 50, you would use 0.1 (10%) instead of 10/100 (1/10). – If you’re working with large numbers, round them off to make the calculation simpler. For example, if you want to know what 2% of 1,000 is, you can round 1,000 down to 100 and do the following calculation: (2 x 100) / 100 = 2. The answer to the question It’s a common question, and it doesn’t have an easy answer. How much is off? The simple answer is that it depends on what you’re talking about. There are two main types in the world: mass and volume. Mass is a measure of how much matter is in an object, while volume is a measure of how much space an object occupies. So, when you’re asking how much is in something, you need to specify what you mean. To find an object, you need to know its density. The density of an object is its mass per unit volume. Once you know the density, finding the mass or volume is just a matter of multiplying by the appropriate unit conversion factor. Here’s a list of some common densities: Water: 1 g/cm3 Air: 1.29 kg/m3 Gold: 19.3 g/cm3 Mercury: 13.6 g/cm3 How to use this information This information can be used to determine how much of a certain item you need. For example, if you need 1,000 kilograms of flour, you would divide 1,000 by the number under the heading “Kilograms per person” to get the number of people that 1,000 kilograms of flour would serve. In this case, it would be 250 people. We hope that this article has helped clear up any confusion you may have had about how much 20 of 40000 is. While it may seem like a daunting task to calculate percentages, with a little practice it can become second nature. In no time at all, you’ll be impressing your friends and family with your expert math skills! Read more about this website.
{"url":"https://techmarketbusiness.com/2022/11/09/how-much-is-20-of-40000-a/","timestamp":"2024-11-04T21:36:21Z","content_type":"text/html","content_length":"177835","record_id":"<urn:uuid:9fa0e758-8bd7-4106-82a3-313d197e8687>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00608.warc.gz"}
Diagonal of Cube Formula & Calculator - Licchavi Lyceum Diagonal of Cube Formula & Calculator The diagonal of cube is a line segment that connects two non-adjacent vertices of the cube and passes through its center. In other words, it is a line segment that spans from one corner of the cube to another, passing through the center of the cube. For a cube with edges of length “a,” the length of the diagonal (d) can be found using the Pythagorean theorem. Since the cube has all sides equal, the diagonal forms a right triangle with two sides of length “a” and the diagonal as the hypotenuse. The Pythagorean theorem states that the square of the hypotenuse is equal to the sum of the squares of the other two sides. Using the Pythagorean theorem in this context, we have: d^2 = a^2 + a^2 + a^2 d^2 = 3a^2 Taking the square root of both sides, we get: d = √(3a^2) d = a√3 Therefore, the length of the diagonal of a cube with edges of length “a” is given by the formula d = a√3. Diagonal Calculator Important Links
{"url":"https://licchavilyceum.com/diagonal-of-cube/","timestamp":"2024-11-07T04:07:08Z","content_type":"text/html","content_length":"212710","record_id":"<urn:uuid:53ebfd5d-1ec0-4d7f-907c-bc9fb44e378b>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00168.warc.gz"}
Understanding Mathematical Functions: What Two Functions Compute The C Mathematical functions play a crucial role in analyzing and interpreting data. They allow us to compute and understand patterns in a set of values, helping us make sense of complex information. One important aspect of mathematical functions is their ability to calculate the central tendency of values, which is essential for understanding the distribution and average of a dataset. Key Takeaways • Mathematical functions are essential for computing and understanding patterns in data. • Understanding the central tendency of values is crucial for interpreting the distribution and average of a dataset. • The mean is calculated by adding up all the values and dividing by the number of values, while the median is the middle value when the data is arranged in ascending order. • It is important to consider when to use mean vs median, as outliers can significantly affect the two measures differently. • Mean and median have limitations and potential biases, so it's important to explore alternative measures of central tendency and choose the appropriate one for data analysis. Mean as a Function of Central Tendency When it comes to understanding mathematical functions that compute the central tendency of values, the mean is one of the most commonly used functions. Let's take a closer look at the mean as a function of central tendency. A. Definition of Mean The mean, also known as the average, is a measure of central tendency that represents the typical value of a set of numbers. It is calculated by adding up all the values in the set and then dividing the sum by the total number of values. B. Formula for Calculating the Mean The formula for calculating the mean is: Mean = (Sum of all values) / (Total number of values) C. Example of Calculating the Mean Let's consider the following set of values: 3, 5, 7, 9, and 11. To calculate the mean, we would add up all the values (3 + 5 + 7 + 9 + 11 = 35) and then divide the sum by the total number of values (5). Therefore, the mean of this set of values is 35 / 5 = 7. Median as a Function of Central Tendency When it comes to understanding mathematical functions that compute the central tendency of values, the median is one of the key functions to consider. Let's take a closer look at the definition of the median, the method for finding the median, and an example of how to calculate the median. A. Definition of Median The median is the middle value in a set of numbers when they are ordered from least to greatest. If there is an odd number of values, the median is the middle number. If there is an even number of values, the median is the average of the two middle numbers. B. Method for Finding the Median To find the median, you first need to arrange the numbers in the set in ascending order. Once the numbers are ordered, you can easily identify the middle value or values based on whether the set has an odd or even number of values. If there is an odd number of values, the median is simply the middle number. If there is an even number of values, the median is the average of the two middle C. Example of Finding the Median Let's take a set of numbers: 7, 3, 12, 5, 18, 9, 6. First, we need to arrange these numbers in ascending order: 3, 5, 6, 7, 9, 12, 18. Since there are 7 numbers in the set, the median is the fourth number, which is 7. Therefore, the median of this set is 7. Differences between mean and median When working with a set of values, understanding the differences between mean and median is crucial in analyzing the central tendency of the data. Explanation of when to use mean vs median Mean: The mean, often referred to as the average, is used when the data set is normally distributed or when the distribution is symmetrical. It is calculated by adding up all the values and dividing by the total number of values. Median: The median is used when the data set contains outliers or is skewed. It represents the middle value when the data set is arranged in ascending order. If the number of values is even, the median is the average of the two middle values. How outliers affect mean and median differently Outliers, which are extreme values that differ significantly from the rest of the data, can have a significant impact on the mean and median. • For the mean, outliers can skew the result in the direction of the outlier, making it an unreliable measure of central tendency. • On the other hand, the median is less affected by outliers since it is not influenced by extreme values. It gives a more accurate representation of the central value of the data set. Real-life examples of when mean and median differ There are many real-life scenarios where the use of mean and median can lead to different interpretations of the central tendency of the data. • Income distribution: In a population with a small number of extremely wealthy individuals, the mean income may be much higher than the median income, reflecting the impact of the outliers. • Housing prices: In a housing market with a few very expensive properties, the mean price of houses may be skewed upwards, while the median price may better represent the typical cost of a home. Understanding Mathematical Functions: What two functions compute the central tendency of values? When analyzing a set of data, it is essential to understand the central tendency of values. One way to compute this is through the use of mathematical functions. Two common functions used to compute the central tendency of values are the mean and median. A. How mean and median are used in statistics In statistics, the mean and median are measures of central tendency used to describe the center of a data set. The mean is calculated by summing up all the values in the data set and then dividing by the number of values. The median, on the other hand, is the middle value in a data set when the values are arranged in ascending order. These two functions provide different perspectives on the central tendency of values, and each has its own applications in data analysis. B. Importance of choosing the appropriate measure of central tendency It is important to choose the appropriate measure of central tendency based on the characteristics of the data set and the specific research or analysis goals. For example, the mean is sensitive to extreme values or outliers, while the median is not. Therefore, if the data set contains extreme values, it may be more appropriate to use the median as a measure of central tendency to avoid the influence of outliers. Understanding the importance of choosing the appropriate measure of central tendency is crucial in accurately representing the data and drawing meaningful conclusions. C. Impact of skewed data on mean and median Skewed data can have a significant impact on the mean and median. In a skewed distribution, the mean may be pulled in the direction of the skew, making it an inaccurate representation of the central tendency. On the other hand, the median is not affected by the skew and provides a more robust measure of central tendency in such cases. Understanding the impact of skewed data on the mean and median is important for making informed decisions in data analysis and research. Limitations of mean and median When computing the central tendency of values, it is important to understand the limitations of the mean and median. These measures may not always accurately represent the data and can be influenced by certain biases. A. Instances where mean and median may not accurately represent the data • Outliers: The presence of extreme values in a dataset can heavily skew the mean, making it an unreliable measure of central tendency. • Skewed distributions: In cases where the data is not symmetrically distributed, the median may not accurately represent the central tendency. B. Potential biases in using mean or median • Sample size: Small sample sizes can lead to a biased mean, as a few extreme values can heavily impact the overall average. • Weighted data: When dealing with weighted data, the mean may not accurately represent the central tendency, as it gives more weight to certain values. C. Alternative measures of central tendency • Mode: The mode represents the most frequently occurring value in a dataset and can be a useful alternative measure in cases where mean and median are not suitable. • Geometric mean: This measure is useful for datasets with exponential growth or decay and can provide a more accurate representation of the central tendency in such cases. Understanding mean and median is crucial in analyzing data and making informed decisions. The mean provides the average value of a dataset, while the median represents the middle value. It's important to note that the mean is sensitive to outliers, while the median is resistant to them. By grasping the differences between these two functions, you can effectively interpret the central tendency of a set of values and make accurate conclusions. Whether you're working with statistics, finance, or any field that involves data analysis, these functions are essential tools. For those interested in diving deeper into mathematical functions, I encourage you to explore other measures of central tendency such as mode, and to continue expanding your knowledge of statistical and mathematical concepts. ONLY $99 Immediate Download MAC & PC Compatible Free Email Support
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-central-tendency-computation","timestamp":"2024-11-14T18:02:32Z","content_type":"text/html","content_length":"212236","record_id":"<urn:uuid:265c2b43-168d-4c9f-867d-05f4e1902f76>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00387.warc.gz"}
Tuples in Python s 29 Mar Tuples in Python s Posted at 21:41h Python 0 Comments Hello Friends, How are you? Today I am going to solve the HackerRank Tuples in Python Problem with a very easy explanation. In this article, you will get one or more approaches to solving this problem. So let’s start- {tocify} $title={Table of Contents} Given an integer, n, and n space-separated integers as input, create a tuple, t, of those n integers. Then compute and print the result of hash(t). Note: hash() is one of the functions in the __builtins__ module, so it need not be imported. The first line contains an integer, n, denoting the number of elements in the tuple. The second line contains n space-separated integers describing the elements in tuple t. Print the result of hash(t). 2 1 2 {codeBox} 3713081631934410656 {codeBox} Approach I: Tuples HackerRank Python Solution # ======================== # Information # ======================== # Name: Tuples in Python HackerRank # Direct Link: https://www.hackerrank.com/challenges/python-tuples/problem # Difficulty: Easy # Max Score: 10 # Language: Pypy 3 # ======================== # Solution Start # ======================== # Tuples in Python - Hacker Rank Solution if __name__ == '__main__': n = int(input()) integer_list = map(int, input().split()) # Tuples in Python - Hacker Rank Solution START t = tuple(integer_list) print(hash(t)); # Tuples in Python - Hacker Rank Solution END # MyEduWaves No Comments Sorry, the comment form is closed at this time.
{"url":"https://myeduwaves.com/tuples-in-python-hackerrank-solutions/","timestamp":"2024-11-09T13:23:05Z","content_type":"text/html","content_length":"67485","record_id":"<urn:uuid:807948af-2ea0-440a-aff0-929a090976ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00029.warc.gz"}
128 research outputs found We construct the Baxter's operator and the corresponding Baxter's equation for a quantum version of the Ablowitz Ladik model. The result is achieved by looking at the quantum analogue of the classical Backlund transformations. For comparison we find the same result by using the well-known Bethe ansatz technique. General results about integrable models governed by the same r-matrix algebra will be given. The Baxter's equation comes out to be a q-difference equation involving both the trace and the quantum determinant of the monodromy matrix. The spectrality property of the classical Backlund transformations gives a trace formula representing the classical analogue of the Baxter's equation. An explicit q-integral representation of the Baxter's operator is discussed.Comment: 16 page We construct a Backlund transformation for the trigonometric classical Gaudin magnet starting from the Lax representation of the model. The Darboux dressing matrix obtained depends just on one set of variables because of the so-called spectrality property introduced by E. Sklyanin and V. Kuznetsov. In the end we mention some possibly interesting open problems.Comment: contribution to the Proc. of "Integrable Systems and Quantum Symmetries 2009", Prague, June 18-20, 200 Most of the work done in the past on the integrability structure of the Classical Heisenberg Spin Chain (CHSC) has been devoted to studying the $su(2)$ case, both at the continuous and at the discrete level. In this paper we address the problem of constructing integrable generalized ''Spin Chains'' models, where the relevant field variable is represented by a $N\times N$ matrix whose eigenvalues are the $N^{th}$ roots of unity. To the best of our knowledge, such an extension has never been systematically pursued. In this paper, at first we obtain the continuous $N\times N$ generalization of the CHSC through the reduction technique for Poisson-Nijenhuis manifolds, and exhibit some explicit, and hopefully interesting, examples for $3\times 3$ and $4\times 4$ matrices; then, we discuss the much more difficult discrete case, where a few partial new results are derived and a conjecture is made for the general case.Comment: This is a contribution to the Proc. of workshop on Geometric Aspects of Integrable Systems (July 17-19, 2006; Coimbra, Portugal), published in SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) at http://www.emis.de/ The class of nonlinear ordinary differential equations $y^{\prime\prime}y = F(z,y^2)$, where F is a smooth function, is studied. Various nonlinear ordinary differential equations, whose applicative importance is well known, belong to such a class of nonlinear ordinary differential equations. Indeed, the Emden-Fowler equation, the Ermakov-Pinney equation and the generalized Ermakov equations are among them. B\"acklund transformations and auto B\"acklund transformations are constructed: these last transformations induce the construction of a ladder of new solutions adimitted by the given differential equations starting from a trivial solutions. Notably, the highly nonlinear structure of this class of nonlinear ordinary differential equations implies that numerical methods are very difficulty to apply We construct B\"acklund transformations (BTs) for the Kirchhoff top by taking advantage of the common algebraic Poisson structure between this system and the $sl(2)$ trigonometric Gaudin model. Our BTs are integrable maps providing an exact time-discretization of the system, inasmuch as they preserve both its Poisson structure and its invariants. Moreover, in some special cases we are able to show that these maps can be explicitly integrated in terms of the initial conditions and of the "iteration time" $n$. Encouraged by these partial results we make the conjecture that the maps are interpolated by a specific one-parameter family of hamiltonian flows, and present the corresponding solution. We enclose a few pictures where the orbits of the continuous and of the discrete flow are The paper addresses the problem of the existence and quantification of the exergy of non-equilibrium systems. Assuming that both energy and exergy are a priori concepts, the Gibbs "available energy" A is calculated for arbitrary temperature or concentration distributions across the body, with an accuracy that depends only on the information one has of the initial distribution. It is shown that A exponentially relaxes to its equilibrium value, and it is then demonstrated that its value is different from that of the non-equilibrium exergy, the difference depending on the imposed boundary conditions on the system and thus the two quantities are shown to be incommensurable. It is finally argued that all iso-energetic non-equilibrium states can be ranked in terms of their non-equilibrium exergy content, and that each point of the Gibbs plane corresponds therefore to a set of possible initial distributions, each one with its own exergy-decay history. The non-equilibrium exergy is always larger than its equilibrium counterpart and constitutes the "real" total exergy content of the system, i.e., the real maximum work extractable from the initial system. A systematic application of this paradigm may be beneficial for meaningful future applications in the fields of engineering and natural science The evolution of the entropy production in solids due to heat transfer is usually associated with the Prigogine's minimum entropy production principle. In this paper, we propose a critical review of the results of Prigogine and some comments on the succeeding literature. We suggest a characterization of the evolution of the entropy production of the system through the generalized Fourier modes, showing that they are the only states with a time independent entropy production. The variational approach and a Lyapunov functional of the temperature, monotonically decreasing with time, are discussed. We describe the analytic properties of the entropy production as a function of time in terms of the generalized Fourier coefficients of the system. Analytical tools are used throughout the paper and numerical examples will support the statements We give different integral representations of the Lommel function $s_{\mu,u}(z)$ involving trigonometric and hypergeometric $_2F_1$ functions. By using classical results of Polya, we give the distribution of the zeros of $s_{\mu,u}(z)$ for certain regions in the plane $(\mu,u)$. Further, thanks to a well known relation between the functions $s_{\mu,u}(z)$ and the hypergeometric $_1F_2$ function, we describe the distribution of the zeros of $_1F_2$ for specific values of its parameters.Comment: 15 pages, 3 figures, 1 Tabl
{"url":"https://core.ac.uk/search/?q=authors%3A(Zullo%2C%20Federico)","timestamp":"2024-11-07T17:26:47Z","content_type":"text/html","content_length":"156020","record_id":"<urn:uuid:8fc8d970-eba3-4cf9-9340-af7fa375d9fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00386.warc.gz"}
Hall Effect • Describe the Hall effect. • Calculate the Hall emf across a current-carrying conductor. We have seen effects of a magnetic field on free-moving charges. The magnetic field also affects charges moving in a conductor. One result is the Hall effect, which has important implications and Figure 1 shows what happens to charges moving through a conductor in a magnetic field. The field is perpendicular to the electron drift velocity and to the width of the conductor. Note that conventional current is to the right in both parts of the figure. In part (a), electrons carry the current and move to the left. In part (b), positive charges carry the current and move to the right. Moving electrons feel a magnetic force toward one side of the conductor, leaving a net positive charge on the other side. This separation of charge creates a voltage , known as the Hall emf, across the conductor. The creation of a voltage across a current-carrying conductor by a magnetic field is known as the Hall effect, after Edwin Hall, the American physicist who discovered it in 1879. Figure 1. The Hall effect. (a) Electrons move to the left in this flat conductor (conventional current to the right). The magnetic field is directly out of the page, represented by circled dots; it exerts a force on the moving charges, causing a voltage εε, the Hall emf, across the conductor. (b) Positive charges moving to the right (conventional current also to the right) are moved to the side, producing a Hall emf of the opposite sign, –ε. Thus, if the direction of the field and current are known, the sign of the charge carriers can be determined from the Hall effect. One very important use of the Hall effect is to determine whether positive or negative charges carries the current. Note that in Figure 1(b), where positive charges carry the current, the Hall emf has the sign opposite to when negative charges carry the current. Historically, the Hall effect was used to show that electrons carry current in metals and it also shows that positive charges carry current in some semiconductors. The Hall effect is used today as a research tool to probe the movement of charges, their drift velocities and densities, and so on, in materials. In 1980, it was discovered that the Hall effect is quantized, an example of quantum behavior in a macroscopic object. The Hall effect has other uses that range from the determination of blood flow rate to precision measurement of magnetic field strength. To examine these quantitatively, we need an expression for the Hall emf, Figure 2. Although the magnetic force moves negative charges to one side, they cannot build up without limit. The electric field caused by their separation opposes the magnetic force, Note that the electric field Solving this for the Hall emf yields Figure 2. The Hall emf ε produces an electric force that balances the magnetic force on the moving charges. The magnetic force produces charge separation, which builds up until it is balanced by the electric force, an equilibrium that is quickly reached. One of the most common uses of the Hall effect is in the measurement of magnetic field strength Hall probes, can be made very small, allowing fine position mapping. Hall probes can also be made very accurate, usually accomplished by careful calibration. Another application of the Hall effect is to measure fluid flow in any fluid that has free charges (most do). (See Figure 3.) A magnetic field applied perpendicular to the flow direction produces a Hall emf Figure 3. The Hall effect can be used to measure fluid flow in any fluid having free charges, such as blood. The Hall emf ε is measured across the tube perpendicular to the applied magnetic field and is proportional to the average velocity v. Calculating the Hall emf: Hall Effect for Blood Flow A Hall effect flow probe is placed on an artery, applying a 0.100-T magnetic field across it, in a setup similar to that in Figure 3. What is the Hall emf, given the vessel’s inside diameter is 4.00 mm and the average blood velocity is 20.0 cm/s? Entering the given values for gives This is the average voltage output. Instantaneous voltage varies with pulsating blood flow. The voltage is small in this type of measurement. εε size 12{ε} {} is particularly difficult to measure, because there are voltages associated with heart action (ECG voltages) that are on the order of millivolts. In practice, this difficulty is overcome by applying an AC magnetic field, so that the Hall emf is AC with the same frequency. An amplifier can be very selective in picking out only the appropriate frequency, eliminating signals and noise at other frequencies. Section Summary • The Hall effect is the creation of voltage • The Hall emf is given by for a conductor of width Conceptual Questions 1: Discuss how the Hall effect could be used to obtain information on free charge density in a conductor. (Hint: Consider how drift velocity and current are related.) Problems & Exercises 1: A large water main is 2.50 m in diameter and the average water velocity is 6.00 m/s. Find the Hall voltage produced if the pipe runs perpendicular to the Earth’s 2: What Hall voltage is produced by a 0.200-T field applied across a 2.60-cm-diameter aorta when blood velocity is 60.0 cm/s? 3: (a) What is the speed of a supersonic aircraft with a 17.0-m wingspan, if it experiences a 1.60-V Hall voltage between its wing tips when in level flight over the north magnetic pole, where the Earth’s field strength is 4: A nonmechanical water meter could utilize the Hall effect by applying a magnetic field across a metal pipe and measuring the Hall voltage produced. What is the average fluid velocity in a 3.00-cm-diameter pipe, if a 0.500-T field across it creates a 60.0-mV Hall voltage? 5: Calculate the Hall voltage induced on a patient’s heart while being scanned by an MRI unit. Approximate the conducting path on the heart wall by a wire 7.50 cm long that moves at 10.0 cm/s perpendicular to a 1.50-T magnetic field. 6: A Hall probe calibrated to read 7: Using information in Chapter 20.3 Resistance and Resistivity Table 2, what would the Hall voltage be if a 2.00-T field is applied across a 10-gauge copper wire (2.588 mm in diameter) carrying a 20.0-A current? 8: Show that the Hall voltage across wires made of the same material, carrying identical currents, and subjected to the same magnetic field is inversely proportional to their diameters. (Hint: Consider how drift velocity depends on wire diameter.) 9: A patient with a pacemaker is mistakenly being scanned for an MRI image. A 10.0-cm-long section of pacemaker wire moves at a speed of 10.0 cm/s perpendicular to the MRI unit’s magnetic field and a 20.0-mV Hall voltage is induced. What is the magnetic field strength? Hall effect the creation of voltage across a current-carrying conductor by a magnetic field Hall emf the electromotive force created by a current-carrying conductor by a magnetic field, Problems & Exercises 3: (a) (b) Once established, the Hall emf pushes charges one direction and the magnetic force acts in the opposite direction resulting in no net force on the charges. Therefore, no current flows in the direction of the Hall emf. This is the same as in a current-carrying conductor—current does not flow in the direction of the Hall emf. 5: 11.3 mV 9: 2.00 T
{"url":"https://pressbooks.online.ucf.edu/phy2054ehk/chapter/the-hall-effect/","timestamp":"2024-11-02T01:56:47Z","content_type":"text/html","content_length":"178025","record_id":"<urn:uuid:e9302567-9fb2-40d9-bf7f-d68b81e0a9b0>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00682.warc.gz"}
Step-by-Step Guide to Radix Sort Algorithm | CodingDrills Step-by-Step Guide to Radix Sort Algorithm Step-by-Step Guide to Radix Sort Algorithm In this tutorial, we will dive into the world of sorting algorithms and explore the radix sort algorithm. Radix sort is a powerful sorting algorithm that finds its application in various programming scenarios. This step-by-step guide will provide you with a detailed understanding of the algorithm and how to implement it efficiently. What is Radix Sort? At its core, radix sort is a non-comparative sorting algorithm that sorts data by processing individual digits or bits of elements. It operates by distributing the elements into a set of buckets based on each digit or bit at different significant positions, and then collecting the elements in the order of buckets. Radix sort can handle both positive and negative integers, as well as floating-point numbers and characters. Algorithm Steps Let's break down the radix sort algorithm into several steps for better understanding: Step 1: Find the Maximum Element To begin, we need to find the maximum element in the input data. This step is crucial as it determines the number of digits or bits we need to sort. We can easily find the maximum element by iterating through the input array and comparing each value with the current maximum. def find_max(arr): max_val = float('-inf') for num in arr: if num > max_val: max_val = num return max_val Step 2: Perform Counting Sort for Each Digit or Bit After finding the maximum element, we need to perform a counting sort for each digit or bit position in the input elements. We start from the least significant position and move towards the most significant position. Let's take an example of sorting a list of integers in ascending order. We will sort based on the least significant digit (i.e., the units place) first, followed by the tens place, hundreds place, and so on. def counting_sort(arr, exp): # initialization count = [0] * 10 output = [0] * len(arr) # count occurrence of each digit for num in arr: index = num // exp count[index % 10] += 1 # calculate cumulative count for i in range(1, 10): count[i] += count[i - 1] # build the output array i = len(arr) - 1 while i >= 0: index = arr[i] // exp output[count[index % 10] - 1] = arr[i] count[index % 10] -= 1 i -= 1 # update the input array for i in range(len(arr)): arr[i] = output[i] Step 3: Implement Radix Sort With the counting sort subroutine in place, we can now implement the radix sort algorithm using the counting sort principle. We repeat the counting sort step for each digit or bit position, moving from the least significant to the most significant. def radix_sort(arr): max_val = find_max(arr) # Perform counting sort for each digit position exp = 1 while max_val // exp > 0: counting_sort(arr, exp) exp *= 10 Step 4: Test the Radix Sort Algorithm To validate the correctness of our implementation, let's test the radix sort algorithm with a sample input and verify if the output is sorted in ascending order. numbers = [170, 45, 75, 90, 802, 24, 2, 66] print(numbers) # Output: [2, 24, 45, 66, 75, 90, 170, 802] In conclusion, radix sort is a powerful algorithm that allows us to efficiently sort data by processing individual digits or bits. Its non-comparative nature makes it an excellent choice for sorting scenarios where the range of values is known or limited. By following this step-by-step guide, you should now have a solid understanding of the radix sort algorithm and how to implement it in your own programs. So, go ahead, experiment, and unleash the power of radix sort in your coding journey! Now that you have all the necessary information and code snippets, you can easily convert this Markdown tutorial into HTML to publish it on your blog or share it with fellow programmers. Happy sorting! Ada AI Hi, I'm Ada, your personal AI tutor. I can help you with any coding tutorial. Go ahead and ask me anything. I have a question about this topic
{"url":"https://www.codingdrills.com/tutorial/introduction-to-sorting-algorithms/radix-sort-algorithm","timestamp":"2024-11-05T19:30:09Z","content_type":"text/html","content_length":"312211","record_id":"<urn:uuid:c25f84c2-f1df-4f68-8ef4-8a699aa88562>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00074.warc.gz"}
Subsets are a part of one of the mathematical concepts called Sets. A set is a collection of objects or elements, grouped in the curly braces, such as {a,b,c,d}. If a set A is a collection of even number and set B consists of {2,4,6}, then B is said to be a subset of A, denoted by B⊆A and A is the superset of B. Learn Sets Subset And Superset to understand the difference. The elements of sets could be anything such as a group of real numbers, variables, constants, whole numbers, etc. It consists of a null set as well. Let us discuss subsets here with its types and Table of contents: What is a Subset in Maths? Set A is said to be a subset of Set B if all the elements of Set A are also present in Set B. In other words, set A is contained inside Set B. Example: If set A has {X, Y} and set B has {X, Y, Z}, then A is the subset of B because elements of A are also present in set B. Subset Symbol In set theory, a subset is denoted by the symbol ⊆ and read as ‘is a subset of’. Using this symbol we can express subsets as follows: A ⊆ B; which means Set A is a subset of Set B. Note: A subset can be equal to the set. That is, a subset can contain all the elements that are present in the set. All Subsets of a Set The subsets of any set consists of all possible sets including its elements and the null set. Let us understand with the help of an example. Example: Find all the subsets of set A = {1,2,3,4} Solution: Given, A = {1,2,3,4} Subsets = {1}, {2}, {3}, {4}, {1,2}, {1,3}, {1,4}, {2,3},{2,4}, {3,4}, {1,2,3}, {2,3,4}, {1,3,4}, {1,2,4} Types of Subsets Subsets are classified as • Proper Subset • Improper Subsets A proper subset is one that contains a few elements of the original set whereas an improper subset, contains every element of the original set along with the null set. For example, if set A = {2, 4, 6}, then, Number of subsets: {2}, {4}, {6}, {2,4}, {4,6}, {2,6}, {2,4,6} and Φ or {}. Proper Subsets: {}, {2}, {4}, {6}, {2,4}, {4,6}, {2,6} Improper Subset: {2,4,6} There is no particular formula to find the subsets, instead, we have to list them all, to differentiate between proper and improper one. The set theory symbols were developed by mathematicians to describe the collections of objects. What are Proper Subsets? Set A is considered to be a proper subset of Set B if Set B contains at least one element that is not present in Set A. Example: If set A has elements as {12, 24} and set B has elements as {12, 24, 36}, then set A is the proper subset of B because 36 is not present in the set A. Proper Subset Symbol A proper subset is denoted by ⊂ and is read as ‘is a proper subset of’. Using this symbol, we can express a proper subset for set A and set B as; A ⊂ B Proper Subset Formula If we have to pick n number of elements from a set containing N number of elements, it can be done in ^NC[n ]number of ways. Therefore, the number of possible subsets containing n number of elements from a set containing N number of elements is equal to ^NC[n.] How many subsets and proper subsets does a set have? If a set has “n” elements, then the number of subset of the given set is 2^n and the number of proper subsets of the given subset is given by 2^n-1. Consider an example, If set A has the elements, A = {a, b}, then the proper subset of the given subset are { }, {a}, and {b}. Here, the number of elements in the set is 2. We know that the formula to calculate the number of proper subsets is 2^n – 1. = 2^2 – 1 = 4 – 1 = 3 Thus, the number of proper subset for the given set is 3 ({ }, {a}, {b}). What is Improper Subset? A subset which contains all the elements of the original set is called an improper subset. It is denoted by ⊆. For example: Set P ={2,4,6} Then, the subsets of P are; {}, {2}, {4}, {6}, {2,4}, {4,6}, {2,6} and {2,4,6}. Where, {}, {2}, {4}, {6}, {2,4}, {4,6}, {2,6} are the proper subsets and {2,4,6} is the improper subsets. Therefore, we can write {2,4,6} ⊆ P. Note: The empty set is an improper subset of itself (since it is equal to itself) but it is a proper subset of any other set. Power Set The power set is said to be the collection of all the subsets. It is represented by P(A). If A is set having elements {a, b}. Then the power set of A will be; P(A) = {∅, {a}, {b}, {a, b}} To learn more in brief, click on the article link of power set. Properties of Subsets Some of the important properties of subsets are: • Every set is considered as a subset of the given set itself. It means that X ⊂ X or Y ⊂ Y, etc • We can say, an empty set is considered as a subset of every set. • X is a subset of Y. It means that X is contained in Y • If a set X is a subset of set Y, we can say that Y is a superset of X Also, read: Subsets Example Problems Example 1: How many number of subsets containing three elements can be formed from the set? S = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 } Solution: Number of elements in the set = 10 Number of elements in the subset = 3 Therefore, the number of possible subsets containing 3 elements = ^10C[3] Therefore, the number of possible subsets containing 3 elements from the set S = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 } is 120. Example 2: Given any two real-life examples on the subset. Solution: We can find a variety of examples of subsets in everyday life such as: 1. If we consider all the books in a library as one set, then books pertaining to Maths is a subset. 2. If all the items in a grocery shop form a set, then cereals form a subset. Example 3: Find the number of subsets and the number of proper subsets for the given set A = {5, 6, 7, 8}. Given: A = {5, 6, 7, 8} The number of elements in the set is 4 We know that, The formula to calculate the number of subsets of a given set is 2^n = 2^4 = 16 Number of subsets is 16 The formula to calculate the number of proper subsets of a given set is 2^n – 1 = 2^4 – 1 = 16 – 1 = 15 The number of proper subsets is 15. Frequently Asked Questions on Subsets Define subset In set theory, a set X is defined as a subset of the other set Y, if all the elements of set X should be present in the set Y. This can be symbolically represented by X ⊂ Y What are the two classifications of subset? The classifications of subsets are: Proper subset Improper subset Define proper and improper subsets. An improper subset is defined as a subset which contains all the elements present in the other subset. But in proper subsets, if X is a subset of Y, if and only if every element of set X should be present in set Y, but there is one or more than elements of set Y is not present in set X. Give an example of proper and improper subsets. Proper subset: X = {2, 5, 6} and Y = {2, 3, 5, 6} Improper Subset: X = {A, B, C, D} and Y = {A, B, C, D} What is the formula to calculate the number of subsets and proper subset for any given set? If “n” is the number of elements of a given set, then the formulas to calculate the number of subsets and a proper subset is given by: Number of subsets = 2^n Number of proper subsets = 2^n– 1 Learn more about set theory symbols and other related topics. Register with the BYJU’S – The Learning App today.
{"url":"https://mathlake.com/Subsets","timestamp":"2024-11-06T02:50:34Z","content_type":"text/html","content_length":"20594","record_id":"<urn:uuid:1e12e65b-514c-4429-9baa-c7a20b79aa2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00852.warc.gz"}
Strange Persons-Cases Question: Professors Ahmad and Joshi are extremely strange persons. Prof. Ahmad lies on Mondays, Tuesdays and Wednesdays, but tells true on other days of the week. Prof. Joshi lies on Thursdays, Fridays and Saturdays, but tells true on other days of the week. Case1: They made the following statements: Prof. Ahmad : "Yesterday was one of my lying days." Prof. Joshi : "Yesterday was one of my lying days too." What day of the week was it? Case2: Both Professors looked very alike and one day they said to a visitor to their department : First Prof: "I'm Ahmed." Second Prof: "I'm Joshi." Who was who? What day of the week was it? Case3: On another occasion, both Professors made the following statements: First Prof : 1. "I lie on Saturdays." 2. "I lie on Sundays." Second Prof. : "I will lie tomorrow." What day of the week was it? Mon Tue Wed Thu Fri Sat Sun Prof. Ahmad Lies Lies Lies tells truth tells truth tells truth tells truth Prof. Joshi tells truth tells truth tells truth Lies Lies Lies tells truth Case 1 : Assume that Prof. Ahmad is telling truth => today is Thursday Assume that Prof. Ahmad is lying => today is Monday Similarly, Assume Prof. Joshi is telling truth => today is Sunday Assume that Prof. Joshi is lying => today is Thrusday. Hence, today is Thrusday, Prof. Ahmad is telling truth and Prof. Joshi is lying. Case 2 : Assume that First Prof. is telling truth => Thursday, Friday, Saturday or Sunday Assume that First Prof. is lying => Thursday, Friday or Saturday Similarly, Assume Second Prof. is telling truth => Monday, Tuesday, Wednesday or Sunday Assume that Second Prof. is lying => Monday, Tuesday, Wednesday The only possibility is Sunday and both are telling truth. Case 3 : A simple one. First Prof. says - "I lie on Sunday" which is false as both the Prof. tell truth on sunday. It means the first statement made by the First Prof. is also false. It means the First Prof. tells truth on Saturday. Hence First Prof. is Prof. Ahmad and he is lying. It means that today is either Monday, Tuesday or Wednesday. It is clear that Second Prof. is Prof. Joshi. Assume that he is telling truth => today is Wednesday Assume that he is lying => today is Saturday. Hence, today is Wednesday !!!
{"url":"http://www.tutioncentral.com/2012/02/strange-persons-cases.html","timestamp":"2024-11-14T14:00:06Z","content_type":"application/xhtml+xml","content_length":"120483","record_id":"<urn:uuid:2bc2ef19-6df2-42ec-a0c7-d8e1b3ae132c>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00027.warc.gz"}
Coding Interviews Searching and Sorting Problems and Solutions Coding Interviews: Searching and Sorting AllJavascriptEasyMediumHardVideoPythonCode SolutionJava There is an integer array nums sorted in ascending order (with distinct values). Prior to being passed to your function, nums is possibly rotated at an unknown pivot index k (1 <= k < nums.length) such that the resulting array is [nums[k], nums[k+1], ..., nums[n-1], nums[0], nums [1], ..., nums[k-1]] (0-indexed). For example, [0,1,2,4,5,6,7] might be rotated at pivot index 3 and become [4,5,6,7,0,1,2]. Given the array nums after the possible rotation and an integer target, return the index of target if it is in nums, or -1 if it is not in nums. You must write an algorithm with O(log n) runtime complexity. Given an array of distinct integers candidates and a target integer target, return a list of all unique combinations of candidates where the chosen numbers sum to target. You may return the combinations in any order. The same number may be chosen from candidates an unlimited number of times. Two combinations are unique if the frequency of at least one of the chosen numbers is different.
{"url":"https://www.practiceproblems.org/course/Coding_Interviews/Searching_and_Sorting/1/clthp7ydx0001gt4sg29r30xy","timestamp":"2024-11-11T11:51:12Z","content_type":"text/html","content_length":"52044","record_id":"<urn:uuid:18b57e30-f54d-43ba-a636-6a919f3e5aa8>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00017.warc.gz"}
Order of terms after Expand[] 18704 Views 13 Replies 2 Total Likes Order of terms after Expand[] I am trying to collect specific terms in the expansion of (z+1/z)^n for different exponents (in a Manipulate environment). I noticed, that applying Expand[(z + 1/z)^5] the output is in increasing order of z. Can I rely on this behaviour? I did not find it anywhere documented that Expand[] orders the output in any way. Is there a way of sorting the output? Unfortunately, both Expand[(z + 1/z)^4] Sort[Expand[(z + 1/z)^4]] puts the constant term in front and only after that the negative and then positive powers of z. 13 Replies I think also the safest way is to format it at the end for displaying. Here is a quick function hacked to do this. If you find any bugs, please let me know. This is just for final display. formatIt[Expand[(z + 1/z)^5], z] formatIt[Normal@Series[Sin[x], {x, 0, 10}], x] formatIt[Expand[(z + 1/z)^4], z] formatIt[1, z] formatIt[-1 + z^3, z] formatIt[Expand[(z + 1/z)^6, z], z] Which gives ps. I am not too good in parsing and patterns with Mathematica, and I am sure this can be done much better, but just a proof of concept. formatIt[r_, z_] := Module[{x, e, c, n, o}, (*nma, version 8/23/14, 2 PM*) fixMiddle[c_] := If[Sign[c] < 0, If[Abs[c] == 1, Row[{"-", Spacer[1]}], c], Row[{"+", Spacer[1], If[c != 1, c]}]]; fixFirst[c_] := If[Sign[c] < 0, If[Abs[c] == 1, Row[{"-", Spacer[1]}], c], If[c == 1, Sequence @@ {}, c]]; o = If[Head[r] === Plus, r, List@r]; (*to handle single terms*) o = Cases[lis, Alternatives[x_. Power[z, e_.], Times[x_, Power[z, e_.]], Dot[x_, e_: 0]] :> {x, e}]; o = Sort[o, #1[[2]] > #2[[2]] &]; o = {#[[1]], If[#[[2]] < 0, Superscript[z, #[[2]]], If[#[[2]] > 0, z^#[[2]], z^"0"]]} & /@ o; o = MapIndexed[{n = First[#2]; c = #1[[1]]; e = #1[[2]]; Row[{If[n == 1, fixFirst[c], fixMiddle[c]], Spacer[2], e, Spacer[1]}]} &, o]; Thanks for this. I tried to understand the code, but unfortunately I am new to Mathematica, I need more time. But I noticed, that when I run it on formatIt[Expand[(z + 1/z)^4], z] the constant term is missing from the output. Fixed. Forgot about the z^0 case. Again, this is just proof of concept. The parsing can be done better and in a much more robust way. Any other bugs, please let me know. Let's assume the binomial expansion will never change the order of terms. Then neither will this. (A constant propagates to the first for even power) Expand[(z + y)^5]; % /. y -> 1/z 1/z^5 + 5/z^3 + 10/z + 10 z + 5 z^3 + z^5 Not trusting that, you could write your own expand[]. As I explained in my previous comments, assuming a behaviour is not good enough for me. But even with this assumption, in Expand[(y + z)^4] Expand[(y + z)^4] /. y -> 1/z the order is changing with the replacement. The binomial expansion order is not kept. Mathematica recognizes that the result is of a special form, and does undocumented (or at least I did not find the relevant documentation) reordering for the output after the replacement. Again, this is not a problem for computational use of Mathematica, rather it is a strength that it recognizes so many forms of expressions, but it would be nice to know a way of forcing it not to do the reordering in this case after the replacement. Yes, of course I can write my own algorithms, which is what I ended up doing in this case and also in several other cases (e.g. I am drawing circular arcs using ParametricPlot). I now use CoefficientList[(z + 1/z)^4*z^4, z] because it is well documented, and I can get all the information I need from the output. You can (re)order in formatting. I do not recommend any other approach, as that would entail messing with attributes of Plus. Far safer to use formatting magic. The ordering used by Plus, which is what you are seeing, is unlikely to change. Whether it does what you want or expect, in all cases, will depend heavily on the specifics (of what you want or expect). In the special case of a Laurent polynomial in one variable, and with explicit numbers as coefficients (that is, atoms that are NumberQ), the ordering will be by degree in that variable. I am trying to write CDF notes with demonstrations. If something is "unlikely to change" or (as in the documentation of Sort[]) "usually ..." is not good enough. This is not a problem with usual use of Mathematica, because one can react if something is not as we would expect. But CDF is a different thing. I am producing something which I have to be sure behaves as I intend it to behave 2-3 years from now. I can play with formatting, but the question remains, will future versions format it the same way? In this case, this is a Laurent polynomial in one variable with numbers as coefficients, but the order is not according to the degrees in that variable. The constant term is first, but it should be in the middle. I think, it is because what is written in the Sort[] documentation, that least complex terms are first, but still there is this "usually" in the documentation, which I don't understand. In any case, since my post I could sort out what I want with the use of CoefficientList[], where the documentation is quite explicit about the order. "Orderless" corresponds to commutative nature of an (appropriate) operation like "Plus", therefore (as a Mathematica 7 user) I'm thinking it will stay orderless. Polynomials are always output in the increasing order of powers. If you wish to see the highest power to the left (as we might do when handwriting), one might try the following. ClearAttributes[Plus, Orderless](*Remove the "Orderless" attribute of Plus*); (*Apply the orderless*)Plus @@(List @@ (*Change head from Plus to List*)Expand[(z + 1/z)^5]) By removing the builtin "Orderless", we tell the Plus sign that it must keep the user provided order. I would not recommend changing build-in function attribute as that can cause some subtle problems somewhere else. If one wants just the order changed for display purposes, a known simple solution is to use TraditionalForm as mentioned in the linked post above Expand[(z + 1/z)^5] // TraditionalForm I don't want the order changed for display purposes. I want to be sure, that what I get as an output now, it will be the same in future versions of Mathematica. Apparently Mathematica uses some algorithms to order expressions, but these are hidden from the user. Thanks, but this link did not answer my question. The "OrderedForm" operation there did not put the terms of (z+1/z)^4 in increasing powers of z. However, my main question is: Can I rely on the behaviour I see now to stay like this in future versions of Mathematica? In the documentation I do not see any reference as to how Expand[] orders the output, and in this case it is not the normal order of the binomial expansion. I am writing demonstrations to use by others, so I cannot rely on experimenting with the output. I need a guarantee that it will always be the same. Mathematica is very powerful, but because of this, calculations happen behind the scene, and although the output is nice, I don't see how I can control it.
{"url":"https://community.wolfram.com/groups/-/m/t/325998","timestamp":"2024-11-02T11:51:26Z","content_type":"text/html","content_length":"160739","record_id":"<urn:uuid:39f1623d-bcd8-49d3-95ac-d43605a4e53c>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00427.warc.gz"}
Elegant Chaos One of the most intriguing books on chaotic systems is which is a true treasure trove for everybody fascinated by these mathematical objects. In the following, we will implement two of these systems on an analog computer. So-called jerk systems are special cases of autonomous dissipative systems.The system under consideration in the following is described by the following differential equation: Scaling chaotic systems for implementation on an analog computer is typically a bit nasty which holds true for this particular case, too. A few numerical calculations using a simple Euler integration routine show that the term \(x^2\dot{x}\) has a domain exceeding \([-10^3:10^3]\) etc. Scaling is done by scaling down all inputs of one integrator by the same scale factor \(\lambda_i\) until no overload occurs anymore. Its output signal is then scaled back by \(\frac{1}{\lambda_i}\) before being fed to the next integrator, which is then scaled as well etc. Table 1 shows the values for the six coefficient potentiometers required in the computer setup shown in figure 1. The left column shows the parameters obtained by scaling “by the book” while the right column shows a parameter set that was obtained by tweaking some of the parameters manually. It is remarkable that the behaviour of the analog simulation shown in figure 2 is much more tame over a rather wide range of parameters than the numerical results shown in [Sprott 2010, p. 77]. Nevertheless, it is possible to find regions that exhibit chaotic behavior by some manual parameter variation. Potentiometer Calculated Manually obtained P1 0.1 0.1 P2 0.75 0.75 P3 0.12 0.17 P4 0.2 0.2 P5 0.333 0.1 P6 0.333 1 Figure 1: Setup for the jerk system Figure 2: Typical behavior of the jerk system Another beutiful chaotic system is the Nosé-Hoover oscillator, an example of an autonomous conservative system which is described by the following set of coupled differential equations: \[\begin{split}\begin{aligned} \dot{x}&=y\\ \dot{y}&=yz-x\\ \dot{z}&=1-y^2 \end{aligned}\end{split}\] Scaling this system is quite mean, too, and yields the computer setup shown in figure 3, the results of which are shown in the mesmerizing oscilloscope screen capture shown in figure 4. Figure 3: Setup for the Nosé-Hoover oscillator Figure 4: Phase state plot of the Nosé-Hoover oscillator [SPROTT 2011] Julien Clinton Sprott, Elegant Chaos – Algebraically Simple Chaotic Flows, World Scientific Publishing Co. Pte. Ltd, 2010
{"url":"https://the-analog-thing.org/docs/dirhtml/rst/applications/elegant_chaos/alpaca_15/","timestamp":"2024-11-02T08:35:55Z","content_type":"text/html","content_length":"25974","record_id":"<urn:uuid:16cbcca3-1c7d-4c6a-8ae8-3f25bb2fb249>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00143.warc.gz"}
Multiply two polynomials using the FOIL method, Box method and the distributive property Slide 1 Objective The student will be able to: multiply two polynomials using the FOIL method, Box method and the distributive property. SOL: A.2b Designed by Skip Tyler, Varina High School Slide 2 There are three techniques you can use for multiplying polynomials. The best part about it is that they are all the same! Huh? Whaddaya mean? It’s all about how you write it…Here they are! Distributive Property Box Method Sit back, relax (but make sure to write this down), and I’ll show ya! Slide 3 1) Multiply. (2x + 3)(5x + 8) Using the distributive property, multiply 2x(5x + 8) + 3(5x + 8). 10x2 + 16x + 15x + 24 Combine like terms. 10x2 + 31x + 24 A shortcut of the distributive property is called the FOIL method. Slide 4 The FOIL method is ONLY used when you multiply 2 binomials. It is an acronym and tells you which terms to multiply. 2) Use the FOIL method to multiply the following binomials: (y + 3)(y + 7). Slide 5 (y + 3)(y + 7). F tells you to multiply the FIRST terms of each binomial. Slide 6 (y + 3)(y + 7). O tells you to multiply the OUTER terms of each binomial. y2 + 7y Slide 7 (y + 3)(y + 7). I tells you to multiply the INNER terms of each binomial. y2 + 7y + 3y Slide 8 (y + 3)(y + 7). L tells you to multiply the LAST terms of each binomial. y2 + 7y + 3y + 21 Combine like terms. y2 + 10y + 21 Slide 9 Remember, FOIL reminds you to multiply the: First terms Outer terms Inner terms Last terms Slide 10 The third method is the Box Method. This method works for every problem! Here’s how you do it. Multiply (3x – 5)(5x + 2) Draw a box. Write a polynomial on the top and side of a box. It does not matter which goes where. This will be modeled in the next problem along with FOIL.
{"url":"https://www.sliderbase.com/spitem-1393-1.html","timestamp":"2024-11-12T23:57:30Z","content_type":"text/html","content_length":"17807","record_id":"<urn:uuid:754963a4-dbc6-47df-a919-0fa659ab0ee3>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00767.warc.gz"}
How To Calculate No Of Street Light Poles - Daily Engineering Space light fixtures to provide uniform distribution and illumination of roadways and sidewalks. Consider the locations of obstruction such as trees or billboards. Height. Standard poles for sidewalks and bike facilities are 4.5–6 m. Light poles for roadbeds vary according to the street typology and land use. In most contexts, standard heights for narrow streets in residential, commercial, and historical contexts are between 8–10 m. Taller poles between 10 m and 12 m are appropriate for wider streets in commercial or industrial areas. Spacing. The spacing between two light poles should be roughly 2.5–3 times the height of the pole. Shorter light poles should be installed at closer intervals. The density, speed of travel, and the type of light source along a corridor will also determine the ideal height and spacing. Light Cone. The light cone has roughly the same diameter as the height of the fixture from the ground. The height will, therefore, determine the maximum suggested distance between two light poles to avoid dark areas. 1- Calculate Distance between each Street Light Pole Calculate Distance between each streetlight pole having the following Details • Road Details: The width of the road is 11.5 Feet. • Pole Details: The height of the Pole is 26.5 Feet. • The luminaire of each Pole: Wattage of Luminaries is 250 Watt, Lamp OutPut (LL) is 33200 Lumen, Required Lux Level (Eh) is 5 Lux, Coefficient of Utilization Factor (Cu) is 0.18, Lamp Lumen Depreciation Factor (LLD) is 0.8, Lamp Lumen Depreciation Factor (LLD) is 0.9. • Space Height Ratio should be less than 3. • Spacing between each Pole=(LL*CU*LLD*LDD) / Eh*W • Spacing between each Pole=(33200×0.18×0.8×0.9) / (5×11.5) • Spacing between each Pole= 75 Foot. • Space Height Ratio = Distance between Pole / Road width • Space Height Ratio = 3. Which is less than the defined value. Spacing between each Pole is 75 Feet. 2- Calculate Street Light Luminaire Watt Calculate Streetlight Watt of each Luminaire of Street Light Pole having following Details, • Road Details: The width of the road is 7 Meters. The distance between each Pole (D) is 50 Meters. • Required Illumination Level for Street Light (L) is 6.46 Lux per Square Meter. Luminous efficacy is 24 Lumen/Watt. • Maintenance Factor (mf) 0.29, Coefficient of Utilization Factor (Cu) is 0.9. • Average Lumen of Lamp (Al) = 8663 Lumen. • Average Lumen of Lamp (Al) =(LxWxD) / (mfxcu) • Average Lumen of Lamp (Al)= (6.46x7x50) / (0.29×0.9) • Average Lumen of Lamp (Al)=8663 Lumen. • Watt of Each Street Light Luminar = Average Lumen of Lamp / Luminous efficacy • Watt of Each Street Light Laminar = 8663 / 24 • Watt of Each Street Light Luminaire = 361 Watt
{"url":"https://dailyengineering.com/how-to-calculate-no-of-street-light-poles/","timestamp":"2024-11-07T21:51:02Z","content_type":"text/html","content_length":"196864","record_id":"<urn:uuid:f5a8ae98-67d6-4af4-9baf-f4da6d8525b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00361.warc.gz"}
Aharoni-Berger conjecture Aharoni-Berger conjecture Let us begin by stating two classic results. For a graph (or hypergraph) we let cover and we let Theorem (König) Theorem (Matroid Intersection) If The matroid intersection theorem is exactly the A famous conjecture of Ryser suggests a generalization of König's theorem to hypergraphs. It claims that every [A] R. Aharoni, Ryser's conjecture for tripartite 3-graphs, Combinatorica 21 (2001), 1-4. MathSciNet *[AB] R. Aharoni, E. Berger, The intersection of a matroid with a simplicial complex. Trans. Amer. Math. Soc. 358 (2006), no. 11 MathSciNet * indicates original appearance(s) of problem.
{"url":"http://openproblemgarden.org/op/aharoni_berger_conjecture","timestamp":"2024-11-02T09:04:23Z","content_type":"application/xhtml+xml","content_length":"18118","record_id":"<urn:uuid:ee0a769f-2fea-4e27-81e8-ae96217f5b54>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00621.warc.gz"}
Hermann Weyl First published Wed Sep 2, 2009; substantive revision Mon Jun 1, 2015 Hermann Weyl was a great and versatile mathematician of the 20^th century. His work had a vast range, encompassing analysis, algebra, number theory, topology, differential geometry, spacetime theory, quantum mechanics, and the foundations of mathematics. His scientific writing is informed by a rare literary and artistic sensibility—in his words, “Expression and shape mean almost more to me than knowledge itself”. He was unusual among scientists and mathematicians of his time in being attracted to idealist philosophy: his idealist leanings can be seen particularly in his work on the foundations of mathematics. In his youth, Kant’s doctrines made a great impression on him; later he was stirred both by Fichte’s metaphysical idealism and by Husserlian phenomenology. Although Weyl came to question the certainties claimed by idealism, he cleaved always to the primacy of intuition he had first learned from Kant, and to its expression by Fichte as the “inner light” of individual Hermann Weyl was born on 9 November 1885 in the small town of Elmshorn near Hamburg. In 1904 he entered Göttingen University, where his teachers included Hilbert, Klein and Minkowski. Weyl was particularly impressed with Hilbert’s lectures on number theory and resolved to study everything he had written. Hilbert’s work on integral equations became the focus of Weyl’s (1908) doctoral dissertation, written under Hilbert’s direction. In this and in subsequent papers Weyl made important contributions to the theory of self-adjoint operators. Virtually all of Weyl’s many publications during his stay in Göttingen until 1913 dealt with integral equations and their applications. After Weyl’s (1910b) habilitation, he became a Privatdozent and was thereby entitled to give lectures at the University of Göttingen. Weyl chose to lecture on Riemann’s theory of algebraic functions during the winter semester of 1911–12. These lectures became the basis of Weyl’s (1913) first book Die Idee der Riemannschen Fläche (The Concept of a Riemann Surface). This work, in which function theory, geometry and topology are unified, constitutes the first modern and comprehensive treatment of Riemann surfaces. The work also contains the first construction of an abstract manifold. Emphasizing that the `points of the manifold’ can be quite arbitrary, Weyl based his definition of a general two-dimensional manifold or surface on an extension of the neighbourhood axioms that Hilbert (1902) had proposed for the definition of a plane. The work is indicative of Weyl’s exceptional gift for harmoniously uniting into a coherent whole a patchwork of distinct mathematical In 1913 Weyl was offered, and accepted, a professorship at the Eidgenössische Technische Hochschule—ETH (Swiss Federal Institute of Technology)—in Zürich. Weyl’s years in Zürich were extraordinarily productive and resulted in some of his finest work, especially in the foundations of mathematics and physics. When he arrived in Zürich in the fall of 1913, Einstein and Grossmann were struggling to overcome a difficulty in their effort to provide a coherent mathematical formulation of the general theory of relativity. Like Hilbert, Weyl appreciated the importance of a close relationship between mathematics and physics. It was therefore only natural that Weyl should become interested in Einstein’s theory and the potential mathematical challenges it might offer. Following the outbreak of the First World War, however, in May 1915 Weyl was called up for military service. But Weyl’s academic career was interrupted only briefly, since in 1916 he was exempted from military duties for reasons of health. In the meantime Einstein had accepted an offer from Berlin and had left Zürich in 1914. Einstein’s departure had weakened the theoretical physics program at the ETH and (as reported by Frei and Stammbach (1992, 26) the administration hoped that Weyl’s presence would alleviate the situation. But Weyl needed no external prompting to work in, and to teach, theoretical physics: his interest in the subject in general and, above all, in the theory of relativity, gave him more than sufficient motivation in that regard. Weyl decided to lecture on the general theory of relativity in the summer semester of 1917, and these lectures became the basis of his famous book Raum-Zeit-Materie (Space-Time-Matter) of 1918. During 1917–24, Weyl directed his energies equally to the development of the mathematical and philosophical foundations of relativity theory, and to the broader foundations of mathematics. It is in these two areas that his philosophical erudition, nourished from his youth, manifests itself most clearly. The year 1918, the same year in which Space-Time-Matter appeared, also saw the publication of Das Kontinuum (The Continuum), a work in which Weyl constructs a new foundation for mathematical analysis free of what he had come to see as fatal flaws in the set-theoretic formulation of Cantor and Dedekind. Soon afterwards Weyl embraced Brouwer’s mathematical intuitionism; in the early 1920s he published a number of papers elaborating on and defending the intuitionistic standpoint in the foundations of mathematics. It was also during the first years of the 1920s that Weyl came to appreciate the power and utility of group theory, initially in connection with his work on the solution to the Riemann-Helmholtz-Lie problem of space. Weyl analyzed this problem, the Raumproblem, in a series of articles and lectures during the period 1921–23. Weyl (1949b, 400) noted that his interest in the philosophical foundations of the general theory of relativity had motivated his analysis of the representations and invariants of the continuous groups: I can say that the wish to understand what really is the mathematical substance behind the formal apparatus of relativity theory led me to the study of the representations and invariants of groups; and my experience in this regard is probably not unique. This newly acquired appreciation of group theory led Weyl to what he himself considered his greatest work in mathematics, a general theory of the representations and invariants of the classical Lie groups (Weyl 1924a, 1924f, 1925, 1926a, 1926b, 1926c). Later Weyl (1939) wrote a book, The Classical Groups: Their Invariants and Representations, in which he returned to the theory of invariants and representations of semisimple Lie groups. In this work he realized his ambition “to derive the decisive results for the most important of these groups by direct algebraic construction, in particular for the full group of all non-singular linear transformations and for the orthogonal group.” Weyl applied his work in group theory and his earlier work in analysis and spectral theory to the new theory of quantum mechanics. Weyl’s mathematical analysis of the foundations of quantum mechanics showed that regularities in a physical theory are most fruitfully understood in terms of symmetry groups. Weyl’s (1928) book Gruppentheorie und Quantenmechanik (Group Theory and Quantum Mechanics) deals not only with the theory of quantum mechanics but also with relativistic quantum electrodynamics. In this work Weyl also presented a very early analysis of discrete symmetries which later stimulated Dirac to predict the existence of the positron and the antiproton. During his years in Zürich Weyl received, and turned down, numerous offers of professorships by other universities—including an invitation in 1923 to become Felix Klein’s successor at Göttingen. It was only in 1930 that he finally accepted the call to become Hilbert’s successor there. His second stay in Göttingen was to be brief. Repelled by Nazism, “deeply revolted,” as he later wrote, “by the shame which this regime had brought to the German name,” he left Germany in 1933 to accept an offer of permanent membership of the newly founded Institute for Advanced Study in Princeton. Before his departure for Princeton he published The Open World (1932); his tenure there saw the publication of Mind and Nature (1934), the aforementioned The Classical Groups (1939), Algebraic Theory of Numbers (1940), Meromorphic Functions and Analytic Curves (1943), Philosophy of Mathematics and Natural Science (1949; an enlarged English version of a 1927 work Philosophie der Mathematik und Naturwissenschaften), and Symmetry (1952). In 1951 he formally retired from the Institute, remaining as an emeritus member until his death, spending half his time there and half in Zürich. He died in Zürich suddenly, of a heart attack, on 9 December 1955. Weyl was first and foremost a mathematician, and certainly not a “professional” philosopher. But as a German intellectual of his time it was natural for him to regard philosophy as a pursuit to be taken seriously. In Weyl’s case, unusually even for a German mathematician, it was idealist philosophy that from the beginning played a significant role in his thought. Kant, Husserl, Fichte, and, later, Leibniz, were at various stages major influences on Weyl’s philosophical thinking. As a schoolboy Weyl had been impressed by Kant’s “Critique of Pure Reason.” He was especially taken with Kant’s doctrine that space and time are not inherent in the objects of the world, existing as such and independently of our awareness, but are, rather, forms of our intuition As he reports in Insight and Reflection, (Weyl 1955), his youthful enthusiasm for Kant crumbled soon after he entered Göttingen University in 1904. There he read Hilbert’s Foundations of Geometry, a tour-de-force of the axiomatic method, in comparison to which Kant’s “bondage to Euclidean geometry” now appeared to him naïve. After this philosophical reverse he lapsed into an indifferent positivism for a while. But in 1912 he found a new and exciting source of philosophical enlightenment in Husserl’s phenomenology.^[1] It was also at about this time that Fichte’s metaphysical idealism came to “capture his imagination.” Although Weyl later questioned idealist philosophy, and became dissatisfied with phenomenology, he remained faithful throughout his life to the primacy of intuition that he had first learned from Kant, and to the irreducibility of individual consciousness that had been confirmed in his view by Fichte and Husserl. Weyl never provided a systematic account of his philosophical views, and sorting out his overall philosophical position is no easy matter. Despite the importance of intuition and individual consciousness in Weyl’s philosophical outlook, it would nevertheless be inexact to describe his outlook as being that of a “pure” idealist, since certain “realist” touches seem also to be present, in his approach to physics, at least. His metaphysics appears to rest on three elements, the first two of which may be considered “idealist”, and the third “realist”: these are, respectively, the Ego or “I”, the (Conscious) Other or “Thou”, and the external or “objective” world. It is the first of these constituents, the Ego, to which Weyl ascribes primacy. Indeed, in Weyl’s words, The world exists only as met with by an ego, as one appearing to a consciousness; the consciousness in this function does not belong to the world, but stands out against the being as the sphere of vision, of meaning, of image, or however else one may call it. (Weyl 1934, 1) The Ego alone has direct access to the given, that is, to the raw materials of the existent which are presented to consciousness with an immediacy at once inescapable and irreducible. The Ego is singular in that, from its own standpoint, it is unique. But in an act of self-reflection, through grasping (in Weyl’s words) “what I am for myself”, the Ego comes to recognize that it has a function , namely as “conscious-existing carrier of the world of phenomena.” It is then but a short step for the Ego to transcend its singularity through the act of defining an “ego” to be an entity performing that same function for itself. That is, an ego is precisely what I am for myself (in other words, what the Ego is for itself)—again a “conscious-existing carrier of the world of phenomena”—and yet other than myself. “Thou” is the term the Ego uses to address, and so to identify, an ego in this sense. “Thou” is thus the Ego generalized, the Ego refracted through itself. The Ego grasps that it exists within a world of Thous, that is, within a world of other Egos similar to itself. While the Ego has, of necessity, no direct access to any Thou, it can, through analogy and empathy, grasp what it is to be Thou, a conscious being like oneself. By that very fact the Ego recognizes in the Thou the same luminosity it sees in itself. The relationship of the Ego with the external world, the realm of “objective” reality, is of an entirely different nature. There is no analogy that the Ego can draw—as it can with the Thou—between itself and the external world, since that world (presumably) lacks consciousness. The external world is radically other, and opaque to the Ego^[2]. Like Kant’s noumenal realm, the external world is outside the immediacy of consciousness; it is, in a word, transcendent. Since this transcendent world is not directly accessible to the Ego, as far as the latter is concerned the existence of that world must arise through postulation, “a matter of metaphysics, not a judgment but an act of acknowledgment or belief.”^[3] Indeed, according to Weyl, it is not strictly necessary for the Ego to postulate the existence of such a world, even given the existence of a world of Thous: For as long as I do not proceed beyond what is given, or, more exactly, what is given at the moment, there is no need for the substructure of an objective world. Even if I include memory and in principle acknowledge it as valid testimony, if I furthermore accept as data the contents of the consciousness of others on equal terms with my own, thus opening myself to the mystery of intersubjective communication, I would still not have to proceed as we actually do, but might ask instead for the ‘transformations’ which mediate between the images of the several consciousnesses. Such a presentation would fit in with Leibniz’s monadology. (Weyl 1949, 117.) But once the existence of the transcendent world is postulated, its opacity to the Ego can be partly overcome by constructing a representation of it through the use of symbols, the procedure called by Weyl symbolic construction, (or constructive cognition)^[4] and which he regarded as the cornerstone of scientific explanation. He outlines the process as follows (Weyl 1934, 53): 1. Upon that which is given, certain reactions are performed by which the given is in general brought together with other elements capable of being varied arbitrarily. If the results to be read from these reactions are found to be independent of the variable auxiliary elements they are then introduced as attributes inherent in the things themselves (even if we do not actually perform those reactions on which their meaning rests, but only believe in the possibility of their being performed). 2. By the introduction of symbols, the judgements are split up and a part of the manipulations is made independent of the given and its duration by being shifted onto the representing symbols which are time resisting and simultaneously serve the purpose of preservation and communication. Thereby the unrestricted handling of notions arises in counterpoint to their application, ideas in a relatively independent manner confront reality. 3. Symbols are not produced simply “according to demand” wherever they correspond to actual occurrences, but they are embedded into an ordered manifold of possibilities created by free construction and open towards infinity. Only in this way may we contrive to predict the future, for the future is not given actually. Weyl’s procedure thus amounts to the following. In step 1, a given configuration is subjected to variation. One then identifies those features of the configuration that remain unchanged under the variation—the invariant features; these are in turn, through a process of reification, deemed to be properties of an unchanging substrate—the “things themselves”. It is precisely the invariance of such features that renders them (as well as the “things themselves”) capable of being represented by the “time resisting” symbols Weyl introduces in step 2. As (written) symbols these are communicable without temporal distortion and can be subjected to unrestricted manipulation without degradation. It is the flexibility conferred thereby which enables the use of symbols to be conformable with reality. Nevertheless (step 3) symbols are not haphazardly created in response to immediate stimuli; they are introduced, rather, in a structured, yet freely chosen manner which reflects the idea of an underlying order—the “one real world”—about which not everything is, or can be, known—it is, like the future, “open towards infinity”. Weyl observes that the reification implicit in the procedure of symbolic construction leads inevitably to its iteration, for “the transition from step to step is made necessary by the fact that the objects at one step reveal themselves as manifestations of a higher reality, the reality of the next step” (Weyl (1934), 32–33). But in the end “systematic scientific explanation will finally reverse the order: first it will erect its symbolical world by itself, without any reference, then, skipping all intermediate steps, try to describe which symbolical configurations lead to which data of consciousness” (ibid.). In this way the symbolic world becomes (mistakenly) identified with the transcendent world. It is symbolic construction which, in Weyl’s vision, allows us access to the “objective” world presumed to underpin our immediate perceptions; indeed, Weyl holds that the objective world, being beyond the grasp (the “lighted circle”) of intuition, can only be presented to us in symbolic form^[5]. We can see a double dependence on the Ego in Weyl’s idea of symbolic construction to get hold of an objective world beyond the mental. For not only is that world “constructed” by the Ego, but the materials of construction, the symbols themselves, as signs intended to convey meaning, have no independent existence beyond their graspability by a consciousness. By their very nature these symbols cannot point directly to an external world (even given an unshakable belief in the existence of that world) lying beyond consciousness. Weyl’s metaphysical triad thus reduces to what might be called a polarized dualism, with the mental (I, Thou) as the primary, independent pole and objective reality as a secondary, dependent pole^[6]. In Weyl’s view mathematics simply lies – as it did for Brouwer – within the Ego’s “lighted circle of intuition” and so is, in principle at least, completely presentable to that intuition. But the nature of physics is more complicated. To the extent that physics is linked to the transcendent world of objective reality, it cannot arise as the direct object of intuition, but must, like the transcendent world itself, be presented in symbolic form; more exactly, as the result of a process of symbolic construction. It is this which, in Weyl’s vision, allows us access to to the "objective" world presumed to underpin our immediate perceptions. Weyl’s conviction that the objective world can only be presented to us through symbolic construction may serve to explain his apparently untroubled attitude towards the highly counterintuitive nature of quantum theory. Indeed, the claims of numerous physicists that the quantum microworld is accessible to us only through abstract mathematical description provides a vindication of Weyl’s thesis that objective reality cannot be grasped directly, but only through the mediation of symbols. In his later years Weyl attempted to enlarge his metaphysical triad (I, Thou, objective world) to a tetrad, by a process of completion, as it were, to embrace the “godhead that lives in impenetrable silence”, the objective counterpart of the Ego, which had been suggested to him by his study of Eckhart. But this effort was to remain uncompleted. During his long philosophical voyage Weyl stopped at a number of ports of call: in his youth, Kantianism and positivism; then Husserlian phenomenological idealism; later Brouwerian intuitionism and finally a kind of theological existentialism. But apart from his brief flirtation with positivism (itself, as he says, the result of a disenchantment with Kant’s “bondage to Euclidean geometry”), Weyl’s philosophical orientation remained in its essence idealist (even granting the significant realist elements mentioned above). Nevertheless, while he continued to acknowledge the importance of phenomenology, his remarks in Insight and Reflection indicate that he came to regard Husserl’s doctrine as lacking in two essential respects: first, it failed to give due recognition to the (construction of) transcendent external world, with which Weyl, in his capacity as a natural scientist, was concerned; secondly, and perhaps in Weyl’s view even more seriously, it failed to engage with the enigma of selfhood: the fact that I am the person I am. Grappling with the first problem led Weyl to identify symbolic construction as providing sole access to objective reality, a position which brought him close to Cassirer in certain respects; while the second problem seems to have led him to existentialism and even, through his reading of Eckhart, to a kind of religious mysticism. Towards the end of his Address on the Unity of Knowledge, delivered at the 1954 Columbia University bicentennial celebrations, Weyl enumerates what he considers to be the essential constituents of knowledge. At the top of his list^[7] comes …intuition, mind’s ordinary act of seeing what is given to it. (Weyl 1954, 629) In particular Weyl held to the view that intuition, or insight—rather than proof—furnishes the ultimate foundation of mathematical knowledge. Thus in his Das Kontinuum of 1918 he says: In the Preface to Dedekind (1888) we read that “In science, whatever is provable must not be believed without proof.” This remark is certainly characteristic of the way most mathematicians think. Nevertheless, it is a preposterous principle. As if such an indirect concatenation of grounds, call it a proof though we may, can awaken any “belief” apart from assuring ourselves through immediate insight that each individual step is correct. In all cases, this process of confirmation—and not the proof—remains the ultimate source from which knowledge derives its authority; it is the “experience of truth”. (Weyl 1987, 119) Weyl’s idealism naturally inclined him to the view that the ultimate basis of his own subject, mathematics, must be found in the intuitively given as opposed to the transcendent. Nevertheless, he recognized that it would be unreasonable to require all mathematical knowledge to possess intuitive immediacy. In Das Kontinuum, for example, he says: The states of affairs with which mathematics deals are, apart from the very simplest ones, so complicated that it is practically impossible to bring them into full givenness in consciousness and in this way to grasp them completely. (Ibid., 17) Nevertheless, Weyl felt that this fact, inescapable as it might be, could not justify extending the bounds of mathematics to embrace notions, such as the actual infinite, which cannot be given fully in intuition even in principle. He held, rather, that such extensions of mathematics into the transcendent are warranted only by the fact that mathematics plays an indispensable role in the physical sciences, in which intuitive evidence is necessarily transcended. As he says in The Open World^[8]: … if mathematics is taken by itself, one should restrict oneself with Brouwer to the intuitively cognizable truths … nothing compels us to go farther. But in the natural sciences we are in contact with a sphere which is impervious to intuitive evidence; here cognition necessarily becomes symbolical construction. Hence we need no longer demand that when mathematics is taken into the process of theoretical construction in physics it should be possible to set apart the mathematical element as a special domain in which all judgments are intuitively certain; from this higher standpoint which makes the whole of science appear as one unit, I consider Hilbert to be right. (Weyl 1932, 82). In Consistency in Mathematics (1929), Weyl characterized the mathematical method as the a priori construction of the possible in opposition to the a posteriori description of what is actually given.^[9] The problem of identifying the limits on constructing “the possible” in this sense occupied Weyl a great deal. He was particularly concerned with the concept of the mathematical infinite, which he believed to elude “construction” in the naive set-theoretical sense ^[10]. Again to quote a passage from Das Kontinuum: No one can describe an infinite set other than by indicating properties characteristic of the elements of the set…. The notion that a set is a “gathering” brought together by infinitely many individual arbitrary acts of selection, assembled and then surveyed as a whole by consciousness, is nonsensical; “inexhaustibility” is essential to the infinite. (Weyl 1987, 23) But still, as Weyl attests towards the end of The Open World, “the demand for totality and the metaphysical belief in reality inevitably compel the mind to represent the infinite as closed being by symbolical construction”. The conception of the completed infinite, even if nonsensical, is inescapable. Another mathematical “possible” to which Weyl gave a great deal of thought is the continuum. During the period 1918–1921 he wrestled with the problem of providing the mathematical continuum—the real number line—with a logically sound formulation. Weyl had become increasingly critical of the principles underlying the set-theoretic construction of the mathematical continuum. He had come to believe that the whole set-theoretical approach involved vicious circles^[11] to such an extent that, as he says, “every cell (so to speak) of this mighty organism is permeated by contradiction.” In Das Kontinuum he tries to overcome this by providing analysis with a predicative formulation—not, as Russell and Whitehead had attempted, by introducing a hierarchy of logically ramified types, which Weyl seems to have regarded as excessively complicated—but rather by confining the comprehension principle to formulas whose bound variables range over just the initial given entities (numbers). Accordingly he restricts analysis to what can be done in terms of natural numbers with the aid of three basic logical operations, together with the operation of substitution and the process of “iteration”, i.e., primitive recursion. Weyl recognized that the effect of this restriction would be to render unprovable many of the central results of classical analysis—e.g., Dirichlet’s principle that any bounded set of real numbers has a least upper bound^[12]—but he was prepared to accept this as part of the price that must be paid for the security of mathematics. As Weyl saw it, there is an unbridgeable gap between intuitively given continua (e.g. those of space, time and motion) on the one hand, and the “discrete” exact concepts of mathematics (e.g. that of natural number^[13]) on the other. The presence of this chasm meant that the construction of the mathematical continuum could not simply be “read off” from intuition. It followed, in Weyl’s view, that the mathematical continuum must be treated as if it were an element of the transcendent realm, and so, in the end, justified in the same way as a physical theory. It was not enough that the mathematical theory be consistent; it must also be reasonable. Das Kontinuum embodies Weyl’s attempt at formulating a theory of the continuum which satisfies the first, and, as far as possible, the second, of these requirements. In the following passages from this work he acknowledges the difficulty of the task: … the conceptual world of mathematics is so foreign to what the intuitive continuum presents to us that the demand for coincidence between the two must be dismissed as absurd. (Weyl 1987, 108) … the continuity given to us immediately by intuition (in the flow of time and of motion) has yet to be grasped mathematically as a totality of discrete “stages” in accordance with that part of its content which can be conceptualized in an exact way. (Ibid., 24)^[14] Exact time- or space-points are not the ultimate, underlying atomic elements of the duration or extension given to us in experience. On the contrary, only reason, which thoroughly penetrates what is experientially given, is able to grasp these exact ideas. And only in the arithmetico- analytic concept of the real number belonging to the purely formal sphere do these ideas crystallize into full definiteness. (Ibid., 94) When our experience has turned into a real process in a real world and our phenomenal time has spread itself out over this world and assumed a cosmic dimension, we are not satisfied with replacing the continuum by the exact concept of the real number, in spite of the essential and undeniable inexactness arising from what is given. (Ibid., 93) As these quotations show, Weyl had come to accept that it was in principle impossible to furnish the continuum as presented to intuition with an exact mathematical formulation : so, with reluctance, he lowered his sights. In Das Kontinuum his goal was, first and foremost, to establish the consistency of the mathematical theory of the continuum by putting the arithmetical notion of real number on a firm logical basis. Once this had been achieved, he would then proceed to show that this theory is reasonable by employing it as the foundation for a plausible account of continuous process in the objective physical world.^[15] In §6 of Das Kontinuum Weyl presents his conclusions as to the relationship between the intuitive and mathematical continua. He poses the question: Does the mathematical framework he has erected provide an adequate representation of physical or temporal continuity as it is actually experienced? In posing this question we can see the continuing influence of Husserl and phenomenological doctrine. Weyl begins his investigation by noting that, according to his theory, if one asks whether a given function is continuous, the answer is not fixed once and for all, but is, rather, dependent on the extent of the domain of real numbers which have been defined up to the point at which the question is posed. Thus the continuity of a function must always remain provisional; the possibility always exists that a function deemed continuous now may, with the emergence of “new” real numbers, turn out to be discontinuous in the future. ^[16] To reveal the discrepancy between this formal account of continuity based on real numbers and the properties of an intuitively given continuum, Weyl next considers the experience of seeing a pencil lying on a table before him throughout a certain time interval. The position of the pencil during this interval may be taken as a function of the time, and Weyl takes it as a fact of observation that during the time interval in question this function is continuous and that its values fall within a definite range. And so, he says, This observation entitles me to assert that during a certain period this pencil was on the table; and even if my right to do so is not absolute, it is nevertheless reasonable and well-grounded. It is obviously absurd to suppose that this right can be undermined by “an expansion of our principles of definition”—as if new moments of time, overlooked by my intuition could be added to this interval, moments in which the pencil was, perhaps, in the vicinity of Sirius or who knows where. If the temporal continuum can be represented by a variable which “ranges over” the real numbers, then it appears to be determined thereby how narrowly or widely we must understand the concept “real number” and the decision about this must not be entrusted to logical deliberations over principles of definition and the like. (Weyl 1987, 88) To drive the point home, Weyl focuses attention on the fundamental continuum of immediately given phenomenal time, that is, as he characterizes it, … to that constant form of my experiences of consciousness by virtue of which they appear to me to flow by successively. (By “experiences” I mean what I experience, exactly as I experience it. I do not mean real psychical or even physical processes which occur in a definite psychic-somatic individual, belong to a real world, and, perhaps, correspond to the direct experiences.) (Ibid., In order to correlate mathematical concepts with phenomenal time in this sense Weyl grants the possibility of introducing a rigidly punctate “now” and of identifying and exhibiting the resulting temporal points. On the collection of these temporal points is defined the relation of earlier than as well as a congruence relation of equality of temporal intervals, the basic constituents of a simple mathematical theory of time. Now Weyl observes that the discrepancy between phenomenal time and the concept of real number would vanish if the following pair of conditions could be shown to be 1. The immediate expression of the intuitive finding that during a certain period I saw the pencil lying there were construed in such a way that the phrase “during a certain period” was replaced by “in every temporal point which falls within a certain time span OE”. [Weyl goes on to say parenthetically here that he admits “that this no longer reproduces what is intuitively present, but one will have to let it pass, if it is really legitimate to dissolve a period into temporal points.”) 2. If \(P\) is a temporal point, then the domain of rational numbers to which \(l\) belongs if and only if there is a time point \(L\) earlier than \(P\) such that \[ OL = l{.}OE \] can be constructed arithmetically in pure number theory on the basis of our principles of definition, and is therefore a real number in our sense. (Ibid., 89) Condition 2 means that, if we take the time span \(OE\) as a unit, then each temporal point \(P\) is correlated with a definite real number. In an addendum Weyl also stipulates the converse. But can temporal intuition itself provide evidence for the truth or falsity of these two conditions? Weyl thinks not. In fact, he states quite categorically that … everything we are demanding here is obvious nonsense: to these questions, the intuition of time provides no answer—just as a man makes no reply to questions which clearly are addressed to him by mistake and, therefore, are unintelligible when addressed to him. (Ibid., 90) The grounds for this assertion are by no means immediately evident, but one gathers from the passages following it that Weyl regards the experienced continuous flow of phenomenal time as constituting an insuperable barrier to the whole enterprise of representing the continuum as experienced in terms of individual points, and even to the characterization of “individual temporal point” itself. As he says, The view of a flow consisting of points and, therefore, also dissolving into points turns out to be mistaken: precisely what eludes us is the nature of the continuity, the flowing from point to point; in other words, the secret of how the continually enduring present can continually slip away into the receding past. Each one of us, at every moment, directly experiences the true character of this temporal continuity. But, because of the genuine primitiveness of phenomenal time, we cannot put our experiences into words. So we shall content ourselves with the following description. What I am conscious of is for me both a being-now and, in its essence, something which, with its temporal position, slips away. In this way there arises the persisting factual extent, something ever new which endures and changes in consciousness. (Ibid., 91–92) Weyl sums up what he thinks can be affirmed about “objectively presented time”—by which he presumably means “phenomenal time described in an objective manner”—in the following two assertions, which he claims apply equally, mutatis mutandis, to every intuitively given continuum, in particular, to the continuum of spatial extension. (Ibid., 92): 1. An individual point in it is non-independent, i.e., is pure nothingness when taken by itself, and exists only as a “point of transition” (which, of course, can in no way be understood 2. It is due to the essence of time (and not to contingent imperfections in our medium) that a fixed temporal point cannot be exhibited in any way, that always only an approximate, never an exact determination is possible. The fact that single points in a true continuum “cannot be exhibited” arises, Weyl asserts, from the fact that they are not genuine individuals and so cannot be characterized by their properties. In the physical world they are never defined absolutely, but only in terms of a coordinate system, which, in an arresting metaphor, Weyl describes as “the unavoidable residue of the eradication of the ego.” This metaphor, which Weyl was to employ more than once^[17], again reflects the continuing influence of phenomenological doctrine in his thinking : here, the thesis that the existent is given in the first instance as the contents of a consciousness. By 1919 Weyl had come to embrace Brouwer’s views on the intuitive continuum. Given the idealism that always animated Weyl’s thought, this is not surprising, since Brouwer assigned the thinking subject a central position in the creation of the mathematical world^[18]. In his early thinking Brouwer had held that that the continuum is presented to intuition as a whole, and that it is impossible to construct all its points as individuals. But later he radically transformed the concept of “point”, endowing points with sufficient fluidity to enable them to serve as generators of a “true” continuum. This fluidity was achieved by admitting as “points”, not only fully defined discrete numbers such as 1/9, \(e\), and the like—which have, so to speak, already achieved “being”—but also “numbers” which are in a perpetual state of “becoming” in that the entries in their decimal (or dyadic) expansions are the result of free acts of choice by a subject operating throughout an indefinitely extended time. The resulting choice sequences cannot be conceived as finished, completed objects: at any moment only an initial segment is known. Thus Brouwer obtained the mathematical continuum in a manner compatible with his belief in the primordial intuition of time—that is, as an unfinished, in fact unfinishable entity in a perpetual state of growth, a “medium of free development”. In Brouwer’s vision, the mathematical continuum is indeed “constructed”, not, however, by initially shattering, as did Cantor and Dedekind, an intuitive continuum into isolated points, but rather by assembling it from a complex of continually changing overlapping parts. Brouwer’s impact looms large in Weyl’s 1921 paper, On the New Foundational Crisis of Mathematics. Here Weyl identifies two distinct views of the continuum: “atomistic” or “discrete”; and “continuous”. In the first of these the continuum is composed of individual real numbers which are well-defined and can be sharply distinguished. Weyl describes his earlier attempt at reconstructing analysis in Das Kontinuum as atomistic in this sense: Existential questions concerning real numbers only become meaningful if we analyze the concept of real number in this extensionally determining and delimiting manner. Through this conceptual restriction, an ensemble of individual points is, so to speak, picked out from the fluid paste of the continuum. The continuum is broken up into isolated elements, and the flowing-into-each other of its parts is replaced by certain conceptual relations between these elements, based on the “larger-smaller” relationship. This is why I speak of the atomistic conception of the continuum. (Weyl 1921, 91) By this time Weyl had repudiated atomistic theories of the continuum, including that of Das Kontinuum.^[19] While intuitive considerations, together with Brouwer’s influence, must certainly have fuelled Weyl’s rejection of such theories, it also had a logical basis. For Weyl had come to regard as meaningless the formal procedure—employed in Das Kontinuum—of negating universal and existential statements concerning real numbers conceived as developing sequences or as sets of rationals. This had the effect of undermining the whole basis on which his theory had been erected, and at the same time rendered impossible the very formulation of a “law of excluded middle” for such statements. Thus Weyl found himself espousing a position^[20] considerably more radical than that of Brouwer, for whom negations of quantified statements had a perfectly clear constructive meaning, under which the law of excluded middle is simply not generally affirmable. Of existential statements Weyl says: An existential statement—e.g., “there is an even number”—is not a judgement in the proper sense at all, which asserts a state of affairs; existential states of affairs are the empty invention of logicians. (Weyl [1921], 97) Weyl termed such pseudostatements “judgment abstracts”, likening them, with typical literary flair, to “a piece of paper which announces the presence of a treasure, without divulging its location.” Universal statements, although possessing greater substance than existential ones, are still mere intimations of judgments, “judgment instructions”, for which Weyl provides the following metaphorical If knowledge be compared to a fruit and the realization of that knowledge to the consumption of the fruit, then a universal statement is to be compared to a hard shell filled with fruit. It is, obviously, of some value, however, not as a shell by itself, but only for its content of fruit. It is of no use to me as long as I do not open it and actually take out a fruit and eat it. (Ibid., Above and beyond the claims of logic, Weyl welcomed Brouwer’s construction of the continuum by means of sequences generated by free acts of choice, thus identifying it as a “medium of free Becoming” which “does not dissolve into a set of real numbers as finished entities”. Weyl felt that Brouwer, through his doctrine of Intuitionism^[21], had come closer than anyone else to bridging that “unbridgeable chasm” between the intuitive and mathematical continua. In particular, he found compelling the fact that the Brouwerian continuum is not the union of two disjoint nonempty parts—that it is, in a word, indecomposable. “A genuine continuum,” Weyl says, “cannot be divided into separate fragments.”^[22] In later publications he expresses this more colourfully by quoting Anaxagoras to the effect that a continuum “defies the chopping off of its parts with a hatchet.” Weyl also agreed with Brouwer that all functions everywhere defined on a continuum are continuous, but here certain subtle differences of viewpoint emerge. Weyl contends that what mathematicians had taken to be discontinuous functions actually consist of several continuous functions defined on separated continua. In Weyl’s view, for example, the “discontinuous” function defined by \(f(x) = 0\) for \(x \lt 0\) and \(f(x) = 1\) for \(x \ge 0\) in fact consists of the two functions with constant values and 1 respectively defined on the separated continua \(\{x: x \lt 0\}\) and \(\{x: x \ge 0 \}\). (The union of these two continua fails to be the whole of the real continuum because of the failure of the law of excluded middle: it is not the case that, for be any real number \(x\), either \(x \lt 0\) or \(x \ge 0\).) Brouwer, on the other hand, had not dismissed the possibility that discontinuous functions could be defined on proper parts of a continuum, and still seems to have been searching for an appropriate way of formulating this idea.^[23] In particular, at that time Brouwer would probably have been inclined to regard the above function \(f\) as a genuinely discontinuous function defined on a proper part of the real continuum. For Weyl, it seems to have been a self-evident fact that all functions defined on a continuum are continuous, but this is because Weyl confines attention to functions which turn out to be continuous by definition. Brouwer’s concept of function is less restrictive than Weyl’s and it is by no means immediately evident that such functions must always be continuous. Weyl defined real functions as mappings correlating each interval in the choice sequence determining the argument with an interval in the choice sequence determining the value “interval by interval” as it were, the idea being that approximations to the input of the function should lead effectively to corresponding approximations to the input. Such functions are continuous by definition. Brouwer, in contrast, considers real functions as correlating choice sequences with choice sequences, and the continuity of these is by no means obvious. The fact that Weyl refused to grant (free) choice sequences—whose identity is in no way predetermined—sufficient individuality to admit them as arguments of functions betokens a commitment to the conception of the continuum as a “medium of free Becoming” even deeper, perhaps, than that of Brouwer. There thus being only minor differences between Weyl’s and Brouwer’s accounts of the continuum, Weyl accordingly abandoned his earlier attempt at the reconstruction of analysis, and joined Brouwer. He explains: I tried to find solid ground in the impending state of dissolution of the State of analysis (which is in preparation, although still only recognized by few)without forsaking the order on which it is founded, by carrying out its fundamental principle purely and honestly. And I believe I was successful—as far as this is possible. For this order is itself untenable, as I have now convinced myself, and Brouwer—that is the revolution!… It would have been wonderful had the old dispute led to the conclusion that the atomistic conception as well as the continuous one can be carried through. Instead the latter triumphs for good over the former. It is Brouwer to whom we owe the new solution of the continuum problem. History has destroyed again from within the provisional solution of Galilei and the founders of the differential and the integral calculus. (Weyl 1921, 98–99) Weyl’s initial enthusiasm for intuitionism seems later to have waned. This may have been due to a growing belief on his part that the mathematical sacrifices demanded by adherence to intuitionistic doctrine (e.g., the abandonment of the least upper bound principle, and other important results of classical analysis) would prove to be intolerable to practicing mathematicians. Witness this passage from Philosophy of Mathematics and Natural Science: Mathematics with Brouwer gains its highest intuitive clarity. He succeeds in developing the beginnings of analysis in a natural manner, all the time preserving the contact with intuition much more closely than had been done before. It cannot be denied, however, that in advancing to higher and more general theories the inapplicability of the simple laws of classical logic eventually results in an almost unbearable awkwardness. And the mathematician watches with pain the greater part of his towering edifice which he believed to be built of concrete blocks dissolve into mist before his eyes. (Weyl [1949], 54) Nevertheless, it is likely that Weyl remained convinced to the end of his days that intuitionism, despite its technical “awkwardness”, came closest, of all mathematical approaches, to capturing the essence of the continuum. Weyl’s espousal of the intuitionistic standpoint in the foundations of mathematics in 1920–21 inevitably led to friction with his old mentor Hilbert. Hilbert’s conviction had long been that there were in principle no limitations on the possibility of a full scientific understanding of the natural world, and, analogously, in the case of mathematics, that once a problem was posed with the required precision, it was, at least in principle, soluble. In 1904 he was moved to respond to Emil du Bois-Reymond’s famous declaration concerning the limits of science, ignoramus et ignorabimus (“we are ignorant and we shall remain ignorant”): We hear within us the perpetual call. There is the problem. Seek the solution. You can find it by pure reason, for in mathematics there is no ignorabimus.^[24] Hilbert was unalterably opposed to any restriction of mathematics “by decree”, an obstacle he had come up against in the early stages of his career in the form of Leopold Kronecker’s (the influential 19^th century German mathematician) anathematization of all mathematics venturing beyond the finite. In Brouwer’s intuitionistic program—with its draconian restrictions on what was admissible in mathematical argument, in particular, its rejection of the law of excluded middle, “pure” existence proofs, and virtually the whole of Cantorian set theory—Hilbert saw the return of Kroneckerian constaints on mathematics (and also, perhaps, a trace of du Bois-Reymond’s “ignorabimus”) against which he had struggled for so long. Small wonder, then, that Hilbert was upset when Weyl joined the Brouwerian camp.^[25] Hilbert’s response was to develop an entirely new approach to the foundations of mathematics with the ultimate goal of establishing beyond doubt the consistency of the whole of classical mathematics, including arithmetic, analysis, and Cantorian set theory. With the attainment of that goal, classical mathematics would be placed securely beyond the destructive reach of the intuitionists. The core of Hilbert’s program was the translation of the whole apparatus of classical mathematical demonstration into a simple, finitistic framework (which he called “metamathematics”) involving nothing more, in principle, than the straightforward manipulation of symbols, taken in a purely formal sense, and devoid of further meaning.^[26] Within metamathematics itself, Hilbert imposed a standard of demonstrative evidence stricter even than that demanded by the intuitionists, a form of finitism rivalling (ironically) that of Kronecker. The demonstration of the consistency of classical mathematics was then to be achieved by showing, within the constraints of strict finitistic evidence insisted on by Hilbert, that the formal metamathematical counterpart of a classical proof in that system can never lead to an assertion evidently false, such as \(0 = 1\). Hilbert’s program rested on the insight that, au fond, the only part of mathematics whose reliability is entirely beyond question is the finitistic, or concrete part: in particular, finite manipulation of surveyable domains of distinct objects including mathematical symbols presented as marks on paper. Mathematical propositions referring only to concrete objects in this sense Hilbert called real, concrete, or contentual propositions, and all other mathematical propositions he distinguished as possessing an ideal, or abstract character. (Thus, for example, \(2 + 2 = 4\) would count as a real proposition, while there exists an odd perfect number would count as an ideal one.) Hilbert viewed ideal propositions as akin to the ideal lines and points “at infinity” of projective geometry. Just as the use of these does not violate any truths of the “concrete” geometry of the usual Cartesian plane, so he hoped to show that the use of ideal propositions—even those of Cantorian set theory—would never lead to falsehoods among the real propositions, that, in other words, such use would never contradict any self-evident fact about concrete objects. Establishing this by strictly concrete, and so unimpeachable means was thus the central aim of Hilbert’s program. Hilbert may be seen to have followed Kant in attempting to ground mathematics on the apprehension of spatiotemporal configurations; but Hilbert restricted these configurations to concrete signs (such as inscriptions on paper). Hilbert regarded consistency as the touchstone of existence, and so for him the important thing was the fact that no inconsistencies can arise within the realm of concrete signs, since correct descriptions of concrete objects are always mutually compatible. In particular, within the realm of concrete signs, actual infinity cannot generate inconsistencies since, again along with Kant, he held that this concept cannot correspond to any concrete object. Hilbert’s view seems accordingly to have been that the formal soundness of mathematics issues ultimately, not from a logical source, but from a concrete one^[27], in much the same way as the consistency of truly reported empirical statements is guaranteed by the concreteness of the external world^[28]. Weyl soon grasped the significance of Hilbert’s program, and came to acknowledge its “immense significance and scope”^[29]. Whether that program could be successfully carried out was, of course, still an open question. But independently of this issue Weyl was concerned about what he saw as the loss of content resulting from Hilbert’s thoroughgoing formalization of mathematics. “Without doubt,” Weyl warns, “if mathematics is to remain a serious cultural concern, then some sense must be attached to Hilbert’s game of formulae.” Weyl thought that this sense could only be supplied by “fusing” mathematics and physics so that “the mathematical concepts of number, function, etc. (or Hilbert’s symbols) generally partake in the theoretical construction of reality in the same way as the concepts of energy, gravitation, electron, etc.”^[30] Indeed, in Weyl’s view, “it is the function of mathematics to be at the service of the natural sciences”. But still: The propositions of theoretical physics… lack that feature which Brouwer demands of the propositions of mathematics, namely, that each should carry within itself its own intuitively comprehensible meaning…. Rather, what is tested by confronting theoretical physics with experience is the system as a whole. It seems that we have to differentiate between phenomenal knowledge or insight—such as is expressed in the statement: “This leaf (given to me in a present act of perception) has this green color (given to me in this same perception)”—and theoretical construction. Knowledge furnishes truth, its organ is “seeing” in the widest sense. Though subject to error, it is essentially definitive and unalterable. Theoretical construction seems to be bound only to one strictly formulable rational principle, concordance, which in mathematics, where the domain of sense data remains untouched, reduces to consistency; its organ is creative imagination. (Weyl 1949, Weyl points out that, just as in theoretical physics, Hilbert’s account of mathematics “already… goes beyond the bounds of intuitively ascertainable states of affairs through… ideal assumptions.” (Weyl 1927, 484.) If Hilbert’s realm of contentual or “real” propositions—the domain of metamathematics—corresponds to that part of the world directly accessible to what Weyl terms “insight” or “phenomenal knowledge”, then “serious” mathematics—the mathematics that practicing mathematicians are actually engaged in doing—corresponds to Hilbert’s realm of “ideal” propositions. Weyl regarded this realm as the counterpart of the domain generated by “symbolic construction”, the transcendent world focussed on by theoretical physics. Hence his memorable characterization: The set-theoretical approach is the stage of naive realism which is unaware of the transition from the given to the transcendent. Brouwer represents idealism, by demanding the reduction of all truth to the intuitively given. In [Hilbert’s] formalism, finally, consciousness makes the attempt to “jump over its own shadow”, to leave behind the stuff of the given, to represent the transcendent—but, how could it be otherwise?, only through the symbol. (Weyl 1949, 65–66) In Weyl’s eyes, Hilbert’s approach embodied the “symbolic representation of the transcendent, which demands to be satisfied”, and so he regarded its emergence as a natural development. But by 1927 Weyl saw Hilbert’s doctrine as beginning to prevail over intuitionism, and in this an adumbration of “a decisive defeat of the philosophical attitude of pure phenomenology, which thus proves to be insufficient for the understanding of creative science even in the area of cognition that is most primal and most readily open to evidence—mathematics.”^[31] Since by this time Weyl had become convinced that “creative science” must necessarily transcend what is phenomenologically given, he had presumably already accepted that pure phenomenology is incapable of accounting for theoretical physics, let alone the whole of existence. But it must have been painful for him to concede the analogous claim in the case of mathematics. In 1932, he asserts: “If mathematics is taken by itself, one should restrict oneself with Brouwer to the intuitively cognizable truths … nothing compels us to go farther.” If mathematics could be “taken by itself”, then there would be no need for it to justify its practices by resorting to “symbolic construction”, to employ symbols which in themselves “signify nothing”—nothing, at least, accessible to intuition. But, unlike Brouwer, Weyl seems finally to have come to terms with the idea that mathematics could not simply be “taken by itself”, that it has a larger role to play in the world beyond its service as a paradigm, however pure, of subjective certainty. The later impact of Gödel’s incompleteness theorems on Hilbert’s program led Weyl to remark in 1949:^[32] The ultimate foundations and the ultimate meaning of mathematics remain an open problem; we do not know in what direction it will find its solution, nor even whether a final objective answer can be expected at all. “Mathematizing” may well be a creative activity of man, like music, the products of which not only in form but also in substance defy complete objective rationalization. The undecisive outcome of Hilbert’s bold enterprise cannot fail to affect the philosophical interpretation. (Weyl 1949, 219) The fact that “Gödel has left us little hope that a formalism wide enough to encompass classical mathematics will be supported by a proof of consistency” seems to have led Weyl to take a renewed interest in “axiomatic systems developed before Hilbert without such ambitious dreams”, for example Zermelo’s set theory, Russell’s and Whitehead’s ramified type theory and Hilbert’s own axiom systems for geometry (as well, possibly, as Weyl’s own system in Das Kontinuum, which he modestly fails to mention). In one of his last papers, Axiomatic Versus Constructive Procedures in Mathematics , written sometime after 1953, he saw the battle between Hilbertian formalism and Brouwerian intuitionism in which he had participated in the 1920s as having given way to a “dextrous blending” of the axiomatic approach to mathematics championed by Bourbaki and the algebraists (themselves mathematical descendants of Hilbert) with constructive procedures associated with geometry and topology. It seems appropriate to conclude this account of Weyl’s work in the foundations and philosophy of mathematics by allowing the man himself to have the last word: This history should make one thing clear: we are less certain than ever about the ultimate foundations of (logic and) mathematics; like everybody and everything in the world today, we have our “crisis”. We have had it for nearly fifty years. Outwardly it does not seem to hamper our daily work, and yet I for one confess that it has had a considerable practical influence on my mathematical life: it directed my interests to fields I considered relatively “safe”, and it has been a constant drain on my enthusiasm and determination with which I pursued my research work. The experience is probably shared by other mathematicians who are not indifferent to what their scientific endeavours mean in the contexts of man’s whole caring and knowing, suffering and creative existence in the world. (Weyl 1946, 13) Weyl’s clarification of the role of coordinates, invariance or symmetry principles, his important concept of gauge invariance, his group-theoretic results concerning the uniqueness of the Pythagorean form of the metric, his generalization of Levi-Civita’s concept of parallelism, his development of the geometry of paths, his discovery of the causal-inertial method which prepared the way to empirically determine the spacetime metric in a non-circular, non-conventional manner, his deep analysis of the concept of motion and the role of Mach’s Principle, are but a few examples of his important contributions to the philosophical and mathematical foundations of modern spacetime theory. Weyl’s book, Raum-Zeit-Materie, beautifully exemplifies the fruitful and harmonious interplay of mathematics, physics and philosophy. Here Weyl aims at a mathematical and philosophical elucidation of the problem of space and time in general. In the preface to the great classical work of 1923, the fifth German edition, after mentioning the importance of mathematics to his work, Weyl says: Despite this, the book does not disavow its basic, philosophical orientation: its central focus is conceptual analysis; physics provides the experiential basis, mathematics the sharp tools. In this new edition, this tendency has been further strengthened; although the growth of speculation was trimmed, the supporting foundational ideas were more intuitively, more carefully and more completely developed and analyzed. Extending and abstracting from Gauss’s treatment of curved surfaces in Euclidian space, Riemann constructed an infinitesimal geometry of \(n\)-dimensional manifolds. The coordinate assignments \(x^ {k}(p)\ [k \in \{1, \ldots ,n\}\)] of the points \(p\) in such an \(n\)-dimensional Riemannian manifold are quite arbitrary, subject only to the requirement of arbitrary differential coordinate transformations.^[33] Riemann’s assumption that in an infinitesimal neighbourhood of a point, Euclidean geometry and hence Pythagoras’s theorem holds, finds its formal expression in Riemann’s \[\tag{1} ds^2 = \sum_{i,j} g_{ij}(x^{k}(p))dx^{i}dx^{j}\ [\text{where } g_{ij}(x^{k}(p)) = g_{ji}(x^{k}(p))] \] for the square of the length \(ds\) of an infinitesimal line element that leads from the point \(p = x(p) = (x^{1}(p), \ldots ,x^{n}(p))\) to an arbitrary infinitely near point \(p' = x(p') = (x^{1} (p) + dx^{1}(p), \ldots ,x^{n}(p) + dx^{n}(p))\). The assumption that Euclidean geometry holds in the infinitesimally small means that the \(dx^{i}(p)\) transform linearly under arbitrary coordinate transformations. Using the Einstein summation convention^[34], equation (1) can simply be written as \[\tag{2} ds^{2} = g_{ij}(x^{k}(p))dx^{i}dx^{j}. \] Riemann assumed the validity of the Pythagorean metric only in the infinitely small. Riemannian geometry is essentially a geometry of infinitely near points and conforms to the requirement that all laws are to be formulated as field laws. Field laws are close-action-laws which relate the field magnitudes only to infinitesimally neighbouring points in space.^[35] The value of some field magnitude at each point depends only on the values of other field magnitudes in the infinitesimal neighbourhoods of the corresponding points. The field magnitudes consist of partial derivatives of position functions at some point, and this requires the knowledge of the behavior of the position functions only with respect to the neighbourhood of that point. To construct a field law, only the behavior of the world in the infinitesimally small is required.^[36] Riemann’s ideas were brought to a concrete realization fifty years later in Einstein’s general theory of relativity. The basic idea underlying the general theory of relativity was Einstein’s recognition that the metric field, which has such powerful real effects on matter, cannot be a rigid once and for all given geometric structure of the spacetime, but must itself be something real, that not only has effects on matter, but is in turn also affected by matter. Riemann had already suggested that analogous to the electromagnetic field, the metric field reciprocally interacts with matter. Einstein came to this idea of reciprocity between matter and field independently of Riemann, and in the context of his theory of general relativity, applied this principle of reciprocity to four dimensional spacetime. Thus Einstein could adopt Riemann’s infinitesimal geometry with the important difference: given the causal requirements of Einstein’s theory of special relativity, Riemann’s quadratic form is not positive definite but indefinite; it has signature 1.^[37] Weyl (1922a) says: All our considerations until now were based on the assumption, that the metric structure of space is something that is fixed and given. Riemann already pointed to another possibility which was realized through General Relativity. The metrical structure of the extensive medium of the external world is a field of physical reality, which is causally dependent on the state of matter. And in another place Weyl (1918b) remarks: The metric is not a property of the world [spacetime] in itself, rather spacetime as a form of appearance is a completely formless four-dimensional continuum in the sense of analysis situs, but the metric expresses something real, something which exists in the world, which exerts through centrifugal and gravitational forces physical effects on matter, and whose state is conversely conditioned through the distribution and nature of matter. After Einstein applied Riemannian geometry to his theory of general relativity, Riemannian geometry became the focus of intense research. In particular, G. Ricci and T. Levi-Civita’s so-called Absolute Differential Calculus developed and clarified the Riemannian notions of an affine connection and covariant differentiation. The decisive step in this development, however, was T. Levi-Civita’s discovery in 1917 of the concept of infinitesimal parallel vector displacement, and the fact that such parallel vector displacement is uniquely determined by the metric field of Riemannian geometry. Levi-Civita’s construction of infinitesimal parallel transport on a manifold required the process of embedding the manifold into a flat higher-dimensional metric space. In 1918, Weyl generalized Levi-Civita’s concept of parallel transport by means of an intrinsic construction that does not require the process of such an embedding, and is therefore independent of a metric. Weyl’s intrinsic construction results in a metric-independent, symmetric linear connection. Weyl simply referred to the latter as affine connection.^[38] Weyl defines what he means by an affine connection in the following way: A point \(p\) on the manifold \(M\) is affinely connected with its immediate neighborhood, if and only of for every tangent vector \(v_{p}\) at \(p\), a tangent vector \(v_{q}\) at \(q\) is determined to which the tangent vector \(v_{p}\) gives rise under parallel displacement from \(p\) to the infinitesimally neighboring point \(q\). This definition merely says that a manifold is affinely connected if it admits the process of infinitesimal parallel displacement of a vector. Weyl’s next definition characterizes the essential nature of infinitesimal parallel displacement. The definition says that at any arbitrary point of the manifold there exists a geodesic coordinate system such that the components of any vector at that point are not altered by an infinitesimal parallel displacement with respect to it. This is a geometrical way of expressing Einstein’s requirement that the gravitational field can always be made to vanish locally. According to Weyl (1923b, 115), it characterizes the nature of an affine connection on the manifold. A manifold which is an affine manifold is homogeneous in this sense. Moreover, manifolds do not exist whose affine structure is of a different nature. The transport of a tangent vector \(v_{p}\) at \(p\) to an infinitesimally nearby point \(q\) results in the tangent vector \(v_{q}\) at \(q\), namely, \[\tag{3} v_{q} = v_{p} + dv_{p}. \] This infinitesimal tangent vector transport Weyl defines as infinitesimal parallel displacement if and only if there exists a coordinate system \(\overline{x}\), called a geodesic coordinate system for the neighborhood of \(p\), relative to which the transported tangent vector \(\overline{v}_{q}\) at \(q\) possesses the same components as the original tangent vector \(\overline{v}_{p}\) at \(p \); that is, \[\tag{4} \overline{v}^{\,i}_q - \overline{v}^{\,i}_p = d\overline{v}^{\,i}_p = 0. \] Figure 1: Parallel transport in a geodesic coordinate system \(\overline{x}\) For an arbitrary coordinate system \(x\) the components \(dv^{\,i}_p\) vanish whenever \(v^{\,i}_p\) or \(dx^{\,i}_p\) vanish. Consequently, \(dv^{\,i}_p\) is bi-linear in \(v^{\,i}_p\) and \(dx^ {\,i}_p\); that is, \[\tag{5} dv^{\,i}_p = -\Gamma^{\,i}_{jk}(x^i(p))v^{\,i}_p dx^{\,k}_p, \] where, in the case of four dimensions, the \(4^{3} = 64\) coefficients \(\Gamma^{\,i}_{jk} (x^{i}(p))\) are coordinate functions, that is, functions of \(x^{i}(p)\) \((i = 1, \ldots, 4)\), and the minus sign is introduced to agree with convention. Figure 2: Parallel transport in an arbitrary coordinate system \(x\) It is important to understand that there is no intrinsic notion of infinitesimal parallel displacement on a differentiable manifold. A notion of “parallelism” is not something that a manifold would possess merely by virtue of being a smooth manifold; additional structure has to be introduced which resides on the manifold and which permits the notion of infinitesimal parallelism. A manifold is an “affine manifold” \((M, \Gamma)\) if in addition to its manifold structure (differential topological structure) it is also endowed with an affine structure \(\Gamma\) that assigns to each of its points 64 coefficients \(\Gamma^{i}_{jk} (x^{i}(p))\) satisfying the symmetry condition \(\Gamma^{i}_{jk} (x^{i}(p)) = \Gamma^{i}_{kj} (x^{i}(p))\). An \(n\)-dimensional manifold \(M\), which is an affinely connected manifold, Weyl (1918b) interprets physically as an \(n\)-dimensional world (spacetime) filled with a gravitational field. Weyl says, “…the affine connection appears in physics as the gravitational field…” Since there exists at each spacetime point a geodesic coordinate system in which the components \(\Gamma^{i}_{jk}\) of the symmetric linear connection vanish, the gravitational field can be made to vanish at each point of the manifold. The classical theory of physical geometry, developed by Helmholtz, Poincaré and Hilbert, regarded the concept of “metric congruence” as the only basic relation of geometry, and constructed physical geometry from this one notion alone in terms of the relative positions and displacements of physical congruence standards. Although Einstein’s general theory of relativity championed a dynamical view of spacetime geometry that is very different from the classical theory of physical geometry, Einstein initially approached the problem of the structure of spacetime from the metrical point of view. It was Weyl (1923b) who emphasized and developed the metric-independent construction of the symmetric linear connection and who pointed out the rationale for doing so. In both the non-relativistic and relativistic contexts, it is the symmetric linear connection, and not the metric, which plays the essential role in the formulation of all physical laws that are expressed in terms of differential equations. It is the symmetric linear connection that relates the state of a system at a spacetime point to the states at neighboring spacetime events and enters into the differentials of the corresponding magnitudes. In both Newtonian physics and the theory of general relativity, all dynamical laws presuppose the projective and affine structure and hence the Law of Inertia. In fact, the whole of tensor analysis with its covariant derivatives is based on the affine concept of infinitesimal parallel displacement and not on the metric. Weyl’s metric independent construction not only led to a deeper understanding of the mathematical characterization of gravity, it also prepared the way for new constructions and generalizations in differential geometry and the general theory of relativity. In particular, it led to 1. The development of the geometry of paths, first introduced by Weyl in 1918. 2. Weyl’s discovery of the causal-inertial method which prepared the way to empirically determine the spacetime metric in a non-circular, non-conventional manner. 3. Weyl’s generalization of Riemannian geometry in his attempt to unify gravity and electromagnetism. 4. Weyl’s introduction of the concept of gauge in the context of his attempt to unify gravity and electromagnetism. For more detail on Weyl’s metric independent construction of the affine connection (linear symmetric connection), see the supplement. Weyl’s metric-independent construction of the affine structure led to the development of differential projective geometries or the geometries of paths. The interest in projective geometry is in the paths, that is, in the continuous set of points of the image set of curves, rather than in the possible parameter descriptions of curves. A curve has one degree of freedom; it depends on one parameter, and its image set or path is a one-dimensional continuous set of points of the manifold. One represents a curve on a manifold \(M\) as a smooth map (i.e., \(C^{\infty})\) \(\gamma\) from some open interval \(I = (-\varepsilon, \varepsilon)\) of the real line \(\mathbb{R}\) into \(M\). Figure 3: A curve on the manifold \(M\) is the smooth map \(\gamma : I \subset \mathbb{R} \rightarrow M\) It is important to understand that what one means by a “curve” is the map (the parametric description) itself, and not the set of its image points, the path. Consequently, two curves are mathematically considered to be different curves if they are given by different maps (different parameter descriptions), even if their image set, that is, their path, is the same. If we change a curve’s parameter description we change the curve but not its image set (its path), the points it passes through. A path is therefore sometimes defined as an equivalence class of curves under arbitrary parameter transformations. Hence, projective geometry may be defined as an equivalence class of affine geometries. A geodesic curve in flat space is a straight line. Its tangent at one point is parallel to the tangent at previous or subsequent points. A straight line in Euclidean space is the only curve that parallel-transports its own tangent vector. This notion of parallel transport of the tangent vector also characterizes geodesic curves in curved space. That is, a curve \(\gamma\) in curved space, which parallel-transports its own tangent vector along all of its points, is called a geodesic curve. Given a manifold with an affine structure and some arbitrary local coordinate system, the coordinate functions (components) \(\gamma^{i}\) of a geodesic curve \(\gamma\) satisfy the second-order non-linear differential equations \[\tag{6} \frac{d^2\gamma^i}{ds^2} + \Gamma^{i}_{jk} \frac{d\gamma^j}{ds} \frac{d\gamma^k}{ds} = 0. \] One may characterize the projective geometry \(\Pi\) on an affine manifold either in terms of an equivalence class of geodesic curves under arbitrary parameter diffeomorphisms^[39], thereby eliminating all the parameter descriptions and hence all possible notions of distance along the curves satisfying (6),^[40] or one may take the process of autoparallelism of directions as fundamental in defining a projective structure. Figure 4: A path \(\xi\) is an equivalence class [\(\gamma\)] of curves under all parameter diffeomorphisms \(\mu: \mathbb{R} \rightarrow \mathbb{R}; \lambda \mapsto \mu(\lambda)\) Weyl took the latter approach. According to Weyl, the infinitesimal process of parallel displacements of vectors contains, as a special case, the infinitesimal displacement of a direction into its own direction. Such an infinitesimal autoparallelism of directions is characteristic of the projective structure of an affinely connected manifold. Infinitesimal Autoparallelism of a Direction: An infinitesimal autoparallelism of a direction \(R\) at an arbitrary point \(p\) consists in the parallel displacement of \(R\) at \(p\) to a neighbouring point \(p'\) which lies in the direction \ (R\) at \(p\). A curve is geodesic if and only if its tangent direction \(R\) experiences infinitesimal autoparallelism when moved along all the points of the curve. This characterization of a geodesic curve constitutes an abstraction from affine geometry. Through this abstraction, a geodesic curve is definable exclusively in terms of autoparallelism of tangent directions, and not tangent vectors. Roughly speaking, an affine geometry is essentially a projective geometry with the notion of distance defined along the curves. By eliminating all possible notions of distance along curves, or equivalently, all the parameter descriptions of the curves, one abstracts the projective geometry form affine geometry. As mentioned above, a projective geometry \(\Pi\) may be defined as an equivalence class of affine geometries, that is, an equivalence class of projectively related affine connections [\(\Gamma\)]. Weyl presented the details of his approach to projective geometry, which uses the notion of autoparallelism of direction, in a set of lectures delivered in Barcelona and Madrid in the spring of 1922 (Weyl (1923a); see also Weyl (1921c)). Weyl began with the following necessary and sufficient condition for the invariance of the projective structure \(\Pi\) under a transformation \(\Gamma \ rightarrow \overline{\Gamma}\) of the affine structure: Projective Transformation: A transformation \(\Gamma \rightarrow \overline{\Gamma}\) preserves the projective structure \(\Pi\) of a manifold with an affine structure \(\Gamma\), and is called a projective transformation, if and only if \[\tag{7} (\overline{\Gamma} - \Gamma)^i_{jk} v^{\,j}v^{\,k} \propto v^{\,i}, \] where \(v^{\,i}\) is an arbitrary vector. Weyl’s definition says that a change in the affine structure of the manifold \(M\) preserves the projective structure \(\Pi\) of \(M\) if the vectors \(v^{i}_{q}\) and \(\overline{v}^{i}_{q}\) at \(q \) that result from the vector \(v^{i}_{p}\) at \(p\) by parallel transport under \(\Gamma\) and \(\overline{\Gamma}\) respectively, differ at most in length but not in direction.^[41] A spacetime manifold \(M\) is a “projective manifold” (M, \(\Pi)\), if in addition to its manifold structure (differential topological structure), it is also endowed with a projective structure \(\Pi \) that assigns to each of its manifold points 64 coefficients \(\Pi^{\,i}_{jk} x^{i}(p))\) satisfying certain symmetry conditions.^[42] These projective coefficients characterize the equivalence class [\(\Gamma\)] of projectively equivalent connections, that is, connections equivalent under the projective transformation (7). In physical spacetime the projective structure has an immediate intuitive significance according to Weyl. The real world is a non-empty spacetime filled with an inertial-gravitational field, which Weyl calls the guiding field \((F\)ührungsfeld)^[43]. It is an indubitable fact, according to Weyl (1923a, 13), that a body which is let free in a certain spacetime direction (time-like direction) carries out a uniquely determined natural motion from which it can only be diverted through an external force. The process of autoparallelism of direction appears, thus, as the tendency of persistence of the spacetime direction of a free particle whose motion is governed by, what Weyl calls, the guiding field \((F\)ührungsfeld). This natural motion occurs on the basis of an effective infinitesimal tendency of persistence, that parallelly displaces the spacetime direction \(R\) of a body at an arbitrary point \(p\) on its trajectory to a neighbouring point \(p'\) that lies in the direction \(R\) at \(p\). If external forces exert themselves on a body, then a motion results which is determined through the conflict between the tendency of persistence due to the guiding field and the force. The tendency of persistence of the guiding field is a type of constraining guidance, that the inertial-gravitational field exerts on every body. Weyl (1923b, 219) says: Galilei’s inertial law shows, that there exists a type of constraining guidance in the world [spacetime] that imposes on a body that is let free in some definite world direction a unique natural motion from which it can only be diverted through external forces; this occurs on the basis of an effective infinitesimal tendency of persistence from point to point, that auto-parallelly transfers the world direction \(r\) of the body at an arbitrary point \(P\) to an infinitesimally close neighboring point \(P'\), that lies in the direction \(r\) at \(P\). Shortly after the completion of the general theory of relativity in 1915, Einstein, Weyl, and others began to work on a unified field theory. It was natural to assume at that time^[44] that this task would only involve the unification of gravity and electromagnetism. In Einstein’s geometrization of gravity, the Newtonian gravitational potential, and the Newtonian gravitational force, are respectively replaced by the components of the metric tensor \(g_{ij}(x)\), and the components of the symmetric linear connection \(\Gamma^{i}_{jk}(x)\). In the general theory of relativity the gravitational field is thus accounted for in terms of the curvature of spacetime, but the electromagnetic field remains completely unrelated to the spacetime geometry. Einstein’s mathematical formulation of his theory of general relativity does not, however, provide room for the geometrization of the other long range force field, the electromagnetic field.^[45] It was therefore natural to ask whether nature’s only two long range fields of force have a common origin. Consequently, it was quite natural to suggest that the electromagnetic field might also be ascribed to some property of spacetime, instead of being merely something embedded in spacetime. Since, however, the components \(g_{ij}(x)\) of the metric tensor are already sufficiently determined by Einstein’s field equations, this would require setting up a more general differential geometry than the one which underlies Einstein’s theory, in order to make room for incorporating electromagnetism into spacetime geometry. Such a generalized differential geometry would describe both long range forces, and a new theory based on this geometry would constitute a unified field theory of electromagnetism and In 1918, Weyl proposed such a theory. In Weyl (1918a, 1919a), and in the third edition (1920) of Raum-Zeit-Materie, Weyl presented his ingenious attempt to unify gravitation and electromagnetism by constructing a gauge-invariant geometry (see below), or what he called a purely infinitesimal ‘metric’ geometry. Since the conformal structure \(C\) (see below) of spacetime does not determine a unique symmetric linear connection \(\Gamma\) but only an equivalence class \(K = [\Gamma]\) of conformally equivalent symmetric linear connections, Weyl was able to show that this degree of freedom in a conformal structure of spacetime provides just enough room for the geometrization of the electromagnetic potentials. The resulting geometry, called a Weyl geometry, is an intermediate geometric structure that lies between the conformal and Riemannian structures.^[46] The metric tensor field that is locally described by \[\tag{8} ds^{2} = g_{ij}(x(p))dx^{i}dx^{j}, \] is characteristic of a Riemannian geometry. That geometry requires of the symmetric linear connection \(\Gamma\) that the infinitesimal parallel transport of a vector always preserves the length of the vector. Therefore, the metric field in Riemannian geometry determines a unique symmetric linear connection, a “metric connection” that satisfies the length-preserving condition of parallel transport. This means that the metric field, locally represented by (8), is invariant under parallel transport. The coefficients of this unique symmetric linear metric connection are given by \[\tag{9} \Gamma^i_{jk} = \frac{1}{2} g^{ir}(g_{rj,k} + g_{kr,j} - g_{jk,r}). \] If \(v_{p}\) is a vector at \(p \in M\), its length \[\tag{10} \lvert v_p \rvert^2 = g_{ij}(x(p))v^{\,i}_p v^{\,j}_p. \] Moreover, the angle between two vectors \(v_{p}\) and \(w_{p}\) at \(p\in M\) is given by \[\tag{11} \cos \theta = \frac{g_{ij}(x(p))v^{\,i}_p w^{\,j}_p}{\lvert v_p \rvert \lvert w_p \rvert}. \] While in Riemannian geometry the parallel transport of length is path independent, that is, it is possible to compare the lengths of any two vectors, even if they are located at two finitely different points, a vector suffers a path-dependent change in direction under parallel transport; that is, it is not possible to define the angle between two vectors, located at different points, in a path-independent way. Consequently, the angle between two vectors at a given point is invariant under parallel transport if and only if both vectors are transported along the same path. In particular, a vector which is carried around a closed circuit by a continual parallel displacement back to the starting point, will have the same length, but will not in general return to its initial Figure 5: The parallel transport of a vector by a two-dimensional creature, from \(A \rightarrow B \rightarrow C \rightarrow A\) around a geodesic triangle on a two-dimensional surface \(S^{2}\), ends up pointing in a different direction upon returning to \(A\). For a closed loop which circumscribes an infinitesimally small portion of space, the rotation of the vector per unit area constitutes the measure of the local curvature of space. Consequently, whether or not finite parallel displacement of direction is integrable, that is, path-independent, depends on whether or not the curvature tensor vanishes. According to Weyl, Riemannian geometry, is not a pure or genuine infinitesimal differential (metric) geometry, since it permits the comparison of length at a finite distance. In his seminal 1918 paper entitled Gravitation und Elektrizität (Gravitation and Electricity) Weyl (1918a) says: However, in the Riemannian geometry described above, there remains a last distant-geometric [ferngeometrisches] element—without any sound reason, as far as I can see; the only cause of this appears to be the development of Riemannian geometry from the theory of surfaces. The metric permits the comparison of length of two vectors not only at the same point, but also at any arbitrarily separated points. A true near-geometry (Nahegeometrie), however, may recognize only a principle of transferring a length at a point to an infinitesimal neighbouring point, and then it is no more reasonable to assume that the transfer of length from a point to a finitely distant point is integrable, then it was to assume that the transfer of direction is integrable. Weyl wanted a metric geometry which would not permit distance comparison of length between two vectors located at finitely different points. In a pure infinitesimal geometry, Weyl argued, if attention is restricted to a single point of the manifold, then some standard of length or gauge must be chosen arbitrarily before the lengths of vectors can be determined. Therefore, all that is intrinsic to the notion of a pure infinitesimal metric differential geometry is the ability to determine the ratios of the lengths of any two vectors and the angle between any two vectors, at a point . Such a pure infinitesimal metric manifold must have at least a conformal structure \(C\). The defining characteristic of a conformal spacetime structure is given by the equation \[\tag{12} 0 = ds^2 = g_{ij}(x(p))dx^i dx^{\,j}, \] which determines the light cone at \(p\). A gauge transformation of the metric is a map \[ g_{ij}(x(p)) \rightarrow \lambda(x(p))g_{ij}(x(p)) = \overline{g}_{ij}(x(p)), \] which preserves the metric up to a positive and smooth but otherwise arbitrary scalar factor or gauge function \(\lambda(x(p))\). In the case of a pseudo-Riemannian structure such a gauge transformation leaves the light cones unaltered. The angle between two vectors at \(p\) is given by (11). Clearly, the gauge transformation \(\overline{g}_{ij}(x(p)) = \lambda(x(p))g_{ij}(x(p))\) is angle preserving, that is, conformal. Two metrics which are related by a conformal gauge transformation are called conformally equivalent. A conformal structure does not determine the length of any one vector at a point. Only the relative lengths, the ratio of lengths, of any two vectors \(\bfrac{\lvert v_p \rvert}{\lvert w_p \rvert}\) is determined. Weyl exploited these features of the conformal structure, and suggested that given a conformal structure, a gauge could be chosen at each point in a smooth but otherwise arbitrary manner, such that the metric (8) at any point of the manifold is conventional or undetermined to the extent that the metric \[\tag{13} d\overline{s}^2 = \lambda(x(p))g_{ij}(x(p))dx^{i}dx^{\,j} \] is equally valid. However, a conformal structure by itself does not determine a unique symmetric linear connection; it only determines an equivalence class of conformally equivalent connections \(K = [\Gamma]\), namely, connections which preserve the conformal structure \(C\) during parallel transport. The difference between any two conformally equivalent symmetric linear connections \(\overline{\Gamma}^ {\,i}_{jk}\), \(\Gamma^i_{jk} \in [\Gamma]\) is given by \[\tag{14} \overline{\Gamma}^{\,i}_{jk} - \Gamma^i_{jk} = \frac{1}{2}(\delta^{\,i}_j \theta_k + \delta^{\,i}_k \theta_j - g_{jk}g^{ir}\theta_{r}), \] \[\tag{15} \theta_{j}(x(p))dx^{\,j} \] is an arbitrary one-form field. Since the conformal structure determines only an equivalence class of conformally equivalent symmetric linear connections \(K = [\Gamma]\), the affine connection in this type of geometry is not uniquely determined, and the parallel transport of vectors is not generally well defined. Moreover, the ratio of the lengths of two vectors located at different points is not determined even in a path-dependent way. According to Weyl, it is a fundamental principle of infinitesimal geometry that the metric structure on a manifold \(M\) determines a unique affine structure on \(M\). As was pointed out earlier, this principle is satisfied in Riemannian geometry where the metric determines a unique symmetric linear connection, namely, the metric connection according to (9). Evidently this fundamental principle of infinitesimal geometry is not satisfied for a structure which is merely a conformal structure, since the conformal structure only determines an equivalence class of conformally equivalent symmetric connections. Weyl showed that besides the conformal structure an additional structure is required in order to determine a unique symmetric linear connection from the equivalence class \(K = [\Gamma]\) of conformally equivalent symmetric linear connections. Weyl showed that this additional structure is provided by the length connection or gauge field \(A_{j}\) that governs the congruent displacement of lengths. Weyl called this additional structure the “metric connection” on a manifold; however, we shall use the term “length connection” instead, in order to avoid confusion with the modern usage of the term “metric connection”, which today denotes the symmetric linear connection that is uniquely determined by a Riemannian metric tensor according to Weyl’s Length Connection: A point \(p\) is length connected with its infinitesimal neighborhood, if and only if for every length at \(p\), there is determined at every point \(q\) infinitesimally close to \(p\) a length to which the length at \(p\) gives rise when it is congruently displaced from \(p\) to \(q\). This definition merely says that a manifold is “length connected” if it admits the process of infinitesimal congruent displacement of length. The only condition imposed on the concept of congruent displacement of length is the following: Congruent Displacement of Length: With respect to a choice of gauge for a neighborhood of \(p\), the transport of a length \(l_{p}\) at \(p\) to an infinitesimally neighboring point \(q\) constitutes a congruent displacement if and only if there exists a choice of gauge for the neighborhood of \(p\) relative to which the transported length \(\overline{l}_{q}\) has the same value as \(\overline{l}_{p}\); that is \[\tag{16} \overline{l}_q - \overline{l}_p = d\overline{l}_p = 0. \] Weyl called such a gauge at \(p\) a geodesic gauge at \(p\).^[47] Weyl’s proof of the following theorem closely parallels the proof of theorem A.3 in the supplement on Weyl’s metric independent construction of the affine connection. Theorem 4.1: If for every point \(p\) in a neighborhood \(U\) of \(M\), there exists a choice of gauge such that the change in an arbitrary length at \(p\) under congruent displacement to an infinitesimally near point \(q\) is given by \[\tag{17} d\overline{l}_p = 0, \] then locally with respect to any other choice of gauge, \[\tag{18} dl = -lA_j(x(p))dx^{\,j}, \] and conversely. Making use of \[ dv^{\,i}_p &= -\Gamma^i_{jk}(x(p))v^{\,j}_pdx^k \\ l_p &= g_{ij}(x(p))v^{\,i}_p v^{\,j}_p \\ dl_p &= -l_p A_j x(p)dx^{\,j}, \] Weyl (1923a, 124–125) shows that the conformal structure supplemented with the structure of a length connection or gauge field \(A_{j}(x)\) singles out a unique connection from the equivalence class \(K = [\Gamma]\) of conformally equivalent connections.^[48] This unique connection, which is called the Weyl connection, is given by \[ \Gamma^{\,i}_{jk} &= \frac{1}{2}g^{ir}(g_{rj,k} + g_{kr,j} - g_{jk,r}) +\frac{1}{2} g^{ir}(g_{rj}A_{k} + g_{kr}A_{j} - g_{jk}A_{r}) \\ \tag{19} &= \frac{1}{2}g^{ir}(g_{rj,k} + g_{kr,j} - g_{jk,r}) +\frac{1}{2}(\delta^{\,i}_j A_{k} + \delta^{\,i}_k A_{j} - g_{jk}g^{ir}A_{r}), \] which is analogous to (14). The first term of the Weyl connection is identical to the metric connection (9) of Riemannian geometry, whereas the second term represents what is new in a Weyl geometry. The Weyl connection is invariant under the gauge transformation \[ \overline{g}_{ij}(x) &= e^{\theta(x)}g_{ij}(x) \\ \tag{20} \overline{A}_j(x) &= A_j(x) - \partial_j \theta(x), \] where the gauge function is \(\lambda(x) = e^{\theta(x)}\). Thus, a conformal structure plus length connection or gauge field \(A_{j}(x)\) determines a Weyl geometry equipped with a unique Weyl connection. Therefore, the fundamental principle of infinitesimal geometry also holds in a Weyl geometry; that is, the metric structure of a Weyl geometry determines a unique affine connection, namely, the Weyl connection. In Weyl’s physical interpretation of his purely infinitesimal metric geometry (Weyl geometry), the gauge field \(A_{j}(x)\) is identified with the electromagnetic four potential, and the electromagnetic field tensor is given by \[\tag{21} F_{jk}(x) = \partial_{j} A_{k}(x) - \partial_{k} A_{j}(x). \] A spacetime that is formally characterizable as a Weyl geometry, would not only have a curvature of direction (Richtungskrümmung) but would also have a curvature of length (Streckenkrümmung). Because of the latter property the formal characterization of the congruent displacement of length would be non-integrable, that is, path-dependent, in a Weyl geometry. Figure 6: In a Weyl geometry parallel displacement of a vector along different paths not only changes its direction but also its length Suppose physical spacetime corresponds to a Weyl geometry. Then two identical clocks \(A\) and \(B\) at an event \(p\) with a common unit of time, that is, a timelike vector of given length \(l_{p} \), which are separated and moved along different world lines to an event \(q\), will not only differ with respect to the elapsed time (first clock effect (i.e., relativistic effect)), but in general the clocks will differ with respect to their common unit of time (rate of ticking) at \(q\) (second clock effect). That is, congruent time displacement in a Weyl geometry is such that two congruent time intervals at \(p\) will not in general be congruent at \(q\), when congruently displaced in parallel along different world lines from \(p\) to \(q\), that is, \(l^{A}_{q} \ne l^{B}_{q}\). This means that a twin who travels to a distant star and then returns to earth would not only discover that the other twin on earth had aged much more, but also that all the clocks on earth tick at a different rate. Hence, in the presence of a non-vanishing electromagnetic field \(F_{jk}(x)\) the clock rates will not in general be the same; that is, there will be a second clock effect in addition to the relativistic effect (first clock effect). Thus, \(l^{A}_{q} = l^{B}_{q}\) if and only if the curl of \(A_{j}(x)\) vanishes, that is, if and only if the electromagnetic field tensor \(F_{jk}(x) \) vanishes, namely, \[ F_{jk}(x) = \partial_{j} A_{k}(x) - \partial_{k} A_{j}(x) = 0. \] In that case the second term of the Weyl connection vanishes and (19) reduces to the metric connection (9) of Riemannian geometry. In a Weyl geometry there are no ideal absolute “meter sticks” or “clocks”. For example, the rate at which any clock measures time is a function of its history. However, as Einstein pointed out in a Nachtrag (addendum) to Weyl (1918a), it is precisely this situation which suggests that Weyl’s geometry conflicts with experience. In Weyl’s geometry, the frequency of the spectral lines of atomic clocks would depend on the location and past histories of the atoms. But experience indicates otherwise. The spectral lines are well-defined and sharp; they appear to be independent of an atom’s history. Atomic clocks define units of time, and experience shows they are integrably transported. Thus, if we assume that the atomic time and the gravitational standard time are identical, and that the gravitational standard time is determined by the Weyl geometry, then the electromagnetic field tensor is zero. But if that is the case, then a Weyl geometry reduces to the standard Riemannian geometry that underlies general relativity, since the vanishing of Weyl’s Streckenkrümmung (length curvature) is necessary and sufficient for the existence of a Riemannian metric \(g_{ij}\). When quantum theory was developed a few years later it became clear that Weyl’s theory was in conflict with experience in an even more fundamental way since there is a direct relation between clock rates and masses of particles in quantum theory. A particle with a certain rest mass \(m\) possesses a natural frequency which is a function of its rest mass, the speed of light \(c\), and Planck’s constant \(h\). This means that in a Weyl geometry not only clocks would depend on their histories but also the masses of particles. For example, if two protons have different histories then they would also have different masses in a Weyl geometry. But this violates the quantum mechanical principle that particles of the same kind—in this case, protons—have to be exactly identical. However, in 1918 it was still possible for Weyl to defend his theory in the following way. In response to Einstein’s criticism Weyl noted that atoms, clocks and meter sticks are complex objects whose real behavior in arbitrary gravitational and electromagnetic fields can only be inferred from a dynamical theory of matter. Since no detailed and reliable dynamical models were available at that time, Weyl could argue that there is no reason to assume that, for example, clock rates are correctly modelled by the length of a timelike vector. Weyl (1919a, 67) said: At first glance it might be surprising that according to the purely close-action geometry, length transfer is non-integrable in the presence of an electromagnetic field. Does this not clearly contradict the behaviour of rigid bodies and clocks? The behaviour of these measurement instruments, however, is a physical process whose course is determined by natural laws and as such has nothing to do with the ideal process of ‘congruent displacement of spacetime distance’ that we employ in the mathematical construction of the spacetime geometry. The connection between the metric field and the behaviour of rigid rods and clocks is already very unclear in the theory of Special Relativity if one does not restrict oneself to quasi-stationary motion. Although these instruments play an indispensable role in praxis as indicators of the metric field, (for this purpose, simpler processes would be preferable, for example, the propagation of light waves), it is clearly incorrect to define the metric field through the data that are directly obtained from these instruments. Weyl elaborated this idea by suggesting that the dynamical nature of such time keeping systems was such that they continually adapt to the spacetime structure in such a way that their rates remain constant. He distinguished between quantities that remain constant as a consequence of such dynamical adjustment, and quantities that remain constant by persistence because they are isolated and undisturbed. He argued that all quantities that maintain a perfect constancy probably do so as a result of dynamical adjustment. Weyl (1921a, 261) expressed these ideas in the following way: What is the cause of this discrepancy between the idea of congruent transfer and the behaviour of measuring-rods and clocks? I differentiate between the determination of a magnitude in Nature by “persistence” (Beharrung) and by “adjustment” (Einstellung). I shall make the difference clear by the following illustration: We can give to the axis of a rotating top any arbitrary direction in space. This arbitrary original direction then determines for all time the direction of the axis of the top when left to itself, by means of a tendency of persistence which operates from moment to moment; the axis experiences at every instant a parallel displacement. The exact opposite is the case for a magnetic needle in a magnetic field. Its direction is determined at each instant independently of the condition of the system at other instants by the fact that, in virtue of its constitution, the system adjusts itself in an unequivocally determined manner to the field in which it is situated. A priori we have no ground for assuming as integrable a transfer which results purely from the tendency of persistence. …Thus, although, for example, Maxwell’s equations demand the conservational equation \(de\,/\,dt =0\) for the charge \(e\) of an electron, we are unable to understand from this fact why an electron, even after an indefinitely long time, always possesses an unaltered charge, and why the same charge \(e\) is associated with all electrons. This circumstance shows that the charge is not determined by persistence, but by adjustment, and that there can exist only one state of equilibrium of the negative electricity, to which the corpuscle adjusts itself afresh at every instant. For the same reason we can conclude the same thing for the spectral lines of atoms. The one thing common to atoms emitting the same frequency is their constitution, and not the agreement of their frequencies on the occasion of an encounter in the distant past. Similarly, the length of a measuring-rod is obviously determined by adjustment, for I could not give this measuring-rod in this field-position any other length arbitrarily (say double or treble length) in place of the length which it now possesses, in the manner in which I can at will pre-determine its direction. The theoretical possibility of a determination of length by adjustment is given as a consequence of the world-curvature, which arises from the metrical field according to a complicated mathematical law. As a result of its constitution, the measuring-rod assumes a length which possesses this or that value, in relation to the radius of curvature of the field. Weyl’s response to Einstein’s criticism that a Weyl geometry conflicts with experience, took advantage of the fact that the underlying dynamical laws of matter which govern clocks and rigid rods, were not known at that time. Weyl could thus argue that it is at least theoretically possible that there exists an underlying dynamics of matter, such that a Weyl geometry, according to which length transfer is non-integrable, nonetheless coheres with observable experience, according to which length transfer appears to be integrable. However, as was clearly pointed out by Wolfgang Pauli, Weyl’s plausible defence comes at a cost.^[49] Pauli (1921/1958, 196) argued that Weyl’s defence of his theory deprives it of its inherent convincing power from a physical point of view. Weyl’s present attitude to this problem is the following: The ideal process of the congruent transference of world lengths … has nothing to do with the real behaviour of measuring rods and clocks; the metric field must not be defined by means of information taken from these measuring instruments. In this case the quantities \(g_{ik}\) and \(\varphi_{i}\) are, be definition, no longer observable, in contrast to the line elements of Einstein’s theory. This relinquishment seems to have very serious consequences. While there now no longer exists a direct contradiction with experiment, the theory appears nevertheless to have been robbed of its inherent convincing power, from a physical point of view. For instance, the connexion between electromagnetism and world metric is not now essentially physical, but purely formal. For there is no longer an immediate connection between the electromagnetic phenomena and the behaviour of measuring rods and clocks. There is only an interrelation between the former and the ideal process which is mathematically defined as congruent transference of vectors. Besides, there exists only formal, and not physical, evidence for a connection between world metric and electricity.^[50] Pauli concluded his critical assessment of Weyl’s theory with the following statement: Summarizing, we can say that Weyl’s theory has not succeeded in getting any nearer to solving the problem of the structure of matter. As will be argued in more detail … there is, on the contrary, something to be said for the view that a solution of this problem cannot at all be found in this way. It should be noted, however, that Weyl’s defence of his theory implicitly addresses an important methodological consideration concerning the relation between theory and evidence. As Pauli puts it above, according to Weyl “the metric field must not be defined by means of information taken from these measuring instruments [rigid rods and ideal clocks]”. That is, Weyl rejects Einstein’s operational standpoint which gives operational significance to the metric field in terms of the observable behaviour of ideal rigid rods and ideal clocks.^[51] Unlike light propagation and freely falling (spherically symmetric, neutral) particles, rigid rods and ideal clocks are relativistically ill defined probative systems, and are thus unsuitable for the determination of the inherent structures of spacetime postulated by the theory of relativity. Weyl (1918a) clearly recognized this when he said in response to Einstein’s critique “because of the problematic behaviour of yardsticks and clocks I have in my book Space-Time-Matter restricted myself for the specific measurement of the \(g_{ik}\), exclusively to the observation of the arrival of light signals.” It is interesting to note parenthetically that in the first edition of his book Weyl thought that it was possible to have an intrinsic method of comparing the lengths of arbitrary spacetime intervals with an interval between two fiducial spacetime events, by using light signals only. It was Lorentz who pointed out to Weyl that not only the world lines of light rays but also the world lines of material bodies are required for an intrinsic method of comparing lengths. Not only did Weyl correct this mistake in subsequent editions, but already in 1921, Weyl (1921c) discovered the causal-inertial method for determining the spacetime metric (see §4.3) by proving an important theorem that shows that the spacetime metric is already fully determined by the inertial and causal structure of spacetime. Weyl (1949a, 103) remarks “… therefore mensuration need not depend on clocks and rigid bodies but … light signals and mass points moving under the influence of inertia alone will suffice.” It is clear that Weyl regarded the use of clocks and rigid rods as an undesirable makeshift within the context of the special and general theory. Since neither spatial nor temporal intervals are invariants of spacetime, the invariant spacetime interval \(ds\) cannot be directly ascertained by means of standard clocks and rigid rods. In addition, the latter presuppose quantum theoretical principles for their justification and therefore lie outside the relativistic framework because the laws which govern their physical processes are not known.^[52] Weyl (1929c, 233) abandoned his unified field theory only with the advent of the quantum theory of the electron. He did so because in that theory a different kind of gauge invariance associated with Dirac’s theory of the electron was discovered, which more adequately accounted for the conservation of electric charge. Weyl’s contributions to quantum mechanics, and his construction of a new principle of gauge invariance, are discussed in §4.5.3.^[53] Weyl’s unified field theory was revived by Dirac (1973) in a slightly modified form, which incorporated a real scalar field \(\beta(x)\). Dirac also argued that the time intervals measured by atomic clocks need not be identified with the lengths of timelike vectors in the Weyl geometry.^[54] Prior to the works of Gauss, Grassmann and Riemann, the study of geometry tended to emphasize the employment of empirical intuitions and images of the three dimensional physical space. Physical space was thought of as having definite metrical attributes. The task of the geometer was to take physical mensuration devices in that space and work with them. Under the influence of Gauss and Grassmann, Riemann’s great philosophical contribution consisted in the demonstration that, unlike the case of a discrete manifold, where the determination of a set necessarily implies the determination of its quantity or cardinal number, in the case of a continuous manifold, the concept of such a manifold and of its continuity properties, can be separated form its metrical structure. Using modern terminology, Riemann separated a manifold’s local differential topological structure from its metrical structure. Thus Riemann’s separation thesis gave rise to the space problem, or as Weyl called it, das Raumproblem: how can metric relations be determined on a continuous manifold \(M\)? A metric manifold is a manifold on which a distance function \(f : M \times M \rightarrow \mathbb{R}\) is defined. Such a distance function must satisfy the following minimal conditions: for all \(p, q, r \in M\), 1. \(f(p, q) \ge 0\), and if \(f(p,q) = 0\), then \(p = q\), 2. \(f(p, q) = f(q,p)\), (symmetry) 3. \(f(p, q) + f(q,r) \ge f(p,r)\) (triangle inequality). In his famous inaugural lecture at Göttingen, entitled Über die Hypothesen, welche der Geometrie zu Grunde liegen (About the hypotheses which lie at the basis of geometry), Riemann (1854) examined how metric relations can be determined on a continuous manifold; that is, what specific form \(f : M \times M \rightarrow \mathbb{R}\) should have. Consider the coordinates \(x^{i}(p)\) and \(x^{i} (p) + dx^{i} (p)\) of two neighboring points \(p, q \in M\). The measure of the distance \(ds = f(p,q)\) must be some function \(F_{p}\) at \(p\) of the differential increments \(dx^{i}(p)\); that \[\tag{22} ds = F_{p}(dx^{1}(p), \ldots ,dx^{n}(p)). \] Riemann states that \(F_{p}\) should satisfy the following requirements: Functional Homogeneity: If \(\lambda \gt 0\) and \(ds = F_{p}(dx(p))\), then \[\tag{23} \lambda ds = \lambda F_{p}(dx(p)) = F_{p}(\lambda dx(p)). \] Sign Invariance: A change in sign of the differentials should leave the value of \(ds\) invariant. Sign invariance is satisfied by every positive homogeneous function of degree \(2m\) \((m = 1, 2, 3, \ldots)\). In the simple case \(m = 1\), and the length element \(ds\) is the square root of a homogeneous function of second degree, which can be expressed in the standard form \[\tag{24} ds = \left[ \sum_{i=1}^n (dx^{\,i}(p))^{2} \right]^{\bfrac{1}{2}}. \] That is, at each point of \(M\) there exists a coordinate system (defined up to an element of the orthogonal group \(O(n)\) in which the square root of the homogeneous function of second degree can be expressed in the above standard form. Riemann’s well-known general expression for the measure of length at \(p \in M\) with respect to an arbitrary coordinate system is given by \[\tag{25} ds^{2} = g_{ij}(x(p))dx^{i}(p)dx^{\,j}(p), \] where the components of the metric tensor satisfy the symmetry condition \(g_{ij} = g_{ji}\). The assumption that \(ds^{2} = F^{2}_{p}\) is a quadratic differential form is not only the simplest one, but also the preferred one for other important reasons. Riemann himself was well aware of other possibilities; for example, the possibility that \(ds\) could be the 4th root of a homogeneous polynomial of 4th order in the differentials. But Riemann restricted himself to the special case \ (m = 1\) because he was pressed for time and because he wanted to give specific geometric interpretations of his results. As Weyl points out Riemann’s own answer to the space problem is inadequate since Riemann’s mathematical justification for the restriction to the Pythagorean case are not very compelling. The first satisfactory justification of the Pythagorean form of the Riemannian metric, although limited in scope because it presupposed the full homogeneity of Euclidean space, was provided by the investigations of Hermann von Helmholtz. Helmholtz diverged from Riemann’s analytic approach and made use merely of the fundamental concept of geometry, namely, the concept of congruent mapping, and characterized the geometric structure of space by requiring of space the full homogeneity of Euclidean space. His analysis was thereby restricted to the cases of constant positive, zero, or negative curvature. Abstracting from our experience of the movement of rigid bodies, Helmholtz was able to mathematically derive Riemann’s distance formula from a number of axioms about rigid body motion in space. Helmholtz (1868) argued that Riemann’s hypothesis that the metric structure of space is determined locally by a quadratic differential form, is really a consequence of the facts (Tatsachen) of rigid-body motion. Considering the general case of \(n\) dimensions, and using Lie groups and Lie algebras, Sophus Lie, (Lie (1886/1935, 1890a,b)), later developed and improved Helmholtz’s justification. However, the Helmholtz-Lie treatment of, and solution to, the problem of space, lost its relevance with the arrival of Einstein’s theory of general relativity. As Weyl (1922b) points out, instead of a three-dimensional continuum we must now consider a four-dimensional continuum, the metric of which is not positive definite but is given instead by an indefinite quadratic form. In addition, Helmholtz’s presupposition of metric homogeneity no longer holds, since we are now dealing with an inhomogeneous metric field that causally depends on the distribution of matter. Consequently, Weyl provided a reformulation of the space problem that is compatible with the causal and metric structures postulated by the theory of general relativity. But Weyl went further. Such a reformulation should not only incorporate Riemann’s infinitesimal standpoint, as required by Einstein’s general theory, it should also cohere with Weyl’s requirements of a pure infinitesimal geometry developed earlier in the context of Weyl’s construction of a unified field theory. More precisely, Weyl generalized the so-called Riemann-Helmholtz-Lie problem of space in two ways: First, he allowed for indefinite metrics in order to encompass the general theory of relativity. Secondly, he considered metrics with variable gauge \(\lambda(x(p))\) together with an associated length connection, in order to obtain a purely infinitesimal geometry. Thus each member of a general class of geometries under consideration is locally determined relative to a choice of variable gauge by two structural fields (Strukturfelder): (1) a possibly indefinite Finsler metric field^[55] \ (F_{p}(dx)\), and (2) a length connection that is determined by a 1-form field \(\theta_{i}dx^{i}\). Weyl’s task was to prove: If the geometry satisfies the Postulate of Freedom, (the nature of space imposes no restrictions on admissible metrical relations), and determines a unique, symmetric, linear connection \(\Gamma \), then the Finsler metric field \(F_{p}(dx)\) must be a Riemannian metric field of some signature.^[56] In a Riemannian space the concept of parallel displacement is defined by two conditions: 1. The components of a vector remain unchanged during an infinitesimal parallel displacement in a suitably chosen coordinate system (geodesic coordinate).^[57] This condition is satisfied if \[\ begin{matrix} dv^{\,i}_{p} = - \Gamma^{i}_{jk} v^{\,j}_{p} dx^{k}_{p} & \text{and} & \Gamma^{i}_{jk} = \Gamma^{i}_{kj}\,. \end{matrix}\] 2. The length of a vector remains unchanged during an infinitesimal parallel displacement. It follows from these conditions that a Riemannian space possesses a definite symmetric linear connection—a symmetric linear metric connection^[58]—which is uniquely determined by the Pythagorean-Riemannian metric. Weyl calls this: The Fundamental Postulate of Riemannian Geometry: Among the possible systems of parallel displacements of a vector to infinitely near points, that is among the possible sets of symmetric linear connection coefficients, there exists one and only one set, and hence one and only one system of parallel displacement, which is length preserving. In his lectures^[59] on the mathematical analysis of the problem of space delivered in 1922 at Barcelona and Madrid, Weyl sketched a proof demonstrating that the following is also true: Uniqueness of the Pythagorean-Riemannian Metric: Among all the possible infinitesimal metrics that can be put on a differentiable manifold, the Pythagorean-Riemannian metric is the only type of metric that uniquely determines a symmetric linear Weyl begins his proof with two natural assumptions. First, the nature of the metric should be coordinate independent. If \(ds\) is given by an expression \(F_{p}(dx^{1}, \ldots ,dx^{n})\) with respect to a given system of coordinates, then with respect to another system of coordinates, \(ds\) is given by a function that is related to \(F_{p}(dx^{1}, \ldots ,dx^{n})\) by a linear, homogeneous transformation of its arguments \(dx^{i}\). Second, it is reasonable to assume that the nature of the metric is the same everywhere, in the sense that at every point of the manifold, and with respect to every coordinate system for a neighborhood of the point in question, \(ds\) is represented by an element of the equivalence class \([F]\) of functions generated by any one such function, say \(F_{p}(dx^{1}, \ldots ,dx^{n})\), by all linear, homogeneous transformations of its arguments \(dx^{i}\). For the case in which \(F_{p}\) is Pythagorean in form, namely the square root of a positive-definite quadratic form, there exists just one possible equivalence class [\(F\)], because every function that is the square root of a positive-definite quadratic form can be transformed to the standard expression \[\tag{26} F = \left[(dx^{1})^{2} + \cdots + (dx^{n})^{2}\right]^{\bfrac{1}{2}} \] by means of a linear, homogeneous transformation. To every possible equivalence class [\(F\)] of homogeneous functions, there corresponds a type of metrical space. The Pythagorean-Riemannian space, for which \(F^{2}_{p} = (dx^{1})^{2} + \cdots + (dx ^{n})^{2}\), is one among several types of possible metrical spaces. The problem, therefore, is to single out the equivalence class \([F]\), where \(F\) corresponds to \(F^{2}_{p} = (dx^{1})^{2} + \ cdots + (dx^{n})^{2}\), from the other possibilities, and to provide arguments for this preference. By the term ‘metric’ Weyl means any infinitesimal distance function \(F_{p} \in [F]\), where the equivalence class \([F]\) represents a type of metric structure or metric field. Any such type of metric field structure has a microsymmetry group \(G_{p}\) at each \(p \in M\). Definition 4.1 (Microsymmetry Group) A microsymmetry of a structural field (Strukturfeld) at a point \(p \in M\) is a local diffeomorphism that takes \(p \in M\) into \(p\) and preserves the structural field at \(p \in M\). The microsymmetry group of a field at \(p \in M\) is the group of its microsymmetries at \(p \in M\) under the operation of composition. A microsymmetry group \(G_{p}\), at \(p \in M\), of a metric structure, is a set of invertible, linear maps of the tangent space \(T(M_{p})\) onto itself, which preserve the infinitesimal distance function at \(p \in M\). For every \(p \in M, G_{p}\) is isomorphic to one and the same abstract group. For a Riemannian type of metric structure the congruent linear maps of the tangent space T\((M_{p})\) onto itself form a group \(G_{p}\) which is isomorphic to the orthogonal group \(O(n)\). The Pythagorean-Riemannian metric at \(p\) is therefore determined through the concrete realization of the orthogonal group at \(p\) which leaves the fundamental quadratic differential form at \(p\) invariant. Thus the Pythagorean-Riemannian type of metric is characterized by the abstract microsymmetry group \(O(n)\). For a metric which is not of the Pythagorean-Riemannian metric type, the abstract microsymmetry group \(G_{p}\) will be different from \(O(n)\) and will be some other subgroup of \(GL(n)\). At each point of the manifold the microsymmetry group will be a concrete realization of this subgroup of \(GL(n)\). Weyl now states what he calls The Postulate of Freedom: If only the nature (of whatever type) of the metric is specified, that is, if only the corresponding abstract microsymmetry group \(G_{p}\) is specified, and the metric in question is otherwise left arbitrary, then the mutual orientations of the corresponding microsymmetry groups at different points are also left arbitrary. Weyl emphasizes that the Postulate of Freedom provides the general framework for a concise formulation of The Hypothesis of Dynamical Geometry: Whatever the nature or type of the metric may be—provided it is the same everywhere—the variations in the mutual orientations of the concrete microsymmetry groups from point to point are causally determined by the material content that fills space. In contrast with Helmholtz’s analysis, which presupposes the homogeneity of space, the Postulate of Freedom allows for the possibility of replacing Helmholtz’s homogeneity requirement with the possibility of subjecting the metric field to arbitrary, infinitesimal change. To assert this dynamical possibility does not require that the nature of the metric be specified. Next, Weyl points out that what has been provided so far is merely an explication of the concepts metric, length connection and symmetric linear connection. Some claim which goes beyond conceptual analysis has to be made, according to Weyl, in order to prove that among the various types of possible metrical structures that can be put on a differentiable manifold representing physical space, the Pythagorean-Riemannian form is unique. Weyl suggests the following hypothesis: Weyl’s Hypothesis: Whatever determination the essentially free length connection at some point \(p\) of the manifold may realize with the points in its infinitesimal neighborhood, there always exists among the possible systems of parallel displacements of the tangent space \(T(M_{p})\), one and only one, which is at the same time a system of infinitesimal congruent transport. Weyl shows that this hypothesis does in fact single out metrics of the Pythagorean-Riemannian type by proving the following theorem: Theorem 4.2 If a specific length connection is such that it determines a unique symmetric linear connection, then the metric must be of the Pythagorean-Riemannian form (for some signature). Thus the Postulate of Freedom and Weyl’s Hypothesis together entail the existence, at each \(p \in M\), of a non-degenerate quadratic form that is unique up to a choice of gauge at \(p \in M\), and that is invariant under the action of the microsymmetry group \(G_{p}\) that is isomorphic to an orthogonal group of some signature. This formulation suggests, according to Weyl, an intuitive contrast between Euclidean ‘distance-geometry’ and the ‘near-geometry’ (Nahegeometrie) or ‘field-geometry’ of Riemann. Weyl (1949a, 88) compared Euclidean ‘distance-geometry’ to a crystal “built up from uniform unchangeable atoms in the rigid and unchangeable arrangement of a lattice”, and the latter [Riemannian field-geometry] to a liquid, “consisting of the same indiscernible unchangeable atoms, whose arrangement and orientation, however, are mobile and yielding to forces acting upon them.” The nature of the metric field, that is the nature of the metric everywhere, is the same and is, therefore, absolutely determined. It reflects according to Weyl, the a priori structure of space or spacetime. In contrast, what is posteriori, that is, accidental and capable of continuous change being causally dependent on the material content that fills space, are the mutual orientations of the metrics at different points. Hence, the demarcation between the \(a\) priori and the a posteriori has shifted, according to Weyl: Euclidean geometry is still preserved for the infinitesimal neighborhood of any given point, but the coordinate system in which the metrical law assumes the standard form \(ds^{2} =\sum^{n}_{i=1}(dx^{i})^{2}\) is in general different from place to place. Weyl’s a priori and a posteriori distinction must not be confused with Kant’s distinction. Weyl (1949a, 134) remarks: “In the case of physical space it is possible to counterdistinguish aprioristic and aposterioristic features in a certain objective sense without, like Kant, referring to their cognitive source or their cognitive character.” Weyl makes the same remark in (Weyl, 1922b, 266). See also the discussion in §4.5.8. In the context of his group-theoretical analysis, Weyl (1922b, p. 266) makes the following interesting and important statement: I remark from an epistemological point of view: it is not correct to say that space or the world [spacetime] is in itself, prior to any material content, merely a formless continuous manifold in the sense of analysis situs; the nature of the metric [its infinitesimal Pythagorean-Riemannian character] is characteristic of space in itself, only the mutual orientation of the metrics at the various points is contingent, a posteriori and dependent on the material content. Within the context of general relativity, empty spacetime is impossible, if ‘empty’ is understood to mean not merely empty of all matter but also empty of all fields. At another place, Weyl (1949a, Engl. edn, 172) says: Geometry unites organically with the field theory; space is not opposed to things (as it is in substance theory) like an empty vessel into which they are placed and which endows them with far-geometrical relationships. No empty space exists here; the assumption that the field omits a portion of the space is absurd. According to Weyl, the metric field does not cease to exist in a world devoid of matter but is in a state of rest: As a rest field it would possess the property of metric homogeneity; the mutual orientations of the orthogonal groups characterizing the Pythagorean-Riemannian nature of the metric everywhere would not differ from point to point. This means that in a matter-empty universe the metric is fixed. Consequently, the set of congruence relations on spacetime is uniquely determined. Since the metric uniquely determines the symmetric linear connection, the homogeneous metric field (rest field) determines an integrable affine structure. Therefore, a flat Minkowski spacetime consistent with the complete absence of matter is endowed with an integrable connection and thus determines all (hypothetical) free motions. According to Weyl, there exists in the absence of matter a homogeneous metric field, a structural field (Strukturfeld), which has the character of a rest field, and which constitutes an all pervasive background that cannot be eliminated. The structure of this rest field determines the extension of the spacetime congruence relations and determines Lorentz invariance. The rest field possesses no net energy and makes no contribution to curvature. The contrast with Helmholtz and Lie is this: both of them require homogeneity and isotropy for physical space. From a general Riemannian standpoint, the latter characteristics are valid only for a matter-empty universe. Such a universe is flat and Euclidean, whereas a universe that contains matter is inhomogeneous, anisotropic and of variable curvature. It is important to note here that the validity of Weyl’s assertion that the metric field does not cease to exist but is in a state of rest, has its source in the mathematical fact that the metric field is a \(G\)-structure. A \(G\)-structure may be flat or non-flat; but a \(G\)-structure can never vanish. Consequently, geometric fields characterizable as \(G\)-structures, such as the projective, conformal, affine and metric structures, do not vanish.^[60] Riemann searched for the most general type of an \(n\)-dimensional manifold. On this manifold, Euclidean geometry turns out to be a special case resulting from a certain form of the metric. Weyl takes this general structure, the manifold structure, which has certain continuity and order properties, as basic, but leaves the determination of the other geometrical structures, such as the projective, conformal, affine and metric structures, open. The metrical axioms are no longer dictated, as they were for Kant, by pure intuition. According to Weyl (1949a, 87), for Riemann the metric is not, as it was for Kant, “part of the static homogeneous form of phenomena, but of their ever changing material content”. Weyl (1931a, 338) says: We differentiate now between the amorphous continuum and its metrical structure. The first has retained its a priori character,^[61] … whereas the structural field [Strukturfeld] is completely subjected to the power-play of the world; being a real entity, Einstein prefers to call it the ether. There is no indication in Riemann’s work on gravitation and electromagnetism that would indicate that he anticipated the conceptual revolution underlying Einstein’s theory. However, Weyl’s interpretation of Riemann’s work suggests that Riemann foresaw something like its possibility in the following sense: By formally separating the post-topological structures such as the affine, projective, conformal and metric structures from the manifold, so that these structures are no longer rigidly tied to it, Riemann deprived them of their formal geometric rigidity and, on the basis of his infinitesimal geometric standpoint or “near-geometry”, allowed for the possibility of interpreting them as mathematical representations of flexible, dynamical physical structural fields [Strukturfelder] on the manifold of spacetime, geometrical fields that reciprocally interact with matter. Riemann’s separation thesis together with his adoption of the infinitesimal standpoint, were prerequisite steps for the development of differential geometry as the mathematics of differentiable geometric fields on manifolds. When interpreted physically, these mathematical structures or geometrical fields correspond, as Weyl says, to physical structural fields (Strukturfelder). Analogous to the electromagnetic field, these structural fields act on matter and are in turn acted on by matter. Weyl (1931a, 337) remarks: I now come to the crucial idea of the theory of General Relativity. Whatever exerts as powerful and real effects as does the metric structure of the world, cannot be a rigid, once and for all, fixed geometrical structure of the world, but must itself be something real which not only exerts effects on matter but which in turn suffers them through matter. Riemann already suggested for space the idea that the structural field, like the electromagnetic field, reciprocally interacts with matter. Weyl (1931a, 338) continues: We already explained with the example of inertia, that the structural field [Strukturfeld] must, as a close-action [Nahewirkung], be understood infinitesimally. How this can occur with the metric structure of space, Riemann abstracted from Gauss’s theory of curved surfaces. The various geometrical fields are not “intrinsic” to the manifold structure of spacetime. The manifold represents an amorphous four-dimensional differentiable continuum in the sense of analysis situs and has no properties besides those that fall under the concept of a manifold. The amorphous four-dimensional differentiable manifold possesses a high degree of symmetry. Because of its homogeneity, all points are alike; there are no objective geometric properties that enable one to distinguish one point from another. This full homogeneity or symmetry of space must be described by its group of automorphisms, the one-to-one mappings of the point field onto itself which leave all relations of objective significance between points undisturbed. If a geometric object \(F\), that is a point set with a definite relational structure is given, then those automorphisms of space that leave \(F\) invariant, constitute a group and this group describes exactly the symmetry which \(F\) possesses. For instance, to use an example by Weyl (1938b) (see also Weyl (1949a, 72–73) and Weyl (1952)), if \(R(p_{1},p_{2},p_{3})\) is a ternary relation that asserts \(p_{1},p_{2},p_{3}\) lie on a straight line, then we require that any three points, satisfying this relation \(R\), are mapped by an automorphism into three other points \(p_{1}',p_{2}',p_{3}'\), fulfilling the same relation. The group of automorphisms of the \(n\)-dimensional number space contains only the identity map, since all numbers of \(\mathbb{R}^{n}\) are distinct individuals. It is essentially for this reason that the real numbers are used for coordinate descriptions. Whereas the continuum of real numbers consists of individuals, the continua of space, time, and spacetime are homogeneous. Spacetime points do not admit of an absolute characterization; they can be distinguished, according to Weyl, only by “a demonstrative act, by pointing and saying here-now”. In a little book entitled Riemanns geometrische Ideen, ihre Auswirkung und ihre Verknüpfung mit der Gruppentheorie, published posthumously in 1988, Weyl (1988, 4–5) makes this interesting comment: Coordinates are introduced on the Mf [manifold] in the most direct way through the mapping onto the number space, in such a way, that all coordinates, which arise through one-to-one continuous transformations, are equally possible. With this the coordinate concept breaks loose from all special constructions to which it was bound earlier in geometry. In the language of relativity this means: The coordinates are not measured, their values are not read off from real measuring rods which react in a definite way to physical fields and the metrical structure, rather they are a priori placed in the world arbitrarily, in order to characterize those physical fields including the metric structure numerically. The metric structure becomes through this, so to speak, freed from space; it becomes an existing field within the remaining structure-less space. Through this, space as form of appearance contrasts more clearly with its real content: The content is measured after the form is arbitrarily related to coordinates. By mapping a given spacetime homeomorphically onto the real number space, providing through the arbitrariness of the mapping, what Weyl calls, a qualitatively non-differentiated field of free possibilities—the continuum of all possible coincidences—we represent spacetime points by their coordinates corresponding to some coordinate system. The four-dimensional arithmetical space can be utilized as a four-dimensional schema for the localization of events of all possible “here-nows”. Physical dynamical quantities in spacetime, such as the geometrical structural fields on the four-dimensional spacetime continuum, are describable as functions of a variable point which ranges over the four-dimensional number space \(\mathbb{R}^{4}\). Instead of thinking of the spacetime points as real substantival entities, and any talk of fields as just a convenient way of describing geometrical relations between points, one thinks of the geometrical fields such as the projective, conformal causal, affine and metric fields, as real physical entities with dynamical properties, such as energy, momentum and angular momentum, and the field points as mere mathematical abstractions. Spacetime is not a medium in the sense of the old ether concept. No ether in that sense exists here. Just as the electromagnetic fields are not states of a medium but constitute independent realities which are not reducible to anything else, so, according to Weyl, the geometrical fields are independent irreducible physical fields.^[62] A class of geometric structural fields of a given type is characterized by a particular Lie group. A geometric structural field belonging to a given class has a microsymmetry group (see definition 4.1) at each point \(p \in M\) which is isomorphic to the Lie group that is characteristic of the class. In relativity theory, this microsymmetry group is isomorphic to the Lorentz group and leaves invariant a pseudo-Riemannian metric of Lorentzian signature. The different types of geometric, structural fields may be represented from a modern mathematical point of view as cross sections of appropriate fiber bundles over the manifold \(M\); that is, the amorphous manifold \(M\) has associated with it various geometric fields in terms of a mapping of a certain kind (called a cross section) from the manifold \(M\) to the corresponding bundle space over \(M\).^[63] In particular, Einstein’s general theory of relativity postulates a physical field, the metrical field, which, mathematically speaking, may be characterized as a cross section of the bundle of non-degenerate, second-order, symmetric, covariant tensors of Lorentz signature over \(M\). Weyl (1931a, 336) says of this world structure: However this structure is to be exactly and completely described and whatever its inner ground might be, all laws of nature show that it constitutes the most decisive influence on the evolution of physical events: the behavior of rigid bodies and clocks is almost exclusively determined through the metric structure, as is the pattern of the motion of a force-free mass point and the propagation of a light source. And only through these effects on the concrete natural processes can we recognize this structure. The views of Weyl are diametrically opposed to geometrical conventionalism and some forms of relationalism. According to Weyl, we discover through the behavior of physical phenomena an already determined metrical structure of spacetime. The metrical relations of physical objects are determined by a physical field, the metric field, which is represented by the second rank metric tensor field. Contrary to geometric conventionalism, spacetime geometry is not about rigid rods, ideal clocks, light rays or freely falling particles, except in the derivative sense of providing information about the physically real metric field which, according to Weyl, is as physically real as is the electromagnetic field, and which determines and explains the metrical behavior of congruence standards under transport. The metrical field has physical and metrical significance, and the metrical significance does not consist in the mere articulation of relations obtaining between, say, rigid rods or ideal clocks. The special and general, as well as the non-relativistic spacetime theories postulate various structural constraints which events are held to satisfy. When interpreted physically, these mathematical structures or constraints correspond to physical structural fields (Strukturfelder). Analogous to the electromagnetic field, these structural fields act on matter and are, within the context of the general theory of relativity, in turn acted on by matter. An \(n\)-dimensional manifold \(M\) whose sole properties are those that fall under the concept of a manifold, Weyl (1918b) physically interprets as an \(n\)-dimensional empty world, that is, a world empty of both matter and fields. On the other hand, an \(n\)-dimensional manifold \(M\) that is an affinely connected manifold, Weyl physically interprets as an \(n\)-dimensional world filled with a gravitational field, and an \(n\)-dimensional manifold \(M\) endowed with a projective structure represents an \(n\)-dimensional non-empty world filled with an inertial-gravitational field, or what Weyl calls the guiding field Führungsfeld). In a similar vein, an \(n\)-dimensional manifold \(M\) that possesses a conformal structure of Lorentz type, represents a non-empty \(n\)-dimensional world filled with a causal field. Finally, an \(n\)-dimensional manifold \(M\) endowed with a metrical structure may be interpreted physically as an \(n\)-dimensional non-empty world filled with a metric field. The mathematical model of physical spacetime is the four-dimensional pseudo-Riemannian manifold. Weyl (1921c) distinguished between two primitive substructures of that model: the conformal and projective structures and showed that the conformal structure, modelling the causal field governing light propagation, and the projective structure, modelling the inertial or guiding field governing all free (fall) motions, uniquely determine the metric. That is, Weyl (1921c) proved Theorem 4.3 The projective and conformal structure of a metric space determine the metric uniquely. A metric \(g\) on a manifold determines a first-order conformal structure on the manifold, namely, an equivalence class of conformally related metrics \[\tag{27} [g] = \{\overline{g} \mid \overline{g} = e^{\theta} g \}. \] A metric \(g\) also uniquely determines a symmetric linear connection \(\Gamma\) on the manifold. Under a conformal transformation \[\tag{28} g \rightarrow e^{\theta} g = \overline{g}, \] the change of the components of the symmetric linear connection is given by (14), that is, \[\tag{29} \Gamma^{i}_{jk} \rightarrow \overline{\Gamma}^{i}_{jk} = \Gamma^{i}_{jk} + \frac{1}{2}(\delta^{i}_{j}\theta_{k} + \delta^{i}_{k}\theta_{j} - g_{jk}g^{ir}\theta_{r}). \] Thus the set of all arbitrary conformal transformations of the metric induces an equivalence class \(K\) of conformally related symmetric linear connections. This equivalence class \(K\) constitutes a second-order conformal structure on the manifold and the difference between any two connections in the equivalence class is given by (29). Weyl shows that a conformal transformation (29) preserves the projective structure and hence is a projective transformation (that is, a conformal transformation which also satisfies (7)), if and only if \(\theta_{j} = 0\), in which case the conformal and projective structures are compatible. Weyl remarks after the proof: If it is possible for us, in the real world, to discern causal propagation, and in particular light propagation, and if moreover, we are able to recognize and observe as such the motion of free mass points which follow the guiding field, then we are able to read off the metric field from this alone, without reliance on clocks and rigid rods. Elsewhere, Weyl (1949a, 103) says: As a matter of fact it can be shown that the metrical structure of the world is already fully determined by its inertial and causal structure, that therefore mensuration need not depend on clocks and rigid bodies but that light signals and mass points moving under the influence of inertia alone will suffice. The use of clocks and rigid rods is, within the context of either theory, an undesirable makeshift for two reasons. First, since neither spatial nor temporal intervals are invariants of the four-dimensional spacetime of the special theory of relativity and the general theory of relativity, the invariant spacetime interval \(ds\) cannot be directly ascertained by means of standard clocks and rigid rods. Second, the concepts of a rigid body and a periodic system (such as pendulums or atomic clocks) are not fundamental or theoretically self-sufficient, but involve assumptions that presuppose quantum theoretical principles for their justification and thus lie outside the present conceptual relativistic framework. Therefore, methodological and ontological considerations decidedly favor Weyl’s causal-inertial method for determining the spacetime metric. From the physical point of view, Weyl emphasized the roles of light propagation and free (fall) motion in revealing the conformal-causal and the projective structures respectively. However, from the mathematical point of view, Weyl did not use these two structures directly in order to derive from them and their compatibility relation, the metric field. Rather, Weyl regarded the metric and affine structures as fundamental and showed that the conformal and the projective structures respectively arise from those structures by mathematical abstraction. Figure 7: Weyl took the metric and affines structures as fundamental and showed that the conformal and projective structures respectively arise from them by mathematical abstraction. Ehlers et al. (1972) generalized Weyl’s causal-inertial method by deriving the metric field directly from the conformal and projective fields and derived a unique pseudo-Riemannian spacetime metric solely as a consequence of a set of natural, physically well-motivated, constructive, “geometry-free” axioms concerning the incidence and differential properties of light propagation and free (fall) motion. Ehlers, Pirani and Schild adopt Reichenbach’s (1924) term, constructive axiomatics to describe the nature of their approach. The “geometry-free” axioms are propositions about a few general qualitative assumptions concerning free (fall) motion and light propagation that can be verified directly through experience in a way that does not presuppose the full blown edifice of the general theory of relativity. From these axioms, the theoretical basis of the theory is reconstructed step by step. The constructive axiomatic approach to spacetime structure is roughly this: 1. Primitive Notions. The constructive axiomatic approach is based on a triple of sets \[ \langle M, \mathcal{P}, \mathcal{L}\rangle \] of objects corresponding respectively to the notions of events, particle paths and light rays, which are taken as primitive. The set \(M\) of events is assumed to have a Hausdorff topology with a countable basis in order to state local axioms through the use of such terms as “neighborhood”. Members of the sets \(\mathcal{P} = \{P, Q, P_{1}, Q_{1}, \ldots \}\) and \(\mathcal{L} = \{ L, N, L_{1}, N_{1}, \ldots \}\) are subsets of \(M\) that represent the possible or actual paths of massive particles and light rays in spacetime. 2. Differential Structure. The differential structure is not presupposed; rather through the first few axioms a differential-manifold structure is introduced on the set of events \(M\) that is sufficient for the localization of events by means of local coordinates, such as radar coordinates. Once \(M\) is given a differential-manifold structure through the introduction of local radar coordinates by means of particles and light rays (such that any two radar coordinates are smoothly related to one another), one can do calculus on \(M\) and one may speak of tangent and direction It is important to emphasize that the members of \(\mathcal{P}\) represent possible or actual paths of arbitrary massive particles that may have some internal structure such as higher order gravitational and electromagnetic multipole moments and that may therefore interact in complicated ways with various physical fields. In order to constructively establish the projective structure of spacetime, it is necessary to single out a subset of \(\mathcal{P}\), namely \(\mathcal{P}_{f}\), the set of possible or actual paths of spherically symmetric, electrically neutral particles (that is, the world lines of freely falling particles). However, the set \(\mathcal{P}_{f} \subset \mathcal{P}\), can be properly characterized only after a coordinate system (differential structure) is available. Consequently, one must employ arbitrary particles in the statement of those axioms that lead to the local differential structure of spacetime. 3. Second-Order Conformal Structure. The Law of Causality asserts the existence of a unique first-order conformal structure on spacetime (27), that is, a field of infinitesimal light cones. Only null one-directions are determined. Therefore no special choice of parameters along light rays is determined by this structure. The first-order conformal structure can be measured using only the local differential-topological structure. Moreover, by a purely mathematical process involving only differentiation, the first-order conformal structure determines a second-order conformal structure, namely, an equivalence class \(K\) of conformally related symmetric linear connections. 4. Projective Structure. The motions of freely falling particles governed by the guiding field reveal the geodesics of spacetime, that is, the geodesics corresponding to an equivalence class \(\Pi\) of projectively equivalent symmetric linear connections. Only geodesic one-directions are determined, that is, no special choice of parameters is involved in characterizing free fall motion. 5. Compatibility between the Conformal and Projective Structures. That the conformal and projective structures are compatible is suggested by high energy experiments, according to Ehlers, Pirani and Schild: “A massive particle \((m \gt 0)\), though always slower than a photon, can be made to chase a photon arbitrarily closely.” Ehlers, Pirani and Schild therefore assume an axiom of compatibility between the conformal and projective structures, and this leads to a Weyl space: If the projective and conformal structures are compatible, then the intersection \[ \Pi \cap K = \Gamma\ \text{ (Weyl connection)} \] of the equivalence class \(K\) of conformally equivalent symmetric linear connections, and the equivalence class \(\Pi\) of projectively equivalent symmetric linear connections, contains a unique symmetric linear connection, a Weyl connection. Thus light propagation and free (fall) motion reveal on spacetime a unique Weyl connection which determines the parallel transport of vectors, preserving their timelike, null or spacelike character, and for any pair of non-null vectors, the Weyl connection leaves invariant the ratio of their lengths and the angle between them, provided the vectors are transported along the same path. 6. Pseudo-Riemannian Metric. Since length transfer is non-integrable (i.e., path-dependent) in a Weyl space, a Weyl geometry reduces to a pseudo-Riemannian geometry if and only if Weyl’s length-curvature (Streckenkrümmung) tensor equals zero, in which case the length of a vector is path-independent under parallel transport, and there exists no second clock effect. Can it be argued that Ehlers, Pirani and Schild’s generalization of Weyl’s causal-inertial method for determining the spacetime metric constitutes a convention-free, and – in relevant respects – theory-independent body of evidence that can adjudicate between spacetime geometries, and hence between spacetime theories that postulate them? As Weyl showed, we can empirically determine the metric field, provided certain epistemic conditions are satisfied, that is, provided we can measure the conformal-causal structure, and provided “we are able to recognize and observe as such the motion of free mass points which follow the guiding field.” Criticisms of Ehlers, Pirani and Schild’s constructive axiomatics suggest that the causal-inertial method is not convention-free and that it is ineffective epistemologically in providing a possible solution to the controversy between geometrical realism and conventionalism in favor of realism. Basically, all of the charges laid against Ehlers, Pirani and Schild’s constructive axiomatics concentrate on the roles which massive particles play in their construction. One of the constructive axioms employed by Ehlers, Pirani and Schild, the projective axiom, is a statement of the infinitesimal version of the Law of Inertia, the law of free (fall) motion which contains Newton’s first law of motion as a special case in the absence of gravitation. Since Ehlers, Pirani and Schild do not provide an independent, non-circular criterion by which to characterize free (fall) motion, their approach has been charged with circularity by philosophers such as Grünbaum (1973), Salmon (1977), Sklar (1977) and Winnie (1977). The problem is a familiar one; how to introduce a class of preferred motions, that is, how to characterize that particular path structure that would govern the motions of free particles \(\mathcal{P} _{f}\), that is, neutral, spherically symmetric, non-rotating test bodies, while avoiding the circularity problem surrounding the notion of a free particle: The only way of knowing when no forces act on a body is by observing that it moves as a free particle along the geodesics of spacetime. But how, without already knowing the geodesics or the projective structure of spacetime is it possible to determine which particles are free and which are not? And to determine the projective structure of spacetime it is necessary to use free particles. Coleman and Korté (1980) have addressed these and related difficulties by providing a non-conventional procedure for the empirical determination of the projective structure.^[64] It is worth emphasizing that Weyl’s approach to differential geometry, in which the affine, projective and conformal structures are treated in their own right rather than as mere aspects of the metric, was instrumental for his discovery of the non-circular and non-conventional geodesic method for the empirical determination of the spacetimet metric. The old notion of a ‘geodesic path’ had its inception in the context of classical metrical geometry and ‘geodesicity’ was characterized in terms of extremal paths of curves, which presupposed a metric. It was Weyl’s metric-independent construction of the symmetric linear connection that led him to introduce the geometry of paths and the metric-independent characterization of a geodesic path in terms of the process of autoparallelism of its tangent direction. Weyl provided a general conceptual/mathematical clarification of the concept of motion that applies to any spacetime theory that is based on a differential manifold. In particular, Weyl’s penetrating analysis shows that Einstein’s understanding of the role and significance of Mach’s Principle for the general theory of relativity and cosmology is actually inconsistent with the basic principles of general relativity. Weyl’s major contribution to cosmology is known as “Weyl’s Hypothesis”. The name was coined by Weyl (1926d) himself in an article in the Encyclopedia of Britannica.^[65] According to Weyl’s Postulate, the worldlines of all galaxies are non-intersecting diverging geodesics that have a common origin in the distant past. From this system of worldlines Weyl derived a common cosmic time. On the basis of his postulate, Weyl (1923c, Appendix III) was also the first to show that there is an approximately linear relation between the redshift of galactic spectra and distance. Weyl had basically discovered Hubble’s Law six years prior to Hubble’s formulation of it in 1929. Another contribution to cosmology is Weyl’s (1919b) spherically symmetric static exact solution to Einstein’s linearized^[66] field equations. There are essentially two ways to understand Mach’s Principle: (1) Mach’s Principle rejects the absolute character of the inertial structure of spacetime, and (2) Mach’s Principle rejects the inertial structure of spacetime per se. Version (2) might be characterized as Leibnizian relativity or body relationalism; that is, one understands by relative motion the motion of bodies with respect only to other observable bodies or observable bodily reference frames. The relative motion of a body with respect to absolute space or to the inertial structure of space (Newton) or spacetime is ruled out on epistemological and/or metaphysical grounds. In the context of his general theory of relativity, what Einstein is objecting to in Newtonian Mechanics, and by implication, the theory of special relativity, is the absolute character of the inertial structure; he is not asserting its fictitious character. That is, the general theory of relativity incorporates Mach’s Principle as expressed in version (1) by treating the inertial structure as dynamical and not as absolute. However, Einstein also tried to extend and generalize the special theory of relativity by incorporating version (2) of Mach’s Principle into the general theory of relativity. Einstein was deeply influenced by Mach’s empiricist programme and accepted Mach’s insistence on the primacy of observable facts of experience: only observable facts of experience may be invoked to account for the phenomena of motion. As a consequence, Einstein restricted the concept of relative motions to relative motions between bodies. Newton thought that the plane of Foucault’s pendulum remains aligned with respect to absolute space. Since the fixed stars are at rest with respect to absolute space the plane of Foucault’s pendulum remains aligned to them as well, and rotates relative to the earth. But according to Einstein, Newton’s intermediary notion of absolute space is as questionable as it is unnecessary in explaining the behaviour of Foucault’s pendulum. Not absolute space, but the actually existing masses of the fixed stars of the whole cosmos guide the plane of Foucault’s pendulum. Einstein (1916) argued that the general theory of relativity removes from the special theory of relativity and Newton’s theory an inherent epistemological defect. The latter is brought to light by Mach’s paradox, namely, Einstein’s example of two fluid bodies, \(A\) and \(B\), which are in constant relative rotation about a common axis. With regard to the extent to which each of the spheres bulges at its equator, infinitely many different states are possible although the relative rotation of the two bodies is the same in every case. Einstein considered the case in which \(A\) is a sphere and \(B\) is an oblate spheroid. The paradox consists in the fact that there is no readily discernible reason that accounts for the fact that one of the bodies bulges and the other does not. According to Einstein, an epistemological satisfactory solution to this paradox must be based on ‘an observable fact of experience’. Einstein wanted to implement a Leibnizian-Machian relational conception of motion according to which all motion is to be interpreted as the motion of some bodies in relation to other bodies. Einstein wished to extend the body-relative concept of uniform inertial motion to the concept of a body-relative accelerated motion. Weyl was very critical of Einstein’s attempt to incorporate version (2) of Mach’s Principle into the theory of general relativity and relativistic cosmology because he considered the Leibnizian-Machian relational conception of motion—according to which all motion is to be interpreted as the motion of some bodies in relation to other bodies—to be an incoherent notion within the context of the general theory of relativity. In a paper entitled Massenträgheit und Kosmos. Ein Dialog [Inertial Mass and Cosmos. A Dialogue] Weyl (1924b) articulates his overall position on the concept of motion and the role of Mach’s Principle in general relativity and cosmology.^[67] Weyl defines Mach’s Principle as follows: M (Mach’s Principle): The inertia of a body is determined through the interaction of all the masses in the universe. Weyl (1924b) then makes the observation that the kinematic principle of relative motion is by itself without any content, unless one also makes the additional physical causal assumption that C (Physical Causality): All events or processes are uniquely causally determined through matter, that is, through charge, mass and the state of motion of the elementary particles of matter.^[68] The underlying motivation for assumption \(\mathbf{C}\) of physical causality is essentially Mach’s empiricist programme, namely, Mach’s insistence on the primacy of observable facts of experience. Addressing Einstein’s formulation of Mach’s paradox, Weyl (1924b) says: Only if we conjoin the kinematic principle of relative motion with the physical assumption \(\mathbf{C}\) does it appear groundless or impossible on the basis of the kinematic principle that in the absence of any external forces a stationary body of fluid has the form of a sphere “at rest”, while on the other hand it has the form of a “rotating” flattened ellipsoid. Weyl rejects principle \(\mathbf{C}\) of physical causality because he denies the feasibility of \(\mathbf{M}\) (Mach’s Principle), as defined above, on \(a\) priori^[69] grounds. According to Weyl (1924b) The concept of relative motion of several isolated bodies with respect to each other is as untenable according to the theory of general relativity as is the concept of absolute motion of a single Weyl notes that what we seem to observe as the rotation of the stars, is in reality not the rotation of the stars themselves but the rotation of the “star compass” [Sternenkompass] which consists of light signals from the stars that meet our eyes at our present location from a certain direction. It is crucial, Weyl reminds us, to be cognisant of the existence of the metric field between the stars and our eyes. This metric field determines the propagation of light, and, like the electromagnetic field, it is capable of change and variation. Weyl (1924b) says that “the metric field is no less important for the direction in which I see the star then is the location of the star itself.” How is it possible, Weyl asks, to compare within the context of the general theory of relativity, the state of motions of two separate bodies? Of course, Weyl notes, prior to the general theory of relativity, during Mach’s time, one could rely on a rigid frame of reference such as the earth, and indefinitely extend such a frame throughout space. One could then postulate the relative motion of the stars with respect to this frame. However, under the hands of Einstein the coordinate system has lost its rigidity to such a degree, that it can always “cling to the motion of all bodies simultaneously”; that is, whatever the motions of the bodies are, there exists a coordinate system such that all bodies are at rest with respect to that coordinate system. Weyl then clarifies and illustrates the above with the plasticine example, which Weyl (1949a, 105) elsewhere describes as follows: Incidentally, without a world structure the concept of relative motion of several bodies has, as the postulate of general relativity shows, no more foundation than the concept of absolute motion of a single body. Let us imagine the four-dimensional world as a mass of plasticine traversed by individual fibers, the world lines of the material particles. Except for the condition that no two world lines intersect, their pattern may be arbitrarily given. The plasticine can then be continuously deformed so that not only one but all fibers become vertical straight lines. Thus no solution of the problem is possible as long as in adherence to the tendencies of Huyghens and Mach one disregards the structure of the world. But once the inertial structure of the world is accepted as the cause for the dynamical inequivalence of motions, we recognize clearly why the situation appeared so unsatisfactory. … Hence the solution is attained as soon as we dare to acknowledge the inertial structure as a real thing that not only exerts effects upon matter but in turn suffers such effects. Figure 8: Weyl’s plasticine example Applying these considerations to the fixed stars and assuming that it is possible that the (conformal) metrical field which determines the cones of light propagation (light cones) at each point of the plasticine, is carried along by the continuous transformation of the plasticine, then both the earth and the fixed stars will be at rest with respect to the plasticine’s coordinate system. Yet despite this the “star compass” is rotating with respect to the earth, exactly as we observe! Employing the concept of the microsymmetry group (definition 4.1), Coleman and Korté (1982) have analyzed Weyl’s plasticine example in the following way: Consider a space-time manifold equipped only with a differentiable structure, the plasticine of Weyl’s example. Then our spacetime does not have an affine, conformal, projective or metric structure defined on it. In such a world it is possible do define curves and paths; however, there are no preferred curves or paths. Since there is only the differentiable structure, one may apply any diffeomorphism; that is, all diffeomorphisms preserve this structure; consequently, in the absence of a post-differential-topological structure, the microsymmetry group at any event \(p\) is an infinite-parameter group isomorphic to the group of all invertible formal power series in four variables. If there is no post-differentiable topological geometric field in the neighbourhood of a point, then all of these infinite parameters may be chosen freely within rather broad limits. Clearly then, given an infinite number of parameters, one can, as Weyl says, straighten out an arbitrary pattern of world lines (fibers) in the neighbourhood of any event. Now suppose that there exists a post-differentiable topological geometric field, namely, the projective structure at any event of spacetime. Then the microsymmetry group that preserves that structure is a 20-parameter Lie group (see Coleman and Korté (1981)). Thus instead of an infinity of degrees of freedom, only twenty degrees of freedom may be used to actively deform the neighbouring region of spacetime. The fact that only a finite number of parameter are available prevents an arbitrary realignment of the worldlines of material bodies in the neighbourhood of any given event. Other post-differential topological geometrical field structures are similarly restrictive. For example, the microsymmetry group of the conformal structure, which determines the causal structure of spacetime, permits 7 degrees of freedom (6 Lorentz transformations and a dilatation), and permits four more degrees of freedom in second order. Consequently, the existence of the conformal metrical field which determines at each point the cones of light propagation would prevent an arbitrary realignment of light-like fibers, that is, it would be impossible to realign the earth and the fixed stars such that both are at rest with the coordinate system of the plasticine. Weyl’s plasticine example shows that the Leibnizian-Machian view of relative motion, namely the view according to which all motion must be defined as motion relative to bodies, is self-defeating in the general theory of relativity. The fact that a stationary, homogeneous elastic sphere will, when set in rotation, bulge at the equator and flatten at the poles is, according to Weyl (1924b), to be accounted for in the following way. The complete physical system consisting of both the body and the local inertial-gravitational field is not the same in the two situations. The cause of the effect is the state of motion of the body with respect to the local inertial-gravitational field, the guiding field, and is not, indeed as Weyl’s plasticine example shows, cannot be the state of motion of the body relative to other bodies. To attribute the effect as Einstein and Mach did to the rotation of the body with respect to the other bodies in the universe is, according to Weyl, to endorse a remnant of the unjustified monopoly of the older body ontology, namely, the sovereign right of material bodies to play the role of physically real and acceptable causal agents.^[70] Weyl’s view that there must be an inertial structure field on spacetime, which governs material bodies in free motion, follows from the mathematical nature of the coordinate-transformation laws for acceleration. In a world equipped with only a differential structure, it is possible to do calculus; one can define curves and paths and differentiate, etc. However, as was already pointed out, in such a world, the world of Weyl’s plasticine example, there would be no preferred curves or paths. Consequently, the motion of material bodies would not be predictable. However, experience overwhelmingly indicates that the acceleration of a massive body cannot be freely chosen. In particular, consider a simple type of particle, a monopole (unstructured) particle. Experience overwhelmingly tells us that such a particle is characterized by the fact that at any event on its world line, its velocity at that event is sufficient to determine its acceleration at that event. Predictability of motion, therefore, entails that corresponding to every type of massive monopole, there exists a geometric structure field, or what Weyl calls a Strukturfeld that governs the motion of that type of particle. The basic reason which explains this brute fact of experience is a simple mathematical fact about how the acceleration of bodies transforms under a coordinate transformation. Moreover, this simple mathematical fact, involving no more than the basic techniques of partial differentiation, holds in all relativistic, non-relativistic, curved or flat, dynamic or non-dynamic spacetime theories that are based on a local differential topological structure, the minimal structure required for the possibility of assigning arbitrary local coordinates on a differential manifold. Transformation law for acceleration: The transformation law for acceleration is linear, but is not homogeneous in the acceleration variable. As an example consider the transformation laws for the 4-velocity and the 4-acceleration. Recall that a curve in the four-dimensional spacetime manifold \(M\) is a map \(\gamma : \mathbb{R} \ rightarrow M\). For convenience we restrict our attention to those curves which satisfy \(\gamma(0) = p\). If we set \(\gamma^{i} = x^{i} \circ \gamma(0)\), then the components of the 4-velocity and 4-acceleration at \(p \in M\) are respectively given by \[\tag{30} \gamma^i_1 =_{def} \frac{d}{dt}\gamma^i(0), \] \[\tag{31} \gamma^i_2 =_{def} \frac{d^2}{dt^2}\gamma^i(0), \] The transformation laws of the 4-velocity components \(\gamma^{i}_{1}\) and of the 4-acceleration components \(\gamma^{i}_{2}\) under a change of coordinate chart from \((U,x)_{p}\) to \((\overline {U}, \overline{x})_p\), follow from their pointwise definition. From \[ \overline{\gamma}^i(t) = \overline{X}^i(\gamma^i(t)), \] where \(\overline{X}^{i} = \overline{x}^i \circ x^{- 1}\), one obtains the transformation law for the 4-velocity and the 4-acceleration respectively: \[ \tag{32} \overline{\gamma}^i_1 &= \overline{X}^i_j \gamma^{\,j}_1 \\ \tag{33} \overline{\gamma}^i_2 &= \overline{X}^i_j \gamma^{\,j}_2 + \overline{X}^i_{jk} \gamma^{\,j}_1 \gamma^k_1. \] The \(\overline{X}^{i}_{j}\) and \(\overline{X}^{i}_{jk}\) denote the first and second partial derivatives of \(\overline{X}^{i}(x^{i})\) at \(x^{i} (p)\), namely, \[ \frac{\partial \overline{x}^i}{\partial x^{j}} \text{ and } \frac{\partial \overline{x}^i}{\partial x^{j} \partial x^{k} } \] The expression \(\overline{X}^{i}_{jk}\gamma^{\,j}_{1}\gamma^{k}_{1}\) in equation (33) represents the inhomogeneous term of the transformation of the 4-acceleration. The inhomogeneity of the transformation law entails that a 4-acceleration that is zero with respect to one coordinate system is not zero with respect to another coordinate system. This means that there does not exist a unique standard of zero 4-acceleration that is intrinsic to the differential topological structure of spacetime. Moreover, even the difference of the 4-accelerations of two bodies at the same spacetime point has no absolute meaning, unless their 4-velocities happen to be the same. This shows that while the differential topological structure of spacetime gives us sufficient structure to do calculus and to derive the transformation laws for 4-velocities and 4-accelerations by way of simple differentiation, it does not provide sufficient structure with which to determine a standard of zero 4-acceleration. Therefore, as Weyl repeatedly emphasized, no solution to the problem of motion is possible, unless “we dare to acknowledge the inertial structure as a real thing that not only exerts effects upon matter but in turn suffers such effects”. In other words there must exist a structure in addition to the differential topological structure in the form of a geometric structure field, or in Weyl’s words, geometrisches Strukturfeld, which constitutes the inertial structure of spacetime, and which provides the standard of zero 4-acceleration. Since this field provides the standard of zero 4-acceleration we can call it a geodesic 4-acceleration field, or simply, geodesic acceleration field. A particle in free motion is one that is exclusively governed by this geodesic acceleration field. An acceleration field, geodesic or non-geodesic, can be constructed in the following way. Since the terms that are independent of the 4-acceleration depend on both the spacetime location and on the corresponding 4-velocity of the particle, it is necessary to specify a geometric field standard for zero 4-acceleration that also depends on those independent variables. The transformation law for a 4-acceleration field can be obtained from (33) by replacing \(\overline{\gamma}^{i}_{2}\) by \(\overline{A}^{i}_{2}(\overline{x}^{i},\overline{\gamma}^{i}_{1})\) and \(\ gamma^{j}_{2}\) by \(A^{j}_{2}(x^{i}, \gamma^{i}_{1}\)) to yield \[ \overline{A}^{i}_{2}(\overline{x}^{i}, \overline{\gamma}^{i}_{1}) = \overline{X}^{i}_{j} A^{j}_{2}(x^{i},\gamma^{i}_{1}) + \overline{X}^{i}_{jk}\gamma^{j}_{1}\gamma^{k}_{1}. \] The important special case for which the function \(A^{i}_{2}(x^{i}, \gamma^{i}_{1})\) is a geodesic 4-acceleration field corresponds to the affine structure of spacetime. For this special case the function \(A^{i}_{2}(x^{i}, \gamma^{i}_{1}\)) is denoted by \(\Gamma^{i}_{2}(x^{i}, \gamma^{i}_{1}\)) and is given by \[ \Gamma^{i}_{2}(x^{i},\gamma^{i}_{1}) = -\Gamma^{i}_{jk}(x^{i},\gamma^{i}_{1}) \gamma^{j}_{1}\gamma^{k}_{1}. \] The familiar transformation law for the affine structure (geodesic 4-acceleration field) is then given by \[\tag{34} \overline{\Gamma}^{i}_{2}(\overline{x}^{1},\overline{\gamma}^{i}_{1}) = \overline{X}^{i}_{j}\Gamma^{j}_{2}(x^{i},\gamma^{i}_{1}) +\overline{X}^{i}_{jk}\gamma^{j}_{1}\gamma^{k}_{1}. \] Note that the inhomogeneous term \(\overline{X}^{i}_{jk}\gamma^{j}_{1}\gamma^{k}_{1}\) of the geodesic 4-acceleration field is identical to the inhomogeneous term of the transformation law (33) for the 4-acceleration of body motion. The differences \[\tag{35} \overline{\gamma}^{i}_{2} - \overline{\Gamma}^{i}_{2}(\overline{x}^{i}, \overline{\gamma}^{i}_{1}) = \overline{X}^{i}_{j}(\gamma^{j}_{2} - \Gamma^{j}_{2}(x^{i},\gamma^{i}_{1})) \] then transform linearly and homogeneously; consequently, the vanishing or non-vanishing of body accelerations relative to the standard of zero acceleration provided by the geodesic 4-acceleration field (the affine structure), is coordinate independent. That is, the 4-accelerations of bodies and the corresponding 4-forces, are tensorial quantities in concordance with experience. The above argument for the necessity of geometric fields also holds for 3-velocity and 3-acceleration, denoted respectively by \(\xi^{\alpha}_{1}\) and \(\xi^{\alpha}_{2}\). The transformation law for the 3-acceleration is much more complicated than that of the 4-acceleration. However, analogous to the case of 4-acceleration, the transformation law of 3-acceleration is linear and is inhomogeneous in the 3-acceleration variable \(\xi^{\alpha}_{2}\). Consequently, there does not exist a unique standard of zero 3-acceleration that is intrinsic to the differential topological structure of spacetime. The standard of zero 3-acceleration must be provided by a geodesic 3-acceleration field or geodesic directing field, or what Weyl calls the guiding field. The guiding field is also referred to as the projective structure of spacetime and is denoted by \(\Pi^{\alpha}_{2}(x^{i}, \xi^{\alpha}_{1}\)). It is a function of spacetime location and the 3-velocity, both variables of which are independent of the 3-acceleration, as is required. Since the transformation law of the projective structure \(\Pi^{\alpha}_{2}(x^{i}, \xi^{\alpha}_{1}\)) has the same inhomogeneous form as the 3-acceleration \(\xi^{\alpha}_{2}\), the difference \[\tag{36} \xi^{\alpha}_{2} - \Pi^{\alpha}_{2}(x^{i}, \xi^{\alpha}_{1}) \] also transforms linearly and homogeneously. The components \(\gamma^{i}_{2}\) and \(\xi^{\alpha}_{2}\) of the 4-acceleration and 3-acceleration can be thought of as the dynamic descriptors of a material body. On the other hand, the components \(\Gamma^{i}_{2}(x^{i},\gamma^{i}_{1}\)) and \(\Pi^{\alpha}_{2}(x^{i}, \xi^{\alpha}_{1}\)) of the geodesic acceleration field, and the geodesic directing field, respectively, are field quantities. The differences \[\tag{37} \gamma^{i}_{2} - \Gamma^{i}_{2}(x^{i},\gamma^{i}_{1}) \] \[\tag{38} \xi^{\alpha}_{2} -\Pi^{\alpha}_{2}(x^{i}, \xi^{\alpha}_{1}) \] denote the components of a coordinate independent field-body relation.^[71] Weyl (1924b) remarks: We have known since Galileo and Newton, that the motion of a body involves an inherent struggle between inertia and force. According to the old view, the inertial tendency of persistence, the “guidance”, which gives a body its natural inertial motion, is based on a formal geometric structure of the spacetime (uniform motion in a straight line) which resides once and for all in spacetime independently of any natural processes. This assumption Einstein rejects; because whatever exerts as powerful effects as inertia—for example, in opposition to the molecular forces of two colliding trains it rips apart their freight cars—must be something real which itself suffers effect from matter. Moreover, Einstein recognized that the guiding field’s variability and dependence on matter is revealed in gravitational effects. Therefore, the dualism between guidance and force is maintained; but (G) Guidance is a physical field, like the electromagnetic field, which stands in mutual interaction with matter. Gravitation belongs to the guiding field and not to force. Only thus is it possible to explain the equivalence between inertial and gravitational mass. To move from the old conception to the new conception (G) means, according to Weyl (1924b) to replace the geometric difference between uniform and accelerated motion with the dynamic difference between guidance and force. Opponents of Einstein asked the question: Since the church tower receives a jolt in its motion relative to the train just as the train receives a jolt in its motion relative to the church tower, why does the train become a wreckage and not the church tower which it passes? Common sense would answer: because the train is ripped out of the pathway of the guiding field, but the church tower is not. … As long as one ignores the guiding field one can neither speak of absolute nor of relative motion; only if one gives due consideration to the guiding field does the concept of motion acquire content. The theory of relativity, correctly understood, does not eliminate absolute motion in favour of relative motion, rather it eliminates the kinematic concept of motion and replaces it with a dynamic one. The worldview for which Galileo fought is not undermined by it [relativity]; to the contrary, it is more concretely interpreted. It is now possible to provide a reformulation of Newton’s laws of motion which explicitly takes account of Weyl’s field-body-relationalist spacetime ontology, and his analysis of the concept of motion. The law of inertia is an empirically verifiable statement^[72] which says The Law of Inertia: There exists on spacetime a unique projective structure \(\Pi_{2}\) or equivalently, a unique geodesic directing field \(\Pi_{2}\). Free motion is defined with reference to the projective structure \(\Pi_{2}\) as follows: Definition of Free Motion: A possible or actual material body is in a state of free motion during any part of its history just in case its motion is exclusively governed by the geodesic directing field (projective structure), that is, just in case the corresponding segment of its world path is a solution path of the differential equation determined by the unique projective structure of Newton’s second law of motion may be reformulated as follows: The Law of Motion: With respect to any coordinate system, the world line path of a possible or actual material body satisfies an equation of the form \[ m(\xi^{\alpha}_{2} - \Pi^{\alpha}_{2}(x^{i}, \xi^{\alpha}_{1})) = F^{\alpha}(x^{i},\xi^{\alpha}_{1}), \] where \(m\) is a scalar constant characteristic of the material body called its inertial mass, and \(F^{\alpha}(x^{i},\xi^{\alpha}_{1}\)) is the 3-force acting on the body. To emphasize, the Law of Inertia and the Law of Motion, as formulated above, apply to all, relativistic or non-relativistic, curved or flat, dynamic or non-dynamic, spacetime theories. The reason for the general character of these laws consists in the fact that they require for their formulation only the local differential topological structure of spacetime, a structure which is common to all spacetime theories. In addition, as was noted earlier in §4.2, the affine and projective spacetime structures are G-structures. Consequently, they may be flat or non-flat; but they can never vanish. In theories prior to the advent of general relativity, the affine and projective structures were flat. It was common practice, however, to use coordinate systems that were adapted to these flat G-structures. And since in such adapted coordinate systems the components of the affine and projective structures vanish, it was difficult to recognize and to appreciate the existence of these structures, and their important role in providing a coherent account of motion. We saw that Weyl forcefully advocated a field-body ontological dualism, according to which matter and the guiding field are independent physical realities that causally interact with each other: matter uniquely generates the various states of the guiding field, and the guiding field in turn acts on matter. Weyl did not always subscribe to this ontological dualist position. For a short period, from 1918 to 1920, he advocated a pure field theory of matter , developed in 1912 by Gustav Mie, in the context of Einstein’s special theory of relativity: Pure Field Theory of Matter: The physical field has an independent reality that is not reducible to matter; rather, the physical field is constitutive of all matter in the sense that the mass (quantity of matter) of a material particle, such as an electron, consists of a large field energy that is concentrated in a very small region of spacetime. Mie’s theory of matter is akin to the traditional geometric view of matter: matter is passive and pure extension. Weyl (1921b) remarks that he adopted the standpoint of the classical pure field theory of matter in the first three editions of Weyl (1923b) because of its beauty and unity, but then gave it up. Weyl (1931a) points out in the Rouse Ball Lecture that since the theory of general relativity geometrized a physical entity, the gravitational field, it was natural to try to geometrize the whole of physics. Prior to the advent of quantum physics one was justified in regarding gravitation and electromagnetism as the only basic entities of nature and to seek their unification by geometrizing both. One could hope, following the example of Gustav Mie, to construct elementary material particles as knots of energy in the gravitational-electromagnetic field, that is, tiny demarcated regions in which the field magnitudes attain very high values. Already in a letter to Felix Klein,^[73] toward the end of 1920, Weyl indicated that he had finally freed himself completely from Mie’s theory of matter. It now appeared to him that the classical field theory of matter is not the key to reality. In the Rouse Ball Lecture Weyl adduces two reasons for this. First, due to quantum mechanics, there are, in addition to electromagnetic waves, matter waves (Materiewellen) represented by Schrödinger’s wave function \(\psi\). And Pauli and Dirac recognized that \(\psi\) is not a scalar but a magnitude with several components. Thus, from the point of view of the classical field theory of matter not two but three entities would have to be unified. Moreover, given the transformation properties of the wave function, Weyl says it is certain that the magnitude \(\psi\) cannot be reduced to gravitation or electromagnetism. Weyl saw clearly, that this geometric view of matter or physics—which to a certain extent had also motivated his earlier construction and manner of presentation of a pure infinitesimal geometry—was untenable in light of the new developments in atomic physics. The second reason, Weyl says, consists in the radical new interpretation of the wave function, which replaces the concept of intensity with that of probability. It is only through such a statistical interpretation that the corpuscular and atomistic aspect of nature is properly recognized. Instead of a geometric treatment of the classical field theory of matter, the new quantum theory called for a statistical treatment of matter.^[74] Already in 1920, Weyl (1920) addressed the relationship between causal and statistical approaches to physics.^[75] The theory of general relativity, as well as early developments in atomic physics, clearly tell us, Weyl (1921b) suggests, that matter uniquely determines the field, and that there exist deeper underlying physical laws with which modern physics, such as quantum theory, is concerned, which specify “how the field is affected by matter”. That is, experience tells us that matter plays the role of a causal agent which uniquely determines the field, and which therefore has an independent physical reality that cannot be reduced to the field on which it acts. Weyl (1921b, 1924e) refers to his theory of matter as the Agenstheorie der Materie (literally, agent-theory of matter): Matter-Field Dualism (Weyl’s Agens Theory of Matter): Matter and field are independent physical realities that causally interact with each other: matter uniquely generates the various states of the field, and the field in turn acts on matter. To excite the field is the essential primary function of matter. The field’s function is to respond to the action of matter and is thus secondary. The secondary role of the field is to transmit effects (from body to body) caused by matter, thereby in return affecting matter. The view, that matter uniquely determines the field, was a necessary postulate of an opposing ontological standpoint, according to Weyl. The postulate essentially says that Matter is the only thing which is genuinely real. According to this ontological view, held to a certain degree by the younger Einstein and others who advocated a form of Machian empiricism, the field is relegated to play the role of a feeble extensive medium which transmits effects from body to body.^[76] According to this opposing ontological view, the field laws, that is, certain implicit differential connections between the various possible states of the field, on the basis of which the field alone is capable of transmitting effects caused by matter, can essentially have no more significance for reality than the laws of geometry could, according to earlier views. But as we saw earlier, Weyl held that no satisfactory solution can be given to the problem of motion as long as we adhere to the Einstein-Machian empiricist position that relegates the field to the role of a feeble extensive medium, and which does not acknowledge that the guiding field is physically real. However, from the standpoint of Weyl’s agens theory of matter, a satisfactory answer to Mach’s paradox can be given: the reason why a stationary, homogeneous elastic sphere will bulge at the equator and flatten at the poles, when set in rotation, is due to the fact that the complete physical system consisting of both the body and the guiding field, differs in the rotating case from the stationary one. The local guiding field is the real cause of the inertial forces. Weyl lists two reasons in support for his agens theory of matter. First, the agens theory of matter is the only theory which coheres with the basic experiences of life and physics: matter generates the field and all our actions ultimately involve matter. For example, only through matter can we change the field. Secondly, in order to understand the fact of the existence of charged material particles, we have two possibilities: either we follow Mie and adopt a pure field theory of matter, or we elevate the ontological status of matter and regard it as a real singularity of the field and not merely as a high concentration of field energy in a tiny region of spacetime. Since Mie’s approach is necessarily limited to the framework of the theory of special relativity, and since there is no room in the general theory of relativity for a generalization and modification of the classical field laws, as envisaged by Mie in the context of the special theory of relativity, Weyl adopted the second possibility. He was motivated to do so by his recognition that the field equation of an electron at rest contains a finite mass term \(m\) that appears to have nothing to do with the energy of the associated field. Weyl’s subsequent analysis of mass in terms of electromagnetic field energy provided a definition of mass and a derivation of the basic equations of mechanics, and led Weyl to the invention of the topological idea of wormholes in spacetime. Weyl did not use the term ‘wormholes’; it was John Wheeler who later coined the term ‘wormhole’ in 1957. Weyl spoke of one-dimensional tubes instead. “Inside” these tubes no space exists, and their boundaries are, analogous to infinite distance, inaccessible; they do not belong to the field. In a chapter entitled “Hermann Weyl and the Unity of Knowledge” Wheeler (1994) says, Another insight Weyl gave us on the nature of electricity is topological in character and dates from 1924. We still do not know how to assess it properly or how to fit it into the scheme of physics, although with each passing decade it receives more attention. The idea is simple. Wormholes thread through space as air channels through Swiss cheese. Electricity is not electricity. Electricity is electric lines of force trapped in the topology of space. A year after Einstein (1916) had established the field equations of his new general theory of relativity, Einstein (1917) applied his theory for the first time to cosmology. In doing so, Einstein made several assumptions: Cosmological Principle: Like Newton, Einstein assumed that the universe is homogeneous and isotropic in its distribution of matter. Static Universe: Einstein assumed, as did Newton and most cosmologists at that time, that the universe is static on the large scale. Mach’s Principle: Einstein believed that the metric field is completely determined through the masses of bodies. The metric field is determined through the energy-momentum tensor of the field equations. The cosmological principle continuous to play an important role in cosmological modelling to this day. However, Einstein’s second assumption that the universe is static was in conflict with his field equations, which permitted models of the universe that were homogeneous and isotropic, but not static. In this regard, Einstein’s difficulties were essentially the same that Newton had faced: A static Newtonian model involving an infinite container with an infinite number of stars was unstable; that is, local regions would collapse under gravity. Because Einstein was committed to Mach’s Principle he faced a problem concerning the boundary conditions for infinite space containing finite amount of matter.^[77] Einstein recognized that it was impossible to choose boundary conditions such that the ten potentials of the metric \(g_{ij}\) are completely determined by the energy-momentum tensor \(T_{ij}\), as required by Mach’s Principle. That is, the boundary conditions “flat at infinity” entail a global inertial frame that is tied to empty flat space at infinity, and hence is unrelated to the mass-energy content of space, contrary to Mach’s Principle, according to which only mass-energy can influence inertia. Einstein thought that he could solve the difficulties of an unstable non-static universe with boundary conditions at infinity that do not satisfy Mach’s Principle, by introducing the cosmological term \(\Lambda\) into his field equations. He showed that for positive values of the cosmological constant, his modified field equation admitted a solution for a static^[78] universe in which space is curved, unbounded and finite; that is, space is a hyper surface of a sphere in four dimensions. Einstein’s spatially closed universe is often referred to as Einstein’s “cylinder” world: with two of the spatial dimensions suppressed, the model universe can be pictured as a cylinder where the radius \(A\) represents the space and the axis the time coordinate. Figure 9: Einstein Universe According to Einstein’s Machian convictions, since inertia is determined only by matter, there can be no inertial structure or field in the absence of matter. Consequently, it is impossible, Einstein conjectured, to find a solution to the field equations—that is, to determine the metric \(g_{ij}\)—if the energy-momentum tensor \(T_{ij}\) representing the mass-energy content of the universe is zero. The non-existence of ‘vacuum solutions’ for a static universe demonstrated, Einstein thought, that Mach’s Principle had been successfully incorporated into his theory of general relativity. Einstein also believed that his solution was unique because of the assumptions of isotropy and homogeneity.^[79] However, Einstein was mistaken. In 1917, the Dutch astronomer Willem de Sitter published another solution to Einstein’s field equations containing the cosmological constant. De Sitter’s solution showed that Einstein’s solution is not a unique solution of his field equations. In addition, since de Sitter’s universe is empty it provided a direct counter-example to Einstein’s hope that Mach’s Principle had been successfully incorporated into his theory.^[80] There are cosmologists who, like Einstein, are favourably disposed towards some version of Mach’s Principle, and who believe that the local laws, which are satisfied by various physical fields, are determined by the large scale structure of the universe. On the other hand, there are those cosmologists who, like Weyl, take a conservative approach; they take empirically confirmed local laws and investigate what these laws might imply about the universe as a whole. Our understanding of the large scale structure of the universe, Weyl emphasized, must be based on theories and principles which are verified locally. Einstein’s general theory is a local field theory; like electromagnetism, it is a close action theory.^[81] Weyl (1924b) says: It appears to me that one can grasp the concrete physical content of the theory of relativity without taking a position regarding the causal relationship between the masses of the universe and And, referring to (G), (see citation at the end of §4.4.3), which says that “Guidance is a physical field, like the electromagnetic field, which stands in mutual interaction with matter. Gravitation belongs to the guiding field and not to force”, Weyl (1924b) says: What I have so far presented and briefly formulated in the two sentences of G, that alone impacts on physics and underlies the actual individual investigations of problems of the theory of relativity. Mach’s Principle, according to which the fixed stars intervene with mysterious power in earthly events, goes far beyond this [G] and is until now pure speculation; it merely has cosmological significance and does not become important for natural science until astronomical observations reach the totality of the cosmos [Weltganze], and not merely one island of stars [Sterneninsel]. We could leave the question unanswered if I did not have to admit that it is tempting to construct, on the basis of the theory of relativity, a picture of the totality of the Weyl’s claim is that because general relativity is an inherently local field theory, its validity and soundness is essentially independent of global cosmological considerations. However, if we wish to introduce such global considerations into our local physics, then we can do so only on the basis of additional assumptions, such as, for example, the Cosmological Principle, already mentioned. In 1923 Weyl (1923b, §39) introduced another cosmological assumption, namely, the so-called Weyl Postulate. De Sitter’s solution and the new astronomical discoveries in the early 1920’s, which suggested that the universe is not static but expanding, led to a drastic change in thinking about the nature of the universe and an increased scepticism towards Einstein’s model of a static universe. In 1923, Weyl (1923b, §39) notes in the fifth edition of Raum Zeit Materie, that despite its attractiveness, Einstein’s cosmology suffers from serious defects. Weyl begins by pointing out that spectroscopic results indicate that the stars have an age. Weyl continued, all our experiences about the distribution of stars show that the present state of the starry sky has nothing to do with a “statistical final state.” The small velocities of the stars is due to a common origin rather than some equilibrium; incidentally, it appears, based on observation, that the more distant configurations are from each other, the greater the velocities on average. Instead of uniform distribution of matter, astronomical facts lead rather to the view that individual clouds of stars glide by in vast empty space. Weyl further points out that de Sitter showed that Einstein’s cosmological equations of gravity have “a very simple regular solution” and that an empty spacetime, namely, “a metrically homogeneous spacetime of non-vanishing curvature,” is compatible with these equations after all. Weyl says that de Sitter’s solution, which on the whole is not static, forces us to abandon our predilection for a static universe. The Einstein and the de Sitter universe are both spacetimes with two separate fringes, the infinitely remote past and the infinitely remote future. Dropping two of its spatial dimensions we imagine Einstein’s universe as the surface of a straight cylinder of a certain radius and de Sitter’s universe as a one sheeted hyperboloid. Both surfaces are surfaces of infinite extent in both directions. Both the Einstein universe and the de Sitter universe spread from the eternal past to the eternal future. However, unlike de Sitter’s universe, in Einstein’s universe “the metrical relations are such that the light cone issuing from a world point is folded back upon itself an infinite number of times. An observer should therefore see infinitely many images of a star, showing him the star in states between which an eon has elapsed, the time needed by the light to travel around the sphere of the world.” Weyl (1930) says: … I start from de Sitter’s solution: the world, according to its metric constitution, has the character of a four-dimensional “sphere” (hyperboloid) \[\tag{39} x^{2}_{1} + x^{2}_{2} + x^{2}_{3} + x^{2}_{4} - x^{2}_{5} = a^{2} \] in a five-dimensional quasi-euclidean space, with the line element \[\tag{40} ds^{2} = dx^{2}_{1} + dx^{2}_{2} + dx^{2}_{3} + dx^{2}_{4} - dx^{2}_{5}. \] The sphere has the same degree of metric homogeneity as the world of the special theory of relativity, which can be conceived as a four-dimensional “plane” in the same space. The plane, however, has only one connected infinitely distant “seam,” while it is the most prominent topological property of the sphere to be endowed with two—the infinitely distant past and the infinitely distant future. In this sense one may say that space is closed in de Sitter’s solution. On the other hand, however, it is distinguished from the well-known Einstein solution, which is based on a homogeneous distribution of mass, by the fact that the null cone of future belonging to a world-point does not overlap with itself; in this causal sense, the de Sitter space is open. On this hyperboloid, a single star (nebula or galaxy, in later contexts) \(A\), also called “observer” by Weyl, traces a geodesic world line, and from each point of the star’s world line a light cone opens into the future and fills a region \(D\), which Weyl calls the domain of influence of the star. In de Sitter’s cosmology this domain of influence covers only half of the hyperboloid and Weyl suggests that it is reasonable to assume that this half of the hyperboloid corresponds to the real world. Figure 10: De Sitter’s hyperboloid with domain of influence \(D\) covering half of the hyperboloid and world lines of stars. There are innumerable stars or geodesics, according to Weyl, that have the same domain of influence as the arbitrarily chosen star \(A\); they form, he says, a system that has been causally interconnected since eternity. Such a system of causally interconnected stars Weyl describes as stars of a common origin that lies in an infinitely remote past. The sheaf of world-lines of such a system of stars converges, in the direction of the infinitely remote past, on an infinitely small part of the total extent of the hyperboloid, and diverges in the direction of the future on an ever increasing extent of the hyperboloid. Weyl’s choice of singling out a particular sheaf of non-intersecting timelike geodesics as constituting the cosmological substratum is the content of Weyl’s Postulate. Weyl (1923b, 295) says: The hypothesis is suggestive, that all the celestial bodies which we know belong to such a single system; this would explain the small velocities of the stars as a consequence of their common The transition from a static to a dynamic universe opens up the possibility of a disorderly universe where galaxies could collide, that is, their world lines might intersect. Roughly speaking, Weyl’s Postulate states that the actual universe is an orderly universe. It says that the world lines of the galaxies form a 3-sheaf of non-intersecting^[82] geodesics orthogonal to layers of spacelike Figure 11: Weyl’s Postulate Since the relative velocities of matter is small in each collection of galaxies extending over an astronomical neighbourhood, one can approximate a “smeared-out” motion of the galaxies and introduce a substratum or fluid which fills space and in which the galaxies move like “fundamental particles”.^[83] Weyl’s postulate says that observers associated with this smeared-out motion constitute a privileged class of observers of the universe. Since geodesics do not intersect, according to Weyl’s Postulate, there exist one and only one geodesic which passes through each spacetime point. Consequently, matter possesses a unique velocity at any spacetime point. Therefore, the fluid may be regarded as a perfect fluid; and this is the essential content of Weyl’s Postulate. Since the geodesics of the galaxies are orthogonal to a layer of spacelike hypersrfaces according to Weyl’s Postulate, one can introduce coordinates \((x^{0}, x^{1}, x^{2}, x^{3})\) such that the spacelike hypersurfaces are given by \(x^{0} =\) constant, and the spacelike coordinates \(x^{\alpha}\) \((\alpha = 1, 2, 3)\) are constant along the geodesics of each galaxy. Therefore, the spacelike coordinates \(x^{\alpha}\) are co-moving coordinates along the geodesics of each galaxy. The orthogonality condition permits a choice of the time coordinate \(x^{0}\) such that the metric or line element has the form \[ ds^{2} &= (dx^{0})^{2} - g_{\alpha \beta}dx^{\alpha}dx^{\beta} \\ \tag{41} &= c^{2}dt^{2} - g_{\alpha \beta}dx^{\alpha}dx^{\beta}, \] where \(ct = x^{0}, x^{0}\) is called the cosmic time, and \(t\) is the proper time of any galaxy. The spacelike hypsersurfaces are therefore the surfaces of simultaneity with respect to the cosmic time \(x^{0}\). The Cosmological Principle in turn tells us that these hypersurfaces of simultaneity are homogeneous and isotropic. Independently, Robertson and Walker, were subsequently able to give a precise mathematical derivation of the most general metric by assuming Weyl’s Postulate and the Cosmological Principle. Weyl’s introduction of his Postulate made it possible for him to provide the first satisfactory treatment of the cosmological redshift. Consider a light source, say a star \(A\), which emits monochromatic light that travels along null geodesics \(L, L',\ldots\) to an observer \(O\). Let \(s\) be the proper time of the light source, and let \(\sigma\) be the proper time of the observer \ (O\). Then to every point \(s\) on the world line of the light source \(A\) there corresponds a point on the world line of the observer \(O\), namely, \(\sigma = \sigma(s)\). Figure 12: A body or star \(A\) emits monochromatic light which travels along null geodesics \(L, L',\ldots\) to an observer \(O\). Consequently, if one of the generators of the light cone issuing from \(A\)’s world line at \(A\)’s proper time \(s_{0}\)—the null geodesic \(L\)—reaches observer \(O\) at the observer’s proper time \(\sigma(s_{0})\), then \[\tag{42} d\sigma = \left.\frac{d\sigma(s)}{ds}\right|_{s_0} ds. \] Therefore, the frequency \(\nu_{A}\) of the light that would be measured by some hypothetical observer on \(A\) is related to the frequency \(\nu_{O}\) measured on \(O\) by \[\tag{43} \frac{\nu_{O}}{\nu_{A}} = \frac{d\sigma(s)}{ds}. \] According to Weyl (1923c) this relationship holds in arbitrary spacetimes and for arbitrary motions of source and observer. Weyl (1923b, Anhang III) then applied this relationship to de Sitter’s world and showed, to lowest order, that the redshift is linear in distance; that is, Weyl theoretically derived, what was later called Hubble’s redshift law. Using Slipher’s redshift data Weyl estimated a Hubble constant six years prior to Hubble. Weyl (1923b, Anhang III) remarks: It is noteworthy that neither the elementary nor Einstein’s cosmology lead to such a redshift. Of course, one cannot claim today, that our explanation hits the right mark, especially since the views about the nature and distance of the spiral nebulae are still very much in need of further clarification. In 1933 Weyl gave a lecture in Göttingen in which Weyl (1934b) recalls According to the Doppler effect the receding motion of the stars is revealed in a redshift of their spectral lines which is proportional to distance. In this form, where De Sitter’s solution of the gravitational equation is augmented by an assumption concerning the undisturbed motion of the stars, I had predicted the redshift in the year 1923. During the period 1925–1926 Weyl published a sequence of groundbreaking papers (Weyl (1925, 1926a,b,c)) in which he presented a general theory of the representations and invariants of the classical Lie groups. In these celebrated papers Weyl drew together I. Schur’s work on invariants and representations of the \(n\)-dimensional rotation group, and É. Cartan’s work on semisimple Lie algebras. In doing so, Weyl utilized different fields of mathematics such as, tensor algebra, invariant theory, Riemann surfaces and Hilbert’s theory of integral equations. Weyl himself considered these papers his greatest work in mathematics. The central role that group theoretic techniques played in Weyl’s analysis of spacetime was one of several factors which led Weyl to his general theory of the representations and invariants of the classical Lie groups. It was in the context of Weyl’s investigation of the space-problem (see §4.2) that Weyl came to appreciate the value of group theory for investigating the mathematical and philosophical foundations of physical theories in general, and for dealing with fundamental questions motivated by the general theory of relativity, in particular. A motivation of quite another sort, which led Weyl to his general representation theory, was provided by Study when he attacked Weyl specifically, as well as other unnamed individuals, by accusing them “of having neglected a rich cultural domain (namely, the theory of invariants), indeed of having completely ignored it”.^[84] Weyl (1924c) replied immediately providing a new foundation for the theory of invariants of the special linear groups \(SL(n, \mathbb{C})\) and its most important subgroups, the special orthogonal group \(SO(n, \mathbb{C})\) and the special symplectic group \(SSp(\ bfrac{n}{2}, \mathbb{C})\) (for \(n\) even) based on algebraic identities due to Capelli. In a footnote, Weyl (1924c) sarcastically informed Study that “even if he [Weyl] had been as well versed as Study in the theory of invariants, he would not have used the symbolic method in his book Raum, Zeit, Materie and even with the last breath of his life would not have mentioned the algebraic completeness theorem for invariant theory”. Weyl’s point was that in the context of his book Raum-Zeit-Materie, the kernel-index method of tensor analysis is more appropriate than the methods of the theory of algebraic invariants.^[85] While this account of events leading up to Weyl’s groundbreaking papers on group theory seems reasonable enough, Hawkins (2000) has suggested a fuller account, which brings into focus Weyl’s deep philosophical interest in the mathematical foundations of the theory of general relativity by drawing attention to Weyl (1924d) on tensor symmetries, which, according to Hawkins, played an important role in redirecting Weyl’s research interests toward pure mathematics.^[86] Weyl (1949b, 400) himself noted that his interest in the philosophical foundations of the general theory of relativity motivated his analysis of the representations and invariants of the continuous groups: “I can say that the wish to understand what really is the mathematical substance behind the formal apparatus of relativity theory led me to the study of representations and invariants of groups; and my experience in this regard is probably not unique”. Weyl’s paper (Weyl (1924a)), and the first chapter Weyl (1925) of his celebrated papers on representation theory, have the same title: “The group theoretic foundation of the tensor calculus”. Hawkins (1998) says, Weyl had obtained through the theory of groups, and in particular through the theory of group representations—as augmented by his own contributions—what he felt was a proper mathematical understanding of tensors, tensor symmetries, and the reason they represent the source of all linear quantities that might arise in mathematics or physics. Once again, he had come to appreciate the importance of the theory of groups—and now especially the theory of group representation—for gaining insight into mathematical questions suggested by relativity theory. Unlike his work on the space problem …Weyl now found himself drawing upon far more than the rudiments of group theory. … And of course Cartan^[87] had showed that the space problem could also be resolved with the aid of results about representations. In short, the representation theory of groups had proved itself to be a powerful tool for answering the sort of mathematical questions that grew out of Weyl’s involvement with relativity theory. Somewhat later, Weyl (1939) wrote a book, entitled The Classical Groups, Their Invariants and Representations, in which he returned to the theory of invariants and representations of the semisimple Lie groups. In this work, he satisfied his ambition “to derive the decisive results for the most important of these groups by direct algebraic construction, in particular for the full group of all non-singular linear transformations and for the orthogonal group”. He intentionally restricted the discussion of the general theory and devoted most of the book to the derivation of specific results for the general linear, the special linear, the orthogonal and the symplectic groups. As far back as the 1920s, the great French mathematician and geometer Élie Cartan had recognized that the notions of parallelism and affine connection admit of an important generalization in the sense that (1) the spaces for which the notion of infinitesimal parallel transport is defined need not be the tangent spaces that intrinsically arise from the differential structure of a Riemannian manifold \(M\) at each of its points; rather, the spaces are general spaces that are not intrinsically tied to the differential manifold structure of \(M\), and (2) relevant groups operate on these general spaces directly and not on the manifold \(M\), and therefore groups play a dominant and independent role. Weyl (1938a) published a critical review of Cartan’s (1937) book in which Cartan further developed his notion of moving frames (“repères mobiles”) and generalized spaces (“espaces généralisés”). However, Weyl (1988) expressed some of his reservations to Cartan’s approach as early as 1925; and four years later Weyl (1929e) presented a more detailed critique. Cartan’s approach to differential geometry is in response to the fact that Euclidean geometry was generalized in two ways resulting in essentially two incompatible approaches to geometry.^[88] The first generalization occurred with the discovery of non-Euclidean geometries and with Klein’s (1921) subsequent Erlanger program in 1872, which provided a coherent group theoretical framework for the various non-Euclidean geometries. The second generalization of Euclidean geometry occurred when Riemann (1854) discovered Riemannian geometry. The two generalizations of Euclidean geometry essentially constitute incompatible approaches to applied geometry. In particular, while Klein’s Erlanger program provides an appropriate group theoretical framework for Einstein’s theory of special relativity, it is Riemannian geometry, and not Klein’s group theoretic approach, which provides the appropriate underlying geometric framework for Einstein’s theory of general relativity. As Cartan observes: General relativity threw into physics and philosophy the antagonism that existed between the two principle directors of geometry, Riemann and Klein. The space-times of classical mechanics and of special relativity are of the type of Klein, those of general relativity are of the type of Riemann.^[89] Cartan eliminated the incompatibility between the two approaches by synthesizing Riemannian geometry and Klein’s Erlanger program through a further generalization of both, resulting in what Cartan called, generalized spaces (or generalized geometries). In his Erlanger program, Klein provided a unified approach to the various “global” geometries by showing that each of the geometries is characterized by a particular group of transformations: Euclidean geometry is characterized by the group of translations and rotations in the plane; the geometry of the sphere \(S^{2}\) is characterized by the orthogonal group \(O(3)\); and the geometry of the hyperbolic plane is characterized by the pseudo-orthogonal group \(O(1, 2)\). In Klein’s approach each geometry is a (connected) manifold endowed with a group of automorphisms, that is, a Lie group \(G\) of “motions” that acts transitively on the manifold, such that two figures are regarded as congruent if and only if there exists an element of the appropriate Lie group \(G\) that transforms one of the figures into the other. A generalized geometry in Klein’s sense shifts the emphasis from the underlying manifold or space to the group. Thus a Klein geometry (space) consists of, (1) a smooth manifold, (2) a Lie group \(G\) (the principal group of the geometry), and (3) a transitive action of \(G\) on the manifold. Besides being “global”, a Klein geometry (space) is completely homogeneous in the sense that its points cannot be distinguished on the basis of geometric relations because the transitive group action preserves such relations. As Weyl (1949b) describes it, Klein’s approach to the various “global” geometries is very suited to Einstein’s theory of special relativity: According to Einstein’s special relativity theory the four-dimensional world of the spacetime points is a Klein space characterized by a definite group \(\Gamma\); and that group is the … group of Euclidean similarities—with one very important difference however. The orthogonal transformations, i.e., the homogeneous linear transformations which leave \[ x^{2}_{1} + x^{2}_{2} + x^{2}_{3} + x^{2}_{4} \] unchanged have to be replaced by the Lorentz transformations leaving \[ x^{2}_{1} + x^{2}_{2} + x^{2}_{3} - x^{2}_{4} \] However, with the advent of Einstein’s general theory of relativity the emphasis shifted from global homogeneous geometric structures to local inhomogeneous structures. Whereas Klein spaces are global and fully homogeneous, the Riemannian metric structure underlying Einstein’s general theory is local and inhomogeneous. A general Riemannian space admits of no isometry other than the Referring to Cartan (1923a), Weyl (1929e) says that Cartan’s generalization of Klein geometries consists in adapting Klein’s Erlanger program to infinitesimal geometry by applying Klein’s Erlanger program to the tangent plane rather than to the manifold itself.^[90] Cartan developed a general scheme of infinitesimal geometry in which Klein’s notions were applied to the tangent plane and not to the \(n\)-dimensional manifold \(M\) itself. Figure 13: Cartan’s generalization Figure 13 above, adapted from Sharpe (1997), may help in clarifying the discussion. The generalization of Euclidean geometry to a Riemannian space (the left vertical blue arrow) says: 1. A general Riemannian space approximates Euclidean space only locally; that is, at each point \(p \in M\) there exists a tangent space \(T(M_{p})\) that arises intrinsically from the underlying differential structure of \(M\). 2. In addition, a Riemannian space is inhomogeneous through the introduction of curvature. Analogously, Cartan’s generalization of a Klein space to a Cartan space (the right vertical blue arrow) says: 1. Cartan’s generalized space \(\Sigma(M)\) approximates a Klein space only locally; that is, at each point \(p \in M\) there exists a “Tangent Space”, that is, a Klein space \(\Sigma(M_{p})\). Note that a Klein space \(\Sigma(M_{p})\) is itself a generalized space (in the sense of Cartan) with zero curvature; it possesses perfect homogeneity. 2. In addition, Cartan’s generalized space \(\Sigma(M)\) is inhomogeneous by the introduction of curvature. Figure 14: Cartan’s generalized space Cartan’s generalized space \(\Sigma\)(M) is the space of all “Tangent Spaces” (i.e., all Klein spaces \(\Sigma(M_{p}))\) and contains a mixture of homogeneous and inhomogeneous spaces (see figure Finally, Cartan’s generalization of Riemannian space (lower horizontal red arrow) (figure 13) turns on the recognition that the “Tangent Space” in Cartan’s sense is not the same, or need not be the same, as the ordinary tangent space that arises naturally from the underlying differential structure of a Riemannian manifold. Cartan’s “Tangent Space” \(\Sigma(M_{p})\) at \(p \in M\) denotes what is known as a fiber in modern fiber bundle language, where the manifold \(M\) is called the base space of the fiber bundle. In Weyl (1929e, 1988) and to a lesser extent in Weyl (1938a), Weyl objected to Cartan’s approach by noting that Cartan’s “Tangent Space”, namely the Klein space \(\Sigma(M_{p})\) associated with each point of the manifold \(M\), does not arise intrinsically from the differential structure of the manifold the way the ordinary tangent vector space does. Weyl therefore noted that it is necessary to impose certain non-intrinsic embedding conditions on \(\Sigma(M_{p})\) that specify how the “Tangent Space” \(\Sigma(M_{p})\) is associated with each point of the manifold \(M\). Paraphrasing Weyl, the situation is as follows: We assume that we can associate a copy \(\Sigma(M_{p})\) of a given Klein space with each point \(p\) of the manifold \(M\) and that the displacement of the Klein space \ (\Sigma(M_{p})\) at \(p \in M\) to the Klein space \(\Sigma(M_{p'})\) associated with an infinitely nearby point \(p'\in M\), constitutes an isomorphic representation of \(\Sigma(M_{p})\) on \(\Sigma (M_{p'})\) by means of an infinitesimal action of the group \(G\). In choosing an admissible frame of reference \(f\) for each Klein space \(\Sigma(M_{p})\), their points are represented by normal coordinates \(\xi\). Any two frames \(f,f'\) are related by a group element \(s \in G\), and a succession of transformations \(f \rightarrow f'\) and \(f' \rightarrow f''\) by \(s \in G\) and \(t \in G\) respectively, relates \(f\) and \(f''\) by the group composition \(t \circ s \in G\). Nothing so far has been said about how specifically the “Tangent Space” \(\Sigma(M_{p})\) is connected to the manifold. Since \(\Sigma(M_{p})\) is supposed to be a generalization of the ordinary tangent space which arises intrinsically from the local differential structure of \(M\), Weyl suggests that certain embedding conditions have to be imposed on the normal coordinates \(\xi\) of the Klein space \(\Sigma(M_{p})\). Embedding Condition 1: We must first designate a point as the center of \(\Sigma(M_{p})\) and then require that they coincide or cover the point \(p \in M\). This leads, Weyl says, to a restriction in the choice of a normal coordinate system \(\xi\) on \(\Sigma(M_{p})\). And because \(G\) acts transitively, a normal coordinate system \(\xi\) on \(\Sigma(M_{p})\) can be chosen such that the normal coordinates \(\ xi\) vanish at the center, that is, \(\xi^{1} = \xi^{2} = \cdots = 0\). The group \(G\) is therefore restricted to the subgroup \(G_{0}\) of all representations of \(G\) which leave the center Embedding Condition 2: The notion of a tangent plane also requires that there is a one-to-one linear mapping between the line elements of \(\Sigma(M_{p})\) starting from 0, with the line elements of \(M\) starting from \(p \). This means that the number of dimension of the Klein space \(\Sigma(M_{p})\) has the same number of dimension as the manifold \(M\). Embedding Condition 3: The infinitesimal displacement \(\Sigma(M_{p}) \rightarrow \Sigma(M_{p'})\) will carry an infinitesimal vector at the center of \(\Sigma(M_{p})\), which is in one-to-one correspondence with a vector at \(p \in M\), to the center of \(\Sigma(M_{p'})\). No further conditions need be imposed according to Weyl. If we displace \(\Sigma(M_{p})\) by successive steps around a curve \(\gamma\) back to the point \(p \in M\) then the final position of \(\ Sigma(M_{p})\) is obtained from its original poition or orientation by a certain automorphism \(\Sigma(M_{p}) \rightarrow \Sigma(M_{p})\). This automorphism is Cartan’s generalization of Riemann’s concept of curvature along the curve \(\gamma\) on \(M\). According to Weyl, the “Tangent Space” \(\Sigma(M_{p})\) is not uniquely determined by the differential structure of \(M\). If \(G\) were the affine group, Weyl says, then the conditions above would fully specify the normal coordinate system \(\xi^{\alpha}\) on \(\Sigma(M_{p})\) as a function of the chosen local coordinates \(x^{i}\) on \(M\). Since this is not the case if \(G\) is a more extensive group than the affine group, Weyl concludes that the “Tangent Space” \(\Sigma(M_{p})\) “is not as yet uniquely determined by the nature of \(M\), and so long as this is not accomplished we can not say that Cartan’s theory deals only with the manifold \(M\).” Weyl adds: Conversely, the tangent plane in \(p\) in the ordinary sense, that is, the linear manifold of line elements in \(p\), is a centered affine space; its group \(G\) is not a matter of convention. This has always appeared to me to be a deficiency of the theory …. The reader may wish to consult Ryckman (2005, 171–173), who argues “that a philosophical contention, indeed, phenomenological one, underlies the stated mathematical reasons that kept him [Weyl] for a number of years from concurring with Cartan’s ”moving frame“ approach to differential geometry”. In 1949 Weyl explicitly acknowledged and praised Cartan’s approach. Unlike his earlier critical remarks, he now considered it to be a virtue that the frame of reference in \(\Sigma(M_{p})\) is independent of the choice of coordinates on \(M\). Weyl (1949b) says of the traditional approach and Cartan’s new approach to geometry: Hence we have here before us the natural general basis on which that notion rests. The infinitesimal trend in geometry initiated by Gauss’ theory of curved surfaces now merges with that other line of thought that culminated in Klein’s Erlanger program. It is not advisable to bind the frame of reference in \(\Sigma_{p}\) to the coordinates \(x^{i}\) covering the neighborhood of \(p\) in \(M\). In this respect the old treatment of affinely connected manifolds is misleading. … [I]n the modern development of infinitesimal geometry in the large, where it combines with topology and the associated Klein spaces appear under the name of fibres, it has been found best to keep the répères, the frames of the fibre spaces, independent of the coordinates of the underlying manifold. Moreover, in 1949, Weyl also emphasizes that it is necessary to employ Cartan’s method if one wishes to fit Dirac’s theory of the electron into general relativity. Weyl (1949b) says: When one tries to fit Dirac’s theory of the electron into general relativity, it becomes imperative to adopt the Cartan method. For Dirac’s four \(\psi\)-components are relative to a Cartesian (or rather a Lorentz) frame. One knows how they transform under transition from one Lorentz frame to another (spin representation of the Lorentz group); but this law of transformation is of such a nature that it cannot be extended to arbitrary linear transformations mediating between affine frames. Weyl is here referring to his three important papers, which appeared in 1929—the same year in which he had published his detailed critique of Cartan’s method—in which he investigates the adaptation of Dirac’s theory of the special relativistic electron to the theory of general relativity, and where he develops the tetrad or Vierbein formalism for the representation of local two-component spinor structures on Lorentz manifolds. Only a year after Pauli’s review article in 1921, in which Pauli had argued that Weyl’s defence of his unified field theory deprives it of its inherent convincing power from a physical point of view, Schrödinger (1922) suggested the possibility that Weyl’s 1918 gauge theory could suitably be employed in the quantum mechanical description of the electron.^[91] Similar proposals were subsequently made by Fock (1926) and London (1927). With the advent of the quantum theory of the electron around 1927/28 Weyl abandoned his gauge theory of 1918. He did so because in the new quantum theory a different kind of gauge invariance associated with Dirac’s theory of the electron was discovered which, as had been suggested by Fock (1926) and London (1927), more adequately accounted for the conservation of electric charge.^[92] Why did Weyl hold on to his gauge theory for almost a decade despite a preponderance of compelling empirical arguments that were mounted against it by Einstein, Pauli and others?^[93] In one of Weyl’s (1918/1998) last letters to Einstein concerning his unified field theory, Weyl made it clear that it was mathematics and not physics that was the driving force behind his unified field theory. Incidentally, you must not believe that it was because of physics that I introduced the linear differential form d\(\varphi\) in addition to the quadratic form. I wanted rather to eliminate this “inconsistency” which always has been a bone of contention to me.^[95] And then, to my surprise, I realized that it looked as if it might explain electricity. You clap your hands above your head and shout: But physics is not made this way! As London (1927, 376–377) remarks, one must admire Weyl’s immense courage in developing his gauge invariant interpretation of electromagnetism and holding on to it on the mere basis of purely formal considerations. London observes that the principle of equivalence of inertial and gravitational mass, which prompted Einstein to provide a geometrical interpretation of gravity, was at least a physical fact underlying gravitational theory. In contrast, an analogous fact was not known in the theory of electricity; consequently, it would seem that there was no compelling physical reason to think that rigid rods and ideal clocks would be under the universal influence of the electromagnetic field. To the contrary, London says, experience strongly suggests that atomic clocks exhibit sharp spectral lines that are unaffected by their history in the presence of a magnetic field, contrary to Weyl’s non-integrability assumption. London concludes, that in the face of such elementary empirical facts it must have been an unusually clear metaphysical conviction which prevented Weyl from abandoning his idea that nature ought to make use of the beautiful geometrical possibilities that a pure infinitesimal geometry offers. In 1955, shortly before his death, Weyl wrote an addendum^[96] to his 1918 paper Gravitation und Elektrizität, in which he looks back at his early attempt to find a unified field theory and explains why he reinterpreted his gauge theory of 1918, a decade later. This work stands at the beginning of attempts to construct a “unified field theory” which subsequently were continued by many, it seems to me, without decisive results. As is known, the problem relentlessly occupied Einstein in particular, until his end. … The strongest argument for my theory appeared to be that gauge invariance corresponds to the principle of the conservation of electric charge just as coordinate invariance corresponds to the conservation theorem of energy-impulse. Later, quantum theory introduced the Schrödinger-Dirac potential \(\psi\) of the electron-positron field; the latter revealed an experimentally based principle of gauge invariance which guaranteed the conservation of charge and which connected the \(\psi\) with the electromagnetic potentials \(\varphi_{i}\) in the same way that my speculative theory had connected the gravitational potentials \(g_{ik}\) with \(\varphi_{i}\), where, in addition, the \(\ varphi_{i}\) are measured in known atomic rather than unknown cosmological units. I have no doubts that the principle of gauge invariance finds its correct place here and not, as I believed in 1918, in the interaction of electromagnetism and gravity. By the late 1920s Weyl’s methodological approach to gauge theory underwent an “empirical turn”. In contrast to \(a\) priori geometrical reasoning, which guided his early unification attempts—Weyl calls it a “speculative theory” in the above citation—by 1928/1929 Weyl emphasized experimentally-based principles which underlie gauge invariance.^[97] In early 1928 P. A. M. Dirac provided the first physically compelling theoretical account of the dynamics of an electron in the presence of an electric field. The components \(\psi^{i} (x)\) of Dirac’s four-component wave function or spinor field in Minkowski space, \(\psi(x) = (\psi^{1}(x), \psi^{2}(x), \psi^{3}(x), \psi^{4}(x))\), are complex-valued functions that satisfy Dirac’s first-order partial differential equation and provide probabilistic information about the electron’s dynamical behaviour, such as angular momentum and location. Prior to the appearance of spinor fields \(\psi\) in Dirac’s equation, it was generally thought that scalars, vectors and tensors provided an adequate system of mathematical objects that would allow one to provide a mathematical description of reality independently of the choice of coordinates or reference frames.^[98] For example, spin zero particles \((\pi\) mesons, \(\alpha\) particles) could be described by means of scalars; spin 1 particles (deuterons) by vectors, and spin 2 particles (hypothetical gravitons) by tensors. However, the most frequently occurring particles in Nature are electrons, protons, and neutrons. They are spin \(\bfrac{1}{2}\) particles, called fermions that are properly described by mathematical objects called spinors, which are neither scalars, vectors or tensors.^[99] Weyl referred to the \(\psi(x)\) in Dirac’s equation as the “Dirac quantity” and von Neumann called it the “\(\psi(x)\)-vector”. Both von Neumann and Weyl, and others, immediately recognized that Dirac had introduced something that was new in theoretical physics. v. Neumann (1928, 876) remarks: … \(\psi\) does by no means have the relativistic transformation properties of a common four-vector. … The case of a quantity with four components that is not a four-vector is a case which has never occurred in relativity theory; the Dirac \(\psi\)-vector is the first example of this type. (Weyl (1929c)) notes that the spinor representation of the orthogonal group \(O(1, 3)\) cannot be extended to a representation of the general linear group \(GL(n)\), \(n = 4\), with the consequence that it is necessary to employ the Vierbein, tetrad or Lorentz-structure formulation of the theory of general relativity in order to incorporate Dirac’s spinor fields \(\psi(x)\): The tensor calculus is not the proper mathematical instrument to use in translating the quantum-theoretic equations of the electron over into the general theory of relativity. Vectors and terms [tensors] are so constituted that the law which defines the transformation of their components from one Cartesian set of axes to another can be extended to the most general linear transformation, to an affine set of axes. That is not the case for quantity \(\psi\), however; this kind of quantity belongs to a representation of the rotation group which cannot be extended to the affine group. Consequently we cannot introduce components of \(\psi\) relative to an arbitrary coordinate system in general relativity as we can for the electromagnetic potential and field strengths. We must rather describe the metric at a point \(p\) by local Cartesian axes \(e(a)\) instead of by the \(g_{pq}\). The wave field has definite components \(\psi^{+}_{1}, \psi^{+}_{2}, \psi^{-}_{1}, \psi^{-}_{2}\) relative to such axes, and we know how they transform on transition to any other Cartesian axes in \(p\). Impressed by the initial success of Dirac’s equation of the spinning electron within the special relativistic context, Weyl adapted Dirac’s special relativistic theory of the electron to the general theory of relativity in three groundbreaking papers (Weyl (1929b,c,d)). A complete exposition of this formalism is presented in (Weyl (1929b)). O’Raifeartaigh (1997) says of this paper: Although not fully appreciated at the time, Weyl’s 1929 paper has turned out to be one of the seminal papers of the century, both from the philosophical and from the technical point of view. In this ground braking paper, as well as in (Weyl (1929c,d)), Weyl explicitly abandons his earlier attempt to unify electromagnetism with the theory of general relativity. In his early attempt he associated the electromagnetic vector potential \(A_{j}(x)\) with the additional connection coefficients that arise when a conformal structure is reduced to a Weyl structure (see §4.1). The important concept of gauge invariance, however, is preserved in his 1929 paper. Rather than associating gauge transformations with the scale or gauge of the spacetime metric tensor, Weyl now associates gauge transformations with the phase of the Dirac spinor field \(\psi\) that represents matter. In the introduction of (Weyl (1929b)), which presents in detail the new formalism, Weyl describes his reinterpretation of the gauge principle as follows: The Dirac field-equations for \(\psi\) together with the Maxwell equations for the four potentials \(f_{p}\) of the electromagnetic field have an invariance property which, from a formal point of view, is similar to the one that I called gauge invariance in my theory of gravitation and electromagnetism of 1918; the equations remain invariant when one makes the simultaneous replacements \[\begin{array}{ccc} \psi \text{ by } e^{i\lambda}\psi & \text{and} & f_p \text{ by } f_p - \dfrac{\partial\lambda}{\partial x^p}, \end{array}\] where \(\lambda\) is understood to be an arbitrary function of position in the four-dimensional world. Here the factor \(\bfrac{e}{ch}\), where \(- e\) is the charge of the electron, \(c\) is the speed of light, and \(\bfrac{h}{\pi}\) is the quantum of action, has been absorbed in \(f_{p}\). The connection of this “gauge invariance” to the conservation of electric charge remains untouched. But an essential difference, which is significant for the correspondence to experience, is that the exponent of the factor multiplying \(\psi\) is not real but purely imaginary. \(\psi \) now assumes the role that \(ds\) played in Einstein’s old theory. It seems to me that this new principle of gauge invariance, which follows not from speculation but from experiment, compellingly indicates that the electromagnetic field is a necessary accompanying phenomenon, not of gravitation, but of the material wave field represented by \(\psi\). Since gauge invariance includes an arbitrary function \(\lambda\) it has the character of “general” relativity and can naturally only be understood in that context. Weyl then introduces his two-component spinor theory in Minkowski space. Since one of his aims is to adapt Dirac’s theory to the curved spacetime of general relativity, Weyl develops a theory of local spinor structures for curved spacetime.^[100] He achieves this by providing a systematic formulation of local tetrads or Vierbeins (orthonormal basis vectors). Orthonormal frames had already been introduced as early as 1900 by Levi-Civita and Ricci. Somewhat later, Cartan had shown the usefulness of employing local orthonormal-basis vector fields, the so-called “moving frames” in his investigation of Riemannian geometry in the 1920s. In addition, Einstein (1928) had used tetrads or Vierbeins in his attempt to unify gravitation and electricity by resorting to distant parallelism with torsion. In Einstein’s theory, the effects of gravity and electromagnetism are associated with a specialized torsion of spacetime rather than with the curvature of spacetime. Since the curvature vanishes everywhere, distant parallelism is a feature of Einstein’s theory. However, distant parallelism appeared to Weyl to be quite unnatural from the viewpoint of Riemannian geometry. Weyl expressed his criticism in all three papers (Weyl (1929b,c,d)) and he contrasted the way in which Vierbeins are employed in his own work with the way they were used by Einstein. In the introduction Weyl (1929b) says: I prefer not to believe in distant parallelism for a number of reasons. First, my mathematical attitude resists accepting such an artificial geometry; it is difficult for me to understand the force that would keep the local tetrads at different points and in rotated positions in a rigid relationship. There are, I believe, two important physical reasons as well. In particular, by loosening the rigid relationship between the tetrads at different points, the gauge factor \(e^{i\lambda}\), which remains arbitrary with respect to the quantity \(\psi\), changes from a constant to an arbitrary function of spacetime location; that is, only through the loosening of the rigidity does the actual gauge invariance become understandable. Secondly, the possibility to rotate the tetrads at different points independently from each other, is as we shall see, equivalent to the symmetry of the energy-momentum tensor or with the validity of its conservation law. Every tetrad uniquely determines the pseudo-Riemannian spacetime metric \(g_{ij}\). However, the converse does not hold since the tetrad has 16 independent components whereas the spacetime metric, \ (g_{ij} = g_{ji}\), has only 10 independent components. The extra 6 degrees of freedom of the tetrads that are not determined by the metric may be represented by the elements of a 6-parameter internal Lorentz group. That is, the local tetrads are determined by the spacetime metric up to local Lorentz transformations. The tetrad formalism made it possible, therefore, for Weyl to derive, as a special case of Noether’s second theorem^[101], the energy-momentum conservation laws for general coordinate transformations and the internal Lorentz transformations of the tetrads. Moreover, Weyl had always emphasized the strong analogy between gravitation and electricity. The tetrad formalism and the conservation laws both made explicit and supported this analogy. Weyl introduced the final section of his seminal 1929 paper saying “We now come to the critical part of the theory”, and presented a derivation of electromagnetism from the new gauge principle. The initial step in Weyl’s derivation exploits the intrinsic gauge freedom of his two-component theory of spinors for Minkowski space, namely \[ \psi(x) \rightarrow e^{i\lambda}\psi(x), \] where the gauge factor is a constant. Since Weyl wished to adapt his theory to the curved spacetime of general relativity, the above phase transformation must be generalized to accommodate local tetrads. That is, each spacetime point has its own tetrad and therefore its own point-dependent gauge factor. The phase transformation is thus given by \[ \psi(x) \rightarrow e^{i\lambda(x)}\psi(x), \] where the \(\lambda(x)\) is a function of spacetime. Weyl says: We come now to the critical part of the theory. In my view the origin and the necessity for the electromagnetic field lie in the following justification. The components \(\psi_{1},\psi_{2}\) are, in fact, not uniquely determined by the tetrad but only to the extent that they can still be multiplied by an arbitrary “gauge-factor” \(e^{i\lambda}\) of absolute value 1. The transformation of the \(\psi\) induced by a rotation of the tetrad is determined only up to such a factor. In the special theory of relativity one must regard this gauge factor as a constant, since we have here only a single point-independent tetrad. This is different in the general theory of relativity. Every point has its own tetrad, and hence its own arbitrary gauge factor, because the gauge factor necessarily becomes an arbitrary function of position through the removal of the rigid connection between tetrads at different points. Today, the concept of gauge invariance plays a central role in theoretical physics. Not until 1954 did Yang and Mills (1954) generalize Weyl’s electromagnetic gauge concept to the case of the non-Abelian group \(O(3)\).^[102] Although Weyl’s reinterpretation of gauge invariance had been preceded by suggestions from London and Fock, it was Weyl, according to O’Raifeartaigh and Straumann who emphasized the role of gauge invariance as a symmetry principle from which electromagnetism can be derived. It took several decades until the importance of this symmetry principle—in its generalized form to non-Abelian gauge groups developed by Yang, Mills, and others—also became fruitful for a description of the weak and strong interactions. The mathematics of the non-Abelian generalization of Weyl’s 1929 paper would have been an easy task for a mathematician of his rank, but at the time there was no motivation for this from the physics side. It is interesting in this context to consider the following remarks by Yang. Referring to Einstein’s objection to Weyl’s 1918 gauge theory, Yang (1986, 18) asked, “what has happened to Einstein’s original objection after quantum mechanics inserted an \(-i\) into the scale factor and made it into a phase factor?” Yang continuous: Apparently no one had, after 1929, relooked at Einstein’s objection until I did in 1983. The result is interesting and deserves perhaps to be a footnote in the history of science: Let us take Einstein’s Gedankenexperiment …. When the two clocks come back, because of the insertion of the factor \(-i\), they would not have different scales but different phases. That would not influence their rates of time-keeping. Therefore, Einstein’s original objection disappears. But you can ask a further question: Can one measure their phase difference? Well, to measure a phase difference one must do an interference experiment. Nobody knows how to do an interference experiment with big objects like clocks. However, one can do interference experiments with electrons. So let us change Einstein’s Gedankenexperiment to one of bringing electrons back along two different paths and ask: Can one measure the phase difference? The answer is yes. That was in fact a most important development in 1959 and 1960 when Aharonov and Bohm realized—completely independently of Weyl—that electromagnetism has some meaning which was not understood before.^[103] We end the discussion on Weyl’s gauge theory by quoting the following remarks by Dyson (1983). A more recent example of a great discovery in mathematical physics was the idea of a gauge field, invented by Hermann Weyl in 1918. This idea has taken only 50 years to find its place as one of the basic concepts of modern particle physics. Quantum chromodynamics, the most fashionable theory of the particle physicists in 1981, is conceptually little more than a synthesis of Lie’s group-algebras with Weyl’s gauge fields. The history of Weyl’s discovery is quite unlike the history of Lie groups and Grassmann algebras. Weyl was neither obscure nor unrecognized, and he was working in 1918 in the most fashionable area of physics, the newborn theory of general relativity. He invented gauge fields as a solution of the fashionable problem of unifying gravitation with electromagnetism. For a few months gauge fields were at the height of fashion. Then it was discovered by Weyl and others that they did not do what was expected of them. Gauge fields were in fact no good for the purpose for which Weyl invented them. They quickly became unfashionable and were almost forgotten. But then, very gradually over the next fifty years, it became clear that gauge fields were important in a quite different context, in the theory of quantum electrodynamics and its extensions leading up to the recent development of quantum chromodynamics. The decisive step in the rehabilitation of gauge fields was taken by our Princeton colleague Frank Yang and his student Bob Mills in 1954, one year before Hermann Weyl’s death [Yang and Mills, 1954]. There is no evidence that Weyl ever knew or cared what Yang and Mills had done with his brain-child. So the story of gauge fields is full of ironies. A fashionable idea, invented for a purpose which turns out to be ephemeral, survives a long period of obscurity and emerges finally as a corner-stone of physics. It is remarkable that Weyl’s (1929b) two-component spinor formalism led him to anticipate the existence of particles that violate conservation of parity, that is, left-right symmetry. In 1929 left-right symmetry was taken for granted and considered a basic fact of all the laws of Nature. Weyl formulated the four-component Dirac spinor \(\psi\) in terms of a two-component left-handed Weyl spinor \(\psi_{L}\) and a two-component right-handed Weyl spinor \(\psi_{R}\): \[ \psi &= (\psi^{1}, \psi^{2}, \psi^{3}, \psi^{4})^{T} \\ &= (\psi^{1}_{L}, \psi^{2}_{L}, \psi^{1}_{R}, \psi^{2}_{R})^{T} \\ &= (\psi_L, \psi_R)^T \] The four-component Dirac spinor, formulated in terms of the two Weyl spinors \[ \psi = \left[\matrix{\psi_L \\ \psi_R}\right] \] preserves parity; it applies to all massive spin \(\bfrac{1}{2}\) particles (fermions) and all massive fermions are known to obey parity conservation. However, a single Weyl spinor, either \(\psi_{L} \) or \(\psi_{R}\), does not preserve parity. Weyl noted that instead of the four-component Dirac spinor “two components suffice if the requirement of left-right symmetry (parity) is dropped”. A little later he added, “the restriction 2 removes the equivalence of left and right. It is only the fact that left-right symmetry actually appears in Nature that forces us to introduce a second pair of \(\psi\)-components”. Weyl’s two-spinor version of the Dirac equation is a coupled system of equations requiring both Weyl spinors \(\psi_{L}\) and \(\psi_{R}\) in order to preserve parity. Weyl considerd massless particles in his two-spinor version of the Dirac equation. In this case, the equations of the two-spinor version of Dirac’s equation decouple, yielding an equation for \(\psi_{L}\) and for \(\psi_{R}\). These equations are independent of each other, and the equation for the 2-component left-handed Weyl spinor \(\psi_{L}\) is called Weyl’s equation; it is applicable to the massless particle called the neutrino^[104], a spin \(\bfrac{1}{2}\) particle, that was discovered in 1956. Yang (1986, 12) remarks Now I come to another piece of work of Weyl’s which dates back to 1929, and is called Weyl’s two-component neutrino theory. He invented this theory in 1929 in one of his very important articles … as a mathematical possibility satisfying most of the requirements of physics. But it was rejected by him and by subsequent physicists because it did not satisfy left-right symmetry. With the realisation that left-right symmetry was not exactly right in 1957 it became clear that this theory of Weyl’s should immediately be re-looked at. So it was and later it was verified theoretically and experimentally that this theory gave, in fact, the correct description of the neutrino. During the interval from 1924–26, in which Weyl was intensely occupied with the pure mathematics of Lie groups, the essentials of the formal apparatus of the new revolutionary theory of quantum mechanics had been completed by Heisenberg, Schrödinger and others. As if to make up for lost time, Weyl immediately returned from pure mathematics to theoretical physics, and applied his new group theoretical results to quantum mechanics. As Yang (1986, 9, 10) describes it, In the midst of Weyl’s profound research on Lie groups there occurred a great revolution in physics, namely the development of quantum mechanics. We shall perhaps never know Weyl’s initial reaction to this development, but he soon got into the act and studied the mathematical structure of the new mechanics. There resulted a paper of 1927 and later a book, this book together with Wigner’s articles and Gruppen Theorie und Ihre Anwendung auf die Quanten Mechanik der Atome were instrumental in introducing group theory into the very language of quantum mechanics. Mehra and Rechenberg (2000, 482) note in this context: “Actually, we have mentioned in previous volumes Weyl’s early reactions to both matrix mechanics (in 1925) and wave mechanics (in early 1926), and they were very enthusiastic. Therefore, we have to assume quite firmly that it was only his deep involvement with the last stages of his work on the theory of semisimple continuous groups that prevented Weyl ‘to get in the act’ immediately.” Weyl was particularly well positioned to handle some of the mathematical and foundational problems of the new theory of quantum mechanics. Almost every aspect of his mathematical expertise, in particular, his recent work on group theory and his very early work on the theory of singular differential-integral equations (1908–1911), provided him with the precise tools for solving many of the concrete problems posed by the new theory: the theory of Hilbert space, singular differential equations, eigenfunction expansions, the symmetric group, and unitary representations of Lie groups. Weyl’s (1927) paper, referred to by Yang above, is entitled Quantenmechanik und Gruppentheorie (Quantum Mechanics and Group Theory). In it, Weyl provides an analysis of the foundations of quantum mechanics and he emphasizes the fundamental role Lie groups play in that theory.^[105] Weyl begins the paper by raising two questions: (1) how do I arrive at the self-adjoint operators, which represent a given quantity of a physical system whose constitution is known, and (2), what is the physical interpretation of these operators and which physical consequences can be derived from them? Weyl suggests that while the second question has been answered by von Neumann, the first question has not yet received a satisfactory answer, and Weyl proposes to provide one with the help of group In a way, Weyl’s 1927 paper was programmatic in character; nearly all the topics of that paper were taken up again a year later in his famous book (Weyl (1928)) entitled Gruppentheorie und Quantenmechanik (The Theory of Groups and Quantum Mechanics). The book emerged from the lecture notes taken by a student named F. Bohnenblust of Weyl’s lectures given in Zürich during the winter semester 1927–28. A revised edition of that book appeared in 1931. In the preface to the first edition Weyl says: Another time I venture on stage with a book that belongs only half to my professional field of mathematics, the other half to physics. The external reason is not very different from that which led some time ago to the origin of the book Raum Zeit Materie. In the winter term 1927/28 Zürich was suddenly deprived of all theoretical physics by the simultaneous departures of Debye and Schrödinger. I tried to fill the gap by changing an already announced lecture course on group theory into one on group theory and quantum mechanics. Since I have for some years been deeply occupied with the theory of the representation of continuous groups, it appeared to me at this point to be a fitting and useful project, to provide an organically coherent account of the knowledge in this field won by mathematicians, on such a scale and in such a form, that is suitable for the requirements of quantum physics. Weyl’s book is one of the first textbooks on the new theory of quantum mechanics. As Weyl indicates in the preface it was necessary for him to include a short account of the foundation of quantum theory in order to be able to show how the theory of groups finds its application in that theory. If the book fulfils its purpose, Weyl suggests, then the reader should be able to learn from it the essentials of both the theory of groups and quantum theory. Weyl’s aim was to explain the mathematics to the physicists and the physics to the mathematicians. However, as Yang (1986, 10) points out, referring to Weyl’s book: Weyl was a mathematician and a philosopher. He liked to deal with concepts and the connection between them. His book was very famous, and was recognized as profound. Almost every theoretical physicist born before 1935 has a copy of it on his bookshelves. But very few read it: Most are not accustomed to Weyl’s concentration on the structural aspects of physics and feel uncomfortable with his emphasis on concepts. The book was just too abstract for most physicists. Weyl’s book (Weyl (1931b, 2 edn)) is remarkably complete for such an early work and covers many topics. Chapters I and III are mainly concerned with preliminary mathematical concepts. The first chapter provides an account of the theory of finite dimensional Hilbert spaces and the third chapter is an exposition of the unitary representation theory of finite groups and compact Lie groups. Chapter II is entitled Quantum Theory; it is the earliest systematic and comprehensive account of the new quantum theory. Chapter IV, entitled Application of the Theory of Groups to Quantum Mechanics , is divided into four parts. In part A, entitled The Rotation Group, Weyl provides a systematic explanatory account of the theory of atomic spectra in terms of the unitary representation theory of the rotation group, followed by a discussion of the selection and intensity rules. Part B is entitled The Lorentz Group. After discussing the spin of the electron and its role in accounting for the anomalous Zeeman effect, Weyl presents Dirac’s theory of the relativistic quantum mechanics of the electron and develops in detail the theory of an electron in a spherically symmetric field, including an analysis of the fine structure of the spectrum. In part C, entitled The Permutation Group, Weyl applies the Pauli exclusion principle to explicate the periodic table of the elements. Next, Weyl develops the second quantization of the Maxwell and Dirac fields required for the analysis of many body relativistic systems. Weyl noted in the preface to the second edition that his treatment is in accordance with the recent work of Heisenberg and Pauli. It is now customary to include such a topic under the heading of relativistic quantum field theory. The final part of Chapter IV, part D, is entitled Quantum Kinematics; it provides an exposition of part II of Weyl’s (1927) paper, mentioned earlier. Chapter V, entitled The Symmetric Permutation Group and the Algebra of Symmetric Transformations, is for the most part pure mathematics. It is widely regraded to be the most difficult part of the Weyl’s book. Overall, Weyl’s treatment is quite modern except for the confusion regarding the positive electron (anti-electron) that at that time was identified with the proton rather than with the positron, which was discovered a few years later. Weyl was quite concerned about the identification of the proton with the positive electron because his analysis of the discrete symmetries \(\mathbf{C}, \ mathbf{P}, \mathbf{T}\) and \(\mathbf{CPT}\) led him to conclude that the mass of the positive electron should equal the mass of the electron.^[106] Weyl (1931b, 2 edn) analyzed Dirac’s relativistic theory of the electron (Dirac (1928a,b)). Although this theory correctly accounted for the spin of the electron, there was however a problem because in addition to the positive-energy levels, Dirac’s theory predicted the existence of an equal number of negative-energy levels. Dirac (1930) reinterpreted the theory by assuming that all of the negative-energy levels were normally occupied. The Pauli Exclusion Principle, which asserts that it is impossible for two electrons to occupy the same quantum state, would prevent an electron with positive energy from falling into a negative- energy state. Dirac’s theory also predicted that one of the negative-energy electrons could be raised to a state of positive energy, thereby creating a ‘hole’ or unoccupied negative-energy state. Such a hole would behave like a particle with a positive energy and a positive charge, that is, like a positive electron. Because the only fundamental particles that were known to exist at that time were the electron and the proton, one was justifiably reluctant to postulate the existence of new particles that had not yet been observed experimentally; consequently, it was suggested that the positive electron should be identified with the proton. However, Weyl was quite concerned about the identification of the proton with the anti-electron. In the preface to the second German edition of his book Gruppentheorie und Quantenmechanik, Weyl (1928, 2 edn, 1931, VII) wrote The problem of the proton and the electron is discussed in connection with the symmetry properties of the quantum laws with respect to the interchange of right and left, past and future, and positive and negative electricity. At present no acceptable solution is in sight; I fear, that in the context of this problem, the clouds are rolling together to form a new, serious crisis in quantum physics. Weyl had good reasons for his concern. He analyzed the invariance of the Maxwell-Dirac equations under the discrete symmetries that correspond to the transformations now called \(\mathbf{C}, \mathbf {P}, \mathbf{T}\) and \(\mathbf{CPT}\) both for the case of relativistic quantum mechanics and for the case of relativistic quantum field theory, and concluded in both cases that the mass of the anti-electron should be the same as the mass of the electron. That the mass of the proton was so different from the mass of the electron, therefore, appeared to Weyl to constitute a new serious crisis in physics. In a lecture presented at the Centenary for Hermann Weyl held at the ETH in Zürich, Yang (1986, 10) says of the above quote from Weyl’s preface to the second edition of Gruppentheorie und This was a most remarkable passage in retrospect. The symmetry that he mentioned here, of physical laws with respect to the interchange of right and left, had been introduced by Weyl and Wigner independently into quantum physics. It was called parity conservation, denoted by the symbol \(P\). The symmetry between the past and future was something that was not well understood in 1930. It was understood later by Wigner, was called time reversal invariance, and was denoted by the symbol \(T\). The symmetry with respect to positive and negative electricity was later called charge conjugation invariance \(C\). It is a symmetry of physical laws when you change positive and negative signs of electricity. Nobody, to my knowledge, absolutely nobody in the year 1930, was in any way suspecting that these symmetries were related in any manner. I will come back to this matter later. What had prompted Weyl in 1930 to write the above passage is a great mystery to me. It would seem that Yang’s comment is misleading since it suggests that Weyl did not have a good reason for his remark. In fact, however, Weyl’s statement was firmly based on a detailed analysis of the discrete symmetries \(\mathbf{C}, \mathbf{P}, \mathbf{T}\) and \(\mathbf{CPT}\). Coleman and Korté (2001) have shown in detail that Weyl’s treatment of these symmetries is the same as that used today except for the fact that the symmetry \(\mathbf{T}\) is treated by Weyl as linear and unitary, rather than as antilinear and antiunitary. Weyl had presented in 1931 a complete analysis, in the context of the quantized Maxwell-Dirac field equations, of the discrete symmetries that are now called \(\mathbf{C}, \mathbf{P}, \mathbf{T}\) and \(\mathbf{CPT}\). His transformations \(\mathbf{C}\) and \(\mathbf{P}\) are the same as those used today. His transformations \(\mathbf{T}\) and \(\mathbf{CPT}\) are also very close to those used today except that Weyl’s transformations were linear and unitary rather than antilinear and antiunitary. Moreover, Weyl drew two very important conclusions from his analysis of these discrete symmetries. First, Weyl announced that the important question of the arrow of time had been solved because the field equations were not invariant under his time-reversal transformation \(\mathbf{T}\). Second, Weyl pointed out that the invariance of the field equations under his charge-conjugation transformation \(\mathbf{C}\) implied that the mass of the ‘anti-electron’ is necessarily the same as that of the electron; moreover, Weyl’s result is the primary reason that Dirac (1931, 61) abandoned the assignment of the proton to the role of the anti-electron. Many years later Dirac (1977, 145) recalled: Well, what was I to do with these holes? The best I could think of was that maybe the mass was not the same as the mass of the electron. After all, my primitive theory did ignore the Coulomb forces between the electrons. I did not know how to bring those into the picture, and it could be that in some obscure way these Coulomb forces would give rise to a difference in the masses. Of course, it is very hard to understand how this difference could be so big. We wanted the mass of the proton to be nearly 2000 times the mass of the electron, an enormous difference, and it was very hard to understand how it could be connected with just a sort of perturbation effect coming from Coulomb forces between the electrons. However, I did not want to abandon my theory altogether, and so I put it forward as a theory of electrons and protons. Of course I was very soon attacked on this question of the holes having different masses from the original electrons. I think the most definite attack came from Weyl, who pointed out that mathematically the holes would have to have the same mass as the electrons, and that came to be the accepted view. At another place Dirac (1971, 52–55) remarks: But still, I thought there might be something in the basic idea and so I published it as a theory of electrons and protons, and left it quite unexplained how the protons could have such a different mass from the electrons. This idea was seized upon by Herman [sic] Weyl. He said boldly that the holes had to have the same mass as the electrons. Now Weyl was a mathematician. He was not a physicist at all. He was just concerned with the mathematical consequences of an idea, working out what can be deduced from the various symmetries. And this mathematical approach led directly to the conclusion that the holes would have to have the same mass as the electrons. Weyl just published a blunt statement that the holes must have the same mass as the electrons and did not make any comments on the physical implications of this assertion. Perhaps he did not really care what the physical implications were. He was just concerned with achieving consistent mathematics. Dirac’s characterization of Weyl’s unconcern for physics seems unfair in light of Weyl’s own statement in the preface of the second edition of his book, cited earlier, where he expresses the fear “that in the context of this problem, the clouds are rolling together to form a new, serious crisis in quantum physics”; Weyl did care about the physics. Weyl’s analysis did have a significant impact on the development of the Maxwell-Dirac theory; however, as Coleman and Korté (2001) have argued, Weyl’s early analysis of the transformations \(\mathbf {C}, \mathbf{P}, \mathbf{T}\) and \(\mathbf{CPT}\) was, for the most part, lost to subsequent researchers and had to be essentially re-invented. However, it should be noted in this context that Schwinger (1988, 107–129) was greatly influenced by Weyl’s book. Schwinger makes particular reference to Weyl’s work on the discrete symmetries and says that this work “… was the starting point of my own considerations concerning the connection between spin and statistics, which culminated in what is now referred to as the TCP—or some permutation thereof—theorem”. Weyl analyzed the foundations of both the general theory of relativity and the theory of quantum mechanics. For both theories, he provided a coherent exposition of the mathematical structure of the theory, elegant characterizations of the entities and laws postulated by the theory and a lucid account of how these postulates explain the most significant, more directly observable, lower-level phenomena. In both cases, he was also concerned with the constructive aspects of the theory, that is, with the extent to which the higher-level postulates of the theory are necessary. There is no doubt that with regard to the general theory of relativity, Weyl held strong philosophical views. Some of these views are couched in a phenomenological language and reveal Husserl’s influence on Weyl. Ryckman’s (2005) study The Reign of Relativity provides an extensive account of Weyl’s orientation to Husserl’s phenomenology. On the other hand, many of Weyl’s philosophical views are couched in an unequivocal empiricist-realist language. For example, Weyl rejected Poincaré’s geometrical conventionalism and forcefully argued that the spacetime metric field is physically real, that it is a physically real structural field (Strukturfeld), which is determined by the physically real causal (conformal) structure and the physically real inertial (projective) structure or guiding field (Führungsfeld) of spacetime. He was not deterred in putting forward such ontological claims about the metric structure of spacetime despite the fact that a complete epistemologically satisfactory solution to the measurement problem for the spacetime metric field was not then available. In the same manner, Weyl forcefully advanced a field-body-relationalist ontology of spacetime structure. He argued that a Leibnizian or Einstein-Machian form of relationalism that is based on a pure body ontology, is not tenable, indeed is incoherent within the context of general relativity, and he presented a reductio argument, the plasticine example, to underscore the necessity of the existence of a physically real guiding field in addition to the existence of bodies. However, in contrast to Weyl’s many philosophical views with regard to spacetime theories, Weyl’s philosophical positions regarding the status of quantum mechanics, while not absent, are not as transparent. There are passages, such as the following ((Weyl, 1931b, 2 edn, 44), which argue for the reality of photons. The intensity of the monochromatic radiation that is used to generate the photoelectric effect has no influence on the speed with which the electrons are ejected from the metal but affects only the frequency of this process. Even with intensities so weak that on the classical theory hours would be required before the electromagnetic energy passing through a given atom would attain to an amount equal to that of a photon, the effect begins immediately, the points at which it occurs being distributed irregularly over the entire metal plate. This fact is a proof of the existence of light quanta that is no less meaningful than the flashes of light on the scintillation screen are for the corpuscular-discontinuous nature of \(\alpha\)-rays. On the other hand, Weyl’s (1931b) discussion of the problem of ‘directional quantization’ in the old quantum theory and of the way that this problem is ‘resolved’ in the new quantum theory appears to have a distinctly instrumentalist flavour. In a number of places, he describes the essence of the dilemma posed by quantum mechanics with a dispassionate precision. Consider, for example, the following (Weyl, 1931b, 2 edn, 67): Natural science has a constructive character. The phenomena with which it deals are not independent manifestations or qualities which can be read off from nature, but can only be determined by means of an indirect method, through interaction with other bodies. Their implicit definition is bound up with definite natural laws which underlie the interactions. Consider, for example, the introduction of the Galilean concept of mass which essentially comes down to the following indirect definition: “Each body possesses a momentum, that is, a vector \(m\overline{v}\) which has the same direction as its velocity \(\overline{v}\)—the scalar factor \(m\) is called its mass. The law of momentum holds, according to which the sum of the momenta before a reaction between several bodies is the same as the sum of their momenta after the reaction.” By applying this law to the observed collision phenomena, one obtains data for the determination of the relative masses. The scientific consensus was, however, that such constructive phenomena can nevertheless be attributed to the things themselves even if the manipulations, which alone can lead to their recognition, are not being carried out. In Quantum Theory we encounter a fundamental limitation to this epistemological position of the constructive natural science. It is difficult for many people to accept quantum mechanics as an ultimate theory without at the same time giving up some form of realism and adopting something like an instrumentalist view of the theory. It is clear that Weyl was fully aware of this state of affairs, and yet in all of his published work, he refrained from making any bold statements of his views on the fundamental questions about quantum reality. He did not vigorously participate in the debate between Einstein and Schrödinger and the Copenhagen School nor did he offer decisive views concerning, for example, the Einstein, Podolsky, Rosen thought experiment or Schrödinger’s Cat. Since Weyl held strong philosophical views within the context of the general theory of relativity, it is therefore only natural that one might have expected him to take a stand with respect to Schrödinger’s cat and whether or not one should be fully satisfied with a theory according to which the cat is neither alive or dead but is in a superposition of these two states. The reason for Weyl’s seeming reticence concerning the ontological/epistemological questions about quantum reality was already hinted at in note 5 of §2, where it was suggested that Weyl was not especially bothered by the counterintuitive nature of quantum mechanics because he held the view that “objective reality cannot be grasped directly, but only through the use of symbols”. Although Weyl (1948, 1949a, 1953) did express his philosophical views about quantum theory, he did so cautiously. Weyl (1949a, 263) summarizes some of the features of quantum mechanics that he considered of “paramount philosophical significance”: the measurement problem, the incompatibility of quantum physics with classical logic, quantum causality, the non-local nature of quantum mechanics, the Leibniz-Pauli Exclusion Principle^[107], and the irreducible probabilistic nature of quantum mechanics. At the end of the summary Weyl remarks: It must be admitted that the meaning of quantum physics, in spite of all its achievements, is not yet clarified as thoroughly as, for instance, the ideas underlying relativity theory. The relation of reality and observation is the central problem. We seem to need a deeper epistemological analysis of what constitutes an experiment, a measurement, and what sort of language is used to communicate its result. According to Weyl (1948, 295), both the theory of general relativity and quantum mechanics force upon us the realization that “instead of a real spatio-temporal material being what remains for us is only a construction in pure symbols”. If it is necessary, Weyl (1948, 302) says, that our scientific grasp of an objective world must not depend on sense qualities, because of their inherent subjective nature, then it is for the same reason necessary to eliminate space and time. And Descartes gave us the means to do this with his discovery of analytic geometry. As Weyl (1953, 529) observes, when Newton explained the experienced world through the movements of solid particles in space, he rejected sense qualities for the construction of the objective world, but he held on to, and used an intuitively given objective space for the construction of a real world that lies behind the appearances. It was Leibniz who recognized the phenomenal character (Phenomenalität) of space and time as consisting in the mere ordering of phenomena; however, space and time themselves do not have an independent reality. It is the freely created pure numbers, that is, pure symbols, according to Weyl, which serve as coordinates, and which provide the material with which to symbolically construct the objective world. In symbolically constructing the objective world we are forced to replace space and time through a pure arithmetical construct. Instead of spacetime points, \(n\)-tuples of pure numbers corresponding to a given coordinate system are used. Weyl (1948, 303) says: … the laws of physics are viewed as arithmetic laws between numerical values of variable magnitudes, in which spatial points and moments of time are represented through their numerical coordinates. Magnitudes such as the temperature of a body or the field strength of an electric field, which have at each spacetime point a definite value, appear as functions of four variables, the spacetime coordinates \(x, y, z, t\). In systematic theorizing we construct a formal scaffold that consists of mere symbols, according to Weyl (1948, 311), without explaining initially what the symbols for mass, charge, field strength, etc., mean; and only toward the end do we describe how the symbolic structure connects directly with experience. It is certain, that on the symbolic side, not space and time but four independent variables \(x, y, z, t\) appear; one speaks of space, as one does of sounds and colours, only on the side of conscious experience. A monochromatic light signal … has now become a mathematical formula in which a certain symbol \(F\), called electromagnetic field strength, is expressed as a pure arithmetically constructed function of four other symbols \(x, y, z, t\), called spacetime coordinates. At another place Weyl (1949a, 113) says: Intuitive space and intuitive time are thus hardly the adequate medium in which physics is to construct the external world. No less than the sense qualities must the intuitions of space and time be relinquished as its building material; they must be replaced by a four-dimensional continuum in the abstract arithmetical sense. Weyl’s point is that while space and time exist within the realm of conscious experience, or, according to Kant, as \(a\) priori forms underlying all of our conscious experiences, they are unsuited as elements with which to construct the objective world and must be replaced by means of a purely arithmetical symbolic representation. All that we are left with, according to Weyl, is symbolic construction. If this still needed any confirmation, Weyl (1948, 313) says, it was provided by the theory of relativity and quantum theory.^[108] For ease of reference we repeat a citation of Weyl (1988, 4–5) in §4.3.1: Coordinates are introduced on the Mf [manifold] in the most direct way through the mapping onto the number space, in such a way, that all coordinates, which arise through one-to-one continuous transformations, are equally possible. With this the coordinate concept breaks loose from all special constructions to which it was bound earlier in geometry. In the language of relativity this means: The coordinates are not measured, their values are not read off from real measuring rods which react in a definite way to physical fields and the metrical structure, rather they are a priori placed in the world arbitrarily, in order to characterize those physical fields including the metric structure numerically. The metric structure becomes through this, so to speak, freed from space; it becomes an existing field within the remaining structure-less space. Through this, space as form of appearance contrasts more clearly with its real content: The content is measured after the form is arbitrarily related to coordinates.^[109] The last two sentences in the above quote suggest that, (a) Weyl embraces something close to Kant’s position, according to which space and time are “a priori forms of appearances”, or that (b) Weyl adheres to a position called spacetime substantivalism, according to which, in addition to body and fields and their relations, there also exists a ‘container’, the spacetime manifold, and this manifold, its points and the manifold differential-topological relations are physically real. However, this interpretation would contradict Weyl’s basic thesis that in the symbolic construction of the objective word we are left with nothing but symbolic arithmetic functional relations. Weyl’s phrases, do not denote either a physically real container or something like Kant’s a priori form of intuition. They merely denote a conceptual or formal scaffolding, a logical space, as it were, whose points are represented by purely formal coordinates \((n\)-tuple of pure numbers). It is such a formal space which is employed by the theorist in the initial stages of constructing an objective world. To emphasize, in modelling the objective world the theorist begins by constructing a formal scaffold which consists of mere symbols and formal coordinates, without explaining initially what the symbols for mass, charge, field strength, etc., mean; only toward the end does the theorist describe how the symbolic structure connects directly with experience ((Weyl, 1948, 311)). The four-dimensional space-time continuum must be replaced by a four-dimensional coordinate space \(\mathbb{R}^{4}\). However, the sheer arbitrariness with which we assign coordinates does not affect the objective relations and features of the world itself. To the contrary, it is only relative to a symbolic construction or modelling by means of an assignment of coordinates that the state of the world, its relations and properties, can be objectively determined by means of distinct, reproducible symbols. While our immediate experiences are subjective and absolute, our symbolic construction of the objective world is of necessity relative. Weyl (1949a, 116) says: Whoever desires the absolute must take the subjectivity and egocentricity into the bargain; whoever feels drawn toward the objective faces the problem of relativity. Weyl (1949a, 75) notes, “The objectification, by elimination of the ego and its immediate life of intuition, does not fully succeed, and the coordinate system remains as the necessary residue of the ego-extinction.” However, this residue of ego involvement is subsequently rendered harmless through the principle of invariance. The transition from one admissible coordinate system to another can be mathematically described, and the natural laws and measurable quantities must be invariant under such transformations. This, Weyl (1948, 336) says, constitutes the general principle of relativity. Weyl (1949a, 104) says: … Only such relations will have objective meaning as are independent of the mapping chosen and therefore remain invariant under deformations of the map. Such a relation is, for instance, the intersection of two world lines. If we wish to characterize a special mapping or a special class of mappings, we must do so in terms of the real physical events and of the structure revealed in them. That is the content of the postulate of general relativity. According to the special theory of relativity, it is possible in particular to construct a map of the world such that (1) the world line of each mass point which is subject to no external forces appears as a straight line, and (2) the light cone issuing from an arbitrary world point is represented by a circular cone with vertical axis and a vertex angle of 90°. In this theory the inertial and causal structure and hence also the metrical structure of the world have the character of rigidity, they are absolutely fixed once and for all. It is impossible objectively, without resorting to individual exhibition, to make a narrower selection from among the ‘normal mappings’ satisfying the above conditions (1) and (2). Weyl (1949a, 115) provides an illustration, which shows how a measurement by observer \(B\) of the angular distance \(\delta\) between two stars \(\Sigma\) and \(\Sigma^{*}\) can be constructed in the four-dimensional number space, and can be expressed as an invariant.^[110] Figure 15: Measurement of the angular distance \(\delta\) by an observer \(B\) between two stars In figure 15 the stars and observer are represented by their world lines, and the past light cone \(K\) issuing from the observation event \(O\) intersects the world lines of the stars \(\Sigma\) and \(\Sigma^{*}\) in \(E\) and \(E^{*}\) respectively. The light rays emitted at \(E\) and \(E^{*}\), which arrive at the observation event \(O\), are null geodesics laying on the past light cone and are respectively denoted by \(\Lambda\) and \(\Lambda^{*}\). This construction of the numerical quantity of the angle \(\delta\) observed by \(B\) at \(O\), which is describable in the form of purely arithmetical relations, is invariant under arbitrary coordinate transformations and constitutes an objective fact of the world.^[111] On the other hand, the angles between two stars determine the objectively indescribable subjective experience of the observer. Moreover, Weyl says, “there is no difference in our experiences to which there does not correspond a difference in the underlying objective situation.” And that difference is itself invariant under arbitrary coordinate transformations. In other words, an observer’s subjective experiences supervene on the invariant relationships and structures of a symbolically constructed objective world. Perhaps no statement captures the contrast between the objective-symbolic and the subjective-intuitive more vividly then Weyl’s famous statement The objective world simply \(is\), it does not happen. Only to the gaze of my consciousness, crawling upward along the life line of my body, does a section of this world come to life as a fleeting image in space which continuously changes in time.^[112] [1908] “Singuläre Integralgleichungen mit besonderer Berücksichtigung des Fourierschen Integraltheorems”. Dissertation, Göttingen. GA I, 1–87, [1]. [1910a] “Über die Definitionen der mathematischen Grundbegriffe”. Mathematisch-naturwissenschaftliche Blätter, 7: 93–95 and 109–113. GA I, 298–304, [9]. [1910b] “Über gewöhnliche Differentialgleichungen mit Singularitäten und die zugehörigen Entwicklungen willkürlicher Funktionen”. Mathematische Annalen, 68:220–269. GA I, 248–297, [8]. [1913] Die Idee der Riemannschen Fläche. B. G. Teubner, Leipzig, 1 edition. 2 edn, B. G. Teubner, Leipzig, 1923; Reprint of 2 edn, Chelsea Co., New York, 1951; 3 edn, revised, B. G. Teubner, Leipzig, 1955. English translation of 3 edn, The Concept of a Riemann Surface, Addison-Wesley, 1964. Dover edition 2009. [1918a] “Gravitation und Elektrizität”. Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin, pages 465–480. GA II, 29–42, [31]. [1918b] “Reine Infinitesimalgeometrie”. Mathematische Zeitschrift, 2:384–411. GA II, 1–28, [30]. [1918c] Das Kontinuum. Veit & Co., Leipzig, Reprinted 1987. 2 edn, de Gryter & Co., Berlin, 1932. English translation: The Continuum: A Critical Examination of the Foundation of Analysis, translated by Stephen Pollard and Thomas Bole, Thomas Jefferson University Press: 1987. Corrected re-publication, Dover 1994. [1918d] “Letter to Einstein, Zürich, 10 December”. In Robert Schulmann, A. J. Kox, Michel Janssen, and József Illy, editors, The Collected Papers of Albert Einstein: The Berlin Years: Correspondence, 1914–1918, volume 8, Part B. Princeton University Press, 1918/1998. [1919a] “Eine neue Erweiterung der Relativitätstheorie”. Annalen der Physik, 59:101–133. GA II, 55–87, [34]. [1919b] “Über die statischen kugelsymmetrischen Lösungen von Einsteins “kosmologischen” Gravitationsgleichungen”. Physikalische Zeitschrift, 20:31–34. GA II, 51–54, [33]. [1920] “Das Verhältnis der kausalen zur statistischen Betrachtungsweise in der Physik”. Schweizerische Medizinische Wochenschrift, 50:737–741. GA II, 113–122, [38]. [1921a] “Electricity and gravitation”. Nature, 106:800–802. GA II, 260–262, [48]. [1921b] “Feld und Materie”. Annalen der Physik, 65:541–563. GA II, 237–259, [47]. [1921c] “Zur Infinitesimalgeometrie: Einordnung der projektiven und konformen Auffassung”. Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen; Mathematisch-physikalische Klasse, pages 99–112. GA II, 195–207, [43]. [1921] “Über die neue Grundlagenkrise der Mathematik”. Mathematische Zeitschrift, 10:39–79, Reprinted 1998. GA II, 143–180, [41]. Reprinted by Wissenschaftliche Buchgesellschaft, Darmstadt, 1965. English translation On the New Foundational Crisis in Mathematics in Mancosu (1998), 86–122. [1922a] “Das Raumproblem”. Jahresbericht der Deutschen Mathematikervereinigung, 31:205–221. GA II, 328–344, [53]. [1922b] “Die Einzigartigkeit der Pythagoreischen Maßbestimmung”. Mathematische Zeitschrift, 12:114–146. GA II, 263–295, [49]. [1923a] Mathematische Analyse Des Raumproblems. J. Springer, Berlin. [1923b] Raum, Zeit, Materie. J. Springer, Berlin. 3 edn, essentially revised, J. Springer, Berlin 1919; 4 edn, essentially revised, J. Springer, Berlin 1921; 5 edn, revised, J. Springer, Berlin, 1923; 7 edn, edited (with notes) by J. Ehlers, Springer, Berlin 1988; Temps, espace, matière (from the 4th German edn), A. Blanchard, Paris, 1922; Space, Time, Matter, (from the 4th German edn), Methuen, London, 1922. [1923c] “Zur allgemeinen Relativitätstheorie”. Physikalische Zeitschrift, 24:230–232. GA II, 375–377, [56]. [1924a] “Das gruppentheoretische Fundament der Tensorrechnung”. Nachrichten der Gesellschaft der Wissenschaften zu Göttingen. Mathematisch-physikalische Klasse, pages 218–224. GA II, 461–467, [62]. [1924b] “Massenträgheit und Kosmos. Ein Dialog”. Die Naturwissenschaften, 12:197–204. GA II, 478–485, [65]. [1924c] “Randbemerkungen zu Hauptproblemen der Mathematik”. Mathematische Zeitschrift, 20:131–150. GA II, 433–452, [60]. [1924d] “Über die Symmetrie der Tensoren und die Tragwite der symbolischen Methode in der Invariantentheorie”. Rendiconti del Circolo Matematico di Palermo, pages 29–36. GA II, 468–475, [63]. [1924e] “Was ist Materie?” Die Naturwissenschaften, 12:561–568, 585–593, and 604–611. GA II, 486–510, [66]. Reprinted by J. Springer, Berlin, 1924 and Wissenschaftliche Buchgesellschaft, Darmstadt, [1924f] “Zur Theorie der Darstellung der einfachen kontinuierlichen Gruppen”. (Aus einem Schreiben an Herrn I. Schur). Sitzungsberichte der Preußischen Akademie der Wissenschaften zu Berlin, pages 338–345. GA II, 453–460, [61]. [1925a] “Theorie der Darstellung kontinuierlicher halbeinfacher Gruppen durch lineare Transformationen I”. Mathematische Zeitschrift, 23:271–309. GA II, 543–579, [68]. [1925b] “Die heutige Erkenntnislage in der Mathematik”. Symposion, 1:1–32, Reprinted 1998. GA II, 511–542, [67]. English translation On the Current Epistemological Situation in Mathematics, in Mancosu (1998), 123–142. [1926a] “Theorie der Darstellung kontinuierlicher halbeinfacher Gruppen durch lineare Transformationen II”. Mathematische Zeitschrift, 24:328–376. GA II, 580–605, [68]. [1926b] “Theorie der Darstellung kontinuierlicher halbeinfacher Gruppen durch lineare Transformationen III”. Mathematische Zeitschrift, 24:377–395. GA II, 606–645, [68]. [1926c] “Theorie der Darstellung kontinuierlicher halbeinfacher Gruppen durch lineare Transformationen (Nachtrag)”. Mathematische Zeitschrift, 24:789–791. GA II, 645–647, [68]. [1926d] “Universe, modern conceptions of”. In The Encyclopedia Britannica, pages 908–911. 13 edition. [1927] “Quantenmechanik und Gruppentheorie”. Zeitschrift für Physik, 46:1–46. GA III, 90–135, [75]. [1928a] Gruppentheorie und Quantenmechanik. S. Hirzel, Leipzig. [1928b] “Diskussionsbemerkungen zu dem zweiten Hilbertschen Vortrag über die Grundlagen der Mathematik”. Abhandlungen aus dem mathematischen Seminar der Hamburgischen Universität, 6:86–88, Reprinted 1967. GA III, 147–149, [77]. English translation Comments on Hilbert’s second lecture on the foundations of mathematics in van Heijenoort (1967), 480–484. [1929a] “Consistency in mathematics”. The Rice Institute Pamphlet, 16:245–265. GA III, 150–170, [78]. [1929b] “Elektron und Gravitation”. Zeitschrift für Physik, 56:330–352. GA III, 245–267, [85]. [1929c] “Gravitation and the electron”. The Rice Institute Pamphlet, 16:280–295. GA III, 229–244, [84]. [1929d] “Gravitation and the electron”. Proceedings of the National Academy of Sciences of the United States of America, 15:323–334. GA III, 217–228, [83]. [1929e] “On the foundations of infinitesimal geometry”. Bulletin of the American Mathematical Society, 35:716–725. Reprinted in Weyl (1968). [1930] “Redshift and relativistic cosmology”. The London, Edinburgh and Dublin philosophical Magazine and Journal of Science, 9:936–943. GA III, 300–307, [89]. [1931a] “Geometrie und Physik”. Die Naturwissenschaften, 19:49–58. GA III, 336–345, [93]. [1931b] Gruppentheorie und Quantenmechanik. S. Hirzel, Leipzig. (a) 2nd reworked edition, S. Hirzel, Leipzig 1931. (b) English translation: The Theory of groups and quantum mechanics, Dutten, New York, 1932. (c) Reprinting of (b): Dover Publications, New York, 1949. [1932] The Open World: Three Lectures on the Metaphysical Implications of Science. Yale University Press. [1934a] Mind and Nature. University of Pennsylvania Press. [1934b] “Universum und Atom”. Die Naturwissenschaften, 22:145–149. GA III, 420–424, [101]. [1938a] “Cartan on groups and differential geometry”. Bulletin of the American Mathematical Society, 44:598–601. [1938b] Symmetry. Journal of the Washington Academy of Sciences, 28:253–271. GA III, 592–610, [111]. [1939] The classical groups, their invariants and representations. Princeton University Press; Oxford University Press; H. Milford, London. 2 edn, Princeton University Press; Oxford University Press; H. Milford, London, 1946. [1946] “Mathematics and logic. A brief survey serving as a preface to a review of ‘The Philosophy of Bertrand Russell’”. The American Mathematical Monthly, 53:2–13. GA IV, 268–279, [138]. [1948] “Wissenschaft als symbolische Konstruktion des Menschen”. Eranos-Jahrbuch, pages 375–431. GA IV, 289–345, [142]. [1949a] Philosophy of Mathematics and Natural Science. Princeton University Press, 1 edition. 2 edn, Princeton University Press, 1950. 2009 edition, with a new introduction by Frank Wilczek. An expanded English version of Philosophie der Mathematik und Naturwissenschaft, München, Leibniz Verlag, 1927. [1949b] “Relativity theory as a stimulus in mathematical research”. Proceedings of the American Philosophical Society, 93: 535–541. GA IV, 394–400, [147]. [1950] Space-Time-Matter. Dover, New York. English translation of the 4th edition (1921) of Raum-Zeit-Materie by Henry L. Brose. [1952] Symmetry. Princeton University Press, Princeton. [1953] “Über den Symbolismus der Mathematik und mathematischen Physik”. Studium generale, 6:219–238. GA IV, 527–536, [156]. [1954a] “Address on the unity of knowledge”. GA IV, 623–649, [165]. Address delivered at the Bicentennial Conference of Columbia University. [1954b] “Erkenntnis und Besinnung (Ein Lebensrückblick)”. Studia Philosophica, 1954/1955. GA IV, 631–649, [166]. A talk given at the University of Lausanne, May 1954. English translation Insight and Reflection in T. L. Saaty and F. J. Weyl, eds., The Spirit and Uses of the Mathematical Sciences, 281–301, New York, McGraw-Hill, 1955. [1968] Gesammelte Abhandlungen, volume I–IV. Springer Verlag, Berlin. Edited by K. Chandrasekharan. [1985] “Axiomatic versus constructive procedures in mathematics”. The Mathematical Intelligencer, 7(4):12–17. A posthumous publication, edited by Tito Tonietti. [1988] Riemanns geometrische Ideen, ihre Auswirkung und ihre Verknüpfung mit der Gruppentheorie. Springer-Verlag. Posthumous publication; edited by K. Chandrasekharan. [2009] Mind and Nature: Selected Writings on Philosophy, Mathematics and Physics, Princeton University Press. Edited and with an introduction by Peter Pesic. • Barbour, J. and Pfister, H. (eds.). 1995. Mach’s Principle: From Newton’s Bucket to Quantum Gravity, volume 6 of Einstein Studies. Birkäuser, Basel. • Barbour, J. 2001. The Discovery of Dynamics. Oxford University Press, New York. • Becker, O. 1973. Beiträge zur phänomenologischen Begründung der Geometrie und ihrer physikalischen Anwendung. Max Niemeyer Verlag, Tübingen. • Bell, J. L. 2000. “Hermann Weyl on intuition and the continuum,” Philosophia Mathematica, 8: 259–273. • Bell, J. L. 2004. “Hermann Weyl’s later philosophical views: his divergence from Husserl,” In Feist [2004a], 173–185. • Bohr, N. and Rosenfeld. L. 1933. “Zur Frage der Messbarkeit der elektromagnetischen Feldgrössen”. Kgl. Danske Videnshab. Selskb, Mat.-Phys. Medd, 12(8). • Bohr, N. and Rosenfeld, L. 1950. “Field and charge measurements in qunatum electrodynamics”. Phys. Rev., 78:794–798. • Bondi, H. 1960. Cosmology. Cambridge University Press, London. • Brading, K. A. 2002. “Which symmetry? Noether, Weyl, and the conservation of charge”. Studies in the History and Philosophy of Modern Physics, 33:3–22. • Brading, K. and Brown, H. R. 2003. “Symmetries and Noether’s theorems”. In Katherine Brading and Elena Castellani, editors, Symmetries in Physics: Philosophical Reflections, chapter 5, pages 89–109. Cambridge University Press. • Brauer, R. and Weyl, H. 1935. “Spinors in \(n\) dimensions”. American Journal of Mathematics, 57:425–449. • Brown, H. R. 2005. Physical Relativity: Space-time Structure from a Dynamical Perspective. Clarendon Press, Oxford, New York. • Cao, T. N. 1997. Conceptual Developments of Twentieth Century Field Theories. Cambridge University Press, Cambridge. • Carnap, R. 1963. “Intellectual autobiography”. In Paul Arthur Schilpp, editor, The Philosophy of Rudolf Carnap, volume XI of The Library of Living Philosophers, pages 1–84. Open Court, La Salle, • Cartan, E. 1922. “Sur un théorème fondamental de M. H. Weyl dan la théorie de l’espace métrique”. Comptes Rendus de l’Academie des Sciences, 175:82–85. Reprinted in (Cartan, 1952–1955, 3.1, • Cartan, E, 1923a. “Sur les variétés à connexion affine et la théorie de la relativité générallisée”. Annales de l’ École Normale Supérieure, 40:325–412. Reprinted in (Cartan, 1952–1955, 3.1, • Cartan, E. 1923b. “Sur un théorème fondamental de M. H. Weyl”. Journal de Mathématique, II(2):167–192. Reprinted in (Cartan, 1952–1955, 3.1, 633–658). • Cartan, E. 1937. La théorie des groupes finis et continus et la géométrie différentielle traitées par la méthode du repère mobile. Cahiers scientifiques 18. Gauthier-Villars, Paris. • Cartan, E. 1952–1955. Oeuvres Complètes, volume 1–3. Gauthiers-Villars, Paris. • Chern, Shiing-Shen. 1996. “Finsler geometry is just Riemannian geometry without the quadratic restriction”. Notices of the American Mathematical Society, 43(9):959–963, September. • Coleman, R.A. and Korté, H. 1980. “Jet bundles and path structures”. The Journal of Mathematical Physics, 21(6):1340–1351. • Coleman, R.A. and Korté, H. 1981. “Spacetime G-structures and their prolongations”. The Journal of Mathematical Physics, 22 (11):2598–2611. • Coleman, R.A. and Korté, H. 1982. “The status and meaning of the laws of inertia”. In The Proceedings of the Biennial Meeting of the Philosophy of Science Association, pages 257–274, • Coleman, R.A. and Korté, H. 1984. “Constraints on the nature of inertial motion arising from the universality of free fall and the conformal causal structure of spacetime”. The Journal of Mathematical Physics, 25(12):3513–3526. • Coleman, R.A. and Korté, H. 1987. “Any physical, monopole, equation-of-motion structure uniquely determines a projective inertial structure and an \((n - 1)\)-force”. The Journal of Mathematical Physics, 28(7):1492–1498. • Coleman, R.A. and Korté, H. 1989. “All directing fields that are polynomial in the \((n - 1)\)-velocity are geodesic”. The Journal of Mathematical Physics, 30(5):1030–1033. • Coleman, R.A. and Korté, H. 1990. “Harmonic analysis of directing fields”. The Journal of Mathematical Physics, 31(1):127–130. • Coleman, R.A. and Korté, H. 1995. “A new semantics for the epistemology of geometry I, Modeling spacetime structure”. Erkenntnis, 42:141–160. • Coleman, R.A. and Korté, H. 2001. “Hermann Weyl: Mathematician, Physicist, Philosopher”. In Erhard Scholz, editor, Hermann Weyl’s Raum – Zeit – Materie and a General Introduction to His Scientific Work, volume 30 of Deutsche Mathematiker-Vereinigung Seminar, pages 161–386. Birkhäuser, Basel. • Da Silva, J. J. 1997. “Husserl’s phenomenology and Weyl’s predicativism,” Synthese, 110: 277–296. • Dirac, P. 1928a. “The quantum theory of the electron I”. Proceedings of the Royal Society (London) A, 117:610–624, February. • Dirac, P. 1928b. “The quantum theory of the electron II”. Proceedings of the Royal Society (London) A, 118:351–361, March. • Dirac, P. 1930. “A theory of electrons and protons”. Proceedings of the Royal Society (London) A, 126:360–365, January. • Dirac, P. 1931. “Quantised singularities in the electromagnetic field”. Proceedings of the Royal Society (London) A, 133: 60–72, September. • Dirac, P. 1973. “Long range forces and broken symmetries”. Proceedings of the Royal Society, 333A:403–418. • Dirac, P. 1977. “Recollections of an exciting era”. In C. Weiner, editor, History of Twentieth Century Physics, volume LVII of Proceedings of the International School of Physics “Enrico Fermi”, pages 109–146. Italian Physical Society, Academic Press. The summer school on the history of twentieth century of physics took place from July 31 to August 12. • Dirac, P. 1971. The Development of Quantum Theory. (J. Robert Oppenheimer Memorial Prize Acceptance Speech) Gordon and Breach Science Publishers, New York. • DiSalle, Robert. 2006. Understanding Space-Time: The Philosophical Development of Physics from Newton to Einstein. Cambridge University Press, Cambridge. • Dyson, J.D. 1983. Unfashionable pursuits. Alexander von Humboldt Stiftung Mitteilung, 41:12–18. • Eddington, A. 1933. The Expanding Universe. Cambridge University Press, Cambridge. • Ehlers, J. and Köhler, E. 1977. “Path structures on manifolds”. The Journal of Mathematical Physics, 18(10):2014–2018. • Ehlers, J., Pirani, R. A. E., and Schild, A. 1972. “The geometry of free fall and light propagation”. In L. O’ Raifeartaigh, editor, General Relativity, Papers in Honour of J. L. Synge, pages 64–84. Clarendon Press, Oxford. • Ehlers, J. 1988. “Hermann Weyl’s contributions to the general theory of relativity”. In Wolfgang Deppert and Kurt Hübner, editors, Exact Sciences and Their Philosophical Foundations: Exakte Wissenschaften und ihre philosophische Grundlegung. Vorträge des internationalen Hermann-Weyl-Kongresses, pages 83–105. Peter Lang Verlag, Frankfurt/M - Bern. • Ehresmann, C. 1951a. “Les prolongements d’ une variété différentiable I”. Comptes rendus des séances de l’ Académie des Sciences, 233:598–600. Reprinted in (Ehresmann, 1983, pp. 343–345). • Ehresmann, C. 1951b. “Les prolongements d’ une variété différentiable II”. Comptes rendus des séances de l’ Académie des Sciences, 233:777–779. Reprinted in (Ehresmann, 1983, pp. 346–348). • Ehresmann, C. 1951c. Les prolongements d’ une variété différentiable III. Comptes rendus des séances de l’ Académie des Sciences, 233:1081–1083. Reprinted in (Ehresmann, 1983, pp. 349–351). • Ehresmann, C. 1952a. “Les prolongements d’ une variété différentiable IV”. Comptes rendus des séances de l’ Académie des Sciences, 234:1028–1030. Reprinted in (Ehresmann, 1983, pp. 355–357). • Ehresmann, C. 1952b. “Les prolongements d’ une variété différentiable V”. Comptes rendus des séances de l’ Académie des Sciences, 234:1424–1425. Reprinted in (Ehresmann, 1983, pp. 358–360). • Ehresmann, C. 1983. “Charles Ehresmann œuvres complètes et commentées”. In A. C. Ehresmann, editor, Topologie Algébrique et Géométrie Différentielle, number Suppléments #1 et #2 of Vol. 24 in Cahiers de Topologie et Géométrie Différentielle. Evrard, Amiens. • Einstein, A. 1916. “Die Grundlage der allgemeinen Relativitätstheorie”. Annalen der Physik, 49(7):769–822. English translation “The Foundation of the General Theory of Relativity” in Lorentz et al. (1952). • Einstein, A. 1917. “Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie”. Königlich Preußische Akademie der Wissenschaften, pages 142–152, February. • Einstein, A. 1949. “Autobiographical notes”. In P. A. Schilpp, editor, Albert Einstein: Philosopher-Scientist, volume 7 of The Library of Living Philosophers, pages 1–96. Open Court, La Salle, 3 edition. 1970 edition. • Einstein, A. 1928. “Riemann-Geometrie mit Aufrechterhaltung des Begriffes des Fernparallelismus”. Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-Mathematische Klasse, • Einstein, A. 1954. “What is the theory of relativity”. In Ideas and Opinions, pages 227–232. Bonanza Books, New York. • Einstein, A. and Infeld, L. 1938. The Evolution of Physics. Simon & Schuster, New York. • Ewald, W. 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, Volume 2, Oxford: Clarendon Press. • Feferman, S., 1988. “Weyl vindicated: Das Kontinuum 70 years later,” in Temi e prospettive della logica e della filosofia della scienza contemporanee, I, Bologna (1988), 59–93. Reprinted in Feferman, S. In the Light of Logic, Oxford University Press, New York (1998), 249–283. • Feferman, S. 2000. “The significance of Hermann Weyl’s Das Kontinuum,” in Hendricks et al, (eds.), Proof Theory, Dordrecht: Kluwer. • Feist, R., 2002. “Weyl’s appropriation of Husserl’s and Poincaré’s thought,” Synthese, 132: 273–301. • Feist, R. (ed.), 2004a. Husserl and the Sciences, University of Ottawa Press. • Feist, R. 2004b. “Husserl and Weyl: phenomenology, mathematics, and physics,” In Feist [2004a], 153–172. • Fock, V. 1964. The Theory of Space, Time and Gravitation. The MacMillan Company, New York. • Fock, V. 1926. “Über die invariante Form der Wellen-und Bewegungsgleichungen für einen geladenen Massepunkt”. Zeitschrift für Physik, 39:226–232. • Folina, J. 2008. “Intuition between the analytic-continental divide: Hermann Weyl’s philosophy of the continuum,” Philosophia Mathematica, 16: 25–55. • Frankel, T. 1997. The Geometry of Physics. Cambridge University Press. • Frei, G. and Stammbach, U. 1992. H. Weyl und die Mathematik an der ETH Zürich 1913–1930. Birkhäuser Verlag, Basel. • Goodman, R. 2008. “Harmonic analysis on compact symmetric spaces: the legacy of Elie Cartan and Hermann Weyl”. In Katrin Tent, editor, Groups and Analysis, volume 354 of London Mathematical Society Lecture Note Series, chapter 1, pages 1–23. Cambridge, Cambridge. • Grünbaum, A. 1973. Philosophical Problems of Space and Time, volume XII of Boston Studies in the Philosophy of Science. Reidel, Dordrecht, 2 edition. • Hardy, G.H. 1967. A Mathematician’s Apology. Cambridge University Press. • Hawkins, T. 1998. “From general relativity to group representations: the background to Weyl’s papers of 1925–26”. Société Mathématique de France, pages 69–100. • Hawkins, T. 2000. Emergence of the Theory of Lie Groups. Sources and Studies in the History of Mathematics and Physical Sciences. Springer. • Hilbert, D. 1902. “Über die Grundlagen der Geometrie”. Math. Ann., 56:381–422. • Hilbert, D. 1922. “The new grounding of mathematics: first report,” translated from German original in Ewald [1996], 1117–1134. • Hilbert, D. 1926/1967. “Über das Unendliche”. Mathematische Annalen 95, 161–190. English translation “On the Infinite.” in van Heijenoort [1981], 369–392. • Hilbert, D. 1927/1967. “Die Grundlagen der Mathematik”. English translation The Foundations of Mathematics in van Heijenoort (1967), 464–479. • Husserl, E. 1931. Ideas: General Introduction to Pure Phenomenology, Tr. W.R. Boyce Gibson. New York: Collier Books. Fourth Printing, 1972. • Jackson, J.D. and Okun, L.B. 2001. “Historical roots of gauge invariance”. Reviews of Modern Physics, 73:663–680. • Kerszberg, Pierre. 1989. The Invented Universe: The Einstein-De Sitter Controversy (1916–17) and the Rise of Relativistic Cosmology. Clarendon Press, Oxford. • Kerszberg, Pierre. 2007. “Perseverance and adjustment: On Weyl’s phenomenological philosophy of nature”. In Luciano Boi, Pierre Kerszberg, and Frédéric Patras, editors, Rediscovering Phenomenology: Phenomenological Essays of Mathematical Beings, Physical Reality, Perception and Conscioussness, pages 173–194. Springer, Dordrecht. • Klein, Felix 1872/1921. “Vergleichende Betrachtungen über neuere geometrische Forschungen”. In R. Fricke and A. Ostrowski, editors, Felix Klein Gesammelte Mathematische Abhandlungen, volume 1. Springer, Berlin. (This is Klein’s inauguration paper upon his appointment to a professorship at Erlangen in 1872; it was originally published in 1872). [Available online]. • Kragh, Helge. 1990. Dirac A Scientific Biography. Cambridge University Press. • Kragh, Helge. 1996. Cosmology and Controversy: The Historical Development of Two Theories of the Universe. Princeton University Press, Princeton, New Jersey. • Laugwitz, Detlef. 1958. “Über eine Vermutung von Hermann Weyl zum Raumproblem”. Archiv der Mathematik, IX:128–133. • Lie, S. 1886/1935. “Bemerkungen zu v. Helmholtzs Arbeit: Ueber die Tatsachen, die der Geometrie zu Grunde liegen”. In Lie, Gesammelte Abhandlungen, volume II, pages 374–379. Teubner, Leipzig. Originally published in Berichte über die Abhandlungen der Kgl. Sächsischen Gesellschaft der Wissenschaften in Leipzig, Math.-Phys. Klasse, Supplement, abgeliefert am 21.2.1887, pp. 337–342. • Lie, S. 1890a. “Über die Grundlagen der Geometrie (Erste Abhandlung)”. Berichte über d. Verh. d. Sächsischen Gesell. der Wiss., math.-phys. Klasse, pages 284–321. • Lie, S. 1890b. “Über die Grundlagen der Geometrie (Zweite Abhandlung)”. Berichte über d. Verh. d. Sächsischen Gesell. der Wiss., pages 355–418. • London, Fritz 1927. “Quantenmechanische Deutung der Theorie von Weyl”. Zeitschrift für Physik, 42:375–389. • Lorentz, H.A., Einstein, A., Minkowski, H., and Weyl, H. 1923/1952. The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity. Dover Publications, Inc., New York, 1952. Translated from the third and enlarged German edition of 1923 “Das Relativitätsprinzip, eine Sammlung von Abhandlungen” (Leibzig: Teubner) by W. Perrett and G. B. Jeffrey. • Mackey, G.W. 1988. “Hermann Weyl and the application of group theory to quantum mechanics”. In Wolfgang Deppert and Kurt Hübner, editors, Exact Sciences and Their Philosophical Foundations: Exakte Wissenschaften und ihre philosophische Grundlegung. Vorträge des internationalen Hermann-Weyl-Kongresses, pages 131–159. Peter Lang Verlag, Frankfurt/M - Bern. • Mancosu, P., 1998. From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920s, Oxford: Clarendon Press. • Mancosu, P. and Ryckman, T. 2002. “Mathematics and phenomenology: the correspondence between O. Becker and H. Weyl”. Philosophia Mathematica 3, 10:130–202. • Marzke, R.F. and Wheeler, J.A. 1964. “Gravitation as geometry, I: The geometry of space-time and the geometrical standard meter”. In Hong-Yee Chiu and W. F. Hoffmann, editors, Gravitation and Relativity, pages 40–64. Benjamin, Amsterdam. • Mehra, J. and Rechenberg, H. 2000. The Historical Development of Quantum Theory, volume 6, Part 1. Springer. • Mielke, E. and Hehl, F. 1988. “Die Entwickelung der Eichtheorien: Marginalien zu deren Wissenschaftsgeschichte”. In Exact Sciences and Their Philosophical Foundations: Exakte Wissenschaften und ihre philosophische Grundlegung. Vorträge des internationalen Hermann-Weyl-Kongresses, pages 191–231. Peter Lang Verlag, Frankfurt/M - Bern. • Misner, C.W., Thorne, K.S. and Wheeler, J.A. 1973. Gravitation. W. H. Freeman, San Francisco. • Moriyasu, K. 1983. An Elementary Primer For Gauge Theory. World Scientific, Singapore. • Muller, F.A. and Saunders, Simon. 2008. “Discerning fermions”. British Journal for the Philosophy of Science, 59:499–548. • Narlikar, Jayant Vishnu. 2002 . An Introduction to Cosmology. Cambridge, Cambridge, 3 edition. • Noether, Emmy. 1918. “Invariante Variationsprobleme”. Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen; Mathematisch-physikalische Klasse, pages 235–257. • North, J.D. 1965. The Measure of the Universe. Dover Publications, New York. • Norton, John D., 1999. “Geometries in collision: Einstein, Klein and Riemann”. In Jeremy J. Gray, editor, The Symbolic Universe; Geometry and Physics 1890–1930. Oxford University Press. • O’Raifeartaigh, L. and Straumann, N. 2000. “Gauge theory: origins and modern developments”. Rev. Mod. Phys., 72(1). • O’Raifeartaigh, Lochlainn 1997.. The Dawning of Gauge Theory. Princeton Series in Physics. Princeton University Press, Princeton. • Pauli, W. 1921/1958. Relativitätstheorie, volume 19 of Encyklopädie der mathematischen Wissenschaften. B. G. Teubner, Leipzig. English translation; 1958 Pergamon Press, Ltd. • Penrose, Roger. 2004. Hermann Weyl’s neighborhood. Vintage, London. • Pesic, Peter. 2013. “Hermann Weyl’s neighborhood”. Studis in History and Philosophy of Science, 44, 150–153. • Raman, V.V. and Forman, P. 1969. “Why was it Schrödinger who developed de Broglie’s Ideas?” In Russell McCormmach, editor, Historical Studies in the Physical Sciences, pages 291–314.. University of Pennsylvania Press. • Reichenbach, Hans. 1924. Axiomatik der relativistischen Raum-Zeit-Lehre. Vieweg, Braunschweig. Reprinted in Hans Reichenbach Gesammelte Werke, volume 3, edited by Andreas Kamlah and Maria • Reid, C. 1986. Hilbert-Courant. New York: Springer-Verlag. • Riemann, Bernhard. 1854. “Ueber die Hypothesen, welche der Geometrie zu Grunde liegen”. Abhandlungen der Königlichen Gesellschaft der Wissenschaften zu Göttingen, 13. Reproduced in Riemann • Riemann, Bernhard. 1876/1953. Gesammelte Mathematische Werke. Dover, New York, 2 edition. Edited by Heinrich Weber with the assistance of Richard Dedekind; with a supplement edited by M. Noether and W. Wirtinger and with a new introduction by Professor Hans Lewy. • Ryckman, Thomas. 1994. “Weyl, Reichenbach and the epistemology of geometry”. Studies in the History and Philosophy of Modern Physics, 25:831–870. • Ryckman, Thomas. 1996. “Einstein agonists: Weyl and Reichenbach on geometry and the general theory of relativivty”. In Ronald N. Giere and Alan W. Richardson, editors, Origins of Logical Empiricism, volume XVI of Minnesota Studies in the Philosophy of Science, pages 165–209. University of Minnesota Press. • Ryckman, Thomas. 2003. “The philosophical roots of the gauge principle: Weyl and transcendental phenomenological idealism”. In Katherine Brading and Elena Castellani, editors, Symmetries in Physics: Philosophical Reflections. Cambridge. • Ryckman, Thomas. 2005. The Reign of Relativity: Philosophy in Physics 1915–1925. Oxford Studies in Philosophy of Science. Oxford University Press. • Ryckman, Thomas. 2009. “Hermann Weyl and “First Philosophy”: Constituting gauge invariance”. In Michel Bitbol, Pierre Kerszberg, and Jean Petitot, editors, Constituting Objectivity: Transcendental Perspectives on Modern Physics, The Western Ontario Series in Philosophy of Science, pages 281–298. Springer. • Salmon, Wesley C. 1977. “The curvature of physical space”. In J. Earman, C. N. Glymour, and J. J. Stachel, editors, Foundations of Space-Time Theories, volume VIII of Minnesota Studies in the Philosophy of Science, pages 281–302. University of Minnesota Press, Minneapolis. • Scheibe, Erhard. 1957 . “Über das Weylsche Raumproblem”. Journal für Mathematik, 197(3/4):162–207. (Dissertation Göttingen 1955). • Scheibe, Erhard. 1988. “Hermann Weyl and the nature of spacetime”. In Wolfgang Deppert and Kurt Hübner, editors, Exact Sciences and Their Philosophical Foundations: Exakte Wissenschaften und ihre philosophische Grundlegung. Vorträge des internationalen Hermann-Weyl-Kongresses, pages 61–82. Peter Lang Verlag, Frankfurt/M - Bern. • Scholz, Erhard. 1992. “Riemann’s vision of a new approach to geometry”. In D. Flament L. Boi and J.-M. Salanskis, editors, 1830–1930: A Century of Geometry, volume 402 of Lecture Notes in Physics , pages 22–34. Springer Verlag, Berlin. • Scholz, Erhard. 1999a. “Weyl and the theory of connections”. In Jeremy Gray, editor, The Symbolic Universe: Geometry and Physics 1890–1930, pages 260–284. Oxford University Press, Oxford. • Scholz, Erhard. 1999b. “Weyl and the theory of connections”. In Jeremy Gray, editor, The Symbolical Universe, pages 260–284. Oxford University Press. • Scholz, Erhard. 2001. “Weyl’s infinitesimalgeometrie, 1917–1925”. In Erhard Scholz, editor, Hermann Weyl’s Raum – Zeit – Materie and a General Introduction to His Scientific Work, pages 48–104. Birkäuser, Basel. • Scholz, Erhard. 2004. “Hermann Weyl’s Analysis of the “Problem of Space” and the Origin of Gauge Structures”. Science in Context, 17:165–197. • Scholz, Erhard. 2005. “Local spinor structures in V. Fock’s and H. Weyl’s work on the Dirac equation (1929)”. In Joseph Kouneiher, Philippe Nabonnand, and Jean-Jacques Szczeciniarz, editors, Géométrie au XXe siècle, 1930–2000: histoire et horizons, pages 284–301. Presses internationales Polytechnique. • Scholz, Erhard. 2006. “Introducing groups into quantum theory (1926–1930)”. Historia Mathematica, 33(4):440–490, November. • Schrödinger, Erwin. 1922. “Über eine bemerkenswerte Eigenschaft der Quantenbahnen eines einzelnen Elektrons”. Zeitschrift für Physik, 12:13–23. • Schwinger, Julian. 1988. “Hermann Weyl and quantum kinematics”. In Wolfgang Deppert and Kurt Hübner, editors, Exact Sciences and Their Philosophical Foundations: Exakte Wissenschaften und ihre philosophische Grundlegung. Vorträge des internationalen Hermann-Weyl-Kongresses, pages 107–129. Peter Lang Verlag, Frankfurt/M - Bern. • Sharpe, R.W. 1997. Differential Geometry; Cartan’s Generalization of Klein’s Erlangen Program. Graduate Texts in Mathematics. Springer Verlag. • Sieroka, Norman. 2006. “Weyl’s ‘agens theory’ of matter and the Zurich Fichte”. Studies in History and Philosophy of Science, 38: 84–107. • Sieroka, Norman. 2010.Umgebungen:Symboliscer Konstruktivismus im Anschluss an Hermann Weylund Fritz Medicus. Chronos Verlag, Zurich. • Sigurdsson, Skúli. 1991. Hermann Weyl, Mathematics and Physics, 1900–1927. Ph.D., Harvard University, Cambridge, Massachusetts. Department of the History of Science. • Sigurdsson, Skúli. 2001. “Journeys in spacetime”. In Erhard Scholz, editor, Hermann Weyl’s Raum – Zeit – Materie and a General Introduction to His Scientific Work, volume 30 of Deutsche Mathematiker-Vereinigung Seminar, pages 15–47. Birkhäuser, Basel. • Sklar, L. 1974. Space, Time, and Spacetime. University of California Press, Berkeley. • Sklar, L. 1977. “Facts, conventions and assumptions”. In J. Earman, C. N. Glymour, and J. J. Stachel, editors, Foundations of Space-Time Theories, volume VIII of Minnesota Studies in the Philosophy of Science, pages 206–274. University of Minnesota Press, Minneapolis. • Speiser, David. 1988. “Gruppentheorie und Quantenmechanik: the book and its position in Weyl’s work”. In Wolfgang Deppert and Kurt Hübner, editors, Exact Sciences and Their Philosophical Foundations: Exakte Wissenschaften und ihre philosophische Grundlegung. Vorträge des internationalen Hermann-Weyl-Kongresses, pages 161–189. Peter Lang Verlag, Frankfurt/M - Bern. • Straumann, N. 2001. “Ursprünge der Eichtheorien”. In Erhard Scholz, editor, Hermann Weyl’s Raum – Zeit – Materie and a General Introduction to His Scientific Work, pages 138–155. Birkäuser, • Tieszen, R., 2000. “The philosophical background of Weyl’s mathematical constructivism.” Philosophia Mathematica, 8: 274–301. • Thomas, T.Y. 1925. “On the projective and equi-projective geometries of paths”. Proceedings of the National Academy of Sciences, 2(4):199–209. • Van Atten, M., D. Van Dalen, and R. Tieszen. 2002. “The phenomenology and mathematics of the intuitive continuum” Philosophia Mathematica, 10: 203–226. • van Dalen, Dirk. 1995. “Hermann Weyl’s intuitionistic mathematics”. The Bulletin of Symbolic Logic, 1(2): 145–169. • van Heijenoort, J. (ed). 1967. From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931, Cambridge Massachusetts: Harvard University Press. • Veblen, O. and Thomas, J. M. 1926. “Projective invariants of the affine geometry of paths”. Annals of Mathematics, 27:279–296. • Vizgin, V. 1994. Unified Field Theories in the First Third of the Twentieth Century. Birkhäuser Verlag, Basel. Translated from the Russian original by J. Barbour. • von Helmholtz, H. 1868. “Über die Thatsachen, die der Geometrie zum Grunde liegen”. Nachrichten von der Königlichen Gesellschaft der Wissenschaften zu Göttingen, pages 192–222. Reprinted in Wissenschaftliche Abhandlungen (1883) vol. II, pp. 618–639. • von Neumann, J. 1928. “Einige Bemerkungen zur Diracschen Theorie des relativistischen Drehelektrons”. Zeitschrift für Physik, 48:868–881. • Weinberg, Steven 1972. Gravitation and Cosmology: Principles and Applications of the Geeral Theory of Relativity. John Wiley & Sons, New York. • Wheeler, John A. 1994. At Home in the Universe. The American Institute of Physics, New York, 1994. • Winnie, John A. 1977. “The causal theory of space-time”. In J. Earman, C. N. Glymour, and J. J. Stachel, editors, Foundations of Space-Time Theories, volume VIII of Minnesota Studies in the Philosophy of Science, pages 134–205. University of Minnesota Press, Minneapolis. • Yang, Chen Ning. 1986. “Hermann Weyl’s Contribution to Physics”. In K. Chandrasekharan, editor, H. Weyl, pages 7–21. Springer-Verlag, Berlin. Centenary Lectures delivered by C. N. Yang, R. Penrose, and A. Borel at the ETH Zürich. • Yang, Chen Ning. 1987. “Square root of minus one, complex phases and Erwin Schrödinger”. In C. W. Kilmister, editor, Schrödinger; Centenary celebration of a polymath, chapter 5, pages 53–64. Cambridge University Press. • Zee, A. 2003. Quantum Field Theory in a Nutshell. Princeton University Press. How to cite this entry. Preview the PDF version of this entry at the Friends of the SEP Society. Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers, with links to its database. [Please contact the author with suggestions.] We would like to thank Thomas Ryckman for his invaluable comments and suggestions for this entry.
{"url":"https://plato.stanford.edu/archivES/FALL2017/Entries/weyl/","timestamp":"2024-11-02T07:33:03Z","content_type":"text/html","content_length":"390979","record_id":"<urn:uuid:dd5a9192-dd38-4e6c-98c5-fa4b48685324>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00265.warc.gz"}
Unification And Distant Scales- Unified Theory Why can’t we just build an accelerator to test the predictions of string theory? Because the obstacles are much bigger than money or social commitment. This section uses units where (Planck’s constant)/2p and the speed of light = 1. This choice of units is called natural units. With this choice, mass has units of inverse length, and vice versa. The conversion factor is 2×10^-7 eV = 1/meter. Spontaneous symmetry breaking On the previous page we mentioned that spontaneous symmetry breaking was the phenomenon that allowed gauge bosons to acquire mass without spoiling the gauge invariance that protects quantum consistency of the theory. But this trick is not special to electroweak theory; spontaneous symmetry breaking is a powerful phenomenon that is tremendously important in understanding unified particle theories in general. So we will explain this phenomenon in more detail here. The simplest example begins with a complex scalar field f(x) with the Lagrangian The potential V(f) has a strange looking shape: the minimum is not at the center, but in a circle around the center, as shown below. The scalar field f(x) can be written in terms of real and imaginary components, as below top, or expressed in terms of radial and angular degrees of freedom, shown on the bottom. The minimum values of the potential lie along the circle where The problem with describing f(x) in terms of f1 and f2 is that f1 and f2 don’t describe the normal modes of oscillation around the minimum of the potential. The normal modes for this potential are illustrated in the animation above by the two distinct motions of the yellow ball. One normal mode goes around and around the circle at the bottom of the potential. The other normal mode bobs up and down in the radial direction at a fixed value of angle, oscillating about the minimal value. Written in terms of the normal modes, the field becomes The physical states in the theory are the massive field r(x) with mass r[0], and the massless field b(x). The radial oscillations are resisted by the curved sides of the scalar potential in the radial direction. That’s why the radial field is massive. But the minimum of the potential is flat in the angular direction. That’s why the angular mode is massless. This is called a flat direction. Flat directions in the surface that form the minimum of the scalar potential lead to massless scalars. This issue comes up again in string theory in not a good way. The most crucial chapter in this story is what happens when this scalar field is coupled to a massless gauge boson A with a local U(1) gauge invariance. The Lagrangian is The story for the scalar field is as before. The physical scalar fields that oscillate as normal modes about the potential minimum are the massless angular mode and the massive radial mode. But the plot thickens with the addition of the massless gauge boson. At the minimum of the scalar potential, the Lagrangian above remains invariant under the transformation This transformation relates the normal modes of both the scalar and vector fields so that they can be written as The most important thing to notice about the redefined fields above is that the angular oscillations b(x) of the scalar field end up as part of the the physical gauge boson Ã(x). This is the secret behind the power of spontaneous symmetry breaking. The massless normal mode of the scalar field winds up mixed into the definition of the physical gauge boson, because of gauge symmetry. The oscillations of the scalar field around the flat angular direction of the scalar potential turn into longitudinal oscillations of the physical gauge field. A massless particle travels at the speed of light and cannot oscillate in the direction of motion. Therefore, the addition of a longitudinal mode of oscillation means the gauge field has become massive. The gauge field has a mass, but the gauge invariance has not been spoiled in the process. The value of the scalar field at the potential minimum determines the mass of the gauge boson, and hence the range of the force carried by the gauge boson. This whole coupled system is called the Higgs mechanism, and the massive scalar field that remains in the end is called a Higgs field. This section uses units where (Planck’s constant)/2p and the speed of light = 1. This choice of units is called natural units. With this choice, mass has units of inverse length, and vice versa. The conversion factor is 2×10^-7 eV = 1/meter. Electroweak unification The Higgs mechanism forms the basis of the experimentally well-tested theory of the weak and electromagnetic interactions that is referred to as electroweak theory. The initial gauge invariance in the theory is SU(2)xU(1), with three massless gauge bosons from SU(2) and one from U(1). In the end there has to be only one massless gauge boson — the photon that carries the electromagnetic force — and three massive gauge bosons mediating the short range weak nuclear force. Therefore, three massless scalar normal modes (also known as Goldstone bosons) are needed to serve as longitudinal modes to turn the four massless gauge bosons into one massless gauge boson and three massive gauge bosons. Remember that for a single complex scalar field, the massless mode, or Goldstone boson, comes from the angular normal mode that oscillates around the flat circle at the potential minimum. A circle is just a one-dimensional sphere, or a “one sphere”. In general, an N-dimensional sphere has N angular directions, and for oscillations about the sphere, there is one radial direction. We need a set of scalar fields that transform under the group SU(2) with a potential whose minimum has the geometry of a three sphere. This can be accomplished by using two complex scalar fields, transforming as a two-component object under transformations by the group SU(2), so that f(x) is given by The potential minimum is at which is the equation of a three sphere in f-space. The normal modes for this potential will consist of one radial mode and three angular modes, just enough to create one massive Higgs boson, and give mass to the three of the four massless gauge bosons in the SU(2)xU(1) theory. This leaves leaving one massless gauge boson for the remaining unbroken U(1) gauge invariance. A complicating factor in electroweak theory is the presence of electroweak mixing. The four massless gauge bosons in the unbroken SU(2)xU(1) theory are the three SU(2) bosons, let’s called them W^+, W^– and W^0, and the massless U(1) gauge boson, let’s call it B. The spontaneous symmetry breaking winds up mixing the W^0 and the B, into two different gauge bosons — the massless photon that carries the electromagnetic force, and the massive Z^0 boson that carries the weak nuclear force. The mixing is described by the weak mixing angle q[w] as shown below The final physical states of this theory are the massless photon, and the massive neutral weak boson, the Z^0. The distance scale of the electroweak mixing is roughly 100 GeV, or about 10^-17 m. At scales smaller than that distance scale, or equivalently, at energy scales much above 100 GeV, the weak gauge bosons look massless and the full SU(2)xU(1) symmetry is restored. But at larger distance scales, or lower energy, only the U(1) symmetry of electromagnetism is apparent in the conservation laws and The mathematical beauty and experimental success of this idea have led physicists to extend it to higher energies and possible higher symmetries, as will be described below. Running coupling constants In quantum field theory, when computing a particle scattering amplitude, one has to sum over all possible intermediate interactions, including those that happen at zero distance, or, expressed in terms of momentum space according to the de Broglie rule, at infinite momentum. These calculations lead to integrals of the form which diverge at infinite momentum for n=0,1,2. The limit has to be approached through the use of a momentum cutoff of some kind. But the physical quantities must be independent of the cutoff, so that they remain finite as the cutoff is removed. This procedure is called renormalization, and it cannot be done for any quantum field theory, just those theories whose divergences obey certain patterns that allow them to be added consistently to the definition of a finite number of physical quantities, namely the masses and coupling constants, or charges, in the theory. The end result is that the masses and charges of elementary particles are dependent on the momentum scale at which they are measured. For example, the coupling strength of a renormalizable gauge theory has the mass dependence where M and m are two mass scales at which the coupling strength is being measured and compared. The function f(n) depends on the number of degrees of freedom in the theory. For electromagnetism, f (n) = 1, but for QCD with six flavors of quarks, f(n) =-5.25. Notice that this means electromagnetism gets stronger at higher energies, while the strong nuclear force gets weaker as the energy of the particle scattering increases. This is very important for understanding what physics might look like at higher energies than we can currently measure, see below. Quantum field theories whose divergences can be hidden in a finite number of physical quantities are called renormalizable quantum field theories. Quantum field theories that are not renormalizable are regarded as being physically realizable theories. Note that the list of unrenormalizable quantum field theories includes Einstein’s theory of gravity, which is one reason why string theory became Unification and group theory The success of spontaneous symmetry breaking in explaining electroweak physics led physicis to wonder whether the three particle theories of the SU(3)xSU(2)xU(1) model could be the spontaneously broken version of a higher unified theory at some higher energy scale, a single theory with only one gauge group and one coupling constant. This type of theory is called a Grand Unified Theory, or GUT for short. The quantum behavior of the known particle coupling constants supports the idea of Grand Unification. Because of renormalization, the electromagnetic coupling constant grows larger at high energies, whereas the coupling constants for the weak and strong nuclear interactions grow smaller at higher energies. At the mass scale the three coupling constants become equal. Therefore, this ought to be the mass scale where the single gauge symmetry of a Grand Unified Theory would become spontaneously broken into the three distinct gauge symmetries of the SU(3)xSU(2)xU(1) model. The single gauge group of a GUT has to be mathematically capable of containing the group product SU(3)xSU(2)xU(1) of the three gauge groups relevant to low energy particle physcs. The best candidate for such a theory is unitary group SU(5), which would give 24 gauge bosons mediating the single unified force, but there are also other GUT models based on other groups, such as the orthogonal group SO(10), which would give 45 gauge bosons and contain the SU(5) theory as a subgroup. The problem with Grand Unification is that the unified gauge bosons allow quarks to couple to leptons in such a way that two quarks can be converted into an antiquark and an antilepton. For example, two up quarks would be allowed to turn into a positron and and a down antiquark. A proton consists of two up quarks and and down quark. A neutral pion consists of a down quark and a down antiquark. Therefore the unified gauge boson in a GUT could mediate proton decay by the and other related decays. The proton lifetime predicted in a GUT is about whereas the current best measurement of the proton lifetime is It’s important to note here that proton decay can happen through radiative corrections even in the Standard Model, so we don’t expect the proton lifetime to be infinite. However, it seems that the proton doesn’t decay as quickly as predicted by a GUT. This situtation is improved when supersymmetry is added to the GUT, and this will be explained in next section. What about gravity? Einstein’s elegant and experimentally tested theory of gravity called General Relativity is not a normal gauge theory like electromagnetism. The symmetry is not a unitary group symmetry like U(1) or SU(3), but instead a symmetry under general coordinate transformations in four spacetime dimensions. This does not lead to a renormalizable quantum field theory, and so gravity cannot be unified with the other three known physical forces in the context of a Grand(er) Unified Theory. But string theory claims to be a unified theory encompassing all known forces including gravity. How can that be? The main symmetry apparent in string theory is conformal invariance, or superconformal invariance, on the string world sheet. This symmetry dicates the spectrum of allowed mass and spin states in the theory. The spin two graviton and the spin one gauge bosons exist within this framework naturally as part of the tensor structure of the quantized string spectrum. This is another reason why physicists have become so impressed by string theory. There exists a completely novel way of putting gravity and the other known forces together in the context of a single symmetry, that is much more powerful than the ordinary quantum gauge theory of particles. But the question is — is this really the way that nature does it? The answer to that may take a long time to sort out. Symmetry breaking in string theory The two string theories that have shown the most promise for yielding a pattern of symmetry breaking that is like Grand Unification plus gravity are the heterotic superstring theories based on the groups SO(32) and E[8]xE[8]. However, these are supersymmetric theories in ten spacetime dimensions, so the symmetry breaking scheme also has to be involved with breaking the supersymmetry (because fermions and bosons don’t come in pairs in the real world) and dealing with the extra six space dimensions in some manner. So the possibilities, and the possible complications, are much wider in string theory than in ordinary quantum gauge field theories. Forgetting these complications for a moment, focus on the group theory of the E[8]xE[8] model. The group E[8] is an exceptional group with interesting properties too complex to explain here. The common suppostion is that one of the E[8] groups remains unbroken, and decouples from physical observation as a kind of shadow matter. The other E[8] has the right mathematical structure to break down to an SU(5) GUT via E[8] -> E[6] -> SO(10) -> SU(5). The symmetry breaking scale would presumably start somewhere near the Planck scale and end up at the GUT scale of about 10^14 GeV. The spontaneous symmetry breaking mechanism would presumably be scalar field potentials of the form shown above, where a subset of the scalar fields with normal modes like the radial mode become massive, and the remaining massless scalar fields become longitudinal modes of massive gauge bosons to break the gauge symmetry down to the next level. But — in string theory, at the level of perturbation theory where the physics is most understood — the scalar potentials seem to be flat in all directions and hence the scalar fields all remain massless. The solution to symmetry breaking in string theory has to be nonperturbative and is still regarded as an unsolved problem. Leave a Comment Cancel Reply Facts of Universe Cycle/Oscillating Model Leave a Comment / Mysteries & Hypothesis, Matter & Energy, Physics & Cosmos, Space & Time / By Deep Prakash / July 8, 2020 / astronomy, astrophysics, balance, cosmologist, cosmology, cosmos, cycle of the universe, cycle of universe, cycle universe, cycles of the universe, cyclic model, cyclic model of the universe, cyclic model of universe, cyclic model theory, cyclic theory, cyclic theory of the universe, cyclic universe, cyclic universe theory, cyclical time theory, cyclical universe theory, cycling universe, einstein, equation, explaied, galaxy, is the universe cyclical, oscilating theory, oscillating model, oscillating model theory, oscillating theory, oscillating universe, phenomenon, physics, physics equation, reincarnation, repeating, repeating universe theory, research, science, scientist, solar system, space, surprise, symbolize, symbols, the cyclic universe theory, theoretical, understand, understand universe, universe, universe cycle, universe cycle theory, universe cycles, universe examples, what is cyclic universe theory Cyclic Model of Universe Leave a Comment / Mysteries & Hypothesis, Matter & Energy, Physics & Cosmos, Space & Time, Uncommon & Remarkable / By Deep Prakash / July 10, 2020 / astronomy, astrophysics, big bang, black, black hole, cosmology, cosmos, cycle of the universe, cycle of universe, cycle universe, cycles of the universe, cyclic model, cyclic model of the universe, cyclic model of universe, cyclic model theory, cyclic theory, cyclic theory of the universe, cyclic universe, cyclic universe theory, cyclical time theory, cyclical universe theory, cycling universe, einstein, energy, equation, expanding universe , explaied, galaxy, gravitational, hole, hypothesis, is the universe cyclical, mass, matter, oscilating theory, oscillating model, oscillating model theory, oscillating theory, oscillating universe, physics, quasar, repeating universe theory, research, science, scientist, space, the cyclic universe theory, theoretical, theory, universe, universe cycle, universe cycle theory, universe cycles, universe examples, what is cyclic universe theory, whirling disk
{"url":"https://cosmos.theinsightanalysis.com/unification-distant-scales-unified-theory/","timestamp":"2024-11-14T03:49:36Z","content_type":"text/html","content_length":"188068","record_id":"<urn:uuid:8970ffce-4710-44b7-96a5-21b4c55f7f70>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00136.warc.gz"}
Buckling of Columns - Definition, Meaning, Calculation, Examples, Euler Theory - www.mechstudies.comBuckling of Columns – Definition, Meaning, Calculation, Examples, Euler Theory | www.mechstudies.com Buckling of Columns – Definition, Meaning, Calculation, Examples, Euler Theory In this article, we will learn the buckling of columns, along with the definition, meaning, calculations, examples, Euler’s theory, and many more. Let’s explore! Buckling of Columns – Definition & Meaning Let’s try to understand the buckling of columns along with basics, definition & meaning. Buckling Basics Buckling has undeniably been observed since prehistoric times and mathematically described in some form or another since the late 1700s. As a result, it appears that we may already know everything there is to know about buckling, but the answer is No. Yes, for a given material and geometry, we have a formalism to define the critical buckling condition. We have a good understanding of how certain boundary conditions define the most probable post-critical state. Buckling columns definition meaning examples Euler’s theory However, there are relatively few exceptions to these generalizations. • This level of current understanding has evolved over two centuries, but it’s only in the last decade that scientists and engineers have begun to push the boundaries and realize the plethora of outstanding problems about buckling. • Buckling is usually linked to the development of elastic instability, which occurs when an in-plane compression causes an out-of-plane deformation. • Buckling knowledge has been created in the context of structural failure and prevention for the majority of its quantitative history. • Buckling, on the other hand, has long been linked to the production of morphological and structural traits in nature. For this blog only let’s get into a greater aspect of what Buckling actually is! What is buckling? Buckling is a mathematical instability that leads to a failure mode in science. Material failure and structural instability, also known as buckling, are the two major causes of mechanical component failure. For material failures, the yield stress for ductile materials and the ultimate stress for brittle materials must be considered. Buckling occurs when a structural member is exposed to high compressive stresses, likely to result in a sudden sideways deflection. • The term “deflection” in structural engineering refers to the displacement of a member caused by bending forces. This type of deflection is predictable and can be calculated. Buckling, on the other hand, causes unstable lateral deflection. • Columns are typically subjected to buckling checks because buckling is caused by compressive or axial forces, which are more frequent in columns than beams. • Any additional load will cause significant and unpredictable deformations once a member begins to buckle. Buckling frequently resembles crumpling (imagine squashing an empty can of soda). • The Further load can be sustained in one of two states of equilibrium at a certain stage under increasing load: an undeformed state or a laterally-deformed state. • The unexpected failure of a structural member under high compressive stress, where the actual compressive stress at the failure point is lower compared to the ultimate compressive stresses that the material can withstand is in practice known as Buckling. • Reinforced concrete members, for example, may experience lateral deformation of the longitudinal reinforcing bars during earthquakes. Failure due to elastic instability is another name for this mode of failure. • When a continuous load is applied to a member, such as a column, the load will eventually become powerful enough to cause the member to become unstable. • The Additional load will result in significant and somewhat unexpected deformations, potentially resulting in a complete loss of load-carrying capacity. It is said that the member buckled and • For many types of structures, buckling is a significant failure condition. Buckling of Columns Leonhard Euler a famed swiss mathematician developed the Euler theory of column buckling in 1742. Column buckling is a type of deformation caused by axial compression forces. Because of the column’s instability, this causes it to bend. Buckling columns definition meaning calculation examples • This mode of failure is rapid and thus dangerous. A column’s length, strength, and other factors determine how or if it will buckle. • Long columns will buckle elastically when compared to their thickness, similar to bending a spaghetti noodle. This will happen at a stress level lower than the column’s ultimate stress. The following assumptions underpin Euler’s theory: axial load application point, column components, cross-section, stress limits, and column failure. • The validity of Euler’s theory is conditional on failure occurring due to buckling. • This theory disregards the effect of direct stress in the column, crookedness in the column, which exists at all times, and possible changes of the axial load application point away from the centre of the column cross-section. As a result, the critical buckling load may be overestimated by the theory. Euler’s Theory – Buckling of Columns According to Euler’s theory, the stress in the column caused by direct loads is small in relation to the stress caused by buckling failure. A formula was developed based on this statement to calculate the critical buckling load of a column. As a result, the equation is based on bending stress and disregards direct stress caused by direct loads on the column. Euler theory for elastic buckling: This phenomenon is explained by the Euler equation, also known as Euler’s equation, P = nπ^2 EI/L^2 • L= Length (m) • P= Allowable load before buckle. • n= Factor accounting for end conditions • E= Elasticity Module, Pa (N/m^2 ) • I= Moment of Inertia (m^4 ) The allowable load decreases as the length increases. • With shorter columns compared to their thickness, the allowable stress on a column before buckling rises as length reduces, according to the same equation above. • Another important factor in determining buckling stress is the type of end connections for the column. • In the Euler equation, each connection, from pinned-pinned to fixed-fixed to fixed-pinned, is represented by a different value of ‘n.’ • Of all the end connections, the fixed-fixed connection has the highest allowable stress before buckling. It takes place differently for different materials in construction. • This material factor is represented in Euler’s equation by the letters “E” and “I,” which represent different material properties. In steel columns, this occurs elastically. Reinforced concrete, on the other hand, does not fall into this category. Buckling of Columns Terms Buckling Strength The Buckling strength σ[cr] is calculated by dividing the Euler Buckling Load by the cross-sectional area of the column: Radius of Gyration If all of a column’s cross-sectional area was massed r away from the neutral axis, The Moment of Inertia of the lumped-area cross-section would be the same as the Moment of Inertia of the real cross-section. The RADIUS OF GYRATION, denoted by r, is given by: • r[x] = (Ixx/A)^1/2 • r[y] = (Iyy/A)^1/2 Slenderness Ratio The SLENDERNESS RATIO measures the length of a column in relation to the effective width of its cross-section (resistance to bending or buckling). It is abbreviated as s, is equal to the length of the column divided by the Radius of Gyration. The Euler Buckling Formula is reduced to: by using the Slenderness Ratio and the Radius of Gyration. • [Pcr] = π^2 EI/L^2 • [Pcr] = π^2 E (r/L)^2 • [Pcr] = π^2 E/s^2 Effective Length The buckling strength of a column is determined by how it is supported. The EFFECTIVE LENGTH, Le, is used to account for variations in end supports. The Effective Length is the length at which a pinned-pinned column would buckle if it were to buckle. For any column, the Buckling Formula is as follows: Buckling Load Factor Buckling load factor is defined as the indicator of the factor of safety against buckling. It can be also defined as the ratio of buckling load to the currently applied load. Local Buckling The possibility of local buckling is arising only when the condition is satisfied which is that the component must have a region where relative thickness to the depth ratio is less than 1/10. In general, it is a rare occurrence but when this condition occurs the result could be sudden and catastrophic. Euler’s Theory – Assumptions & Limitations Euler’s Theory – Assumptions Let’s see the assumptions for Euler’s theory, • The column is at first perfectly straight. • The column’s cross-section is consistent along its length. • The load is axial and moves through the section’s centerline. • The column’s stresses are within the elastic limit. • The column’s materials are homogeneous and isotropic. • The column’s self-weight is ignored. • The failure of the column is solely due to buckling. • The column’s length is long in comparison to its cross-sectional dimensions. • The column’s ends are frictionless. • The axial compression shortening of the column is negligible. Euler’s Theory – Limitations • This theory does not account for the possibility of crookedness in the column, and the load may not be axial. • The formula derived from the Euler theory of column buckling does not account for axial stress, Furthermore, the critical buckling load may exceed the actual buckling load. Buckling of Columns Euler’s Theory Examples Buckling of Columns calculation Examples The length of an aluminium (E = 70 GPa) column buried in the earth is L = 2.2 m. and is under axial compressive load P, b = 210 mm and d = 280 mm are the cross-sectional dimensions. Buckling columns calculation examples Euler’s theory a. The critical load that causes the column to buckle. b. Is the column more likely to bow or yield if the allowed compressive stress in the Aluminum is 240 MPa? c. What is the allowed buckling load if the factor of safety is F.S. =1.95? Buckling of Columns Solutions Step#1: Euler Buckler Formula is, • L[e] = Effective Length of the column In this case, the column is fixed-free in both the x and y directions. A fixed-free column’s effective length is: Le • Le = 2 L • Le = 2 x 2.2 [L=2.2m] • Le = 4.4 m The x- or y-axis of the column may buckle. The Moment of Inertia of a rectangle is: I = (base) (height) 3/12 For our values of b and d, We have: • I[x] = 3.84 x 10^8mm^4 • I[y] = 2.16 x 10^8 mm^4 Because of the smaller Euler buckling load, the smaller Moment of Inertia rules. This is the load: Step#2: The Critical Buckling Stress is calculated by dividing the Euler Buckling Load by the area, A=bd. As a result, the Buckling Stress is calculated to be: If σ[cr] <240 MPa, the column will buckle (since the buckling tension is attained first as the load is applied); If σ[cr] >240 MPa, since the yield stress, S[y], is reached first, the column will yield. Step#3: The Allowable Load on the column, P[allow], for a Factor of Safety of F.S. = 1.95 is exclusively for buckling. The formula of the Factor of Safety, F.S. = Failure Load / Allowable Load • P[allow] = P[cr]/F.S • P[allow] = 7711 kN/95 • P[allow] = 3954 kN Buckling of Columns More Examples • Metal building columns that are overburdened. • Bridge members that are compressive. • Submarine hull and roof trusses. • Excessive torsional and/or compressive pressure on metal skin on aircraft fuselages or wings. • An I-thin beam’s web with significant shear force. • An I-beam with a narrow flange that has been subjected to extreme compressive bending. Buckling Analysis • Buckling Analysis is a finite element analysis technique that can address all buckling problems that cannot be solved by human calculations. The most frequent Buckling Analysis is Linear Buckling Analysis (LBA). • In contrast to Linear Buckling, the nonlinear technique provides more robust solutions. Short columns failing material buckling failure • The linear finite element analysis method can be used to predict material failure. • To put it another way, K δ = F is obtained by solving a linear algebraic problem for the unknown displacements. • The resulting strains and stresses are compared to the component’s design stress (or strain) allowables across the component. • It is presumed that material failure has occurred if the finite element solution shows places where these allowables have been exceeded. • The stiffness of a component, not the strength of its materials, determines the load at which it buckles. • Buckling is the loss of component stability that is usually unrelated to material strength. This loss of stability usually happens within the material’s elastic range. Different differential equations govern the two phenomena. • Buckling failure is associated with structural stiffness loss and is symbolized by a finite element eigenvalue eigenvector solution rather than the standard linear finite element analysis. |K + λm KF| δm = 0 λm =Buckling load Factor (BLF) for m-th mode KF = Additional Geometric stiffness due to the stresses caused by loading δm = Buckling displacement for m-th mode. • A multiplier is obtained from the buckling calculation that increases the magnitude of the load (up or down) to the required buckling force. • Under compressive stress, slender or thin walled components are prone to buckling. • Most people have seen “Euler buckling,” which occurs when a long, slender part is compressed and slides lateral to the direction of the strain as shown in the diagram below. • The force required to cause such buckling, F, differs by a factor of four depending solely on how well the two ends are restrained. • As a result, buckling investigations are far more sensitive to component restrictions than standard stress analyses. • The theoretical Euler solution would result in infinite forces in extremely short columns, which would plainly exceed the material stress limit. • In practice, Euler column buckling can only be used in a few areas and empirical transition equations are necessary for intermediate-length columns. • The loss of stiffness in very long columns happens at stresses well below the material breakdown point. Local Buckling of a Cantilever In cantilever beam, if complete plane stress analysis is carried over the stress value comes out is relatively very low. To save the material cost in the buckling experiments it is decided to reduce the thickness of the beam. The factor of safety in a ductile material is defined as the ratio of yield stress to von misses effective stress. To show the distribution of factor of safety the default result plot is opened with results à defined factor of safety plotàmaximum von misses stress. • Here the factor of safety of the material is quite high which ranges from the value of 10 to 100. • It should be noted that the factor of safety is a unitless quantity this value can be reduced by simple redesign which will save the material and which will also result in a decrement in the cost of the material. • The geometric moment of inertia of a beam is equivalent to the load-carrying capacity of the beam. • So it is also proportional to the thickness of the beam therefore we can say that by reducing the thickness of the material, a factor of safety would still be above the unity. Thus, based on the above equations, theories, and formulas, different materials have different mechanical properties that must be tested before being used in construction and other engineered structures in order to determine the various aspects of buckling.
{"url":"https://www.mechstudies.com/buckling-columns-definition-meaning-calculation-examples-eulers-theory/","timestamp":"2024-11-13T11:37:57Z","content_type":"text/html","content_length":"106701","record_id":"<urn:uuid:236416cf-4b69-4cb0-bbd8-a158c2030b1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00116.warc.gz"}
\}$, find: Average Quadratic deviation. First, we will arrange the given series in ascending order and note down the number of data points. The average Quadratic deviation can be calculated using the Arithmetic Mean of the data as given in the series. We will use the number of terms in the series, i.e., the number of data points in the series to calculate the Arithmetic Mean. Complete step-by-step answer: We are given a numeric series in the problem which is given by $M = \left\{ {1;45;23;34;9} \right\}$ Before going further, we will sort the data given and then further calculations will be done. Sorted series - $1;9;23;34;45$ Number of data points, $n = 5$ We will now calculate the Arithmetic Mean in order to evaluate the average Quadratic deviation. Let us write down the data in symbolic form to make it easy for us to proceed: The first term of the sorted series $ = {x_1} = 1$ The second term of the sorted series $ = {x_2} = 9$ The third term of the sorted series $ = {x_3} = 23$ The fourth term of the sorted series $ = {x_4} = 34$ The fifth term of the sorted series $ = {x_5} = 45$ The arithmetic mean is given by the formula: $\bar x = \dfrac{{{x_1} + ... + {x_{_n}}}}{n}$ Putting the values in the above equation, $\bar x = \dfrac{{1 + 9 + 23 + 34 + 45}}{5} = \dfrac{{112}}{5} = 22.4$ Thus, arithmetic mean $ = \bar x = 22.4$ Now, we will calculate the difference between each of the data points in the sorted series and the arithmetic mean calculated. For ease, we are tabulating the process: The average Quadratic deviation is square root of the mean squared deviation and is given as: $\sigma = \sqrt {\dfrac{{\sum\limits_{i = 1}^n {{{\left( {{x_i} - \bar x} \right)}^2}} }}{n}} = \sqrt {\dfrac{{{{\left( {{x_1} - \bar x} \right)}^2} + {{\left( {{x_2} - \bar x} \right)}^2} + ... + {{\left( {{x_n} - \bar x} \right)}^2}}}{n}} $ Using the table, we have found out the value of $\sum\limits_{i = 1}^n {{{\left( {{x_i} - \bar x} \right)}^2}} $= 1283.20 Thus, $\sigma = \sqrt {\dfrac{{1283.20}}{5}} = \sqrt {256.64} = 16.02$ By the use of Mean, Median and Mode, our approach was more of a concentration of data not caring much about the extremes of our distribution. Average Quadratic deviation, also referred to as Standard Deviation, is a measure of dispersion of data. When we are interested in the estimating the spread of a distribution about an estimate of the central tendencies (mostly, the Mean), the answer is Average Quadratic Deviation. It gives us some sort of idea about the expanse about the measures of central tendency (usually, mean). This will be clear by taking the example of a probabilistic Model,i.e., the Gaussian (Normal) Distribution as indicated below: Higher the Average Quadratic Deviation, narrower is the curve, i.e. more data is centralised towards mean.
{"url":"https://www.vedantu.com/question-answer/for-numeric-series-m-left-14523349-right-find-class-11-maths-cbse-5f5e76616e663a29cc56266b","timestamp":"2024-11-10T22:13:48Z","content_type":"text/html","content_length":"183512","record_id":"<urn:uuid:4cd45870-d9e8-4c98-931f-c7b6601dc4a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00715.warc.gz"}
Two statements next to curly brace in an equation - Config Router You can try the cases env in amsmath. 1, & text{if $x<0$}.\ 0, & text{otherwise}. end{cases} end{equation} end{document} That can be achieve in plain LaTeX without any specific package. documentclass{article} begin{document} This is your only binary choices begin{math} left{ begin{array}{l} 0\ 1 end{array} right. end{math} end{document} This code produces something which looks what you seems to need. The same example as in the @Tombart can be obtained with similar code. documentclass{article} begin{document} begin{math} f(x)=left{ begin{array}{ll} 1, & mbox{if $x<0$}.\ 0, & mbox{otherwise}. end{array} right. end{math} end {document} This code produces very similar results.
{"url":"https://www.configrouter.com/two-statements-next-to-curly-brace-in-an-equation-26520/","timestamp":"2024-11-06T02:11:00Z","content_type":"text/html","content_length":"33783","record_id":"<urn:uuid:79eb9e39-3140-4e0f-ba52-0c936c2fdf92>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00837.warc.gz"}
End-to-End Optimization of Bionic Vision | Bionic Vision Lab Our lack of understanding of multi-electrode interactions severely limits current stimulation protocols. For example, current Argus II protocols simply attempt to minimize electric field interactions by maximizing phase delays across electrodes using ‘time-multiplexing’. The assumption is that single-electrode percepts act as atomic ‘building blocks’ of patterned vision. However, these building blocks often fail to assemble into more complex percepts. The goal of this project is therefore to develop new stimulation strategies that minimize perceptual distortions. One potential avenue is to view this as an end-to-end optimization problem, where a deep neural network (encoder) is trained to predict the electrical stimulus needed to produce a desired percept (target). Importantly, this model would have to be trained with the phosphene model in the loop, such that the overall network would minimize a perceptual error between the predicted and target output. This is technically challenging, because a phosphene model must be: 1. simple enough to be differentiable such that it can be included in the backward pass of a deep neural network, 2. complex enough to be able to explain the spatiotemporal perceptual distortions observed in real prosthesis patients, and 3. amenable to an efficient implementation such that the training of the network is feasible.
{"url":"https://bionicvisionlab.org/research/end-to-end-optimization/","timestamp":"2024-11-08T01:58:50Z","content_type":"text/html","content_length":"31003","record_id":"<urn:uuid:957ae325-2794-444d-9c32-db89744e70b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00712.warc.gz"}
Bhāskara I Everipedia is now - Join the IQ Brainlist and our for early access to editing on the new platform and to participate in the beta testing. Bhāskara I Bhāskara I Bhāskara I Born c.600CE Died c.680CE Nationality Indian Occupation Mathematician; scientist Known for Bhaskara I's sine approximation formula Bhāskara (c.600 – c.680) (commonly called Bhaskara I to avoid confusion with the 12th-century mathematician Bhāskara II) was a 7th-century mathematician, who was the first to write numbers in the Hindu decimal system with a circle for the zero, and who gave a unique and remarkable rational approximation of the sine function in his commentary on Aryabhata's work.^[1] This commentary, Āryabhaṭīyabhāṣya, written in 629 CE, is among the oldest known prose works in Sanskrit on mathematics and astronomy. He also wrote two astronomical works in the line of Aryabhata's school, the Mahābhāskarīya and the Laghubhāskarīya.^[2] On 7 June 1979 the Indian Space Research Organisation launched Bhaskara I honouring the mathematician.^[3] Bhāskara I Born c.600CE Died c.680CE Nationality Indian Occupation Mathematician; scientist Known for Bhaskara I's sine approximation formula Little is known about Bhāskara's life. He was probably an astronomer.^[4] He was born in India in the 7th century. His astronomical education was given by his father. Bhaskara is considered the most important scholar of Aryabhata's astronomical school. He and Brahmagupta are two of the most renowned Indian mathematicians who made considerable contributions to the study of fractions. Representation of numbers Bhaskara's probably most important mathematical contribution concerns the representation of numbers in a positional system. The first positional representations had been known to Indian astronomers approximately 500 years prior to this work. However, these numbers, prior to Bhaskara, were written not in figures but in words or allegories and were organized in verses. For instance, the number 1 was given as moon, since it exists only once; the number 2 was represented by wings, twins, or eyes since they always occur in pairs; the number 5 was given by the (5) senses. Similar to our current decimal system, these words were aligned such that each number assigns the factor of the power of ten correspondings to its position, only in reverse order: the higher powers were right from the lower ones. His system is truly positional since the same words representing, can also be used to represent the values 40 or 400.^[5] Quite remarkably, he often explains a number given in this system, using the formula ankair api ("in figures this reads"), by repeating it written with the first nine Brahmi numerals, using a small circle for the zero . Contrary to his word system, however, the figures are written in descending values from left to right, exactly as we do it today. Therefore, at least since 629, the decimal system is definitely known to the Indian scientists. Presumably, Bhaskara did not invent it, but he was the first having no compunctions to use the Brahmi numerals in a scientific contribution in Sanskrit. Bhaskara wrote three astronomical contributions. In 629 he annotated the Aryabhatiya, written in verses, about mathematical astronomy. The comments referred exactly to the 33 verses dealing with mathematics. There he considered variable equations and trigonometric formulae. His work Mahabhaskariya divides into eight chapters about mathematical astronomy. In chapter 7, he gives a remarkable approximation formula for sin x, that is which he assigns toAryabhata. It reveals a relative error of less than 1.9% (the greatest deviationat). Moreover, relations between sine and cosine, as well as between the sine of an angle >90° >180° or >270° to the sine of an angle <90° are given. Parts of Mahabhaskariya were later translated intoArabic. Bhaskara already dealt with the assertion that if p is a prime number, then 1 + (p–1)! is divisible by p. It was proved later by Al-Haitham, also mentioned by Fibonacci, and is now known as Wilson's Moreover, Bhaskara stated theorems about the solutions of today so-calledPell equations. For instance, he posed the problem: "Tell me, O mathematician, what is that square which multiplied by 8 becomes - together with unity - a square?" In modern notation, he asked for the solutions of thePell equation. It has the simple solution x = 1, y = 3, or shortly (x,y) = (1,3), from which further solutions can be constructed, e.g., (x,y) = (6,17). • Bhaskara I's sine approximation formula • List of Indian mathematicians Citation Linkwww.britannica.comBhaskara I, Britannica.com Sep 19, 2019, 5:12 AM Citation Linkopenlibrary.orgKeller, Agathe (2006), Expounding the Mathematical Seed. Vol. 1: The Translation: A Translation of Bhaskara I on the Mathematical Chapter of the Aryabhatiya, Basel, Boston, and Berlin: Birkhäuser Verlag, 172 pages, ISBN 3-7643-7291-5Keller, Agathe (2006), Expounding the Mathematical Seed. Vol. 2: The Supplements: A Translation of Bhaskara I on the Mathematical Chapter of the Aryabhatiya, Basel, Boston, and Berlin: Birkhäuser Verlag, 206 pages, ISBN 3-7643-7292-3, p. xiii) Sep 19, 2019, 5:12 AM Citation Linknssdc.gsfc.nasa.govBhaskara NASA 16 September 2017 Sep 19, 2019, 5:12 AM Citation Linkopenlibrary.org, p. xiii) cites [K S Shukla 1976; p. xxv-xxx], and Pingree, Census of the Exact Sciences in Sanskrit, volume 4, p. 297. Sep 19, 2019, 5:12 AM Citation Linkopenlibrary.orgB. van der Waerden: Erwachende Wissenschaft. Ägyptische, babylonische und griechische Mathematik. Birkäuser-Verlag Basel Stuttgart 1966 p. 90 Sep 19, 2019, 5:12 AM Citation Linkwww-history.mcs.st-andrews.ac.uk"Bhāskara I" Sep 19, 2019, 5:12 AM Citation Linkwww-history.mcs.st-andrews.ac.uk"Bhāskara I" Sep 19, 2019, 5:12 AM Citation Linken.wikipedia.orgThe original version of this page is from Wikipedia, you can edit the page right here on Everipedia.Text is available under the Creative Commons Attribution-ShareAlike License.Additional terms may apply.See everipedia.org/everipedia-termsfor further details.Images/media credited individually (click the icon for details). Sep 19, 2019, 5:12 AM
{"url":"https://everipedia.org/wiki/lang_en/Bh%25C4%2581skara_I","timestamp":"2024-11-05T13:48:37Z","content_type":"text/html","content_length":"100622","record_id":"<urn:uuid:71050fae-af73-4d2a-9830-a47e50cf86c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00121.warc.gz"}
What is Algorithm Design and How is it Used in Computer Science What is an algorithm? An algorithm is a set of instructions. You follow these instructions when solving a particular problem. Steps within an algorithm are in order for a reason. You must follow the series of instructions or list of rules to complete a desired task. If you skip a step, your end result won’t be the one you desire. You use algorithms when you follow a recipe to bake a cake or do a load of laundry. You don’t put the soap in the dryer, now do you? Instead, you follow a series of steps or a set of instructions until you complete the task. You also use algorithms in math and computers. Sure, they’re a bit more complicated than washing your clothes. But they carry you through a series of steps and lead you to an end result. So, what is algorithm design and what’s important about designing algorithms? This article digs into the algorithm definition in computer science. Read on to find out what is algorithm design and how to design algorithms. Algorithm Basics There are many different types of algorithms. But at their core, they are all the same. Even in computer programming and science. The algorithm definition computer science states that an algorithm is a list or set of rules used to perform tasks or solve problems. It has the same meaning in CS that it does in the kitchen while baking a cake. You’re given a set of variables and a list of steps. It’s up to you to follow them. Types of Algorithms Now that we’ve answered the question: What is an algorithm in computer science? Let’s look at the different algorithm types. Brute force algorithms These are computer algorithms that try all possible solutions until you find the right one. Algorithms work by computing and problem solving. Because of the way they work, this algorithm is widely Divide and conquer algorithms This algorithm divides the problem into smaller problems (subproblems) of the same type. You then solve the smaller problems and combine the solutions to solve the original problem. Dynamic programming algorithms This is a lot like the algorithm that divides and conquers. It breaks a complex problem into a series of smaller subproblems. Then it solves each of the subproblems once. It stores the solutions for future use. Greedy algorithms Greedy algorithms are used to find the best solution at the local level, but with the intent on locating a solution for the entire problem. Of the many algorithms out there, this one is good for solving optimization issues. Randomized algorithms A good example of an algorithm that uses random numbers for computation problems is randomized algorithms. They use random numbers once to find a viable solution. Recursive algorithms You use a recursive algorithm to solve the simplest version of a problem to then solve more complicated versions of the problem. You do this until you find a solution for the original problem. Search algorithms A search algorithm solves a search problem. It retrieves stored information within a particular data structure. It locates where it is stored. Sorting algorithms You use sorting algorithms to rearrange a well defined list of elements into an order. Quicksort is one of the most efficient algorithms of this kind. Algorithm Design Techniques Now that you know what is a computer algorithm, you should learn about the algorithm design definition. Because you use algorithm design when you solve problems. Algorithm design refers to a method or process of solving a problem. It is the design of algorithms that is part of many solution theories. In short, your design is what you use to solve the problem. Algorithms get you to the solution you desire. Your design techniques are the algorithms you use. They can be any of the different types of algorithms, from sorting to dynamic programming. Algorithms Design and Analysis Algorithms are a set of rules or instructions needed to complete a task. Long before the computer age, people established set routines for how to carry out their daily tasks. People listed the steps needed to accomplish a goal. Lists help you reduce the risk of forgetting something important. Designers take a similar approach to the development of algorithms for computational purposes. They first look at a problem. Next, they outline the steps required to resolve it. They develop a series of mathematical operations to accomplish those steps. You call this development algorithm design. When to Use Algorithm Design You use algorithms when you want to solve problems. Developers use them in programming languages and machines. Machines, such as a search engine, use search algorithms to find information. The point is you use algorithm design to solve a problem. Other uses include: • Automate reasoning through a finite number of steps • Computer program storage issues • Data processing • Find a solution Jobs in Computers that Use Algorithms You now understand what is algorithm in computer science. You’ve also learned how it’s used. But did you know different jobs use algorithms? Below are some of the occupations where you use algorithms on the job. Computer Design Engineers When you conduct engineering tasks in a programming language you use algorithms. Computer engineers receive a list of instructions so they can find a solution to a primary problem. They use algorithm designs to solve the problem. Data Scientists With the large number of data science algorithms available, it’s no wonder that data scientists use them to do their job. They use algorithms to solve the following kinds of problems: • Classification problems • Regression problems Since the job entails working with data structures, you should have experience with useful algorithms. Machine Learning Developers This field uses algorithms. Developers examine huge data sets and find patterns within that data. They make predictions based on the patterns they find. All of this is possible because of algorithms. Efficiency in Solving Complex Problems You can use algorithms to complete the simplest of tasks. But you can also use them to solve complex problems. For centuries, mathematicians have solved complicated problems using algorithms. But they did the work with no computers. Time’s have changed. Computer scientists and developers have it easier today thanks to modern computers. From cyber security to big data, solving problems using algorithms is easier with the use of computers. Computers make finding a solution more efficient. Instead of sorting through large sets of data by hand, a computer processes the information within seconds. Algorithm Engineering Algorithm engineering is a field of engineering that focuses on: • Analysis • Design • Implementation • Optimization • Profiling Algorithm engineers work for large employers, such as Amazon and Google. These companies create specialized algorithms. They come up with specialized ways of collecting data. Some companies refer to their algorithm engineers as algorithm developers. Their job is to design and integrate algorithms to find solutions to problems. Developers also use algorithms to extrapolate data. They then use the new algorithm created to install in software or in the computer environment. Education and Skills for Algorithm Engineers To become an algorithm engineer or developer, you should first earn a bachelor’s degree in computer science, software engineering, or a related field. A four-year degree is the foundation you need to get your foot in the door. To advance, you need more training and experience. Some aspiring algorithm engineers earn a master’s degree in a related field. Popular graduate degrees include: • Computer technology or science • Information technology • Software development • Software engineering Because algorithm engineers use specialized skills to find solutions in data sets, you need training. To qualify for a job as an algorithm engineer, you need the following skills. Advanced coding skills You need the ability to code algorithms to assess data sets. It is helpful to know different languages in computers like C++ and Python. Algorithm deployment skills Since the job includes creating and implementing algorithms to solve problems that improve AI functionality, you need strong algorithm development skills. Analytical skills The word algorithm means process. To process, you must be analytical. It should come as no surprise that you need strong analytical skills to work as an algorithm engineer. Your work includes creating an algorithm that will comb through data sets. Your job is to solve problems with the algorithms you create. To do this, you must be analytical. Communication skills Good communication and reporting skills are a must when working as an algorithm engineer. In this role, you provide real-time algorithm results to company administration and executives. You should know how to speak clearly, sharing complex information to a wide audience. Signal processing skills Signal processing skills help you analyze and synthesize signals. These skills also help improve efficiency of data quality. You use your processing skills to help improve storage and transmissions. You need these skills to succeed on the job. Team management skills Most algorithm design work is team-oriented. It takes a village, they say. And this is no different when working with algorithms. You should be able to assist other algorithm engineers and help team members fulfill their project schedules. Now that you know what skills and education you need to work with algorithms, are you ready to take the plunge? CSDH Staff November 2022 Related Resources: This concludes our article on what is algorithm design and how is it used.
{"url":"https://www.computersciencedegreehub.com/faq/what-is-algorithm-design/","timestamp":"2024-11-07T21:55:37Z","content_type":"text/html","content_length":"68425","record_id":"<urn:uuid:cebd74d1-9d09-451c-b06f-5c1b6864148b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00119.warc.gz"}
Estimation Theory - (Advanced Signal Processing) - Vocab, Definition, Explanations | Fiveable Estimation Theory from class: Advanced Signal Processing Estimation theory is a branch of statistics and signal processing that focuses on estimating the values of parameters based on measured data, particularly when the data is affected by noise or uncertainty. It involves techniques to derive estimators that can provide the best approximation of unknown parameters while minimizing error. This concept is deeply connected to understanding random signals, applying probabilistic models, optimizing estimation accuracy, and implementing adaptive techniques for improving signal reception. congrats on reading the definition of Estimation Theory. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. Estimation theory is crucial in fields like communications and control systems where accurate parameter estimation directly impacts system performance. 2. The Mean Square Error (MSE) is a common criterion used in estimation theory to measure the accuracy of an estimator by quantifying the average squared difference between estimated and true 3. In many applications, estimators are designed to be unbiased, meaning their expected values equal the true parameter values. 4. Minimum Mean Square Error (MMSE) estimation provides a way to achieve optimal trade-offs between bias and variance in estimating unknown parameters. 5. Adaptive techniques leverage estimation theory to dynamically adjust system parameters based on changing environments or signal conditions. Review Questions • How does estimation theory apply to analyzing random signals, and why is it important for signal processing? □ Estimation theory provides essential tools for analyzing random signals by allowing for the quantification of uncertainties and noise in signal measurements. In signal processing, it enables practitioners to estimate signal characteristics such as power spectral density or frequency components effectively. By applying these estimation techniques, engineers can improve the reliability and clarity of communications systems, ensuring better performance even in challenging conditions. • Discuss how the concepts of bias and variance interact in the context of Minimum Mean Square Error (MMSE) estimation. □ In MMSE estimation, there is a fundamental trade-off between bias and variance when estimating unknown parameters. While an estimator can be designed to minimize bias, doing so may increase variance, leading to less reliable estimates overall. MMSE aims to find the optimal balance where both bias and variance are minimized collectively, resulting in more accurate and stable parameter estimates that are crucial for applications requiring high precision. • Evaluate how adaptive beamforming utilizes estimation theory principles to enhance signal reception in varying environments. □ Adaptive beamforming employs estimation theory principles by continuously adjusting antenna patterns based on real-time estimates of incoming signal characteristics. This approach allows systems to dynamically adapt to interference, noise, or changes in signal direction. By leveraging estimators that track these variations, adaptive beamforming significantly improves signal quality and reception performance, making it invaluable in modern communication systems where conditions can rapidly change. "Estimation Theory" also found in: © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/advanced-signal-processing/estimation-theory","timestamp":"2024-11-07T12:08:45Z","content_type":"text/html","content_length":"162467","record_id":"<urn:uuid:8f05d86d-c3bb-47b8-bc73-db101d466e04>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00323.warc.gz"}
Table Load Capacity Calculator - GEGCalculators Table Load Capacity Calculator Table Load Capacity Calculator How do you calculate load capacity? Load capacity is calculated by considering factors such as material strength, support structure, and the intended use of an object or structure. The formula may vary depending on the specific application. How do you calculate weight bearing capacity of wood? To calculate the weight-bearing capacity of wood, you need to know the type of wood, its dimensions, and the load configuration. Use engineering tables and formulas specific to wood species and grades. How do you calculate the load capacity of wood? Calculate the load capacity of wood by using structural engineering principles and equations that consider factors like wood type, size, span, and load What is the formula for maximum load capacity? The formula for maximum load capacity varies depending on the material and structure. It typically involves considerations of strength, stress, and safety factors. What is an example of load capacity? An example of load capacity is determining how much weight a wooden beam can support in a house’s ceiling without collapsing. How much weight can 1 inch wood hold? The weight 1-inch wood can hold depends on the type of wood and the span. For example, a 1×6 pine board may have a different capacity than a 1×12 oak board. How do you calculate load bearing capacity? Load-bearing capacity is calculated by analyzing the structural components, material properties, and the applied loads. This often requires consulting engineering specifications and codes. How much weight can a 1×12 pine board hold? The weight a 1×12 pine board can hold depends on factors like wood quality, span, and load distribution. Specific calculations are needed for an accurate How much weight can a 2×10 joist hold? The weight a 2×10 joist can hold depends on various factors, including the wood type, span, spacing, and load distribution. Engineering tables and calculations are necessary. How much weight can a 2×8 hold? The weight a 2×8 can hold depends on factors like the type of wood, span, spacing, and load distribution. Engineering calculations are needed for precise results. How much weight can a 2×6 hold? The weight a 2×6 can hold varies based on factors such as wood type, span, spacing, and load distribution. Engineering calculations are required for accurate What is the load capacity limit? The load capacity limit is the maximum weight or load that a structure, material, or component can safely support without failure. How do you calculate maximum weight? The calculation of maximum weight depends on the context, such as the material, structure, and intended use. Engineering principles and safety factors are What is load bearing capacity? Load-bearing capacity refers to the maximum load that a structure, component, or material can support safely without experiencing failure or deformation. What is the difference between load and capacity? A load is the external force or weight applied to a structure or object, while capacity refers to the ability of that structure or object to withstand or carry that load. How do you determine load rating on shelves? The load rating for shelves is typically provided by the manufacturer based on the shelf’s design and materials. It’s important to follow the manufacturer’s guidelines for safe use. What is load vs load capacity? Load is the actual force applied to an object, while load capacity is the maximum force an object or structure can handle without failure. How much weight can a nightstand hold? The weight a nightstand can hold varies depending on its design and materials. Check the manufacturer’s specifications for the nightstand’s load capacity. What wood can hold 500 pounds? The type of wood that can hold 500 pounds depends on various factors. Oak and maple are hardwoods known for their strength, but the load capacity also depends on the wood’s dimensions and support structure. How much weight can a coffee table hold? A coffee table’s weight capacity depends on its design and materials. It can typically hold items like books, decor, and beverages, but not heavy loads. How much load can a column support? The load a column can support depends on its material, dimensions, and design. Engineering calculations are needed to determine the specific load capacity. What is the maximum load on a column? The maximum load on a column depends on its specifications and the building’s design. Engineers calculate this based on structural requirements. How much weight can a beam support? The weight a beam can support depends on factors like its material, size, span, and load distribution. Engineering calculations are necessary for precise values. How much weight can a 2×12 board support? The weight a 2×12 board can support depends on factors such as wood type, span, spacing, and load distribution. Engineering calculations are required for accurate determination. How much weight can a 1×6 support horizontally? The weight a 1×6 board can support horizontally depends on the wood type, span, and load distribution. Engineering calculations are needed for precise What wood can hold the most weight? Hardwood species like oak, maple, and hickory are known for their high load-bearing capacities, but specific load limits depend on factors like wood quality, size, and design. How far can 2×10 span without support? The maximum span for 2×10 boards without support depends on factors like wood type, load, and local building codes. Engineering calculations are necessary for precise values. Is a gun safe too heavy for my floor? The weight of a gun safe can exceed the load-bearing capacity of some floors. Consult an engineer or contractor to assess your floor’s ability to support the How much weight can a 4×8 beam support? The weight a 4×8 beam can support depends on factors like wood type, span, load distribution, and safety factors. Engineering calculations are required for accurate determination. How much weight can a 2×6 floor joist hold? The weight a 2×6 floor joist can hold depends on factors like wood type, span, spacing, and intended use. Engineering calculations provide accurate load capacity values. Are two 2×4 as strong as a 2×8? Two 2x4s can be as strong as a 2×8 when properly joined together. Engineering considerations and load distribution play a role in determining strength. How much weight can a 2×8 ceiling joist hold? The weight a 2×8 ceiling joist can hold depends on factors like wood type, span, spacing, and load distribution. Engineering calculations provide precise load capacity values. Are 2×6 stronger than 2×4? 2×6 boards are generally stronger than 2×4 boards due to their larger size and greater cross-sectional area. However, specific load capacity depends on other factors as Which is stronger 2 2×4 or 1 2×6? Two 2x4s joined together can be stronger than a single 2×6, but the strength also depends on the method of joining and the load distribution. Is 2×6 a load bearing? 2×6 boards can be used as load-bearing elements in a structure, but their load-bearing capacity depends on factors like span, spacing, wood type, and support. What is weight capacity? Weight capacity refers to the maximum weight or load that an object, structure, or material can safely support without experiencing failure or deformation. What is the maximum load of a material? The maximum load of a material is the highest amount of force or weight that the material can withstand without failure. It varies depending on the material type and its properties. What is maximum permissible load weight? Maximum permissible load weight is the highest weight that can be safely applied to a structure or material without exceeding its designed capacity. How do you calculate the weight of a wood structure? Calculating the weight of a wood structure involves summing the weights of its individual components, including beams, joists, studs, and other How to estimate weight? To estimate weight, use measurements and known densities or weight values for the materials involved. Add up the weights of all components for an overall estimate. What is the load capacity of furniture? The load capacity of furniture varies depending on its design and materials. It’s typically specified by the manufacturer and should not be exceeded to ensure What is the safe load bearing capacity? The safe load-bearing capacity is the maximum weight that a structure or component can safely support without risking failure or safety hazards. How do you calculate the load carrying capacity of a pile? Calculating the load-carrying capacity of a pile involves considering factors like pile type, soil conditions, and engineering equations to determine the safe load it can support. What is capacity vs weight? Capacity refers to the potential to hold or support a certain amount, while weight is the actual force of gravity acting on an object. What does high load capacity mean? High load capacity means that a structure, material, or component can support a significant amount of weight or load without failure. What factors affect load capacity? Factors affecting load capacity include material strength, structural design, dimensions, support conditions, and safety factors. Where can I find load rating? Load ratings for materials and products can often be found in manufacturer specifications, engineering documents, or product labels. How much weight can a warehouse rack hold? The weight a warehouse rack can hold depends on its design, materials, and load rating specified by the manufacturer. What is shelf load? Shelf load refers to the weight that a shelf or shelving unit can safely support. It varies based on the shelf’s design and materials. GEG Calculators is a comprehensive online platform that offers a wide range of calculators to cater to various needs. With over 300 calculators covering finance, health, science, mathematics, and more, GEG Calculators provides users with accurate and convenient tools for everyday calculations. The website’s user-friendly interface ensures easy navigation and accessibility, making it suitable for people from all walks of life. Whether it’s financial planning, health assessments, or educational purposes, GEG Calculators has a calculator to suit every requirement. With its reliable and up-to-date calculations, GEG Calculators has become a go-to resource for individuals, professionals, and students seeking quick and precise results for their calculations. Leave a Comment
{"url":"https://gegcalculators.com/table-load-capacity-calculator/","timestamp":"2024-11-08T21:03:59Z","content_type":"text/html","content_length":"178545","record_id":"<urn:uuid:bd48b6f2-40ec-4d7f-b40a-be3f8a827daa>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00426.warc.gz"}
The Evolution of AI: Ray Kurzweil's Vision of the Singularity Written on Chapter 1: Understanding Artificial Intelligence Artificial intelligence (AI) encompasses the branch of computer science focused on developing machines capable of performing tasks that typically require human intellect. These tasks include perception, reasoning, learning, decision-making, and creativity. Rapid advancements in AI have been fueled by the explosion of data, enhanced computing power, and innovative algorithms. One of the most influential figures in the field is Ray Kurzweil, an inventor, entrepreneur, author, and futurist. For decades, Kurzweil has made predictions about technological advancements and their implications for humanity. He is particularly renowned for his concept of the singularity, a future time when technological growth becomes so rapid that human life will be fundamentally Kurzweil anticipates that the singularity will occur around 2045, when AI will exceed human intelligence in all areas and autonomously generate even more advanced machines. This scenario could lead to an exponential increase in intelligence that extends throughout the universe. He envisions a future where humans integrate with this superintelligence, thereby enhancing our abilities and transcending biological constraints. Kurzweil's Insights on the Singularity Kurzweil's forecast about the singularity stems from his analysis of the law of accelerating returns. This principle suggests that the pace of change in any evolutionary process accelerates over time. He applies this concept to technological history, highlighting how various computational paradigms have consistently demonstrated exponential performance improvements. For instance, he charts the calculations per second per dollar for different computing technologies since the 19th century, revealing a steady exponential trend, despite interruptions from technological shifts. He posits that while each computational paradigm eventually encounters diminishing returns, this does not halt the overall trajectory of exponential growth. Instead, new paradigms emerge, allowing continued advancement. Kurzweil predicts that the next major paradigm, following integrated circuits, will be nanotechnology, enabling the construction of molecular-scale computers that can manipulate matter at the atomic level. He also foresees quantum computing enhancing future machines' power and efficiency. Kurzweil further applies the law of accelerating returns to several technological domains, including biotechnology, robotics, and AI. He illustrates that these fields are also experiencing exponential growth and convergence, resulting in unprecedented innovations. For example, he envisions biotechnology enabling genetic reprogramming to combat aging and diseases; nanotechnology facilitating the creation of new materials; robotics producing autonomous agents; and AI developing machines capable of understanding natural language and generating novel solutions. Kurzweil identifies AI as the primary catalyst for the singularity, suggesting that it will lead to machines that surpass human intelligence across all domains. His predictions are based on several 1. The increasing sophistication of AI systems, exemplified by IBM's Deep Blue defeating world chess champion Garry Kasparov in 1997, Google's AlphaGo besting Lee Sedol in Go in 2016, and OpenAI's GPT-3 generating coherent text on any subject in 2020. 2. The growing volume and quality of data available for training AI through machine learning. Kurzweil estimates the internet housed around 10²¹ bits of information by 2020, equivalent to approximately 10 billion books. 3. The enhanced speed and affordability of computing hardware necessary for executing AI systems. As of 2020, worldwide computers performed about 10¹⁸ calculations per second, roughly equal to one human brain's capability. 4. The increasing miniaturization and integration of computing devices, crucial for embedding AI into various environments. Kurzweil predicts that by the 2030s, we will see the emergence of nanobots capable of traversing our bloodstream and connecting us to the cloud for vast intelligence access. He extrapolates these trends to assert that by 2029, AI will achieve the capability to pass the Turing Test, a benchmark for human-like intelligence. Furthermore, he foresees that by 2045, AI will autonomously create more intelligent machines, marking the singularity as a turning point in human history. The first video, "The Last 6 Decades of AI — and What Comes Next," features Ray Kurzweil discussing his predictions and insights into the future of artificial intelligence. How to Construct a Neural Network from the Ground Up Deep learning, a subset of machine learning, is among the most effective methods for developing AI. This technique utilizes neural networks to learn from data and generate predictions. A neural network mimics biological neurons, the fundamental units of the nervous system, comprising layers of artificial neurons interconnected by weights that signify the strength of connections. The network learns from data by adjusting these weights using algorithms like gradient descent. To illustrate the workings of a neural network, let’s construct a simple one using Python. We'll employ NumPy for numerical computations and Matplotlib for visualization. First, we begin by importing the necessary libraries: import numpy as np import matplotlib.pyplot as plt Next, we define the input data and output labels. For simplicity, we will generate a synthetic dataset with two features (x1 and x2) and one label (y), where the label is either 0 or 1, indicating class membership. We will use 100 samples for each class and display them in a scatter plot: # Generate random input data np.random.seed(42) # Set random seed for reproducibility x1 = np.random.randn(100) # Generate 100 random numbers from a normal distribution x2 = np.random.randn(100) # Generate another 100 random numbers from a normal distribution # Generate output labels y = np.zeros(200) # Initialize an array of zeros with length 200 y[:100] = 1 # Set the first 100 elements to 1 # Plot the input data plt.scatter(x1, x2, c=y) # Plot x1 and x2 with colors according to y plt.xlabel('x1') # Set x-axis label plt.ylabel('x2') # Set y-axis label plt.show() # Show the plot The output plot illustrates that the input data is not linearly separable, indicating that a nonlinear classifier is necessary to solve this problem. A neural network serves as an example of such a Next, we will establish the architecture of the neural network. We will create a simple feedforward network featuring one hidden layer with four neurons and an output layer with one neuron. The activation function for both layers will be the sigmoid function, defined as follows: The sigmoid function transforms any input into a value between 0 and 1, making it suitable for binary classification. The neural network's architecture can be diagrammed as follows: Now, we need to initialize the weights and biases of the neural network. We will assign random values from a normal distribution with a mean of zero and a standard deviation of 0.01 for the weights, while biases will be initialized to zero. NumPy arrays will be utilized to store these parameters: # Initialize weights and biases W1 = np.random.normal(0, 0.01, (2, 4)) # Hidden layer weights b1 = np.zeros(4) # Hidden layer biases W2 = np.random.normal(0, 0.01, (4, 1)) # Output layer weights b2 = np.zeros(1) # Output layer biases Next, we define the forward propagation function, which computes the output of the neural network given an input vector. This function consists of two steps: 1. Calculate the linear combination of the input vector and the weights for each layer, adding the corresponding biases. 2. Apply the activation function to each layer's net input to derive the output. Using NumPy's vectorized operations, we can perform these steps efficiently, while also storing intermediate values for backpropagation: # Define forward propagation function def forward_propagation(x): z1 = x.dot(W1) + b1 # Hidden layer net input a1 = 1 / (1 + np.exp(-z1)) # Hidden layer output z2 = a1.dot(W2) + b2 # Output layer net input a2 = 1 / (1 + np.exp(-z2)) # Output layer output return a2, z1, a1, z2 Next, we define the loss function to evaluate the neural network's prediction accuracy. We will employ the binary cross-entropy loss function: # Define loss function def loss_function(y_true, y_pred): loss = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)) return loss Subsequently, we define the backpropagation function to compute the gradients of the loss function concerning the neural network's weights and biases. This function incorporates two key steps: 1. Calculate the error term for each layer, which involves the derivative of the activation function and the derivative of the loss concerning each layer's net input. 2. Determine the gradient for each weight and bias, derived from the error term and the previous layer's output. Using NumPy's vectorized operations and the chain rule, we can carry out these steps efficiently, with a learning rate parameter governing the weight and bias updates: # Define backpropagation function def backpropagation(x, y_true, y_pred, z1, a1, z2): lr = 0.01 # Learning rate delta2 = y_pred - y_true # Output layer error dW2 = a1.T.dot(delta2) # Gradient for output layer weights db2 = np.sum(delta2, axis=0) # Gradient for output layer biases delta1 = delta2.dot(W2.T) * a1 * (1 - a1) # Hidden layer error dW1 = x.T.dot(delta1) # Gradient for hidden layer weights db1 = np.sum(delta1, axis=0) # Gradient for hidden layer biases # Update weights and biases using gradient descent W1 -= lr * dW1 b1 -= lr * db1 W2 -= lr * dW2 b2 -= lr * db2 Finally, we define the training loop, which iterates through a specified number of epochs, executing forward propagation and backpropagation for each sample in each epoch. We will also track and visualize the loss value over time: # Define training loop def train(x, y, epochs): losses = [] for epoch in range(epochs): y_pred, z1, a1, z2 = forward_propagation(x) backpropagation(x, y, y_pred, z1, a1, z2) loss = loss_function(y, y_pred) if epoch % 10 == 0: print(f'Epoch {epoch}, Loss: {loss}') Now, we can train our neural network using the input data and output labels, specifying 100 epochs for training: # Train neural network on input data and output labels x = np.column_stack((x1, x2)) # Combine x1 and x2 into a 200x2 matrix y = y.reshape(-1, 1) # Reshape y to form a 200x1 vector train(x, y, epochs=100) # Train neural network for 100 epochs The output will display the loss value decreasing over epochs, indicating that the neural network is learning effectively. To assess the neural network's performance, we can use the trained weights and biases for predictions on new input data. For instance, we can generate a grid of points in the feature space and visualize them according to the predicted labels: # Generate grid of points in feature space xx1 = np.linspace(-3, 3, 100) # x1 axis xx2 = np.linspace(-3, 3, 100) # x2 axis xx1, xx2 = np.meshgrid(xx1, xx2) # Create grid from xx1 and xx2 # Make predictions on grid points xx = np.column_stack((xx1.ravel(), xx2.ravel())) # Combine xx1 and xx2 yy_pred, _, _, _ = forward_propagation(xx) # Forward propagation on grid yy_pred = yy_pred.reshape(xx1.shape) # Reshape output to match grid # Visualize predicted labels plt.contourf(xx1, xx2, yy_pred) # Plot filled contours plt.scatter(x1, x2, c=y) # Original input data plt.xlabel('x1') # x-axis label plt.ylabel('x2') # y-axis label plt.show() # Display plot The resulting plot illustrates that the neural network has effectively learned a nonlinear decision boundary separating the two classes. This article has delved into the future of artificial intelligence and Ray Kurzweil's predictions regarding the singularity. We have also provided a comprehensive guide to constructing a neural network from scratch using Python and NumPy. Through forward propagation and backpropagation, we have demonstrated how a neural network learns from data and generates predictions. Additionally, we explored how it can effectively address nonlinear classification problems through the use of a sigmoid activation function and binary cross-entropy loss. The second video, "Ray Kurzweil Q&A - The Singularity, Human-Machine Integration & AI," features Kurzweil addressing questions about his vision for the future of AI and humanity's integration with
{"url":"https://diet-okikae.com/evolution-of-ai-ray-kurzweil-singularity.html","timestamp":"2024-11-12T23:08:13Z","content_type":"text/html","content_length":"21091","record_id":"<urn:uuid:5ef538cb-2106-4803-8d21-301b27a11d4f>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00659.warc.gz"}
Home Field The question of home field advantage deserves some mention early on. There are three ways to treat home field advantage: as a score adjustment, as a schedule strength adjustment, or ignoring it altogether. The last option is unacceptable, given that teams clearly perform better at home than they do on the road. In my many years of ranking various sports and leagues, I have solved for the home field factor and always found it to be significantly positive. (If home field advantage didn't measurably help teams play better, I would find negative values as often as positive.) Of the remaining options, adjusting game scores is also troublesome. If the home field factor is 3.5 points, does this mean a 3-point win become a loss, while a 4-point win remains a win? What if the team with the 3-point win scored a touchdown with PAT in the final 30 seconds? Had they known (and cared) that I would consider a 3-point home win a loss, wouldn't they have gone for a 2-point conversion instead? In short, a win needs to remain a win because that's what the teams are worried about. This leaves the final option of treating home field as an adjustment to the quality of one's opponent. This intuitively makes sense; playing the #25 team on the road may be like playing the #10 team at home. Likewise, a team that played an excess of road games indeed faced tougher competition than they would have with a balanced schedule. This may seem like an obvious and trivial point, but it is something that the most accepted computer ratings do not agree on. Other Sports In hockey and soccer, tere are many possessions, with either a score (1 point) or not (0 points). In other sports, more than one point value per possession is possible. (Note: in baseball, one possession is one inning, not one at-bat.) There are two ways of addressing this complication. One is to consider all possible combinations of scores that would create S points; to do so is quite difficult but possible. A simpler solution is to divide S1 and S2 by a "typical" number of points per score. Tests of the full-blown probabilities indicate that the "typical" number of points is not the average, but rather the average weighted by the number of points scored. For example, if a football teams scores field goals (3 points) and touchdowns (7 points) with equal frequency, the scaling factor is 5.8. Using actual frequencies of scoring types and factoring in safeties, missed PATs, and two-point conversions, the value is 6.2. In basketball, I use a value of 2.15; in baseball 2.65. (To take the factorial of a non-integer, replace it with the gamma function.) Overtime Games Another note involves the treatment of overtime games. My research has found that overtime results are 50-50 propositions. In other words, the better team wins only 50% of overtime games. Perhaps it is 50.5%, but the deviation from 50% is too small to be accurately measured. Thus G(sa,sb) is set to zero for an overtime game, regardless of the final score. Strategy Adjustments Implicit in the binomial statistics is the assumption that the odds of scoring on any possession is constant over the course of the game. This is not true, as players and coaches will adjust their tactics based on how the game is progressing. This is modeled in two different ways. The first way reflects the changes in coaching strategy -- the winning team will try to protect the lead by playing conservatively, while the losing team will try any measure to get back into the game. Effectively, this means that both teams' scoring odds are lowered by the winning team's strategy changes, and both are raised by the losing team's strategy changes. In football and basketball, the changes made by the losing team tend to outweigh those made by the winning team; this raises F since a team in either of those sports can effectively prolong the game. The defensive changes tend to be more important in the other sports, lowering F values. A second element of adjustments for a lead is the tendency of players to play up or down somewhat to the level of their opponent. This is seen empirically by the fact that it is much easier to predict a basketball game's margin of victory than it is to predict the total score. In other words, a team with a 15-point lead will tend to let up a little bit, which prevents the lead from getting much bigger. Unfortunately, this lowers the leading team's X value and raises the opposing team's X value. For the purposes of this section, we can model this also by changes (reductions) to F. In most sports the change is quite small (around 0-30%); in basketball it is nearly a factor of two. Note that this correction assumes that all teams use similar measures to avoid running up the score. This is true for the most part, but points out the reason why margin of victory should not be used in postseason selections such as the BCS or NCAA basketball tournament. If teams know that margin of victory is important, they will try to run up the score, thus destroying the validity of a margin-of-victory rating. Preseason Rankings A discussion of the need for and calculation of priors is given in the constructing a ranking page. As described there, priors are used to constrain the overall set of team rankings to be within a typical range. An alternate use of priors is to enhance the numerical stability early in the season, when insufficient data exists to draw significant statistical conclusions. If one has a guess of how good the team actually is, then the team's rating should equal that guess before games have been played and move away from that as data is collected. I treat this prior data the same as games, in that a team that has not yet played some minimum number of games has its schedule padded with ties against a team of its guessed strength. Thus, if the minimum is 6 games (the value used for my football ratings) and a team with a preseason rating of +0.9 has played 4 games, then 2 games are added that are treated as ties against a team of strength 0.9. This adjustment is made in the three main ratings (standard, simple, and predictive) and for the improved RPI. An analogous adjustment is made in the college pseudo-poll. Teams with priors in use have a "P" shown at the end of their rating lines. Schedule and Conference Strength Schedule strengths have been calculated several ways, but what is the best way of doing it? Consider two cases. Team 1 is an outstanding team that plays most of its games against mediocre teams (i.e. teams ranked near the middle of the set). Team 2 is also an outstanding team, but it plays half of its games against other outstanding teams and half against horrible teams. According to an RPI ranking, the two teams would have the same (or nearly identical) schedule strengths, since the RPI uses the straight average of the opponents' winning averages. However, it is clear that Team 2 challenged itself much more. Given average luck, team 2 probably beat all of the horrible teams but only half of the excellent teams, thus winning 75% of its games. Team 1, on the other hand, probably won 90% of the games against mediocre teams. Thus the key factor is the team's most likely winning percentage against its schedule. To calculate this, use the principles described above. Given the team's ranking, its opponents' rankings, and the home field advantage, the odds of winning is: P(win) = integral(x=-inf,dr) exp(-0.5*(x^2)) / sqrt(2*pi), where dr is the difference between the team's ranking and its opponent ranking, accounting for home field advantage. One must then make this calculation for each game the team has played, giving the average number of wins. Dividing by the number of games gives the team's average winning percentage against its schedule. Setting P(win) to this value and solving for dr by inverting the integral calculation thus gives the "dr" value of a "typical" opponent, where "typical" means an opponent that, if played in every single game at a neutral site, would give the same average winning percentage as the set of opponents a team actually played. I define this as the schedule strength. Conference ratings are calculated in a similar manner. A conference's rating equals the rating of a team that would be expected to go 0.500 against the teams in the conference with all games played at a neutral site. Again, this calculation is most sensitive to teams in the middle of the conference, which are the games that can go either way. Because the schedule strength depends on a team's own strength, two teams playing identical schedules will not have the same schedule strength rating. While this is intentional (what matters is how the schedule affects the team playing that schedule), there may be times in which it is important to have all schedules rated on the same scale. To do this, a second schedule measure is provided, expected losses (ELOSS). For pro sports, ELOSS gives the number of losses an average team would have if playing the team's schedule. For college sports, ELOSS gives the number of losses an average ranked (top 25 except for hockey, which is top 15) team would have against the schedule. Head-to-Head Games A common complaint about computer rankings is that they appear to completely overlook head-to-head game results, and will happily put one team immediately ahead of a team it lost to. As I have shown in a study, this is indeed the statistically correct treatment. Given three games between three teams, each going 1-1, it is more likely that there were two minor upsets and one favorite winning than that there was one significant upset and two favorites winning. That said, if one wishes to accurately rank a pair of teams, the head-to-head result should be given greater importance. The reason has nothing to do with perceived importance of such games, but rather the fact that teams match up better against some teams than they do against others. So back to the three team example, where team A beat team B, B beat team C, and C beat A. If matchup effects do not exist, team A is more likely than not worse than team B and better than team C. However, accounting for matchups, team A is probably better than team B, in the sense that they would probably win the rematch. In my rankings, I account for this factor only in the probable superiority (standard) ranking, where it can be done fairly easily because the ranking is based on team-by-team comparisons. Return to ratings main page Note: if you use any of the facts, equations, or mathematical principles on this page, you must give me credit. copyright ©2001-2003 Andrew Dolphin
{"url":"http://dolphinsim.com/ratings/info/details.html","timestamp":"2024-11-03T15:36:57Z","content_type":"text/html","content_length":"11791","record_id":"<urn:uuid:3348c848-d2bc-4625-9551-ef5aa5b7821e>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00469.warc.gz"}
Stochastic Dual Dynamic Integer Programming Multistage stochastic integer programming (MSIP) combines the difficulty of uncertainty, dynamics, and non-convexity, and constitutes a class of extremely challenging problems. A common formulation for these problems is a dynamic programming formulation involving nested cost-to-go functions. In the linear setting, the cost-to-go functions are convex polyhedral, and decomposition algorithms, such as nested Benders’ decomposition and its stochastic variant - Stochastic Dual Dynamic Programming (SDDP) - that proceed by iteratively approximating these functions by cuts or linear inequalities, have been established as effective approaches. It is difficult to directly adapt these algorithms to MSIP due to the nonconvexity of integer programming value functions. In this paper we propose an extension to SDDP – called stochastic dual dynamic integer programming (SDDiP) – for solving MSIP problems with binary state variables. The crucial component of the algorithm is a new class of cuts, termed Lagrangian cuts, derived from a Lagrangian relaxation of a specific reformulation of the subproblems in each stage, where local copies of state variables are introduced. We show that the Lagrangian cuts satisfy a tightness condition and provide a rigorous proof of the finite convergence of SDDiP with probability one. We show that, under fairly reasonable assumptions, a MSIP problem with general state variables can be approximated by one with binary state variables to desired precision with only a modest increase in problem size. Thus our proposed SDDiP approach is applicable to very general classes of MSIP problems. Extensive computational experiments on three classes of real-world problems, namely electric generation expansion, financial portfolio management, and network revenue management, show that the proposed methodology is very effective in solving large-scale, multistage stochastic integer optimization problems. Submitted for publication, 2016.
{"url":"https://optimization-online.org/2016/05/5436/","timestamp":"2024-11-03T03:14:22Z","content_type":"text/html","content_length":"85914","record_id":"<urn:uuid:ead83ed0-2368-4f29-8808-08b5ee6e307e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00217.warc.gz"}
max-consecutive-ones | Leetcode Similar Problems Similar Problems not available Max Consecutive Ones - Leetcode Solution LeetCode: Max Consecutive Ones Leetcode Solution Difficulty: Easy Topics: array Problem Statement: Given a binary array nums, return the maximum number of consecutive 1's in the array. Example 1: Input: nums = [1,1,0,1,1,1] Output: 3 Explanation: The first two digits or the last three digits are consecutive 1s. The maximum number of consecutive 1s is 3. Example 2: Input: nums = [1,0,1,1,0,1] Output: 2 1 <= nums.length <= 105 nums[i] is either 0 or 1. We can solve this problem by traversing the array and keeping track of the current length of the consecutive 1's sequence and the maximum length of consecutive 1's sequence seen so far. We start with a variable to keep track of the current length of consecutive 1's sequence and a variable to keep track of the maximum length of consecutive 1's sequence seen so far. Then, we iterate through the array and if we encounter a 1, we increase the current length of consecutive 1's sequence by 1. If we encounter a 0, we reset the current length of consecutive 1's sequence to 0. Finally, on each iteration, we update the maximum length of consecutive 1's sequence seen so far if the current length is greater than the maximum length. Here's the Python code to implement the above algorithm: def findMaxConsecutiveOnes(nums): max_ones = 0 cur_ones = 0 for num in nums: if num == 1: cur_ones += 1 cur_ones = 0 max_ones = max(max_ones, cur_ones) return max_ones nums = [1,1,0,1,1,1] print(findMaxConsecutiveOnes(nums)) # Output: 3 nums = [1,0,1,1,0,1] print(findMaxConsecutiveOnes(nums)) # Output: 2 Complexity Analysis: Time Complexity: O(n), where n is the length of the array nums. We traverse the entire array once. Space Complexity: O(1). We use constant extra space to store the variable max_ones and cur_ones. Max Consecutive Ones Solution Code
{"url":"https://prepfortech.io/leetcode-solutions/max-consecutive-ones","timestamp":"2024-11-08T23:43:14Z","content_type":"text/html","content_length":"56135","record_id":"<urn:uuid:bb20291a-e99d-4791-9d2c-79d2057a9bb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00453.warc.gz"}
How to Calculate the Variance of a Frequency Distribution in Excel Frequency data, in simple terms, records how often specific values or categories appear in a dataset. For instance, it could show how many students scored within certain test score ranges or how many cars were sold by different brands in a month. It’s important for understanding the distribution of values in your dataset. To calculate the variance of frequency data, you’ll use a formula that’s a bit different from the standard one. The formula for variance of a frequency distribution looks like this: Variance = ∑ f(x – 𝑥̄)² / ∑ f Here’s what each part of the formula represents: • f: The frequency of each value or category. • x: The value or category. • 𝑥̄: The mean of the frequency distribution. • ∑: Indicates summation, meaning adding up all the values. Method 1: Using Mathematical Formulas Excel can automate these calculations, making the process more efficient. Here are the steps: 1. Calculate the Mean: Use the formula =SUMPRODUCT(B2:B10, C2:C10) / SUM(C2:C10), where B2:B10 has the values or categories, and C2:C10 has their frequencies. 2. Squared Differences: Calculate the squared difference between each value or category and the mean using the formula =C2*(B2-$D$2)^2. Fill this formula down to calculate squared differences for all data points. 3. Sum of Squared Differences: Find the sum of the squared differences with the formula =SUM(D2:D10). 4. Calculate Variance: For sample data, use =SUMPRODUCT((midpoints – mean)^2, frequencies) / (SUM(frequencies) – 1). For population data, use =SUMPRODUCT((midpoints – mean)^2, frequencies) / SUM This method offers transparency and control over the calculations. Method 2: Using Built-in Functions The second method is quicker and simpler, but it doesn’t show each calculation step. You can use Excel’s built-in functions: • VAR.S: Calculates sample variance for numerical values. • VARA: Calculates sample variance for numerical and logical values (TRUE and FALSE). • VAR.P: Computes population variance for numerical values. • VARPA: Calculates population variance using numerical and logical values. To use these functions for frequency data, enter the values or cell ranges containing values and frequencies, separated by commas. For example, to calculate sample variance, use =VARA(B4:B11, C4:C11). For population variance, use =VARPA(B4:B11, C4:C11). These functions simplify the process without showing each calculation step.
{"url":"https://best-excel-tutorial.com/how-to-calculate-the-variance-of-a-frequency-distribution-in-excel/?amp=1","timestamp":"2024-11-03T00:30:35Z","content_type":"text/html","content_length":"43227","record_id":"<urn:uuid:99c85ec0-84c2-4aa0-b3c4-281499f5f032>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00008.warc.gz"}
In Python, how to add the values of a single row of a NumPy array? Question #6939 Submitted by Answiki on 10/16/2022 at 07:38:03 PM UTC In Python, how to add the values of a single row of a NumPy array? Answer Submitted by Answiki on 10/16/2022 at 07:38:49 PM UTC In Python, to add the values of a row of a NumPy array, the easiest way is to isolate the concerned row and to calculate the sum with the numpy.sum() function: >>> import numpy as np >>> a = np.array([[1, 2], [3, 4], [5, 6]]) >>> a[0,:].sum() >>> a[1,:].sum() Answer by Answiki on 10/16/2022 at 07:38:49 PM In Python, to add the values of a row of a NumPy array, the easiest way is to isolate the concerned row and to calculate the sum with the numpy.sum() function: >>> import numpy as np >>> a = np.array([[1, 2], [3, 4], [5, 6]]) >>> a[0,:].sum() >>> a[1,:].sum() Question by Answiki 10/16/2022 at 07:38:03 PM In Python, how to add the values of a single row of a NumPy array? Question by Answiki 10/16/2022 at 07:37:56 PM In Python, how to add the values of a row of a NumPy array? Icons proudly provided by
{"url":"https://en.ans.wiki/6939/in-python-how-to-add-the-values-of-a-single-row-of-a-numpy-array/","timestamp":"2024-11-06T07:46:19Z","content_type":"text/html","content_length":"42955","record_id":"<urn:uuid:b33b1f17-e332-4829-bd18-6defb91ed51a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00137.warc.gz"}
Convergence rate of general entropic optimal transport costs Published Paper Inserted: 7 jun 2022 Last Updated: 9 apr 2024 Journal: Calculus of Variations and Partial Differential Equations Volume: 62 Year: 2023 Doi: 10.1007/s00526-023-02455-0 Links: HAL repository We investigate the convergence rate of the optimal entropic cost $v_\varepsilon$ to the optimal transport cost as the noise parameter $\varepsilon \downarrow 0$. We show that for a large class of cost functions $c$ on $\mathbb{R}^d\times \mathbb{R}^d$ (for which optimal plans are not necessarily unique or induced by a transport map) and compactly supported and $L^{\infty}$ marginals, one has $v_\varepsilon-v_0= \frac{d}{2} \varepsilon \log(1/\varepsilon)+ O(\varepsilon)$. Upper bounds are obtained by a block approximation strategy and an integral variant of Alexandrov's theorem. Under an infinitesimal twist condition on $c$, i.e.\ invertibility of $\nabla_{xy}^2 c(x,y)$, we get the lower bound by establishing a quadratic detachment of the duality gap in $d$ dimensions thanks to Minty's trick. Keywords: Optimal transport, Entropic regularization, Schrödinger problem, convex analysis, entropic dimension
{"url":"https://cvgmt.sns.it/paper/5579/","timestamp":"2024-11-07T22:24:15Z","content_type":"text/html","content_length":"9381","record_id":"<urn:uuid:1a3fbbd0-d992-431b-a210-2af3d2e1cace>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00231.warc.gz"}